Intelligent Computing
in Engineering
and Architecture
13th EG-ICE Workshop 2006, Ascona, Switzerland, June 25-30, 2006
Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany
Volume Editor
Ian F. C. Smith
École Polytechnique Fédérale de Lausanne
Station 18, GC-G1-507, 1015 Lausanne, Switzerland
E-mail: [email protected]
CR Subject Classification (1998): I.2, D.2, J.2, J.6, F.1-2, I.4, H.3-5
ISSN 0302-9743
ISBN-10 3-540-46246-5 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-46246-0 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2006
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper SPIN: 11888598 06/3142 543210
Preface
Providing computer support for tasks in civil engineering and architecture is hard.
Projects can be complex, long and costly. Firms that contribute to design, construction
and maintenance are often worth less than the value of their projects. Everyone in the
field is justifiably risk averse. Contextual variables have a strong influence, making
generalization difficult. The product life cycle may exceed one hundred years and
functional requirements may evolve during the service life. It is therefore no wonder
that practitioners in this area have been so reluctant to adopt advanced computing
systems.
After decades of research and industrial pilot projects, advanced computing sys-
tems are now being recognized by many leading practitioners to be strategically im-
portant for the future profitability of firms involved in engineering and architecture.
Engineers and architects with advanced computing knowledge are hired quickly in the
marketplace. Closer collaboration between research and practice is leading to more
comprehensive validation processes for new research ideas. This is feeding develop-
ment of more useful systems, thus accelerating progress. These are exciting times.
This volume contains papers that were presented at the 13th Workshop of the Euro-
pean Group for Intelligent Computing in Engineering. Over five days, 70 participants
from around the world listened to 59 paper presentations in a single session format.
Attendance included nearly everyone on the Scientific Advisory Committee, several
dynamic young faculty members and approximately ten doctoral students. The first
paper is a summary of a panel session on the Joint International Conference on Com-
puting and Decision Making in Civil and Building Engineering that finished in Mont-
real nine days earlier. The remaining papers are listed in alphabetical order of their
first author.
Organizational work began with requests for availability and funding in September
2004. This was followed by tens of personal invitations to experts from around the
world during 2005. I would like to thank the Organizing Committee, and particularly
from January 2006 its Secretary, Prakash Kripakaran, for assistance with the countless
details that are associated with running meetings and preparing proceedings. The
meeting was sponsored primarily by the Swiss National Science Foundation and the
Centro Stefano Franscini. Additional support was gratefully received from the Ecole
Polytechnique Fédérale de Lausanne (EPFL), the Technical Council on Computing
and Information Technology of the American Society of Civil Engineers and the Ecole
de Technologie Supérieure, Montréal.
Organizing Committee
Ian Smith, Acting Chair
Martina Schnellenbach-Held, Chair (resigned)
Gerhard Schmitt, Vice-Chair
Prakash Kripakaran, Secretary
Bernard Adam
Suraj Ravindran
Sandro Saitta
EG-ICE Committee
Ian Smith, Chair
John Miles, Vice-Chair
Chimay Anumba, Past Chair
Yaqub Rafiq, Secretary-Treasurer
The Joint International Conference on Computing and Decision Making in Civil and
Building Engineering in Montreal finished nine days before the beginning of the 13th
Workshop of the European Group for Intelligent Computing in Engineering in As-
cona. This provided an opportunity at the workshop to organize a panel session in
Ascona to discuss outcomes after a week of retrospection. The Joint International
Conference in Montreal was the first time that five leading international computing
organizations (ASCE, ICCCBE, DMUCE, CIB-W78 and CIB-W102) joined forces to
organize a joint conference. The meeting was attended by nearly 500 people from 40
countries around the world. It was unique in that this was the first time so many
branches of computing in civil engineering were brought together.
It was also unique because the majority of participants in Montreal were not active
researchers in computational mechanics, the traditional domain of computing in civil
engineering. While a movement away from numerical analysis has long been pre-
dicted by leaders in the field, this was the first clear confirmation of a general trend
across civil engineering. Numerical analysis methods indeed remain important for
engineers and they are often found to be embedded within larger systems. However,
this area now seems mature for most new computing applications and many innova-
tive contributions are found elsewhere.
The panel started with brief presentations by five leaders in the field who attended
the conference in Montreal in various capacities. All panelists actively combine inno-
vative research and teaching with strategic organizational activity within international
and national associations. Their contributions are summarized below.
DCC, DMUCE, ACADIA, CAADFutures, etc.). They all serve a purpose; some of
them are regional and some are focused on particular topics. This results in a very
fragmented research community. A researcher cannot attend all the events. Therefore,
fewer people attend individual conferences.
At the Montreal conference, a survey was performed at the opening ceremony and
it was observed that a majority of the attendees present had not previously attended
any of the five conference streams. This indicated that bringing these groups together
had created a synergy that attracted large numbers of new people.
The focus of IT applications in civil and building engineering has greatly expanded
over the recent years and this could be noticed explicitly in the Montreal conference.
If previously the focus was predominantly on buildings and structures, now the areas
of construction and infrastructure are growing in importance. This development also
has a strong influence on proposed technologies and methodologies.
In building and structures the scientific focus has shifted from structural analysis to
aspects of data and process integration and to distributed cooperation between engi-
neering teams. In integration there is still a major effort noticeable towards establish-
ing Industry Foundation Classes as an industry standard - even though some frustra-
tion was voiced over the rate of progress. In distributed cooperation, a strong group
from Germany presented results from a German Priority Program with a focus on
cooperative product and process models and on agent technologies.
Construction, infrastructure and transportation accounted for considerably more
contributions than building and structures. Major aspects of scientific value included
4D-modeling techniques, cost performance issues and the utilization of sensor tech-
nologies. The influence of web-based and mobile technologies has had a major impact
over recent years and much effort is now concentrated in these areas.
Overall, participants benefited from a broad range of aspects presented in the con-
ference and from large numbers of experts from different fields. If integrated civil
engineering remains an important field, conferences, such as Montreal, that bring
together experts from many fields will retain their value - even if this is at the expense
of many parallel sessions.
What is new?
For a researcher associated with the field since the 1990s, it is exciting to see a new
generation of young assistant professors bringing in young students, sharing new
ideas and strengthening as well as refreshing the computing community in CEE.
Montreal was a very good conference from the research point of view. Examples of
interesting research areas are activities in 4D, IFC, etc.
A lot of “angst” was detected while speaking with people in Montreal. It seems
that researchers were concerned that their research work was not used in industry,
their question being "why is our work not used in practice?". One answer is that the
process of going from research to industry is a slow one. This is especially true in the
construction industry.
Such a slow transfer from research to industry is normal. Indeed, it has always
taken a long time to adopt new technology. As research brings new ideas, these need
time to be accepted and incorporated by industry. Therefore, research done twenty
years ago is only now coming into practice. This is the way it has always been.
For example, finite element modelling and optimization took a long time to appear in
industry, starting in the 1960s with the program STRESS (developed by S. Fenves, who
was present at this workshop), then with the first microcomputers used in the 1970s. He
concludes that good software finally appeared in the 1990s. This is an example of the
slowness of research adoption by industry. Furthermore, he observes that final adoption
of research is usually imperceptible.
The big question seems to be "what to do then?" Grierson thinks that one answer
could be "nothing or just wait". He makes a comparison with trying to sell bibles a
long time ago, even though nearly no one was able to read. It is the same for new
research. Only 5-20% of companies use new research technologies. In general, other
companies do not use ideas and results directly from the research community.
Discussion
A lively discussion followed these remarks. It was agreed that the Montreal meeting
was a huge success and that the organisers should be congratulated for their work and
their innovative efforts to bring diverse groups together. This was appreciated by all
people who attended.
Working in the spirit that even outstanding events can be improved upon, many
people provided suggestions for subsequent meetings. Issues such as the scientific
quality of papers, objectives of conferences, conference organization, media for pro-
ceedings, the emergence of synergies and the importance of reviewing were evoked.
Although a detailed discussion is out of the scope of this paper, the following is a
non-exhaustive list of suggestions that were provided by the audience. Classify papers
into categories according to the degree of industrial validation so that well validated
proposals are distinguished from papers that contain initial ideas. Maintain high qual-
ity reviewing. Ensure open access to all documentation. Encourage links to concurrent
industrial events. Foster bridge building between research and practice. Do not sacri-
fice opportunities for synergy and transmission of new ideas for the sake of paper
quality. Maintain scientific quality even when papers are short. Allow films and other
media in electronic proceedings. Investigate the possibility of relaxing page limits to add
more science. Encourage key references in abstracts to see foundations of ideas. En-
sure that contributions recognize and build on previous work. Include review criteria
for authors in the call for papers.
While some ideas were relatively new, many suggestions reflected well established
challenges that have always been associated with all large meetings. There are multi-
ple objectives that require tradeoffs. The best point on the “Meeting Pareto front” is
difficult to identify especially since it depends on so many contextual parameters.
Indeed, there is much similarity between the challenges of conference organization
and the challenges we face when providing computing support for civil engineering
tasks. The panel session at Ascona turned out to be a very useful validation exercise
for conference designs!
I would like to thank the panellists whose insightful contributions did much to en-
sure that the subsequent discussion was of high quality. Finally, thanks are due to P.
Kripakaran, S. Saitta, B. Adam, H. Pelletier and S. Ravindran who helped record the
contributions of the panellists.
Self-aware and Learning Structure
1 Introduction
Tensegrities are spatial, reticulated and lightweight structures. They are composed of
compressed bars and tensioned cables and stabilized by self-stress states. When
equipped with an active control system, they are attractive for shape control. This
topology could be used for structures such as temporary roofs, footbridges and
antennas. Over the past decade, it has been established that active shape control of
cable-strut structures is a complex task. While studying shape and stress control of
prestressed truss structures, difficulties were identified in validating numerical results
through experimental testing [1]. Most studies addressing tensegrity structure control
involve only numerical simulation and only simple structures [2 – 6]. One of the few
experimental results of tensegrity active control on a full-scale structure is presented
in [7]. Shape control involves maintaining the top surface slope through changes in
active strut length. Displacements are measured for three nodes at the edge of the top
surface: 37, 43, 48 (Figure 1), in order to calculate the top surface slope value (1).
Since there is no closed-form solution for active-strut movements, control commands
are identified using a generate-test algorithm together with stochastic search and case-
based reasoning [8]. However, the learning algorithm did not use information pro-
vided by sensor devices of the structure and, while command identification time
decreased over time, no enhancement of control quality was demonstrated.
Moreover, in these studies, it was assumed that both load position and magnitude
were known.
2 Methodologies
Studies on self-diagnosis [24] observed that multiple loads at several locations can
induce the same behavior. Multi-objective control commands are associated with sets
of similar behavior, in order to create cases. Responses to applied loads define case
attributes. Case retrieval involves comparing case attributes with responses of the
structure to the current load.
In the event of no retrieved case, load identification is performed. This involves
determination of load location and magnitude. Since precision errors of the active
control system are taken into account, solutions are a set of good candidate pairs of
locations and magnitudes. These candidate solutions exhibit a behavior that is similar
to the behavior of the structure subjected to the current load. Once current loading is
known, a control command is identified using a multi-objective search method and
applied to the structure for shape control. Response of the structure, control command
and experimentally observed slope compensation are then memorized in order to
create a new case.
Fig. 1. View of the structure from the top with the 10 active struts numbered in squares and
upper nodes indicated by a circle. Nodes 37, 43 and 48 are monitored for vertical displacement.
Support conditions are indicated with large circles and arrows. Top surface slope S and trans-
versal rotation R are indicated.
$S = \left( z_{43} - \dfrac{z_{37} + z_{48}}{2} \right) / L$   (1)
where $z_i$ is the vertical coordinate of node $i$ and $L$ is the horizontal length between node
43 and the middle of segment 37 – 48 (Figure 1). For these calculations, the slope unit is
mm/100m.
Transversal rotation, R: This second indicator is the rotation direction of segment
37 – 43 (Figure 1). It can be equal to 1 or -1:
$R = \dfrac{(z'_{48} - z'_{37}) - (z_{48} - z_{37})}{\left| (z'_{48} - z'_{37}) - (z_{48} - z_{37}) \right|}$   (2)
where $z'_i$ is the vertical coordinate of node $i$ after the load has been applied and $z_i$ is the
vertical coordinate of node $i$ before the load has been applied.
Influence vector v is the third indicator. Slope variations due to 1mm elongation of
each of the 10 active struts (Figure 1) are put together in order to create the influence
vector v:
$v = \left[ \Delta S(1), \Delta S(2), \ldots, \Delta S(10) \right]$   (3)
where $\Delta S(i)$ is the slope variation that results from a 1 mm elongation of active strut $i$.
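For illustration only, the three behavior indicators can be expressed as a short Python sketch; the function names, argument conventions and data layout are our assumptions, with node numbering, units and thresholds taken from Figure 1 and equations (1)-(3).

# Minimal sketch of the three behavior indicators (eqs. 1-3); names are illustrative.
def top_surface_slope(z37, z43, z48, L):
    """Top surface slope S (eq. 1); z in mm, L chosen so that S is in mm/100m."""
    return (z43 - (z37 + z48) / 2.0) / L

def transversal_rotation(z37_before, z48_before, z37_after, z48_after):
    """Rotation direction R (eq. 2): returns +1 or -1."""
    d = (z48_after - z37_after) - (z48_before - z37_before)
    return 1 if d > 0 else -1

def influence_vector(slope_variations):
    """Influence vector v (eq. 3): slope variation per 1 mm elongation of each of the 10 struts."""
    assert len(slope_variations) == 10
    return list(slope_variations)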
2.1 Learning
A memorized case is considered for retrieval when the following three conditions are satisfied:
$S'_c - S'_m \le 10$ mm/100m   (4)
where $S'_c$ is the top surface slope of the structure subjected to the current load and $S'_m$
is the top surface slope of the memorized case (1),
$R_c = R_m$   (5)
where $R_c$ is the transversal rotation direction of the structure subjected to the current
load and $R_m$ is the transversal rotation direction of the memorized case (2),
$v_c - v_m = \sum_{j=1}^{10} \left( \Delta S_c(j) - \Delta S_m(j) \right)^2 \le 0.15$ mm/100m   (6)
where $v_c$ is the influence vector of the structure subjected to the current load and $v_m$ is
the influence vector of the memorized case (3). $\Delta S_m(j)$ is the measured slope variation
of a case for a 1 mm elongation of active strut $j$, and $\Delta S_c(j)$ is the measured slope variation
for a 1 mm elongation of active strut $j$ on the laboratory structure subjected to the
current load.
If conditions (4), (5) and (6) are true for a memorized case, the behavior of the
structure subjected to the current load is similar enough to the memorized case for
retrieving its control command. Once a case is retrieved, the control command is
adapted for shape control of the structure subjected to current loading. For the pur-
pose of this study, a simple adaptation function is proposed, based on a local elastic-
linear assumption. The experimentally observed slope compensation of the case is
used to adapt the control command in order to fit the top surface slope induced by the
current loading as follows:
$CC_c = CC_m \cdot \dfrac{S'_c}{S'_m \cdot SC_m}$   (7)
where $CC_c$ is the command for shape control of the structure subjected to the current
loading, $S'_c$ is the top surface slope of the structure subjected to the current load, $S'_m$
is the top surface slope of the case, $SC_m$ is the experimentally observed slope compensation
of the case and $CC_m$ is the control command of the case.
Once the new control command is applied to the structure for shape control, ex-
perimentally observed slope compensation is measured. If experimentally observed
slope compensation of the command is less than the precision of the active control
system, the adaptation function improves the experimentally observed slope compensation
quality and the memorized case is replaced by the current case.
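A rough Python sketch of the retrieval test (conditions 4-6) and the adaptation function (7) is given below; the case representation, dictionary keys and helper names are assumptions made for illustration and not the authors' implementation.

# Illustrative case retrieval (conditions 4-6) and command adaptation (eq. 7).
SLOPE_TOLERANCE = 10.0     # mm/100m, condition (4)
VECTOR_TOLERANCE = 0.15    # mm/100m, condition (6)

def is_similar(current, case):
    """True if the memorized case matches the response of the structure to the current load."""
    cond4 = abs(current["slope"] - case["slope"]) <= SLOPE_TOLERANCE   # absolute difference assumed
    cond5 = current["rotation"] == case["rotation"]
    distance = sum((sc - sm) ** 2
                   for sc, sm in zip(current["influence"], case["influence"]))
    cond6 = distance <= VECTOR_TOLERANCE
    return cond4 and cond5 and cond6

def adapt_command(current_slope, case):
    """Scale the memorized command using the local linear assumption of eq. (7)."""
    factor = current_slope / (case["slope"] * case["compensation"])
    return [c * factor for c in case["command"]]

def retrieve_and_adapt(current, case_base):
    for case in case_base:
        if is_similar(current, case):
            return adapt_command(current["slope"], case)
    return None  # no similar case: fall back to load identification and search [25]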
In the event that no memorized case is close to the response of the structure sub-
jected to the current load, the three indicators (1), (2) and (3) are used for load identi-
fication. A control command is then identified by a multi-objective search algorithm
[25] and once applied successfully to the structure, a new case is created.
For the purposes of this paper, the load identification task involves magnitude evaluation
and load location. Since the structure is monitored with only three displacement sen-
sors, system identification is used to identify load. The advantage of using system
identification is that it requires neither intensive measurements nor the use of force
sensors. The methodology is based on comparing measured and numerical responses
with respect to the indicators (1), (2) and (3). In this study, the following assumptions
are made regarding loading: loading events are single static vertical point loads. They
are applied one at a time on one of the 15 top surface nodes (Figure 1).
Step 1: Top surface slope is the first indicator. When the laboratory structure is
loaded, load magnitude evaluation involves numerically determining, for each of the
15 nodes, which load magnitude can induce the same top surface slope as the one
measured in the laboratory structure. This evaluation is performed iteratively for each
node. Load magnitude is gradually increased until the numerically calculated top
surface slope is equal to the one measured on the laboratory structure. The load is
incremented in steps of 50N.
Step 2: Transversal rotation is the second indicator. Candidate solutions exhibiting
inverse transversal rotation with respect to laboratory structure measurements are
rejected. Experimental measurements show that 0.1 is an upper bound for precision
error for transversal rotation. In situations where transversal rotation is less than 0.1,
no candidate solutions are rejected by this indicator.
Step 3: The influence vector is the third indicator. The influence vector is evaluated
for the laboratory structure through measurements. For the candidate solutions, the
influence vector is evaluated through numerical simulation. The candidate influence
vector that exhibits the minimum Euclidean distance to the influence vector of the
laboratory structure subjected to the current load indicates the candidate that is the
most likely.
This process results in a set of good candidate solutions. This information is used
as input to identify a control command for shape control task [25].
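The three identification steps can be outlined as in the Python sketch below; the model callbacks (slope_fn, rotation_fn, influence_fn) stand in for the numerical model of the structure, and all names and the simple data layout are assumptions for illustration.

# Illustrative three-step load identification (Steps 1-3); model callbacks are supplied by the caller.
LOAD_STEP = 50.0            # N, magnitude increment in Step 1
ROTATION_PRECISION = 0.1    # upper bound on rotation precision error in Step 2

def identify_load(measured, top_nodes, slope_fn, rotation_fn, influence_fn, max_load=2000.0):
    # Step 1: for each node, increase the load until the simulated slope reaches the measured slope.
    candidates = []
    for node in top_nodes:
        load = LOAD_STEP
        while load <= max_load:
            if slope_fn(node, load) >= measured["slope"]:
                candidates.append({"node": node, "load": load})
                break
            load += LOAD_STEP
    # Step 2: reject candidates with inverse transversal rotation,
    # unless the measured rotation lies below the precision bound.
    if abs(measured["rotation_value"]) >= ROTATION_PRECISION:
        candidates = [c for c in candidates
                      if rotation_fn(c["node"], c["load"]) == measured["rotation_sign"]]
    # Step 3: rank candidates by the Euclidean distance between influence vectors.
    def distance(candidate):
        v = influence_fn(candidate["node"], candidate["load"])
        return sum((a - b) ** 2 for a, b in zip(v, measured["influence"])) ** 0.5
    return sorted(candidates, key=distance)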
3 Experimental Results
3.1 Learning
The laboratory structure is loaded with 859 N at node 32 (Figure 1). The measured
top surface slope is equal to 133.6 mm/100m when load is applied. Three candidate
solutions exhibit a behavior that is close to the behavior of the laboratory structure:
770 N at node 32, 1000 N at node 51 and 490 N at node 48. For these three solutions,
control commands are identified using a multi-objective search algorithm [25]. The
three control commands have been applied to the laboratory structure. The evolution
of top surface slope, where zero slope is the target, is shown in Figure 2 for these
commands. Top surface slope is plotted versus steps of 1mm of active strut move-
ment. Experimentally observed slope compensation ranges between 91 % and 95 %,
even when the control command is associated with a load identification solution that
does not exactly represent the real loading. The three solutions are thus considered to
be equivalent. The best experimentally observed slope compensation, 95 %, is obtained
for the solution closest to the real loading: 770 N at node 32.
Fig. 2. Shape control for load case 5: 859 N at node 32, for the three load identification solu-
tions: 770 N at node 32, 1000 N at node 51 and 490 N at node 48
4 Conclusions
The learning methodologies described in this paper allow for two types of learning:
increased rapidity and increased quality over time. The success of both types is re-
lated to the formulation of retrieval and adaptation, as well as the number of cases.
More generally, it is also demonstrated that interactivity between learning algorithms
and sensor devices is attractive for control tasks.
System identification algorithms contribute to self-awareness in active structures
and lead to successful load identification. Load identification solutions are used effi-
ciently for shape control in situations of unknown loading events. Experimental testing
supports the strategy involving initial generation of a set of good solutions rather than
direct (and often erroneous) application of a single control command.
References
1. Kawaguchi, K., Pellegrino, S. and Furuya, H., (1996), “Shape and Stress Control Analysis
of Prestressed Truss Structures”, Journal of Reinforced Plastics and Composites, 15, 1226-
1236.
2. Djouadi, S., Motro, R., Pons, J.C., and Crosnier, B., (1998), “Active Control of Tensegrity
Systems”, Journal of Aerospace Engineering, 11, 37-44.
3. Sultan, C., (1999) “Modeling, design and control of tensegrity structures with applica-
tions”, PhD thesis, Purdue Univ., West Lafayette, Ind.
4. Skelton, R.E., Helton, J.W., Adhikari, R., Pinaud, J.P. and Chan, W., (2000), “An intro-
duction to the mechanics of tensegrity structures”, Handbook on mechanical systems de-
sign, CRC, Boca Raton, Fla
5. Kanchanasaratool, N., and Williamson, D., (2002), “Modelling and control of class NSP
tensegrity structures”, International Journal of Control, 75(2), 123-139
14 B. Adam and I.F.C. Smith
6. Van de Wijdeven, J. and de Jager, B., (2005), “Shape Change of Tensegrity Structures:
Design and Control”, Proceedings of the American Control Conference, Portland, OR,
USA, 2522-2527
7. Fest, E., Shea, K., and Smith, I.F.C., (2004), “Active Tensegrity Structure”, Journal of
Structural Engineering, 130(10), 1454-1465
8. Domer, B., and Smith, I.F.C., (2005) “An Active Structure that learns”, Journal of Com-
puting in Civil Engineering, 19(1), 16-24
9. Kolodner, J.L. (1993). “Case-Based Reasoning”, Morgan Kaufmann Publishers Inc., San
Mateo, CA.
10. Leake, D.B. (1996), Case-based reasoning: Experiences, lessons, & future directions, D.B.
Leake, ed., California Press, Menlo Park., Calif.
11. Leake, D. B., and Wilson, D. C. (1999)"When Experience is Wrong: Examining CBR for
changing Tasks and Environments." ICCBR 99, LNCS 1650, Springer Verlag.
12. Müller, K.R., Mika, S., Rätsch, G., Tsuda, K., and Schölkopf, B. (2001). "An Introduction to
Kernel-Based Learning Algorithms." IEEE Transactions on Neural Networks, 12(2), 181-
201.
13. Purvis, L., and Pu, P. "Adaptation Using Constraint Satisfaction Techniques." ICCBR-95,
LNAI 1010, Springer Verlag, Sesimbra, Portugal, 289-300
14. Marling C, Sqalli M, Rissland E, Munoz-Avila H, Aha D. “Case-Based Reasoning Integra-
tions”, AAAI, Spring 2002
15. Waheed, A. and Adeli, H., (2005), “Case-based reasoning in steel bridge engineering”,
Knowledge-Based Systems, 18, 37-46.
16. Bailey, S. and Smith, I.F.C., "Case-based preliminary building design", Journal of Com-
puting in Civil Engineering, ASCE, 8, No. 4, pp 454-68,1994
17. Ljung, L., (1999), System Identification: Theory for the User, Prentice-Hall, Englewood
Cliffs, N.J.
18. Park, G., Rutherford, A.C., Sohn, H. and Farrar, C.R., (2005), “An outlier analysis frame-
work for impedance-based structural health monitoring”, Journal of Sound and Vibration, 286,
229-250
19. Vanlanduit, S., Guillaume, P., Cauberghe, B., Parloo, E., De Sitter, G. and Verboven, P.,
(2005), “On-line identification of operational loads using exogenous inputs”, Journal of
Sound and Vibration, 285, 267-279
20. Haralampidis, Y., Papadimitriou, C. and Pavlidou, M., (2005), “Multi-objective framework
for structural model identification”, Earthquake Engineering and Structural Dynamics, 34,
665-685
21. Maeck, J. and De Roeck, G., (2003), “Damage assessment using vibration analysis on the
z24-bridge”, Mechanical Systems and Signal Processing, 17, 133-142
22. Lagomarsino, S. and Calderini, C., (2005), “The dynamical identification of the tensile
force in ancient tie-rods”, Engineering Structures, 27, 846-856
23. Robert-Nicoud, Y., Raphael, B. and Smith, I.F.C., (2005), “System Identification through
Model Composition and Stochastic Search”, Journal of Computing in Civil Engineering,
19(3), 239-247
24. Adam, B. and Smith, I.F.C, (in review), “Self Diagnosis and Self Repair of an Active Ten-
segrity Structure”, Journal of Structural Engineering.
25. Adam, B. and Smith, I.F.C, (2006), “Tensegrity Active Control: a Multi-Objective Ap-
proach”, Journal of Computing in Civil Engineering.
Capturing and Representing Construction Project
Histories for Estimating and Defect Detection
1 Introduction
The history of a construction project can have a multitude of uses in supporting
decisions throughout the lifecycle of a facility and on new projects. Capturing and
modeling construction project history not only helps in active project monitoring and
situation assessment, but also aids in learning from the trends observed so far in a
project to make projections about project completion. After the completion of a pro-
ject, a project history also provides information useful in estimations of upcoming
projects.
Many challenges exist in capturing and modeling a project’s history. Currently,
types of data that should be collected on a job site are not clearly identified. Existing
formalisms (e.g., time cards) only consider a single view, such as the cost accounting
view, of what data should be captured, resulting in sparse data collection that does not
meet the requirements of other stakeholders, such as cost estimators and quality con-
trol engineers. Secondly, most of the data is captured manually, resulting in missing
information and errors. Thirdly, collected data are mostly stored in dispersed docu-
ments and databases, which do not facilitate integrated assessment of what happened
on a job site. As a result, there are not many decision support systems available for
engineers to fully leverage the data collected during construction.
This paper provides an overview of findings from various case studies, showing
that: (1) current data collection and storage processes are not effective in gathering the
data needed for developing project histories that can be useful for defect detection and
cost estimation of future projects; (2) sensing technologies enable robust data collec-
tion when used in conjunction with a formalized data collection procedure; however,
the accuracy of the data collected using such technologies is not well-defined. Find-
ings suggest a need for a formalized approach for capturing and modeling construc-
tion project histories. An example of such an approach, as described in this paper,
starts with identifying different users’ needs from project histories, provides guidance
in collecting data, and incorporates a framework for fusing data from multiple
sources. With such formalism, it would be possible to leverage project histories to
support active decision-making during construction (e.g., active defect detection) and
proactive decision-making for future projects (e.g., cost estimation of future projects).
information was described based on some indirect measures (e.g., number of truck-
loads of dirt moved out); resulting in inaccuracies in the data collected and stored.
Utilization of sensing technologies (e.g., laser scanners, equipment OBIs) reduces
the percentage of missing information. However, there can still be inaccuracy issues,
if a sensor is not well calibrated and its accuracy under different conditions is not well
defined. In certain cases, data collected from such sensors needs to be processed fur-
ther and fused to be in a format useful for decision makers.
Issues with and the need for fusing data from multiple sources:
Currently, data collected on job sites is stored in multiple dispersed documents and
databases. For example, daily crew and material data are kept on time cards, soil con-
ditions are described on reports and production data are stored on databases associ-
ated with equipment OBIs. To get a more comprehensive understanding of how ac-
tivities were performed, one needs to either fuse data stored in such various sources or
rely on his/her tacit knowledge, which might not be accurate. In a case study, when an
engineer was asked to identify reasons for explaining the fluctuations in the excava-
tion work, he attributed it to fog in the mornings and the soil conditions. When the
data collected from equipment OBIs was merged with the data collected in time-
cards, the soil profiles defined by USGS and the weather data, it was observed that the
factors identified by the engineer did not vary on the days when there were large deviations in
production [3]. While this showed the benefits of integrating such data to analyze a
given situation, the research team observed that it was tedious and time-consuming to
do the integration manually. For instance, it took us approximately forty hours to fuse
daily production data of a single activity with the already collected crew, material,
and daily contextual data for a typical month.
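As a rough sketch of how such fusion could be automated, the following Python code joins daily production records with crew/material, soil and weather data on shared keys; the file names, column names and join keys are assumptions we introduce for illustration, not the project's actual data schema.

# Illustrative daily-level fusion of production data with contextual data (assumed schema).
import pandas as pd

def fuse_daily_history(production_csv, timecard_csv, soil_csv, weather_csv):
    production = pd.read_csv(production_csv, parse_dates=["date"])  # e.g., from equipment OBIs
    timecards = pd.read_csv(timecard_csv, parse_dates=["date"])     # daily crew and material data
    soil = pd.read_csv(soil_csv)                                    # e.g., USGS soil profile per zone
    weather = pd.read_csv(weather_csv, parse_dates=["date"])        # daily weather observations

    history = (production
               .merge(timecards, on="date", how="left")
               .merge(weather, on="date", how="left")
               .merge(soil, on="zone", how="left"))
    return history

# Example (hypothetical files): fuse_daily_history("production.csv", "timecards.csv", "soil.csv", "weather.csv")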
The data requirements of decision makers from job sites need to be incorporated prior
to data collection. The first step in doing that is to understand what these requirements
are, and how they can be derived or specified (Figure 1). Since our research focus has
been to support defect detection and cost estimation, we identified construction speci-
fications and estimators’ knowledge of factors impacting activity production rates as
sources of information to generate a list of measurement goals.
The approach leverages an integrated product and process model, depicting the as-
design and schedule information, and a timeframe for data collection, as input. Using
this, it first identifies activities that will be executed during that timeframe and the
corresponding measurement goals, derived based on specifications and factors affect-
ing the production rates of those activities. Next, measurement goals identified for
each activity are utilized to identify possible sources for data collection with the goal
of reducing manual collection. Sources of data include sensors (e.g., laser scanners,
equipment OBIs) and general public databases (e.g., USGS soil profiles, weather
database). With this, the approach generates a data collection plan as an output.
generate a list of measurement goals for each activity, map them to a set of sensors
and publicly-available databases that would help in collecting some of the data
needed. As a result, it would be possible to generate a data collection plan associated
with each group of activities to be executed.
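A highly simplified Python sketch of this planning step follows; the activity attributes, measurement goals and the goal-to-source mapping are placeholders chosen for illustration rather than the formalism described by the authors.

# Illustrative generation of a data collection plan for activities within a timeframe.
GOAL_TO_SOURCES = {                       # assumed mapping from measurement goals to data sources
    "dimensional_tolerance": ["laser_scanner"],
    "production_rate": ["equipment_OBI", "time_cards"],
    "soil_condition": ["USGS_soil_profiles"],
    "weather": ["weather_database"],
}

def data_collection_plan(schedule, start, end):
    """Return {activity name: {measurement goal: [data sources]}} for the given timeframe."""
    plan = {}
    for activity in schedule:
        if activity["start"] <= end and activity["end"] >= start:
            goals = activity.get("measurement_goals", [])
            plan[activity["name"]] = {g: GOAL_TO_SOURCES.get(g, ["manual_collection"]) for g in goals}
    return plan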
3.2 Data Fusion and Analysis for Creating and Using Project Histories
Once a data collection plan is generated, it can be executed at the job site to collect
the data needed. The next step is to process the data gathered from sensors and data-
bases and fuse them to create an integrated project history model that can serve as a
basis to perform analysis for defect detection and cost estimation (Figure 2). Different
components of such an approach are described below.
ments [6]. Similarly, laser scanner accuracy varies considerably based on its incidence
angle and distance from the target object [7]. While in most cases, the accuracy and
the reliability of the data were observed to be better than the manual approaches, it is
still important to have a better characterization of accuracies of sensors under differ-
ent conditions (e.g. incidence angle) when creating and analyzing project history
models. Currently, we are conducting experiments for that purpose.
Fig. 2. An approach for data fusion and analysis for creating and using project histories
The third level of fusion described in [8] is the decision level fusion, where the data
fused at the sensor and feature levels are further integrated and analyzed to achieve a
decision. We are leveraging different models, such as the as-built product/process model
and the data collection model, for decision-level fusion (Fig 2). Decision-level fusion is chal-
lenging compared to sensor and feature level fusions, since the formalisms used in
sensor and feature level fusions are well defined and can be identical across multiple
domains. However formalisms for decision-level fusion differ among domains since
they need to support different decisions [9]. As discussed in Section 3.1, different
tasks require different sets of data being collected and fused. Hence, the decision-
level fusion requires customized formalisms to be developed to enable the integration
and processing of the data to support specific decisions. In our approach, decision-
level fusion formalisms are designed to generate the views (e.g. from the estimator’s
perspective) that are helpful in supporting decisions to select a proper production rate.
These formalisms are not meant to perform any kind of predictions or support case-
based reasoning.
3.2.3 Formalisms for Data Interaction and Analysis to Support Active Defect
Detection and Cost Estimating
In this research, we have explored project history models to support defect detection
during construction and in estimating production rates of future activities. An ap-
proach implemented for active defect detection leverages the information represented
in as-design models, construction specifications, and the as-built models, generated by
processing the data collected from laser scanners. It uses the information in specifica-
tions to identify the features of the components that are of interest for defect detection
and compares the design and as-built models accordingly. When there is a deviation
between an as-design and an as-built model, it refers to the specifications to assess
whether the deviation detected exceeds the tolerances specified. If it exceeds the
tolerances, then it flags the component as a defective component [4].
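The comparison logic can be sketched in a few lines of Python; the component and feature representation, the tolerance lookup and the flat deviation check are simplifying assumptions and do not reproduce the specification-reasoning formalism of [4].

# Illustrative defect check: compare as-designed and as-built features against specified tolerances.
def detect_defects(as_designed, as_built, tolerances):
    """Return (component, feature, deviation) triples whose deviation exceeds the tolerance.

    as_designed, as_built: {component_id: {feature_name: value}}
    tolerances: {feature_name: allowed deviation from the specifications}
    """
    defective = []
    for comp_id, design_features in as_designed.items():
        built_features = as_built.get(comp_id, {})
        for feature, design_value in design_features.items():
            if feature not in built_features or feature not in tolerances:
                continue  # feature not measured or no tolerance specified
            deviation = abs(built_features[feature] - design_value)
            if deviation > tolerances[feature]:
                defective.append((comp_id, feature, deviation))
    return defective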
In supporting estimators’ decision-making, we have been focusing on identifying
and generating views from integrated project history models, so that estimators can
navigate through the model and identify the information that they need to determine
the production rates of activities in future bids. Initial interviews with several estima-
tors from two companies showed that estimators would like to be able to navigate
through production data in multiple levels (e.g., zone level, project level) and in mul-
tiple perspectives (e.g., based on a certain contextual data, such as depth of cut), and
be able to compare alternatives (e.g., comparing productions on multiple zones) using
such a model. These views will enable estimators to factually learn from what hap-
pened on a job site, and make the estimate for a similar upcoming activity based on
this learning. We are currently implementing mechanisms to generate such views for
estimators.
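A minimal sketch of such views, assuming the fused history is available as a table with zone, activity, production-rate and contextual columns (an assumed schema, not the implemented system), could look as follows.

# Illustrative multi-level views over an integrated project history (assumed columns).
import pandas as pd

def production_views(history: pd.DataFrame):
    """Aggregate production rates at zone and project level, and by a contextual factor."""
    by_zone = history.groupby(["activity", "zone"])["production_rate"].mean()
    by_project = history.groupby("activity")["production_rate"].agg(["mean", "std"])
    by_depth = history.groupby(["activity", "depth_of_cut"])["production_rate"].mean()
    return {"zone": by_zone, "project": by_project, "depth_of_cut": by_depth}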
4 Conclusions
This paper describes the need for capturing and representing construction project
histories and some issues associated with it for cost estimation and defect detection
purposes. The approach described in the paper starts with identifying some data
capture needs and creating a data collection plan for each activity to satisfy those needs.
Since several case studies demonstrated that manual data collection is inaccurate and
unreliable, the envisioned approach focuses on leveraging the data already stored in
publicly-available databases and data collection through a variety of sensors. Once the
data is captured from a variety of sensors, it should be fused to create an integrated
project model that can be analyzed in a comprehensive way. Such analyses include
defect detection and situation assessment during the execution of a project, and gen-
eration of information needed for estimators in determining the production rates of
future activities.
Acknowledgements
The projects described in this paper are funded by two grants from the National Sci-
ence Foundation, CMS #0121549 and 0448170. NSF’s support is gratefully acknowl-
edged. Any opinions, findings, conclusions or recommendations presented in this
paper are those of authors and do not necessarily reflect the views of the National
Science Foundation.
References
[1] Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., Park, K. (2006) “A Formalism
for Utilization of Sensor Systems and Integrated Project Models for Active Construction
Quality Control.” Automation in Construction, Volume 15, Issue 2, March 2006, Pages
124-138
[2] Kiziltas, S. and Akinci, B. (2005) “The Need for Prompt Schedule Update By Utilizing
Reality Capture Technologies: A Case Study.” Constr. Res. Cong., 04/2005, San Diego,
CA.
[3] Kiziltas, S., Pradhan, A., and Akinci, B. (2006) “Developing Integrated Project Histories
By Leveraging Multi-Sensor Data Fusion”, ICCCBE, June 14-16, Montreal, Canada.
[4] Frank, B., and Akinci, B. (2006) “Automated Reasoning about Construction Specifications
to Support Inspection and Quality Control” , Automation in Construction, under review.
[5] Kiritsis, D., Bufardi, A. and Xirouchakis, P., “Research issues on product lifecycle man-
agement and information tracking using smart embedded systems”, Advanced Engineering
Informatics, Vol. 17, Numbers 3-4, 2003, pages 189-202.
[6] Ergen, E., Akinci, B. and Sacks, R. “Tracking and Locating Components in a Precast Storage
Yard Utilizing Radio Frequency Identification Technology and GPS.” Automation in Con-
struction, under review.
[7] Axelson, P. (1999). “Processing of Laser Scanner Data – Algorithms and Applications,”
ISPRS Journal of Photogrammetry & Remote Sensing, 54 (1999) 138-147.
[8] Dasarathy, B. (1997). “Sensor Fusion Potential Exploitation-Innovative Architectures and
Illustrative Applications”, IEEE Proceedings, 85(1).
[9] Hall, D. L. and Llinas, J. (2001). “Handbook of Multisensor Data Fusion,” 1st Ed., CRC.
Case Studies of Intelligent Context-Aware Services
Delivery in AEC/FM
1 Introduction
The potential of mobile Information Technology (IT) applications to support the in-
formation needs of mobile AEC/FM workers has long been understood. To exploit the
potential of emerging mobile communication technologies, many recent research
projects have focused on the application of these technologies in the AEC/FM sector.
However, from a methodological viewpoint, a key limitation of the existing mobile IT
deployments in the construction sector is that they see support for mobile workers as a
"simple" delivery of the information (such as project data, plans, technical drawings,
audit-lists, etc.). Information delivery is mainly static and is not able to take into
account the worker’s changing context and the dynamic project conditions. Many
existing mobile IT applications in use within the construction industry rely on asyn-
chronous methods of communication (such as downloading field data from mobile
devices onto desktop computers towards end of the shift and then transferring this
information into an integrated project information repository) with no consideration
of user-context. Even though in some projects real time connectivity needs of mobile
workers are being addressed (using wireless technologies such as 3G, GPRS, WiFi),
the focus is on delivering static information to users such as project plans and docu-
ments or access to project extranets. Similarly, most of the commercially available
mobile applications for the construction industry are designed primarily to deliver
pre-programmed functionality without any consideration of the user context. This
often leads to a contrast between what an application can deliver and the actual data
and information requirements of a mobile worker. In contrast to the existing static
information delivery approaches, work in the AEC/FM sector, by its very nature, is
dynamic. For instance, due to the unpredictable nature of the activities on construc-
tion projects, construction project plans, drawings, schedules, budgets,
etc. often have to be amended. Also, the context of the mobile workers operating on
site is constantly changing (such as location, the task they are currently involved in, con-
struction site situations and resulting hazards, etc.), and so do their information re-
quirements. Thus, mobile workers require that supporting systems rely on intelligent
methods of human-computer interaction and deliver the right information at the right
time on an as-needed basis. Such a capability is made possible by a better understanding of
the user-context.
The paper is organised as follows. Section 2 introduces the concept of context-
aware computing and reviews the state of the art. Section 3 presents the service deliv-
ery architecture which facilitates context capture, context brokerage and integration
with back-end systems using a Web Services model. Section 4 presents case-studies
related to the deployment of context-aware applications. Conclusions are drawn about
the possible future impact of context-aware service delivery technologies in the
AEC/FM sector.
3.1 Context-Capture
This tier helps in context capture and also provides users access to the system. The
context is drawn from different sources, including:
• Current location, through a wireless local area network-based positioning sys-
tem [17]. A client application running on a user’s mobile device or a tag sends
constant position updates to the positioning engine over a WLAN link. This al-
lows real time position determination of users and equipment. It is also possible
to determine a user’s location via telecom network-based triangulation;
• User Device Type (e.g. PDA, TabletPC, PocketPC, SmartPhone, etc.), via W3C
CC/PP standards [18]. These standards allow for the description of capabilities
and preferences associated with mobile devices. This ensures that data is deliv-
ered according to the worker’s device type;
• User identity (e.g. Foreman, Electrician, Site Supervisor, etc.), via the unique IP
address of their mobile device. User profile is associated with user identity;
• User’s current activity (e.g. inspecting work, picking up skips, roof wiring, etc.),
via integration with project management/task allocation application;
• Visual context, via a CCTV-over-IP camera;
• Time via computer clock.
The use of IP-based technologies enables handover and seamless communication
between different wireless communication networks such as wireless wide area net-
works, local area networks and personal area networks. Also, both push and pull
modes of interaction are supported. Thus, information can be actively pushed to mo-
bile workers (through user-configured triggers), or a worker can pull information
through ad-hoc requests, on an as-needed basis. As application content, logic and data
processing reside on the wired network, the mobile client is charged with minimal
memory and processor consuming tasks.
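To illustrate the kind of record such a tier might assemble from these sources, the following Python sketch defines a simple context structure and a service-matching helper; the field names and the rule-based mapping are our assumptions rather than interfaces of the described architecture.

# Illustrative context record assembled by the context-capture tier (assumed fields).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Tuple

@dataclass
class UserContext:
    user_id: str                   # identity, e.g. derived from the mobile device's IP address
    role: str                      # e.g. "Foreman", "Electrician", "Site Supervisor"
    device_type: str               # from the CC/PP device capability description
    location: Tuple[float, float]  # coordinates from the WLAN positioning engine
    current_task: str              # from the project management / task allocation application
    timestamp: datetime = field(default_factory=datetime.now)

def matching_services(context, service_rules):
    """Return the services whose rule matches the captured context (push or pull delivery)."""
    return [service for rule, service in service_rules if rule(context)]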
to log-in. On a successful log-in, the site-server pushed the worker’s task list and asso-
ciated method statement (as assigned by the site supervisor using an administration
application) based on the worker’s profile (Fig 2 (a & b)). Completion of tasks was
recorded in real-time and an audit trail was maintained. Also, application and service
provisioning to site workers was linked to their context (i.e. location, profile and as-
signed task), e.g. based on changing location, relevant drawings and data were made
available (Fig 3). The context broker played the key role of capturing the user context
and mapping the user context to project data, at regular time intervals. Real-time loca-
tion tracking of site workers and expensive equipment was also used to achieve health
and safety and security objectives. Also, WLAN tags were used to store important
information about a bulk delivery item. XML schema was used to describe the tag
information structure. As the delivery arrives at the construction site, an on-site wire-
less network scans the tag attached to the bulk delivery and sends an instant message to
the site manager’s mobile device, prompting him/her to confirm the delivery receipt.
Fig. 2. Profile based task allocation (a & b) and inventory logistics support (c)
The site manager browses through the delivery contents (Fig 2(c)) and records any
discrepancies. Once the delivery receipt is confirmed, data is synchronized with the
site server, resulting in a real-time update of the inventory database.
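As an example of the sort of tag information structure that might be described in XML, the short Python sketch below assembles a bulk delivery record; all element and attribute names are hypothetical, since the case study's actual schema is not reproduced here.

# Illustrative XML description of a bulk delivery item, as might be stored on a WLAN tag.
import xml.etree.ElementTree as ET

def build_delivery_tag(delivery_id, supplier, items):
    root = ET.Element("bulkDelivery", id=delivery_id)
    ET.SubElement(root, "supplier").text = supplier
    contents = ET.SubElement(root, "contents")
    for name, quantity in items:
        ET.SubElement(contents, "item", name=name, quantity=str(quantity))
    return ET.tostring(root, encoding="unicode")

# Example: build_delivery_tag("D-1042", "ACME Supplies", [("rebar", 200), ("cement bags", 50)])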
5 Conclusions
This paper has presented an architecture for context-aware services delivery and three
implementation case-studies. Awareness of the user-context has the potential to cause
a paradigm shift in the AEC/FM sector, by allowing mobile workers access to context-
specific information and services on an as-needed basis. Current approaches to sup-
porting AEC/FM workers often involve the complexities of using a search engine,
moving between files or executing complicated downloads. In comparison, context-
awareness makes human-computer interaction more intuitive, thereby reducing the
need for training. Also, new application scenarios are becoming viable thanks to the ongoing
miniaturisation, developments in sensor networking, the increase in computational
power, and the fact that broadband is becoming technically and financially feasible.
However, the case studies have demonstrated that context-aware services delivery in
the AEC/FM sector needs to satisfy the constraints introduced by technological com-
plexity, cost, user needs and interoperability. Also there is a need for more successful
industrial case studies; these will be explored as part of further field trials.
Acknowledgements. This project was funded by EPSRC and Fanest Business Intelli-
gence ltd.
References
1. Burrell, J. & Gay, K. (2001). “Collectively defining context in a mobile, networked com-
puting environment,” CHI 2001 Extended abstracts, May 2001.
2. Pashtan, A. (2005). Mobile Web Services. Cambridge University Press.
3. Kortuem, G., Bauer, M., Segall, Z (1999) “NETMAN: the design of a collaborative wear-
able computer system”, MONET, 4(1), pp. 49-58
4. Fleck, M.F, Kindberg, T, Brien-Strain, E.O, Rajani, R and Spasojevic, M (2002) "From in-
forming to remembering: Ubiquitous systems in interactive museums". IEEE Pervasive
Computing 1: pp.13-21
5. Marmasse, N., Schmandt, C. (2002) “A User-Centered Location Model. Personal and
Ubiquitous Computing”, Vol:5, No:6, pp:318–321
6. Aittola, M., Ryhänen, T., Ojala, T. (2003), “SmartLibrary - Location-aware mobile library
service”, Proc. Fifth International Symposium on Human Computer Interaction with Mo-
bile Devices and Services, Udine, Italy, pp.411-416
7. Chen, H., Finin,T & Joshi, A. (2004), “Semantic Web in the Context Broker Architecture”,
IEEE Conference on Pervasive Computing and Communications, Orlando, March 2004,
IEEE Press,2004, pp. 277–286
8. Coen, M.H. (1999). “The Future Of Human-Computer Interaction or How I Learned to
Stop Worrying and Love my Intelligent Room”, IEEE Intelligent Systems 14(2): pp. 8–19
9. Laukkanen, M., Helin, H., Laamanen, H. (2002) “Tourists on the move”, In Cooperative
Information Agents VI, 6th Intl Workshop, CIA 2002, Madrid, Spain, Vol 2446 of Lec-
ture Notes in Computer Science, pages 36–50. Springer
10. Fischmeister, S., Menkhaus, G., Pree, W. (2002), “MUSA-Shadows: Concepts, Implemen-
tation, and Sample Applications: A Location-Based Service Supporting Multiple Devices”,
In Proc. Fortieth International Conference on Technology of Object-Oriented Languages
and Systems, Sydney, Australia. 10. Noble, J. and Potter, J., Eds., ACS. pp. 71-79
11. Davies,N., Cheverst, K., Mitchell, K & Friday, A. (1999), “Caches in the Air: Disseminat-
ing Information in the Guide System”. Proc. of the 2nd IEEE Workshop on Mobile Com-
puting Systems and Applications, Louisiana, USA, February 1999, IEEE Press, pp. 11-19
12. Goker, A., Cumming, H., & Myrhaug, H. I. (2004). Content Retrieval and Mobile Users:
An Outdoor Investigation of an Ambient Travel Guide. Mobile HCI 2004 Conference, 2nd
intl Workshop on Mobile and Ubiquitous Information Access , Glasgow, UK.
13. Lonsdale P., Barber C., Sharples M., Arvantis T. (2003) "A context-awareness architecture
for facilitating mobile learning". In Proceedings of MLEARN 2003, London, UK
14. Griswold,W.G., Boyer,R., Brown,S.W., Truong, T.M., Bhasket, E., Jay,R., Shapiro, R.B
(2002) “ActiveCampus: Sustaining Educational Communities through Mobile Technol-
ogy”, Uni. of California, Dept. of Computer Science and Engineering, Technical Report
15. Arnstein, L., Borriello,G., Consolvo,S., Hung, C & Su, J. (2002).“Labscape: A Smart En-
vironment for the Laboratory”, IEEE Pervasive Computing, Vol. 1, No. 3, pp. 13-21
16. Aziz, Z., Anumba, C.J., Ruikar, D., Carrillo., P.M., Bouchlaghem.,D.N. (2005), “Context-
aware information delivery for on-Site construction operations,” 22nd CIB-W78 Conf on
IT in Construction, Germany, CBI Publication No: 304
17. Ekahau (2006) Ekahau Positioning Engine [Online] http://www.ekahau.com
18. CC/PP (2003): http://www.w3.org/TR/2003/PR-CCPP-struct-vocab-20031015/
19. RDF (2005) [Online] http://www.w3.org/RDF/
Bio-inspiration: Learning Creative Design Principia
1 Introduction
Intelligent computing is usually understood to utilize heuristics and stochastic
algorithms in addition to knowledge in the form of deterministic rules and procedures.
As a result, it may produce outcomes usually associated only with human/intelligent
activities in terms of novelty and unpredictability. In particular, intelligent computing
may lead to an emergence of unexpected patterns and design concepts, which are
highly desirable, potentially patentable, and may drive progress in engineering.
Unfortunately, novelty of results reflects the extent and nature of knowledge used.
Therefore, if the goal is exploring novelties, the key issue in intelligent computing is
not the computing algorithm but acquiring proper knowledge. In this context, our
paper on bio-inspiration is directly related to intelligent computing. Bio-inspiration
may be considered as a potentially attractive source of design-relevant knowledge.
Traditional conceptual design is typically deductive. In most cases, the approach is
to select a design from a variety of known design concepts and, at most, slightly
modify it. No unknown or new design hypotheses/concepts are generated and therefore
no abduction takes place. According to Altshuller, such design paradigms are called
“selection” and “modification,” respectively [1, 2, 3, 4]. Gero [5] calls such paradigms
“exploitation,” because he views the designer as probing a relatively small, static, well-
known, and domain-specific design representation space. Exploitation is relatively
well understood and design researchers work on various methods and exploitation
tools, with recent efforts focusing on evolutionary design [6].
We live in a rapidly changing world – one that constantly generates new challenges
and demands for engineering systems that cannot be easily met by reusing or
modifying known design concepts. This creates a need for novel design concepts,
which are unknown yet feasible and potentially patentable. Such new and unknown
concepts can be generated only abductively using our domain-specific knowledge as
well as knowledge from other domains. Altshuller [1] refers to such design processes
as “innovation,” “invention,” and “discovery,” depending on the source of the outside
knowledge. Gero [5] refers to such design paradigm as “exploration,” because
knowledge from outside a given domain is utilized.
Exploration represents the frontier of design research. Little is known about how
exploration might be achieved and how the entire process could be formalized and
implemented in various computer tools. Existing computer tools for exploration,
including, for example, IdeaFisher (IdeaFisher Systems, Inc.), MindLink (MindLink
Software Corporation), and WorkBench (Ideation International) are based on
Brainstorming [7], Synectics [8], and the Theory of Solving Inventive Problems (TRIZ)
[1], respectively. They all provide high-level abstract knowledge for designers
seeking inspiration from outside their own domains. Unfortunately, these tools require
extensive training, are not user friendly, and, worst of all, their output requires
difficult interpretation by domain experts.
Exploration can be interpreted in computational terms as an expansion of the
design representation space by acquiring knowledge from outside the problem domain
and conducting a search in this expanded space. The key to exploration is knowledge
acquisition. It can be conducted automatically using machine learning or manually by
knowledge engineers working with domain experts. Machine learning in design
knowledge acquisition is promising, but, unfortunately, the last fifteen years of
research have produced limited results. Research clearly demonstrates that the use of
machine learning to acquire design rules is feasible in the case of specific, well-
understood and relatively small design domains [9]. Unfortunately, the practicality of
using machine learning to acquire more abstract design rules is still not known.
This paper takes a different approach to knowledge acquisition. Our focus is on
human design knowledge acquisition. In particular, we are interested in acquiring
abstract design rules from various domains. In contrast to TRIZ, briefly discussed in
Section 2, we want to acquire knowledge from outside the field of engineering,
specifically from the natural sciences fields and especially biology. Knowledge
acquisition is understood here as the process of learning abstract design rules, or
creative design principia, which can help designers develop novel design concepts.
These rules are not deterministic and are not always right. They are heuristics,
representing potentially useful knowledge, but without any guarantee of their actual
usefulness, or even of their relevance.
Heuristics in conceptual design can be considered as a source of inspiration from
outside a considered design domain. Therefore, inspiration can be described as
knowledge from outside the problem domain, in the form of a collection of weak
decision rules or heuristics. This knowledge is needed and potentially sufficient to
produce novel designs. For example, inspiration from the domain of pre-stressed
concrete arch bridges could be used in the development of novel design concepts for
large span arch steel bridges. We are particularly interested in heuristics related to the
evolution of both human and animal body armor. Such heuristics are called “patterns
of evolution” in the Directed Evolution method [10, 11]. They provide superior
understanding of the evolution of engineering systems over long time periods, and are
most valuable to designers working to develop their products in a specific direction
through the use of novel design concepts.
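As a purely illustrative sketch (ours, not part of the Directed Evolution method; the rule texts and function names are assumptions), such heuristics can be treated as data: weak, non-deterministic rules that are tentatively applied to a known concept in order to expand the set of candidate concepts, each of which still requires evaluation by a domain expert.

import random

# Illustrative creative design principia acquired from an outside domain;
# each is a weak rule with no guarantee of usefulness or even relevance.
HEURISTICS = [
    "differentiate thickness to reduce deformations",
    "introduce ribs to increase local stiffness",
    "protect only vital parts while maximizing mobility",
]

def expand_concepts(known_concept, heuristics, how_many=2):
    """Tentatively apply a few weak rules to a known concept,
    producing candidate variants for expert evaluation."""
    chosen = random.sample(heuristics, min(how_many, len(heuristics)))
    return ["%s + (%s)" % (known_concept, rule) for rule in chosen]

for candidate in expand_concepts("single-piece steel breastplate", HEURISTICS):
    print(candidate)

The point of the sketch is only that the heuristics themselves, not the search algorithm, carry the inspiration; the same loop with a richer rule set explores a larger design representation space.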
Bio-inspiration looks to natural environmental processes for inspiration in
engineering design. Bio-inspiration in design can be used on several levels, including
nano, micro, and macro levels. The nano-level deals with individual atoms in the
system being designed, the micro-level deals with the individual system’s
components, and the macro-level deals with an entire engineering system. Pioneering
research at MIT exploring bio-inspiration on the nano-level focuses on structural and
functional design from mollusk shells, called mother-of-pearl, to potentially improve
human body armor [12]. This research may have a significant impact on the
development of new types of body armor. Evaluation of its results and their
implementation, however, may still be many years away. Therefore, our research focuses
on bio-inspiration on both the micro- and macro-levels. In this case, the results may
be used within a much shorter time frame and may also provide additional inspiration
for research on the nano-level.
1. Stages of evolution
An engineering system evolves through periods of infancy, growth, maturity, and
decline. Its major characteristic versus time can be plotted as an S-curve. Example:
evolution of airplanes over the last hundred years versus their speed.
2. Resources utilization and increased ideality
An engineering system evolves in such a direction as to optimize its utilization of
resources and increase its degree of ideality, which is understood [10] as the ratio of
all useful effects to all harmful effects. Example: evolution of I-beams.
3. Uneven development of system elements
A system as a whole improves continually, but its individual subsystems improve
unevenly and at their own rates. Example: evolution of ocean tankers, in which
evolution of the propulsion system is not matched by evolution of the braking system.
4. Increased system dynamics
As an engineering system evolves, it becomes more dynamic and parts originally
fixed become moveable or adjustable. Example: evolution of landing gear in
airplanes.
5. Increased system controllability
As an engineering system evolves, it becomes more controllable. Example: evolution
of heating and cooling systems.
6. Increased complexity followed by simplification
As an engineering system evolves, periods of growing complexity are followed by
periods of simplification. Example: electronic watches became more and more
complex, with many functions, before a process of simplification began, leading to
mechanical-like watches with only one or two functions. These simpler watches are
in turn becoming more complex as new functions are added continually.
7. Matching and mismatching of system elements
As an engineering system evolves, its individual elements are initially unmatched
(randomly put together), then matched (coordinated), then mismatched (separate s-
curves), and finally a dynamic process of matching-mismatching occurs. Example:
evolution of car suspension from a rigid axis to dynamically adaptive pneumatic
suspension.
8. Evolution to the micro-level and increased use of fields
An engineering system evolves from macro to micro level and expands to use more
fields. Examples: (1) evolution to the micro-level: a computer based on vacuum tubes
evolves into one based on integrated circuits; (2) increased use of fields: a mechanical
system uses an electric controller, next an electromagnetic controller, etc.
Fig. 1.
Visual inspiration has long been the subject of scholarly attention, particularly in the
context of utilizing forms found in nature in design. Many interesting examples of
various natural forms, potentially design-inspiring, are provided in [14, 18].
Unfortunately, visual inspiration is only skin-deep. A human designer must fully
understand the functions of the various animal organs and be able to copy these
organs in a meaningful way, that is, in such a way that their shape and
essential function are preserved – for example, the protection of internal organs – while
other secondary features may be eliminated. There is always a danger that the final
product will preserve primarily the secondary features of the original animal,
weakening the effectiveness of visual inspiration.
“Conceptual inspiration” can be described as the use of knowledge in the form of
heuristics from outside the design domain in order to develop design concepts. It is
potentially more applicable to engineering design than visual inspiration. Conceptual
inspiration is more universal since it provides not visuals but knowledge representing
our understanding of an outside domain applicable to the design domain. In this case,
a designer uses principia found in nature for design purposes. Various examples of
such principia are discussed in Section 4.2. Design principia can also be interpreted
as design rules, or design patterns [19]. In this context, knowledge acquired from
nature may be formally incorporated in model-based analogy design [20].
1 Beetle photo: http://home.primus.com.au/kellykk/010jrbtl.JPG. Image of Japanese Samurai –
Imperial Valley College, located at Pioneers Park Museum, 373 East Aten Road (Exit I-8 at
Hwy 111 North to Aten Road), Imperial, CA 92251, phone: (760) 352-1165,
http://www.imperial.cc.ca.us/Pioneers/SAMURAI.JPG
While there are countless styles of ancient post-neolithic armor, there are seven major
types of body armor, as listed in Table 1. This table summarizes historical armor
types and shows a natural analogue for each. The table reveals parallel spectra of historic
and natural body armors: the left column refers to historic armor, while the right one
refers to natural armor. The table demonstrates that all types of historic armor have their
analogues in nature. The opposite statement, however, is most likely not correct, and
this asymmetry may represent great promise for the development of a new generation of human
body armor conceptually inspired by nature, in which various evolution patterns from
nature are utilized in an engineering context. In fact, artificial and natural body armors
are usually considered separately and there is very little commonality in our
understanding of both domains. The development of a unifying understanding might
significantly improve our ability to design modern novel body armor that satisfies
ever-growing requirements.
This section focuses on European metal body armor, primarily plate body armor. Its
evolution is compared with natural processes and discussed in the context of the
tradeoff between protection and mobility. Plate armor consists of a solid metal plate,
or several plates covering most of the body, with articulations only at joints. This
Bio-inspiration: Learning Creative Design Principia 39
armor is heavy, prohibits fast movement, and although it can provide extensive
protection, it has many drawbacks. A possible line of evolution for plate armor is
shown in Fig. 2.
Cuirbolli (leather boiled in oil or wax to harden it, usually molded to a torso shape) –
countless animals with thick hides or thin, flexible exoskeletons
The examples provided are mostly of Austrian, German, and Polish origin.
However, they are representative of European trends. In medieval Europe, the
development and manufacturing of body armor was mostly concentrated at a limited
number of centers in Germany (Cologne and Nuremberg), Austria (Innsbruck), Italy
(Milan), and Poland (Cracow) [31]. There was continual exchange of information
among “body armor builders” who traveled and worked in various countries.
2 All nature photos were taken by Joanna Cornell and all pictures of historic body armor by
Tomasz Arciszewski. Fig. 2.7 is taken from Woosnam-Savage, R.C. and Hill, A., "Body Armor,"
Merlin Publications, 2000, and Fig. 2.8 from http://us.st11.yimg.com/store1.yimg.com/I/
security2020_1883_28278127
structure) is capable of better energy absorption than a flat surface (a plate
structure). In nature, shells are usually curved. Again, we want to stress the point that
those shells found in nature were not “inspired” by each other, but simply arrived at
those designs because they worked better than any other. Natural selection is excellent
at finding the “most perfect” (or “least imperfect”) solution to any problem.
Unfortunately, it is slow, whereas our exponentially increasing computational power can
evaluate the dead ends and inefficient paths at a much faster rate. The deformations of a
shell structure are much smaller than those of a flat surface under the same impact force,
significantly reducing internal injuries. In addition, forging hardens metal increasing its
ability to withstand blows by sharp penetrating objects like spears or arrows. The
evolution from a flat surface to a curved shell represents the use of the TRIZ inventive
principle of "spheroidality" (which recommends, among other things, replacing a flat
surface with a curved one) [32]. There are also two specific armor design inventive
principia (heuristics) that can be acquired from the described transition:
1. Differentiate thickness to reduce deformations
2. Use forging to reduce penetration
Both principia suggest ways to improve the behavioral characteristics of armor on a
global level (the entire armor) or a local level (only the central part of the armor). The
first relates to the entire piece of armor and provides a heuristic at the global level of the
entire armor considered, while the second relates only to the central part of the armor and
provides a heuristic that is valid only locally. The two are, however, complementary.
The third stage of evolution is illustrated in Fig. 2.3 with Polish 13th century one-
piece body armor, which provides both front and back protection and even some side
protection. The breastplate is so shaped that a rib is formed, which acts as a stiffener
from the structural point of view. In this case, three inventive principia can be
acquired:
3. Increase volume of a given piece of armor to absorb more energy
4. Increase the spatial nature of armor to improve its global stiffness
5. Introduce ribs to increase local stiffness wherever necessary
The next transformation resulted in a multi-piece armor in which front and back plates
were separated (Fig. 2.4 - 14th century German armor). Also, additional multi-plate
armor for upper arms and legs emerged, which allowed some degree of mobility. In
this case, a well-known TRIZ inventive principle of segmentation is used (“divide an
object into independent parts or increase the degree of object’s segmentation”). This
type of armor gradually evolved into full armor, shown in Fig. 2.5. (15th century
Austrian armor). In this case, a simple inventive principle can be acquired:
6. To increase protection, expand armor
Fig. 2.6 shows Polish light cavalry (“husaria”) armor from the 17th Century. The
armor is significantly reduced in size and complexity and provides protection only for
the vital parts of the body. This type of armor is considered a successful compromise
between protection and mobility and was in use for several centuries. Its development
was driven by a simple inventive principle:
7. Protect only battlefield vital body parts while providing maximum mobility
Finally, Fig. 2.7 and 2.8 provide examples of 20th century body armors. The first was
developed in Italy during the First World War, while the second is a modern ceramic
breastplate, commercially available.
The entire identified line of evolution can be interpreted from a perspective of the
Directed Evolution Method and its nine evolution patterns. In this way, better insight
into the conceptual inspiration provided by the described evolution line can be
acquired. The analysis provided below also demonstrates that the engineering evolution
patterns hold for the considered line of body armor evolution and provide additional
insight into it.
1. S-curve Pattern
When battlefield survivability is considered, four periods of evolution can be
distinguished. The first one, called “infancy,” and represented by Fig. 2.1. and 2.2.,
ends approximately in the 10th century. During this period, first attempts were made in
Europe to develop effective breastplates. Next, during the period of growth
(approximately 10th – 14th century), rapid evolution of body armor can be observed,
including the emergence of many new concepts and growing sophistication (see Fig.
2.3 and 2.4). The 16th century can be considered a maturity period, when progress
stagnated and only various quantitative refinements occurred. Finally, after the 16th
century the period of decline begins, when body armor was gradually reduced in size
and complexity and its decorative function became progressively more important. A new
S-curve begins in the 20th century with the emergence of body armors during the First
World War (Fig. 2.7) and of modern ceramic body armors (Fig. 2.8), developed only recently.
2. Resources Utilization and Increased Ideality
Evolution of body armor is driven both by rational and irrational factors. When
relatively short time periods are considered (20-30 years), the resources are often
utilized in a suboptimal way because of tradition, autocratic inertia, culture, etc.
However, when much longer time periods are considered (a century or two), the
resources are usually utilized in an optimal way.
Ideality is understood as a ratio between the useful effects of body armor
(survivability) and its harmful effects (immobility). New types of armor were
developed, tested, and used over periods of time and survived the evolutionary
process only because they had increased ideality with respect to their predecessors.
Obviously, types of armor with decreased ideality were soon eliminated and we may
not even know about them.
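Written compactly (our restatement of the definition in [10]; the symbols $U_i$ for individual useful effects and $H_j$ for individual harmful effects are ours), ideality is

$I = \dfrac{\sum_i U_i}{\sum_j H_j}, \qquad I_{\mathrm{armor}} \approx \dfrac{\text{survivability}}{\text{immobility}},$

so an evolutionary step is retained only if it increases I relative to its predecessor.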
3. Uneven Development of System Elements
Unfortunately, this pattern is also valid in the case of body armor. The best example
is a comparison between the development of torso protection and eye protection.
Breastplates evolved and improved rapidly in the early medieval centuries, providing
excellent front protection, whereas the evolution of helmets did not, until recently,
provide sufficient eye protection.
4. Increased System Dynamics
In the case of body armor, increased system dynamics means increased mobility.
This principle has been a driving force during the evolution of armor. A good
specific example is the transition from a single breastplate (Fig. 2.5) to 14
interconnected narrow plates (Fig. 2.6). Such a configuration allows movement of
individual plates and increases warrior mobility.
5. Increased System Controllability
Development of multi-piece body armor is a good example of this evolution pattern.
Such armor allows precise positional control of the individual pieces with respect to
adjacent pieces and allows their adjustment depending on battle conditions, mood of
the knight, his changing weight as he ages, etc.
6. Increased Complexity followed by Simplifications
During stages 1 through 5 (Fig. 2.1 – 2.5) the complexity of armor grew, while stages
6 through 8 show subsequent simplifications (Fig. 2.6 – 2.8). The most recent
ceramic body armor is multi-piece armor (not shown in Fig. 2), indicating that a new
cycle has just begun. A single piece of ceramic body armor (Fig. 2.8) represents the
beginning of this cycle.
7. Matching and Mismatching of System Elements
Body armor can be considered as a subsystem of a body protection system. At first,
breastplates were simply put on the top of ordinary clothes (Fig. 2.1). Next, a
breastplate was matched to the maille, but its evolution followed a separate S-curve
and mismatching could be observed. Finally, the entire body protection system was
considered as a single system, and its subsystems, body armor and clothes, were
developed in a coordinated way (Fig. 2.3 – Fig. 2.6).
8. Transition to the Micro-level and Increased use of Fields
The best example of transition to the micro level in the development of body armor is
present research on armor materials in the context of nanotechnology. The gradually
increased use of fields (temperature and stress fields) can also be observed. For
example, in medieval times the initial cold hammering of armor plates evolved into
various forging methods in which not only a stress field (the result of hammering) but
also heating and cooling (a temperature field) were used. In modern days, the production of
ceramic plates requires the sophisticated use of a stress field during compression of the
ceramic material and of a temperature field applied to the plates in an oven during the
final firing process.
9. Transition to Decreased Human Involvement
One can interpret this to mean that body armor should gradually become easier to put
on and adjust during use. This is most likely the case. During the last millennium,
however, in Europe the labor costs did not affect the evolution of armor. On the
contrary, during this period a culture of knighthood emerged in which complex
armor-donning rituals held important social and psychological significance. A knight
was constantly surrounded by many servants, whose official occupation was to take
care of his armor. Their unofficial purpose was to serve as symbols of his social
position and power. There was no strong push to reduce the number of servants or
minimize the effort required to use body armor. The most recent experience with
body armor in Iraq indicates that soldiers want effective, light, and easy-to-put-on
armor, validating the decreased human involvement principle.
The above interpretation may also be applied to natural body armor, as discussed in
the next section.
The extent to which biological evolution (BE) and engineering evolution (EE) are
directly comparable is still a research question. In BE, change proceeds as variations
on a theme and must progress through incipient stages before a new functional
structure is complete. Essentially, early mammals (or their genes) couldn’t just look at
a fly and then decide to grow wings and become bats. There had to be intermediate
stages of design, which were inadequate for flight of any kind. EE can take leaps --
humans are capable of inventing armor on a different time scale than similar armor
could evolve in a natural setting. It is the human brain -- human intelligence and
creativity -- that distinguishes BE from EE. EE is not necessarily bound by the
constraints of small baby steps. Also, living organisms evolve over long time periods in
dynamic environments, while a designer evolves his/her designs over a short time
period, operating in the closed world of his/her static body of knowledge representing
the state of the art at the time of designing, as discussed in Section 3. For all these
reasons, the chapter highlights the use of principia from natural evolution to stimulate
and accelerate exploration of a design space. The ultimate goal is to find novel design
concepts within a limited domain, which satisfy the requirements and constraints of a
given engineering problem.
The parallels and principia that exist in the relationship between human armor and
animal armor are particularly interesting because the evolution of both is driven by
the same basic force: maximization of survivability. Poorly designed human armor
results in higher mortality rates, and poorly evolved natural armor does the same and
can even lead to extinction of a species. Although many other factors come into play
when designing armor – for example, economics - increased survival is the basic
driving force in the evolution of both human and animal armor.
In the natural world, a wide range of different species use plate armor for
protection, with adaptations for movement. As discussed in Section 2, there are two
contradictory requirements for armor: maximize body protection and maximize
mobility. Interestingly, in nature separate lines of evolution can be distinguished in
which the focus is only on a single requirement.
Fig. 3. (Figure labels: maximization of protection; compromise; maximization of mobility)
When considering the Gopher Turtle, Figure 3, it is evident that its body gains
almost full protection from its plated shell. There is additional armoring along its legs.
The fragile components of its body are protected underneath and on top with a thick
plate. Humans have imitated this natural armor with increasing creativity. Early armor
(see Fig. 2.4) weighed more than armor shown in Fig. 2.6 and impeded human
movement. The Romans even imitated the function of the turtle shell with a military
maneuver called the Tortoise or Turtle, in which soldiers marched in a rectangular
formation with those at the head holding their shields in front, those on the side
holding their shields to the side, and soldiers in the middle holding their shields over
their heads. This created a box or turtle shell with all the men protected within. The
analysis of body armor evolution in turtles results in several creative design
principia/heuristics provided below:
1. Maximize size and volume of body armor
2. Create some smooth surfaces
3. Create multilayer body armor
4. Introduce shock absorbing layers
Over time, humans lightened armor and added more articulations. There are some
evolutionary parallels that can be made and some heuristics learned. In this case, they
can be formulated as:
5. Minimize weight
6. Maximize articulation
Although turtle shells have been comparatively stable morphologically for 200
million years, sustaining the popular conception of turtles as "living fossils,” the
turtle’s skull, neck, and other structures have evolved diverse and complex
specializations [33]. One interesting example is the Kayentachelys, the earliest
known turtle to exhibit a shell that has all the features usually associated with an
aquatic habitat. These include sharp, tapered edges along the low-domed shell, the
absence of limb armor and coarse sculpturing on the shell [33]. Turtles have been
around since the Mesozoic. Their basic body plan has served them well. However,
not all turtles are heavily armored. Leatherbacks and softshells have, secondarily, lost
their heavy shells. Apparently, these animals found such protection unnecessary.
Rove beetles, too, are relatively lightly-armored against fast-moving predators, often
ants. Apparently, speed is a better defense against outraged ants than all the armor in
the world. Softshells and rove beetles are both examples of the flexibility of
evolution, reverting back to speed instead of protection. Note that, in both cases, the
outcome was not a return to the original (pre-turtle or pre-beetle) condition, but a new
version that achieved the same result.
Fig. 4.
Although a heavy-shelled turtle can successfully survive with its limitations, some
organisms evolved by lightening their loads. An example is the three-spine
stickleback, Gasterosteus aculeatus, Fig. 4. It is a widely studied fish featured in
thousands of scientific papers [34]. It has three life-history modes: fully marine,
resident freshwater, and anadromous (entering freshwater only to breed). Freshwater
populations are theorized to have independently evolved from marine and
anadromous ones [35, 36, 37]. Several of its marine characteristics changed
repeatedly in the freshwater environment. For example, many aspects of extensive
bony armor found in marine fish were reduced [37, 38, 39]. Marine sticklebacks are
built for battle with prominent spines sticking out behind their lower fins and as many
3 Photo of the three-spined stickleback: Barrow, 2005, Aquarium Project, http://web.ukonline.co.uk/
aquarium/pages/threespinestickleback.html. Due to copyright issues, we are unable to include
photos that show the armored fish in salt water and the lack of armoring in freshwater, but the
images are available online.
as 35 plates covering their bodies--presumably to fend off predators [34]. But once
the sticklebacks evolved to live in freshwater habitats, their spines and plates were
reduced or disappeared. There is an advantage to losing the armor [39]. Armoring and
spines reduce speed. Fish living in lakes need to be faster because they typically have
to hide from predators more often and there is an advantage in being able to dart into
a hiding place quickly. Also, since fresh water lacks calcium reserves of salt water,
bony armor could be too costly to make. Sticklebacks are unusual because a
population can lose its armoring in just a few generations. It is this high rate of
evolution that makes the stickleback so popular with biologists, as it is rare that changes
in nature occur within such a quick time frame. The analysis yields a simple heuristic:
7. When operating in various environments, develop a flexible system, a body suit
Unlike turtles, armadillos can move quickly. Armadillos achieve a balance between
armoring and mobility. The armadillo, considered to be an ancient and primitive
species, is one of the few living remnants of the order Xenarthra. It is covered with
an armor-like shell from head to toe, except for its underbelly, which is basically a
thick skin covered with coarse hair [39] (Fig. 5).
Fig. 5.
The carapace (shell) is divided into three sections – a scapular shield, a pelvic
shield, and a series of bands around the mid-section [40]. This structure consists of
bony scutes covered with thin keratinous (horny) plates. The scutes cover most of the
animal’s dorsal surface. They are connected by bands of flexible skin behind the head,
and, in most species, at intervals across the back as well. The belly is soft and
unprotected by bone, although some species are able to curl into a ball. The limbs
have irregular horny plates at least partially covering their surfaces, which may also
be hairy. The top of the head is always covered by a shield of keratin-covered scutes,
and the tail is covered by bony rings [41].
Limited information is available about the evolution of the armadillo. Its closest
relatives are sloths and anteaters, which also belong to the order Xenarthra. Both
relatives lack armoring. The order Xenarthra first arose around fifty million years ago
[42]. Armadillos from 10,000 years ago were much bigger, so evolution decreased
their size. One theory suggests that the giant armadillos became extinct due to an
increase in large canine and feline predator migrations [43]. It could be theorized that
a smaller size helped armadillo survival by lightening the weight of the body. In
general, human-made armor has grown lighter in weight over time, as long as
protection was not significantly sacrificed. The advantage of the lorica segmentata
design over traditional plate armor is that movement is much easier due to the
increased number of moveable parts.
Another unique example of balance between mobility and armoring is the pangolin
[44]. A pangolin is covered with large, flat, imbricated, horny scales and it resembles
the New World armadillo in terms of its feeding habits and use of a curled up,
hedgehog-like defensive posture. Its body resembles a walking pinecone. The
pangolin has what may be slightly less complete protection than that of the armadillo,
but it has the benefit of greater flexibility. It is able to flex the entire length of its body
in all directions and climb trees.
Regarding balance and tradeoffs between protection and mobility, it is important to
consider a less obvious design constraint. Armoring does come at a cost to species --
otherwise animals of every lineage would be armored. As in the discussion of the
stickleback, there may be metabolic costs associated with mineral-rich armor
requiring specialized diets that are more laborious to procure. Also, while armadillos
have the ability to move fast and even jump, their bodies are not flexible. This lack of
flexibility may make them vulnerable to parasites, which could take up residence
between their armor bands, transmitting diseases and remaining safe from the host’s
attempts to groom or dislodge them. These limitations apply to humans as well.
Difficulties in removing armoring and maintaining proper hygiene may lead to
consequences like illness or even death. Another major problem of the armadillo’s
design, as well as human armor throughout history, is one of thermoregulation. The
armor surfaces do not insulate nearly as well as fur, nor can they sweat. This imposes
limitations on the spatial and temporal range of the animal. Applied to human armor,
thermo-regulation issues may result in illness or death, and this is clearly a concern to
designers of modern body armor intended for use in tropical climate zones. Both lack
of flexibility and thermoregulation are issues with regard to both natural and human body
armor. These are just a few additional considerations to take into account when
assessing mobility versus protection.
Modern armors exist which can protect a person against even the high-velocity
rounds fired by assault, battle or sniper rifles. There are even complete body armors
that, theoretically, can save a person from a direct blast by a modest sized bomb or
Improvised Explosive Devices (IED). However, they are so bulky and restrictive that
no army would field them in large numbers and no soldier would wear them for long
periods. Throughout the centuries, there is a clear pattern found in both natural and
human armor design: the simple principle to protect the head and vital organs first.
Protection of everything else is perceived as a luxury. Limbs are usually left free to
move, allowing troops to keep their best defense: mobility and the ability to fight
back. The same is true in the animal world. Certain parts of the body are usually more
heavily protected than others. Limbs are unnecessary for critical survival, but quite
necessary for movement, and so are rarely armored. The heart, lungs, central nervous
system, etc. are usually well protected. Once again, constraints such as heat
exhaustion and the need to remove the armor to perform basic body functions can
limit the practicality of the heaviest armors.
The cost of armoring prevents all species from developing such protection. There
is a range of other defenses, however. For example, canines and felines use speed,
strength, and intelligence to survive without any armoring. It could be postulated that
armor is exponentially more costly if the species is high on the food chain. There is
an added cost to decreased mobility (speed and agility are critical to a predator),
coupled with narrower energy margins limiting “disposable metabolic income” (i.e.
an ecosystem can support far fewer lions than zebras, because of the inefficiency of
predation and energy lost in digestion, respiration, defecation, and basic
thermodynamics). A better comparison to the armadillo is the opossum, which is a
similar-sized mammal, occurring in the same areas and feeding at a similarly mid-
level trophic position, but without armor. The opossum lacks armor, but has other
survival mechanisms, like a cryptic, nocturnal lifestyle and behavioral specializations.
Although our natural armoring discussion focused on several animals, armoring is
evident in all living organisms. Fig. 6 shows a tropical palm tree armored to protect it
from fire and other environmental factors. Looking at the natural world through the
lens of armoring results in fascinating observations. Humans are often able to think
beyond mere survival and can apply creativity to abstract problem solving. In nature,
the goal is more basic: to delay mortality while maximizing reproductive success.
Fig. 6.
In this paper we choose to focus on plate armor. However, a few points about
maille armor should be made. Maille armor was the dominant form of armor for at
least 2000 years, from before the Roman Empire until the 14th century. Many other
types arose in the regions under discussion herein, but while several armors were used
during that huge time span, none had the degree of flexibility, availability, ease of
repair/replacement and general utility of maille. With proper padding, riveted maille
was relatively lightweight and flexible and while it did not dissipate forces as well as
the finest, fluted plate armors of the Renaissance, it was very effective at preventing
the worst battlefield injuries. Perhaps maille’s best attribute was that, while labor-
intensive to produce, it required relatively little in the way of specialized skills, tools
or facilities.
5 Summary
This chapter reports preliminary results of interdisciplinary research that addresses the
challenging issue of conceptual bio-inspiration in design. This is accomplished in the
context of evolution in nature and in engineering. In both cases, body armor is
analyzed. The research proved to be more difficult than anticipated and can be
considered only a first step in the direction of understanding conceptual bio-
inspiration in design.
Evolution, as this chapter’s central theme attests, is remarkably efficient at finding
the optimum combination of traits for a given set of requirements and constraints. It
is, both in nature and in the form of evolutionary computation, very slow because of
its stochastic nature and its reliance on nothing more than guided trial and error, with
many dead ends (literal ones, in nature). On the other hand, abstract design knowledge
(design intelligence), driven by human creativity, can make revolutionary jumps,
rather than merely evolutionary steps. Human creativity is the key to pushing the state
of the art to new levels within directed evolution. By relocating the arena of the
“design space” from the natural world to the human mind’s theatre of conceptual
abstraction and, now, to the digital theatre of computational modeling, many of those
dead ends can be circumvented.
The Directed Evolution Method, applicable only to engineering systems, can
effectively circumvent many of those dead ends using abstract engineering knowledge
in the form of evolution patterns. Whereas natural evolution relies only on what is given
and on random operators as the mechanisms producing raw variability, directed evolution
can use "inspiration" from completely separate sources to push the search for design
concepts in very different directions. However, in the case of directed evolution only
engineering knowledge is still used, although the entire engineering design space is
searched, which is much larger than an engineering domain-specific design space. Such a
search is obviously still engineering exploitation, with the consequence of limited
expectations of finding truly novel design concepts.
The conducted analysis of the evolution line of human plate body armor produced seven
creative design principia/heuristics, which are abstract and may be used in the
development of modern body armor. Similarly, the analysis of the evolution of body
armor in nature led to the discovery of seven heuristics, which are also applicable to
modern body armor. More importantly, the discovery of these heuristics means that
the Directed Evolution Method can be expanded by incorporating knowledge from
biology. Unfortunately, such expansion is still infeasible since many more heuristics
must be discovered first. That will require extensive research involving both
engineers and evolutionary biologists.
Evolution, both in nature and in engineering, can be considered on various levels. The
most fundamental level is that of basic operations (mutation and crossover) conducted
on the genetic material in nature or on strings of alleles describing design concepts. On
this level, evolution has a stochastic character and its results are often unpredictable.
However, lines of evolution can be considered on the level of the evolution
patterns/heuristics driving evolution. Then these principia can be discovered and
compared for evolution in nature and in engineering. Such a comparison can reveal
missing "links" or principia for both types of evolution and may enable the creation of
complete sets of heuristics. That raises an intriguing research question. If evolution
Acknowledgement
The authors have the pleasure of acknowledging the help provided by Dr. Witold
Glebowicz, the Curator, Department of Old History, Polish Army Museum, Warsaw,
Poland. We would like to thank Andy May, a graduate student at University of South
Florida, for his invaluable suggestions and technical feedback. Lastly, thanks to Paul
Gebski, a Ph.D. student at George Mason University, for his technical assistance.
References
1. Altshuller, G.S.: Creativity as an Exact Science, Gordon & Breach, New York (1988).
2. Arciszewski, T., Zlotin, B.: Ideation/Triz: Innovation Key To Competitive Advantage and
Growth. Internet: http://www.ideationtriz.com/report.html (1998).
3. Terninko, J., Zusman, A., Zlotin, B.: Systematic Innovation, An Introduction to TRIZ, St.
Lucie Press (1998).
4. Clarke, D.: TRIZ: Through the Eyes of an American TRIZ Specialist, Ideation (1997)
5. Gero, J.: Computational Models of Innovative and Creative Design Processes, special
double issue. Innovation: the key to Progress in Technology and Society. Arciszewski, T.,
(Guest Editor), Journal of Technological Forecasting and Social Change, North-Holland,
Vol. 64, No. 2&3, June/July, pp. 183-196, (2000)
6. Kicinger, R., Arciszewski, T., De Jong, K.A.: Evolutionary Computation and Structural
Design: a Survey of the State of the Art. Int. J. Computers and Structures, Vol. 83, pp.
1943-1978, (2005).
7. Lumsdaine, E., Lumsdaine, M.: Creative Problem Solving: Thinking Skills for a Changing
World, McGraw-Hill (1995).
8. Gordon, W.J.J.: Synectics, The Development of Creative Capacity (1961).
9. Arciszewski, T., Rossman, L.: (Editors) Knowledge Acquisition in Civil Engineering, The
ASCE (1992).
10. Clarke, D.W.: Strategically Evolving the Future: Directed Evolution and Technological
Systems Development. Int. J. Technological Forecasting and Social Change, Special
Issue,: Innovation: The Key to Progress in Technology and Society, Arciszewski, T. (ed.),
Vol. 62, No. 2&3, pp. 133-154, (2000).
11. Zlotin, B., Zusman, A.: Directed Evolution: Philosophy, Theory and Practice, Ideation
International (2005).
12. Bruet, B.J.F., Oi, H., Panas, R., Tai, K., Frick, L., Boyce, M.C., Ortiz, C.: Nanoscale
morphology and indentation of individual nacre tablets from the gastropod mollusc
Trochus niloticus, J. Mater. Res. 20(9), (2005) 2400-2419. http://web.mit.edu/cortiz/www/
Ben/BenPaperRevisedFinal.pdf; information about the lab: http://web.mit.edu/cortiz/www/
13. Arciszewski, T., Uduma K.: Shaping of Spherical Joints in Space Structures, No.3, Vol. 3,
Int. J. Space Structures, pp. 171-182 (1988).
14. Vogel, S.: Cats’ Paws and Catapults, W. W. Norton & Company, New York and London,
(1998).
15. Arciszewski, T., Kicinger, R.: Structural Design inspired by Nature. Innovation in Civil
and Structural Engineering Computing, B. H. V. Topping, (ed.), Saxe-Coburg
Publications, Stirling, Scotland, pp. 25-48 (2005).
16. Balgooyen, T.G.: Evasive mimicry involving a butterfly model and grasshopper mimic.
The American Midland Naturalist Vol. 137 n1, Jan (1997) pp. 183 (5).
17. Wickler, W.: Mimicry in plants and animals. (Translated by R. D. Martin from the
German edition), World Univ. Library, London, pp. 255, (1968).
18. Thompson, D'Arcy W.: On Growth and Form: The Complete Revised Edition, Dover
Publications, ISBN 0486671356, (1992)
19. Goel, A. K., Bhatta, S. R., “Use of design patterns in analogy based design”, Advanced
Engineering Informatics, Vol. 18, No 2, pp. 85-94, (2004).
20. De Jong, K.: Evolutionary computation: a unified approach. MIT Press, Cambridge, MA
(2006).
21. Wolfram, S., New Kind of Science, Wolfram Media, Champaign, Il., (2002).
22. Goldberg, D.E., Computer-aided gas pipeline operation using genetic algorithms and rule
learning, Part I: genetic algorithms in pipeline optimization, Engineering with Computers,
pp. 47-58, (1987).
23. Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning,
Addison-Wesley Pub. Co., Reading, MA, (1989).
24. Murawski, K., Arciszewski, T., De Jong, K.: Evolutionary Computation in Structural
Design, Int. J. Engineering with Computers, Vol. 16, pp. 275-286, (2000).
25. Kicinger, R., Arciszewski, T., De Jong, K. A.: Evolutionary Designing of Steel Structures
in Tall Buildings, ASCE J. Computing in Civil Engineering, Vol. 19, No. 3, July, pp. 223-
238, (2005)
26. Koza, J. R.: Genetic Programming II: Automatic Discovery of Reusable Programs, MIT
Press, (1994).
27. Koza, John R., Bennett III, Forrest H, Andre, David, and Keane, Martin A.: Genetic
Programming: Biologically Inspired Computation that Creatively Solves, MIT Press,
(2001).
28. Ishino, Y and, Jin, Y., “Estimate design intent: a multiple genetic programming and
multivariate analysis based approach”, Advanced Engineering Informatics, Vol. 16, No 2,
(2002), pp. 107-126.
29. Kicinger, R., Emergent Engineering Design: Design Creativity and Optimality Inspired by
Nature, Ph.D. dissertation, Information Technology and Engineering School, George
Mason University, (2004).
30. Kicinger, R., Arciszewski, T., and De Jong, K. A. "Generative Representations in
Structural Engineering," Proceedings of the 2005 ASCE International Conference on
Computing in Civil Engineering, Cancun, Mexico, July, (2005).
31. Arciszewski, T., DeJong K.: Evolutionary Computation in Civil Engineering: Research
Frontiers. Topping, B.H.V., (Editor), Civil and Structural Engineering Computing pp. 161-
185, (2001).
32. Zlotin, B. Zusman, A.: Tools of Classical TRIZ, Ideation International, pp. 266, (1999).
33. Gaffney, E.S., Hutchison, J.H., Jenkins, F.A., Meeker, L.J.: Modern turtle
origins: the oldest known cryptodire. Science, Vol. 237, pp. 289, (1987).
34. Pennisi, E.: Changing a fish's bony armor in the wink of a gene: genetic researchers have
become fascinated by the threespine stickleback, a fish that has evolved rapidly along
similar lines in distant lakes. Science, Vol. 304, No. 5678, pp. 1736, (2004).
35. McPhail, J. and Lindsey, C. Freshwater fishes of northwestern Canada and Alaska.
Bulletin of the Fisheries Research Board of Canada (1970) 173:1-381.
36. Bell, M.: Evolution of phenotypic diversity in Gasterosteus aculeatus superspecies on the
Pacific Coast of North America. Systematic Zoology 25, pp. 211-227, (1976).
37. Bell, M., Foster, S.: Introduction to the evolutionary biology of the threespine stickle-
back. Editors: Bell, M.A., Foster, S.A., The evolutionary biology of the three spine
stickleback, Oxford University Press, Oxford, pp. 1-27, (1993).
38. Bell, M., Orti, G., Walker, J., Koenings, J.: Evolution of pelvic reduction in threespine
stickleback fish: a comparison of competing hypotheses. Evolution, No. 47, Vol. 3, pp.
906-914, (1993).
39. Storrs, E.: The Astonishing Armadillo. National Geographic. Vol. 161 No. 6, pp. 820-830,
(1982).
40. Fox, D.L. 1996 January 18. Dasypus novemcinctus: Nine-Banded Armadillo.
http://animaldiversity.ummz.umich.edu/acounts/dasypus/d._novemcinctus.html
(November 3, 1999).
41. Myers, P.: Dasypodidae (On-line), Animal Diversity Web. Accessed February 08, 2006 at
http://animaldiversity.ummz.umich.edu/site/accounts/information/Dasypodidae.html
42. Breece, G., Dusi, J.: Food habits and home range of the common long-nosed armadillo
Dasypus novemcinctus in Alabama. In The evolution and ecology of armadillos, sloths
and vermilinguas. G.G. Montgomery, ed. Smithsonian Institution Press, Washington and
London, p. 419-427, (1985).
43. Stuart, A.: Who (or what) killed the giant armadillo? New Scientist. 17: 29 (1986)
44. Savage, R.J.G., Long, M.R.: Mammal Evolution, an Illustrated Guide. Facts on File
Publications, New York, pp. 259, (1986).
Structural Topology Optimization of Braced Steel
Frameworks Using Genetic Programming
R. Baldock and K. Shea
Abstract. This paper presents a genetic programming method for the topologi-
cal optimization of bracing systems for steel frameworks. The method aims to
create novel, but practical, optimally-directed design solutions, the derivation of
which can be readily understood. Designs are represented as trees with one-bay,
one-story cellular bracing units, operated on by design modification functions.
Genetic operators (reproduction, crossover, mutation) are applied to trees in the
development of subsequent populations. The bracing design for a three-bay, 12-
story steel framework provides a preliminary test problem, giving promising
initial results that reduce the structural mass of the bracing in comparison to
previous published benchmarks for a displacement constraint based on design
codes. Further method development and investigations are discussed.
1 Introduction
Design of bracing systems for steel frameworks in tall buildings has been a challeng-
ing issue in a number of high-profile building projects, including the Bank of China
building in Hong Kong and the CCTV tower in Beijing, often due to unique geometry
and architectural requirements. The complex subsystem interactions and design issues,
coupled with the quantity of design constraints, make automated design and optimiza-
tion of bracing systems difficult for practical use. These challenges are reflected in the
volume of research within structural topology optimization that has addressed bracing
system design, as discussed in the next section.
One difficulty with applying computational structural optimization in practice, es-
pecially to topological design, is that designers often find it difficult to interpret and
trust the results generated, due to a lack of active involvement in design decisions
during design evolution. Thus better means for following the derivation of and ration-
ale behind optimized designs are required. In contrast to other evolutionary methods,
Genetic Programming (GP) [1] evolves "programs" containing instructions for gener-
ating high-performance designs from a low-level starting point. This allows designers
to examine the "blue-prints" of these by executing the branches of corresponding
program trees. In common with other evolutionary methods, GP is population based
and stochastic, facilitating the generation of a set of optimally-directed designs for
further consideration according to criteria, such as aesthetic value, that are difficult to
model computationally. Successive populations are developed through the genetic
operations of reproduction, crossover and mutation. However, previous research using
genetic programming for structural optimization [2, 3] has been limited to evolving
tree representations of designs, rather than programs for generating designs.
The next section discusses previous work in the field of bracing design for steel
frameworks as well as relevant GP research. Section 3 introduces the proposed GP
methodology. There follows a description and results of a test problem design task,
taken from previous work by Liang et al [4]. Finally, results and conclusions are pre-
sented, noting potential extensions of the method for increased practicality and scale.
2 Related Work
Optimization of steel frame structures, including bracing systems, has been used as a
demonstration problem for methods adopting discrete and continuum physical repre-
sentations of bracing systems. Amongst discrete representations, Arciszewski et al [5]
include general bracing system parameters in a demonstration of machine learning of
design rules. Murawski et al [6] report a series of experiments in which evolutionary
algorithms are used to seek optimal designs for a three-bay, 26-story tall building with
type of bracing in each cell and connectivity of beams, columns and supports as vari-
ables. Section sizes are optimized using SODA [7]. Kicinger et al. [8] use an evolu-
tionary strategy (ES), noting that this approach is more suited to small population
sizes. This is relevant when objective function evaluation is computationally expen-
sive, as is frequently the case in large-scale structural analysis. Kicinger [9] combines
cellular automata and a genetic algorithm to generate and optimize designs, observing
emergent behavior. Baldock et al. [10] previously applied a modified pattern search
algorithm to optimize lengths of bracing spirals on a live tall building project. It is
noteworthy that none of the above considers variation in size of basic bracing units,
something that the current research aims to address.
Mijar et al. [11] and Liang et al. [4] use continuum structural topology optimiza-
tion formulations to evolve bracing systems for simple two-dimensional multistory
frames. A mesh of small 2D plane-stress finite elements is superimposed onto a vier-
endeel frame and elements are gradually removed by a deterministic process driven
by minimizing the product of structural compliance and bracing tonnage. A three-bay,
12-story framework is adopted from Liang et al [4] as the test problem in this paper.
Genetic Programming (GP) is a class of evolutionary algorithm developed in the
early 1990s [1], which manipulates tree representations containing instructions for
solving a task, such as a design problem. Despite various attempts at using GP in civil
engineering [12], to the authors' best knowledge the full potential of GP has not been
exploited in the field of structural topology optimization. This is because functions
have not taken the form of operations, but rather that of a component of the design
itself [2] or an assembly of lower-level components [3]. The current research aims to
demonstrate how tree representations of the development of full bracing system de-
signs from fundamental components can be manipulated by genetic operations to
evolve optimally directed solutions.
(Figures: an example GP expression tree built from arithmetic functions and terminals, and a flowchart of the offspring generation procedure: select a genetic operation, select parent(s), select crossover/mutation point(s), create offspring, and accept the offspring only if it is physically feasible, repeating until the population is complete.)
operations, parent individuals are selected from the previous generation with linear
weighting towards the fittest. Crossover is applied with probability Pc to two parent
individuals; mutation is applied with probability Pm (= 1 - Pc) to a single parent. In both
cases a branch (highlighted by dashed boxes in Fig. 3) is randomly selected from the
parent and either replaced by a branch from another parent tree (crossover) or ran-
domly regenerated (mutation). The optimization process terminates when neither the
best-of-generation individual fitness nor the lowest average fitness of a generation has
improved for 10 generations. The method is implemented in Matlab.
(Figure panels: (a) seeded framework, (b) parent 1, (c) parent 2, (d) offspring 1; each panel shows a 12-story framework together with its program tree of ROOT, Scale, Unite, Reflect and Irregular Repeat nodes and their parameters.)
Fig. 3. Development of an individual in the initial population (a), (b) and the genetic crossover operation (c), (d)
(Figure content: the design modification operations SCALE, ADJUST SPLITS, ORTHOGONAL REPEAT, IRREGULAR REPEAT, ROTATE, TRANSLATE, UNITE and REFLECT, each illustrated on a one-bay, one-story cellular bracing unit together with its parameters, e.g. [X enlargement, Y enlargement] for SCALE and [X divisions, Y divisions] for ADJUST SPLITS.)
Fig. 4. Design modification operations, which are incorporated into GP trees as program functions. Beneath the arrows are detailed the function parameters appearing in Fig. 3.
The test problem adopted in this paper was originally proposed by Liang et al [4] as a
demonstration of compliance-driven Evolutionary Structural Optimisation. A planar
3-bay 12-story tall steel building framework is subjected to uniformly distributed
loading on each side. The steel sections in the framework have been selected to meet
strength requirements under gravity loading. Throughout the evolution of the bracing
topology, the framework sections are fixed and gravity loading is neglected. The
unbraced framework has a maximum lateral displacement of 0.660m† under the pre-
scribed loading, well above the h/400 drift limit (h is total building height) of 0.110m.
In the continuum solution published by Liang et al [4], using an element thickness of
25.4mm, the product of mean compliance and steel mass is minimized, yielding a
total bracing volume of 4.82m3 and a maximum lateral displacement 0.049m†. The
ESO design is interpreted as a discrete bracing layout shown in Fig. 5, noting that
further sizing optimization is required. The current research adopts a more practical
objective of minimizing steel mass subject to a limit on maximum lateral displace-
ment of 0.1m (just under h/400).
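As a check of the quoted drift limit (our arithmetic, not part of the original text):

$h = 12 \times 3.658\,\mathrm{m} = 43.9\,\mathrm{m}, \qquad h/400 = 0.110\,\mathrm{m},$

so the displacement limit of 0.1 m adopted here is indeed just below the code-based value.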
Applying the Genetic Programming method described to the above problem, the
optimization model can be expressed as follows:
Minimize:  $L = \sum_{e=1}^{n} L_e + \max\left(0,\; p\,(\delta_{\max} - \delta^{*})\right)$   (1)
† Reproduced and analysed in Oasys GSA 8.1, which is also used for objective function evaluation.
where:
L = total length of bracing elements
Le = length of bracing element e
n = total number of bracing elements
δ* = limit on maximum lateral displacement
δmax = maximum lateral displacement observed in structure
p = penalty factor imposed on designs violating constraint on maximum lateral dis-
placement
The total number, length and location of bracing elements are variable in the evolu-
tionary process. Fixed parameters in the structural model include framework geome-
try, applied loads and section size of bracing members (Ae), beams and columns.
Issues of strength and buckling are recognized as important but are not included at this
stage, for the sake of comparison with previous work.
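A direct transcription of Eq. (1) reads as follows (a sketch under our own assumptions: the element lengths come from the current bracing design, the displacement from an external structural analysis such as the GSA model mentioned in the footnote, and the penalty factor value is illustrative).

def objective(bracing_lengths, delta_max, delta_limit=0.1, penalty_factor=1000.0):
    """Penalized objective of Eq. (1): total bracing length plus a penalty
    proportional to any violation of the lateral displacement limit."""
    total_length = sum(bracing_lengths)                     # sum of L_e, e = 1..n
    violation = max(0.0, delta_max - delta_limit)           # zero when feasible
    return total_length + penalty_factor * violation

# Example: 24 single-diagonal braces of 7.11 m (the diagonal of a
# 6.096 m x 3.658 m cell) and a displacement of 0.089 m from the analysis.
print(objective([7.11] * 24, delta_max=0.089))              # feasible, no penalty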
(Figure data: lateral point loads of 33.4 kN at the top story and 66.7 kN at each of the eleven stories below; 12 stories @ 3.658 m; 3 bays @ 6.096 m.)
Fig. 5. Test problem geometry and loads, with discrete interpretation of optimal bracing layout
from Liang et al [4]. Framework specifications can be found in [4].
Fig. 7. Sample best designs generated using different bracing section sizes
References
1. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA (1992)
2. Yang, Y., Soh, C.K.: Automated optimum design of structures using genetic programming. Computers and Structures (2002) 80: 1537-1546
3. Liu, P.: Optimal design of tall buildings: a grammar-based representation prototype and the implementation using genetic algorithms. PhD thesis, Tongji University, Shanghai (2000)
4. Liang, Q.Q., Xie, Y.M., Steven, G.P.: Optimal topology design of bracing systems for multistory steel frames. J. Struct. Engrg. (2000) 126(7): 823-829
5. Arciszewski, T., Bloedorn, E., Michalski, R.S., Mustafa, M., Wnek, J.: Machine learning of design rules: methodology and case study. ASCE J. Comp. Civ. Engrg. (1994) 8(2): 286-309
6. Murawski, K., Arciszewski, T., De Jong, K.: Evolutionary computation in structural design. Engineering with Computers (2000) 16: 275-286
7. Grierson, D.E., Cameron, G.E.: Microcomputer-based optimization of steel structures in professional practice. Microcomput. Civ. Eng. (1989) 4: 289-296
8. Kicinger, R., Arciszewski, T., De Jong, K.: Evolutionary designing of steel structures in tall buildings. ASCE J. Comp. Civ. Engrg. (2005)
9. Kicinger, R.: Emergent engineering design: design creativity and optimality inspired by nature. PhD thesis, George Mason University (2004)
10. Baldock, R., Shea, K., Eley, D.: Evolving optimized braced steel frameworks for tall buildings using modified pattern search. ASCE Conference on Computing in Civil Engineering, Cancun, Mexico (2005)
11. Mijar, A.R., Swan, C.C., Arora, J.S., Kosaka, I.: Continuum topology optimization for concept design of frame bracing systems. J. Struct. Engrg. (1998) 5: 541-550
12. Shaw, D., Miles, J., Gray, A.: Genetic programming within civil engineering. Adaptive Computing in Design and Manufacture Conference, Bristol, UK, 20-22 April (2004)
On the Adoption of Computing and IT by Industry: The
Case for Integration in Early Building Design
Claude Bédard
1 Introduction
Undoubtedly, professionals in the AEC industry (architecture, engineering,
construction) are now routinely using computing and IT tools in many tasks. While
this situation would indicate that the industry is keeping up with technological
developments, a quick comparison with other industries such as automotive or
aerospace reveals that computing applications in construction have been sporadic and
unevenly distributed across the industry, with a major impact only on a few
tasks/sectors. The significance of such a situation on the construction industry in
North America cannot be overstated. It has resulted in a loss of opportunities, indeed
competitiveness on domestic and foreign markets, and a level of productivity that lags
behind that of other industries. Even among researchers, reflecting on the past 20 years of conferences on computing in construction, one can note a progressive “lack of enthusiasm” for computing research over the last few years, to the point where the frequency and size of annual events have been questioned (particularly for the ASCE-TCCIP, the American Society of Civil Engineers' Technical Council on Computing and Information Technology).
This paper first attempts to better understand the current status of IT use and development in the AEC industry, as well as the main roadblocks to widespread adoption of better tools and solutions by practitioners, both of which should inform our collective R&D agenda. A research project is also presented briefly to illustrate
innovative ways of advancing integration in building design.
a) Frequency of IT usage
Technology or communication mode                                M      SD
Email without attached document 4.58 0.64
Email with attached document 4.54 0.58
Phone with one colleague 4.50 0.65
Face-to-face meetings 4.35 0.63
Fax 4.12 0.86
Regular cell phone 3.58 1.27
Private courier 3.42 0.90
Electronic planner without cell phone capacity 2.85 1.29
Phone or video conferencing 2.75 0.53
Document obtained from an FTP site 2.72 0.89
Portable computer on construction site 2.58 1.10
Pager 2.31 0.84
Chat 2.29 1.04
Walkie-talkie type cell phone 2.28 1.10
Electronic planner with cell phone capacity 2.24 1.09
Document obtained from web portal 2.17 0.95
Groupware 2.00 1.08
Note: Scale for frequency: 1=unknown technology, 2=never,
3=sometimes, 4=often, 5=very often.
b) Perceived efficiency because of IT usage
Technology or communication mode                                M      SD
Email with attached document 4.80 0.58
Face-to-face meetings 4.76 0.52
Email without attached document 4.76 0.52
Phone with one colleague 4.72 0.61
Fax 4.44 0.65
Private courier 4.12 1.01
Document obtained from an FTP site 3.78 1.54
Regular cell phone 3.64 1.66
Phone or video conferencing 3.33 1.55
Electronic planner without cell phone capacity 2.61 1.83
Document obtained from web portal 2.42 1.77
Portable computer on construction site 2.39 1.67
Chat 1.75 1.26
Walkie-talkie type cell phone 1.70 1.40
Groupware 1.68 1.29
Electronic planner with cell phone capacity 1.67 1.34
Pager 1.65 1.19
Note: Scale for efficiency: 1=does not apply, 2=strongly disagree,
3=somewhat disagree, 4=somewhat agree, 5=strongly agree.
function of key stakeholder, results clearly show that the telephone is the method of
choice. Overall, participants favored using the phone individually to communicate
with internal team members (69 %), with internal stakeholders (73 %), with clients
(54 %), with professionals (62 %), with general contractors (50 %), and with higher
management (58 %). With respect to which technology or communication mode was
considered the most (or the second most) efficient as a function of project phase,
results are also quite clear. Participants favored face-to-face meetings to communicate
during the feasibility study (50 %), during construction design (46 %), during
construction to coordinate clients, professionals and contractors (50 %), during
construction to manage contractors and suppliers (54 %), during commissioning (46 %), and
during project close-out (39 %). Hence, participants clearly favoured traditional
communication modalities such as the phone or face-to-face meetings, irrespective of
project phase and internal or external stakeholders.
The deeply fragmented structure and mode of operation of the construction industry are to
be blamed for such a situation. The implementation of integrative solutions
throughout the entire building delivery process, i.e. among various people and
products involved from project inception until demolition, would appear as key to
counteract such fragmentation, with the adoption of computing and IT by the industry playing a pivotal role in facilitating the development of such integrated solutions. The
aforementioned studies reveal a contradiction in the adoption of new technologies: on
the one hand, computerization and IT can now be relied upon in many tasks
performed by the majority of stakeholders in the AEC industry, yet on the other hand,
promises brought by the new technologies remain unfulfilled, thus leaving
practitioners to contend with new complexities, constraints and costs that make them
stick with traditional approaches, with the ensuing poor performance. Many factors
were pointed out in the above studies as impeding the adoption of computing and IT,
and these corroborate the findings of other researchers.
At the 2003 conference of CIB W78 on Information Technology for Construction, Howard identified patterns in the evolution of IT developments over a 20-year period in six areas: hardware, software, communication, data, process and human change. While he qualified progress in the first three as having surpassed initial expectations, he deplored the slow progress in the remaining areas – the lack of well organized, high-quality building data and our inability to change either processes or people's attitudes [5]. Whereas CIB reports on the conditions of the construction industry world-wide, the above comments are all the more relevant to the North American context, with a profoundly fragmented industry that is incapable of developing a long-term coherent vision of its own development or of investing even modest amounts to fund its own R&D. The few notable exceptions only cater to the R&D needs of their own
members, such as FIATECH which groups a number of large capital projects
construction/consulting companies in the US. Similarly, with reference to computing support in the field of structural engineering, Fenves and Rivard commented on the drastic disparity between two categories of environments, generative (design) systems vs. analysis tools, in terms of their impact on the profession. Generative systems produced by academic research have had negligible impact on the profession, unlike analysis tools, possibly because of the lack of stable, robust, industrial-strength support environments [6]. One can argue also that engineers worldwide are still
educated to view design as a predominantly number-crunching activity, like analysis
for which computers represent formidable tools, rather than a judgment-intensive
activity relying on qualitative (as well as quantitative) decisions.
In short, computing and IT advances have been numerous and significant in the AEC industry in terms of hardware, software and communications. However, the industry remains profoundly divided and under-performing compared to its peers because these technologies are still incapable of accounting properly for human factors such as:
• the working culture, style and habits, which ultimately determine the level of acceptance of or resistance to change toward new environments;
• the training needs of individuals, who have to feel “at ease” with new technology in order to maintain interest and adopt it on a daily basis;
passive since it validates or confirms design decisions that have already been made.
However, these tools lack the knowledge required to assist the engineer to explore
design alternatives and make decisions actively. A knowledge-based approach is
proposed that aims at providing interactive support for decision-making to help the
engineer in the exploration of design alternatives and efficient generation of structural
solutions. With this approach, a structural solution is developed by the engineer from an abstract description to a specific one, through the progressive and interactive application of knowledge.
Researchers have applied artificial intelligence (AI) techniques to assist engineers
in exploring design alternatives over a vast array of possible solutions under
constraints. Relevant techniques that have been explored over the last 30 years are:
expert systems, formal logic, grammars, case-based reasoning (CBR) systems,
evolutionary algorithms and hybrid systems that combine AI techniques such as a
CBR system with a genetic algorithm. The impact of AI-based methods on design practice, however, has been negligible, mainly because the proposed systems were standalone, with no interaction with the design representations currently employed in practice, such as BIMs. In fact, only a few of the research projects [12] used
architectural models with 3D geometry as input for structural synthesis. In the
absence of such models, only global gravity and lateral load transfer solutions could
be explored to satisfy overall building characteristics and requirements. These
solutions needed actual architectural models to be substantiated and validated.
Another disadvantage of the above research systems that hindered their practical use
was that the support provided was mainly automatic and the reasoning monotonic (i.e.
based on some input, these systems produced output that met specified requirements).
By contrast, a hierarchical decomposition/refinement approach to conceptual
design is adopted in this research [13] where different abstraction levels provide the
main guidance for knowledge modeling. This approach is based on a top-down
process model proposed by Rivard and Fenves [14]. To implement this approach the
structural system is described as a hierarchy of entities where abstract functional
entities, which are defined first, facilitate the definition of their constituent ones.
Figure 1 illustrates the conceptual structural design process. In Figure 1, activities
are shown in rectangles, bold arrows pointing downwards indicate a sequence
between activities, arrows pointing upwards indicate backtracking, and two horizontal
parallel lines linking two activities indicate that these can be carried out in parallel.
For clarity, in Figure 1 courier bold 10 point typeface is used to identify structural
entities. As shown in Figure 1, the structural engineer first defines independent
structural volumes holding self-contained structural skeletons that are assumed to
behave as structural wholes. These volumes are in turn subdivided into smaller sub-
volumes called structural zones that are introduced in order to allow definition of
structural requirements that correspond to architectural functions (i.e. applied loads,
allowed vertical supports and floor spans). Independent structural volumes are also
decomposed into three structural subsystems, namely the horizontal, the vertical
gravity, and the vertical lateral subsystems (the foundation subsystem is not
considered in this research project). Each of these structural subsystems is further
refined into structural assemblies (e.g. frame and floor assemblies), which are made
out of structural elements and structural connections. The arrangement of structural
elements and structural connections makes up the “physical structural system”.
During activity number 2 in Figure 1 (i.e. Select Structural Subsystems), the engineer
[Fig. 1 (diagram): the conceptual structural design process — 1.a Select Independent Structural Volumes; 1.b Select Structural Zones; 2. Select Structural Subsystems (overall load transfer solutions; lay out structural grids; determine applied loads; select structural assembly support and material(s)); 3. Select, define and position each Structural Assembly]
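The entity hierarchy described above can be pictured with a few skeletal class definitions. This is a simplified sketch for illustration only; the class names and fields are assumptions and do not reproduce the actual representation model used in this research.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StructuralElement:
    kind: str                       # e.g. "beam", "column", "slab"
    material: str = "cast-in-place concrete"

@dataclass
class StructuralConnection:
    members: List[StructuralElement] = field(default_factory=list)

@dataclass
class StructuralAssembly:           # e.g. a frame or a floor assembly
    name: str
    elements: List[StructuralElement] = field(default_factory=list)
    connections: List[StructuralConnection] = field(default_factory=list)

@dataclass
class StructuralSubsystem:          # horizontal, vertical gravity or vertical lateral
    role: str
    assemblies: List[StructuralAssembly] = field(default_factory=list)

@dataclass
class StructuralZone:               # requirements tied to architectural functions
    applied_load: float             # e.g. kN/m2
    allowed_span: float             # e.g. m

@dataclass
class IndependentStructuralVolume:  # self-contained skeleton behaving as a whole
    zones: List[StructuralZone] = field(default_factory=list)
    subsystems: List[StructuralSubsystem] = field(default_factory=list)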
As seen in Table 2 the main tasks performed by the engineer, the ASM and the
DKM are the following:
(1) The engineer queries the ASM model, selects entities, specifies, positions and
lays out assemblies and elements, and verifies structural solutions.
(2) The ASM model displays and emphasizes information accordingly, elaborates
engineer’s decisions, performs simple calculations on demand, and warns the
engineer when supports are missing.
(3) The DKM suggests and ranks solutions, assigns loads, and elaborates and refines
engineer’s structural selections and layouts.
Each activity performed by the engineer advances a structural solution and
provides the course of action to enable the ASM and the DKM to perform subsequent
tasks accordingly. The knowledge-based exploration of structural alternatives takes
place mostly at the abstraction levels of activities 2, 3, and 4 in Figure 1 and Table 2.
At each subsequent level more information and knowledge is made available so that
previously made decisions can be validated and more accurate ones can be made.
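The division of labour summarised above can be read as a simple interaction loop: the DKM suggests and ranks alternatives, the engineer decides interactively, and the partial model is elaborated with the accepted solution. The sketch below is purely schematic; the function names, the toy candidates and the depth-based ranking heuristic are hypothetical and do not represent the actual DKM interface or its technology nodes.

def dkm_suggest(context, candidates, score):
    # Rank candidate solutions for the current abstraction level; `score`
    # stands in for the heuristic knowledge held in technology nodes.
    return sorted(candidates, key=lambda c: score(c, context))

def design_step(engineer_choice, context, candidates, score, asm_update):
    # One exploration step: DKM suggests and ranks, the engineer chooses
    # interactively, and the accepted solution elaborates the partial model.
    ranked = dkm_suggest(context, candidates, score)
    accepted = engineer_choice(ranked)
    asm_update(context, accepted)
    return accepted

# Toy example: prefer the shallower floor assembly under a depth constraint.
candidates = [{"assembly": "flat slab", "depth": 0.25},
              {"assembly": "beam and slab", "depth": 0.45}]
best = design_step(engineer_choice=lambda ranked: ranked[0],
                   context={"max_floor_depth": 0.40},
                   candidates=candidates,
                   score=lambda c, ctx: c["depth"],
                   asm_update=lambda ctx, c: ctx.update(accepted=c))
print(best["assembly"])   # flat slab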
The implementation of the approach is based on an existing prototype for
conceptual structural design called StAr (Structure-Architecture) that assists engineers
in the inspection of a 3D architectural model (e.g. while searching for continuous load
paths to the ground) and the configuration of structural solutions. Assistance is based
on geometrical reasoning algorithms (GRA) [15] and an integrated architecture-
structure representation model (ASM) [16]. The building architecture in the ASM
representation model describes architectural entities such as stories, spaces and space
aggregations, and space establishing elements such as walls, columns and slabs. The
structural system is described in StAr as a hierarchy of entities to enable a top-down
design approach. The geometric algorithms in StAr use the geometry and topology of
the ASM model to construct new geometry and topology, and to verify the model.
The algorithms are enhanced with embedded structural knowledge regarding layout
and dimensional thresholds of applicability for structural assemblies made out of cast-
in-place concrete. However, this knowledge is not sufficient for assisting engineers
during conceptual design. StAr provides the kind of support described in the second
column of Table 2, plus limited knowledge-based support (column 3) at levels 1.b and
4. Therefore, StAr is able to generate and verify a physical structure based on
information obtained from precedent levels. However, no knowledge-based support is
provided by StAr for exploration at levels 2, 3 and 4.
A structural design knowledge manager (DKM) is therefore developed that obtains architectural and/or partial structural information from the ASM, directly or via the GRA, to assist the engineer in conceiving, elaborating and refining structural solutions interactively. Once the engineer accepts a solution suggested by the DKM, it
automatically updates (i.e. elaborates or refines) the partial ASM. Architectural
requirements in the form of model constraints (e.g. floor depths, column-free spaces,
etc.) from the ASM model are also considered by the DKM for decision-making. The
DKM encapsulates structural design knowledge by means of a set of technology
nodes [17]. The type of knowledge incorporated in the nodes is heuristic and
considers available materials, construction technologies, constructability, cost and
Table 2. Interactivity table between the engineer, the ASM and the DKM
5 Conclusions
Practitioners in the AEC industry have benefited from computing and IT tools for a
long time, yet the industry is still profoundly fragmented in North America, which
translates into poor productivity and a lack of innovation compared to other industrial
sectors. Recent surveys reveal a contradiction in the adoption of new technologies: on
the one hand, they appear to be used in many tasks performed by the majority of
stakeholders in the industry, yet on the other hand, they fall short of delivering as
promised, thus leaving practitioners to contend with new complexities, constraints and costs that make them stick with traditional approaches, with the attendant poor performance. The fact that critical human factors are not given due consideration in the development of new computing and IT tools explains in part why such technologies are often not adopted in practice as readily as expected. In this
context, the development of integrated approaches would appear highly effective in
counteracting the currently fragmented approaches to multidisciplinary building
design. An on-going research project is presented briefly to illustrate innovative ways
of advancing integration in the conceptual design of building structures.
References
1. Bédard, C. and Rivard, H.: Two Decades of Research Developments in Building Design.
Proc. of CIB W78 20th Int’l Conf. on IT for Construction, CIB Report: Publication 284,
Waiheke Island, New Zealand, April 23-25, (2003) 23-30
2. Rivard, H.: A Survey on the Impact of Information Technology on the Canadian Architecture, Engineering and Construction Industry. Electronic J. of Information Technology in Construction, Vol. 5 (2000) 37-56. http://itcon.org/2000/3/
3. Rivard, H., Froese, T., Waugh, L.M., El-Diraby, T., Mora, R., Torres, H., Gill, S.M., O'Reilly, T.: Case Studies on the Use of Information Technology in the Canadian Construction Industry. Electronic J. of Information Technology in Construction, Vol. 9 (2004) 19-34. http://itcon.org/2004/2/
4. Chiocchio, F., Lacasse, C., Rivard, H., Forgues, D. and Bédard, C.: Information
Technology and Collaboration in the Canadian Construction Industry. Proc. of Int’l Conf.
on Computing and Decision Making in Civil and Building Engineering, ICCCBE-XI,
Montréal, Canada, June 14-16, (2006) 11 p.
5. Howard, R.: IT Directions – 20 Years’ Experience and Future Activities for CIB W78.
Proc. of CIB W78 20th Int’l Conf. on IT for Construction, CIB Report: Publication 284,
Waiheke Island, New Zealand, April 23-25, (2003) 23-30
6. Fenves, S.J. and Rivard, H.: Generative Systems in Structural Engineering Design. Proc. of
Generative CAD Systems Symposium, Carnegie-Mellon University, Pittsburgh, USA
(2004) 17 p.
7. Bédard, C.: Changes and the Unchangeable: Computers in Construction. Proc. of 4th Joint
Int’l Symposium on IT in Civil Engineering, ASCE, Nashville, USA (2003) 7 p.
8. IAI (International Alliance for Interoperability) www.iai-international.org (2006)
9. NIST (National Institute of Standards and Technology): Cost Analysis of Inadequate
Interoperability in the US Capital Facilities Industry. NIST GCR 04-867 (2004)
10. Bédard, C. and Gowri, K.: KBS Contributions and Tools in CABD. Int’l J. of Applied
Engineering Education, 6(2), (1990) 155-163
11. Khemlani L.: AECbytes product review: Autodesk Revit Structure, Internet URL:
http://www.aecbytes.com/review/RevitStructure.htm (2005)
12. Bailey S. and Smith I.: Case-based preliminary building design, ASCE J. of Computing in
Civil Engineering, 8(4), (1994) 454-467
13. Mora, R., Rivard, H., Parent, S. and Bédard, C.: Interactive Knowledge-Based Assistance
for Conceptual Design of Building Structures. Proc. of the Conf. on Advances in
Engineering, Structures, Mechanics and Construction. University of Waterloo, Canada
(2006) 12 p.
14. Rivard H. and Fenves S.J.: A representation for conceptual design of buildings, ASCE J. of
Computing in Civil Engineering, 14(3), (2000) 151-159
15. Mora R., Bédard C. and Rivard H.: Geometric modeling and reasoning for the conceptual
design of building structures. Submitted for publication to the J. of Advanced Engineering
Informatics, Elsevier (2006)
16. Mora R., Rivard H. and Bédard C.: A computer representation to support conceptual
structural design within a building architectural context. ASCE J. of Computing in Civil
Engineering, 20(2), (2006) 76-87
17. Fenves S.J., Rivard H. and Gomez N.: SEED-Config: a tool for conceptual structural
design in a collaborative building design environment. AI in Engineering, 14(1), Elsevier,
(2000) 233-247
Versioned Objects as a Basis for
Engineering Cooperation
Karl E. Beucke
Abstract. Projects in civil and building engineering depend to a large degree upon effective communication and cooperation between separate engineering teams. Traditionally, this is managed on the basis of Technical
Documents. Advances in hardware and software technologies have made it pos-
sible to reconsider this approach towards a digital model-based environment in
computer networks.
So far, concepts developed for model-based approaches in construction projects have had very limited success in the construction industry. This is believed to be due to a missing focus on the specific needs and requirements of the construction industry: it is not a software problem but rather a problem of process orientation in construction projects. Therefore, specific care was taken to account for process requirements in the construction industry.
Considering technological advances and new developments in software and
hardware, a proposal is made for a versioned object model for engineering co-
operation. The approach is based upon persistent identification of information
in the scope of a project, managed interdependencies between information, ver-
sioning of information and a central repository interconnected via a network
with local workspaces.
An implementation concept for the solution proposed is developed and veri-
fied for a specific engineering application. The open source project CADEMIA
serves as an ideal basis for these purposes.
Digital technologies have not changed this approach fundamentally yet. Most often
application software is used to produce the same documents as before, digital
exchange formats are used to interchange information between separate software
applications and computer networks are used to speed up the process of information
interchange. This optimizes individual steps in the process but does not change the
fundamental approach with all its inherent problems.
For several years now, it has been proposed to change this fundamental paradigm of engineering cooperation to a new approach based on a single consistent model. This is often referred to as the model-oriented approach.
So far, success has been very limited. A lack of acceptance in the construction indus-
try has even led to major uncertainties regarding commercial products developed for
these purposes.
increased to such an extent that we are now able to share large sets of data between
locally dispersed engineering teams.
Software technology has advanced from former procedural methods to object-
oriented methods for design and implementation of application systems. Modern
application software is built around a set of objects with attributes and methods that
are strongly interrelated and connected amongst each other. Links between objects are
modeled in different ways. The most common concept are internal references between
objects. However, most applications still work with a proprietary object model that is
not available to others. Some allow application programmers to extend the object model with their own, individual object definitions that are transparently embedded within the application, but the core of the software is still not transparent.
Object-oriented software systems could conceptually support persistent identification of information (objects) not only in the scope of their own system but even in a global sense, in the form of, for example, Globally Unique Identifiers (GUIDs), as required by systems that are distributed over the Internet. Persistent identification of
information is believed to be of crucial importance for engineering cooperation via
computer networks. Many “old” application systems will still not support such a con-
cept. They will not store identifiers of objects permanently but rather generate them at
runtime when an application is started. Data serialization into files in Java will also
generate identifiers but these will be totally independent from internal identifiers in an
application. Therefore, the identifiers will generally change from session to session and uniqueness of identification cannot be ensured. Uniqueness of links between objects in
separate applications would also require unique identifiers in the scope of the complete
project. This must also be ensured when corresponding solutions are defined.
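A minimal sketch of what persistent identification means in practice is given below: the identifier is generated once, stored with the object, and restored on reload instead of being regenerated at runtime. The class and method names are illustrative assumptions only.

import uuid

class PersistentObject:
    # Engineering object with a persistent, globally unique identifier. The
    # GUID is assigned once at creation and stored with the object, so it
    # survives serialization and remains stable from session to session.
    def __init__(self, data, guid=None):
        self.guid = guid or str(uuid.uuid4())   # keep an existing GUID when reloading
        self.data = data

    def to_record(self):
        return {"guid": self.guid, "data": self.data}

    @classmethod
    def from_record(cls, record):
        # Reloading restores the stored GUID instead of generating a new one.
        return cls(record["data"], guid=record["guid"])

beam = PersistentObject({"type": "beam", "span": 6.0})
restored = PersistentObject.from_record(beam.to_record())
assert restored.guid == beam.guid    # identity preserved across sessions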
concept of versioned objects provides a much better basis for flexible configurations of
information in construction projects as opposed to a single rigidly defined configuration.
One major advantage of this approach is the opportunity to preserve the validity of
bindings between objects (referential integrity). If, in the context of a project, bindings
between objects are established within an application by different users or between separate applications, these bindings may become invalid or wrong when the original object referred to is changed or deleted. If, however, the “old” object is preserved
as a specific version and any modifications are reflected in a “new” version of that
object, all bindings to the “old” object will still remain valid thus preserving referential
integrity in the context of a complete engineering project. This is much like a new edition of a book in a library, where previous editions are not removed but rather kept in the library so that any references to that book remain valid and accessible.
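The library analogy can be stated in a few lines of code: modifications never overwrite an object but create a new version, and a binding refers to a specific version, so it stays resolvable after later changes. The class below is only an illustrative sketch under that assumption, not the model proposed in this paper.

class VersionedObject:
    # An object whose modifications create new versions; old versions stay
    # available so that existing bindings (references) remain valid.
    def __init__(self, state):
        self.versions = [state]          # version 0 is the original state

    def modify(self, new_state):
        self.versions.append(new_state)  # a 'new edition'; nothing is overwritten
        return len(self.versions) - 1    # index of the new version

    def get(self, version):
        return self.versions[version]

column = VersionedObject({"section": "HEB 300"})
binding = (column, 0)                    # another object binds to version 0
column.modify({"section": "HEB 340"})    # a later change creates version 1
obj, v = binding
assert obj.get(v) == {"section": "HEB 300"}   # the binding is still resolvable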
The third main aspect is based upon the idea of private workspaces for supporting
cooperation in phase 2 of the concept above. In phase 2 separate, isolated states of
information are accepted to develop which are not necessarily consistent between
each other. Each separate state is developed using specific application software for a
period of days and maybe even weeks. This phase is regarded as a single long transac-
tion. At specified points in time an engineer can decide, or project guidelines may require, that the separate, individual results of such long transactions be synchronized with a central data store called the repository. Any information produced in phase 2 is not
immediately propagated into the repository but rather maintained in the private work-
space and regarded as a deferred transaction.
The fourth aspect is based upon a distinction between application-specific information, kept in the object model of an application in the form of object attributes, and application-independent information, kept and maintained in the project and private workspaces in the form of elements with specific features. This is necessary since object
models of different applications must be supported which may even in some cases not
be transparent to the users. Also, the process of selection of information from the
repository must be supported independent from the functionality and models of indi-
vidual applications. The engineers working in phase 2 must be able to query the re-
pository or the private workspace with functionality that reflects their specific needs
independent from specific applications and across the functionality of different appli-
cations. Such features of elements are modeled via an approach developed originally
in [4]. The original approach was adopted in [5] for problems related to Software
Configuration Management (SCM). In the context of this work it is called Feature
Logic and serves as the basis for a corresponding query language.
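The idea of selecting application-independent elements by their features, regardless of which application produced them, can be illustrated with a simple predicate filter. This is not the Feature Logic formalism itself, merely a sketch of feature-based selection; the feature names and sample data are invented.

def select(elements, **required_features):
    # Return the elements whose features match all required feature values,
    # independently of which application produced them.
    return [e for e in elements
            if all(e.get(name) == value for name, value in required_features.items())]

repository = [
    {"id": "e1", "discipline": "structure", "storey": 3, "type": "slab"},
    {"id": "e2", "discipline": "architecture", "storey": 3, "type": "wall"},
    {"id": "e3", "discipline": "structure", "storey": 4, "type": "slab"},
]
print(select(repository, discipline="structure", storey=3))   # -> element e1 only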
Unique links will be established between the elements and application specific ob-
jects. Not considered in this contribution but a matter of further research would be the
implementation of the elements proposed in this context as Industry Foundation
Classes (IFC) developed under the guidance of the International Alliance for Interop-
erability (IAI) Modeling Support Group [6].
Finally, much work has been done on the topics of Change Management and Revi-
sion Management. Most of this work has been done in collaborative software devel-
opment under the term Software Configuration Management - SCM (e.g. [7]). Major
projects with hundreds of contributors, thousands of files and millions of lines of code
are managed with Version Control Systems (VCS). Several of these systems were investigated for their suitability in engineering cooperation. An initial approach was based upon the software objectVCS [8]. Eventually, it was concluded that much of the
functionality offered by these systems can be used very well in the context of engi-
neering cooperation. The software Subversion [9] was consequently selected for the
purposes of this work [10]. It is based upon the concept of a central repository and an
additional set of several local environments – called Sandboxes. A Sandbox consists
of project data and additional versioning information required for the synchronization
with the central repository. It is stored in a specific hierarchy in the file system. Op-
erations required for that approach are given in Fig. 2.
[Fig. 2 (diagram): operations checkout, update, commit, merge and release between Repository and Sandbox; load and store between the Sandbox and the application Object model, which is processed by the application]
Fig. 2. Operations required for the Application with a specific Object model, for the Sandbox
and Repository
An application with a corresponding object model can be processed with the func-
tionality provided by the application which is further enhanced to load objects from
and store objects into a local Sandbox. The Sandbox may be connected to a centrally
organized Repository with functionality for checking out objects, for committing
objects into it and for updating objects in the Sandbox. Specific release states may be
defined for the Repository and it may be merged with another Repository.
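The operations of Fig. 2 map naturally onto a small workspace interface, sketched below. This schematic illustration of checkout, commit, update, load and store is not the Subversion-based implementation actually used; the class and method names are assumptions.

class Repository:
    # Central store keeping all committed states of the project data.
    def __init__(self):
        self.revisions = []              # each revision is a dict of objects

    def commit(self, objects):
        self.revisions.append(dict(objects))
        return len(self.revisions) - 1   # new revision number

    def checkout(self, revision=-1):
        return dict(self.revisions[revision]) if self.revisions else {}

class Sandbox:
    # Local workspace holding project data plus versioning information.
    def __init__(self, repository):
        self.repository = repository
        self.objects = repository.checkout()

    def store(self, key, obj):           # 'store' from the application object model
        self.objects[key] = obj

    def load(self, key):                 # 'load' into the application object model
        return self.objects[key]

    def commit(self):                    # push local work to the central repository
        return self.repository.commit(self.objects)

    def update(self):                    # fetch the latest central state
        self.objects.update(self.repository.checkout())

repo = Repository()
box = Sandbox(repo)
box.store("beam-1", {"span": 6.0})
box.commit()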
5 Implementation Concept
Based on the concepts outlined above the following proposal for an implementation
was developed for the support of synchronous engineering cooperation in civil and
building engineering projects (Fig. 3):
[Fig. 3 (diagram): a central project Repository connected to local Sandboxes]
The complete project data are stored and maintained in a Repository on a central
server accessible via Internet technology. Object versions are maintained by a version
control system (VCS). Element information needed for a specific purpose can be
selected with extended functionality of the Feature Logic.
A number of private workspaces are connected to an application and are able to
work independently from a connection to the central project data within a Sandbox
environment which contains objects, elements and version data.
The system architecture and implementation concept are explained in detail in [10].
The key elements of this concept are the ideas that an application should not just store
the latest version of an object but rather that it should store the version history of the
objects involved in a project. Also, in order to support independent engineering work, a Sandbox environment allows engineers to work independently of network restrictions. Finally, local Sandbox information may be synchronized with the central project information.
Applications operating on workspace information can either be specific implemen-
tations designed for such an environment or also commercially available products if
they satisfy certain requirements. The implications in utilizing the concept proposed
in conjunction with existing commercial applications are discussed in [11]. Such
systems must be built upon object-oriented technology with an individual object
model and they must provide an Application Programming Interface (API) in order to
extend the standard product by the functionality and commands required for the
workspace connections. An example would be the software AutoCAD with its API
called AutoCAD Runtime Extension (ARX).
6 Engineering Applications
The Open Source project CADEMIA [12] is an engineering platform that serves as an
ideal basis for a verification of the concepts outlined above.
Acknowledgements
The author gratefully acknowledges the financial support by the German Research
Foundation (Deutsche Forschungsgemeinschaft DFG) within the scope of the priority
program ‘Network-based Co-operative Planning Processes in Structural Engineering’.
References
1. Pahl, P.J., and Beucke, K., “Neuere Konzepte des CAD im Bauwesen: Stand und
Entwicklungen”. Digital Proceedings des Internationalen Kolloquiums über Anwendungen
der Informatik und Mathematik in Architektur und Bauwesen (IKM) 2000, Bauhaus-
Universität Weimar.
2. Katz, Randy H., “Towards a Unified Framework for Version Modeling in Engineering Da-
tabases”, ACM Computing Surveys, Vol. 22, No. 4, December 1990.
3. Firmenich, B., „CAD im Bauplanungsprozess: Verteilte Bearbeitung einer strukturierten
Menge von Objektversionen“, PhD thesis (2001), Civil Engineering, Bauhaus-Universität
Weimar.
4. Smolka, G., “Feature Constraints Logics for Unification Grammars“, The Journal of Logic
Programming (1992), New York.
5. Zeller, A., “Configuration Management with Version Sets“, PhD thesis (1997),
Fachbereich Mathematik und Informatik der Technischen Universität Braunschweig.
6. Liebich, T., “IFC 2x, Edition 2, Model Implementation Guide”, Version 1.7, Copyright
1996-2004, (2004), International Alliance for Interoperability.
7. Hass, A.M.J., “Configuration Management Principles and Practice”, The Agile software
development series. Boston[u.a.], (2003), Addison-Wesley.
8. Firmenich, B., Koch, C., Richter, T., Beer, D., “Versioning structured object sets using text
based Version Control Systems”, in Scherer, R.J. (Hrsg); Katranuschkov, P. (Hrsg),
Schapke, S.-E. (Hrsg.): CIB-W78 – 22nd Conference on Information Technology in Con-
struction: Institute for Construction Informatics, TU Dresden, Juli 2005.
9. Collins-Sussman, B., Fitzpatrick, B. W., Pilato, C.M, “Version Control with Subversion”,
Copyright 2002-2004, http://svnbook.red-bean.com/en/1.1/index.html (2004).
10. Beer, D. G., „Systementwurf für verteilte Applikationen und Modelle im
Bauplanungsprozess“, PhD thesis (2006), Civil Engineering, Bauhaus-Universität Weimar.
11. Beucke, K.; Beer, D. G., Net Distributed Applications in Civil Engineering: Approach and
Transition Concept for CAD Systems. In: Soibelman, L.; Pena-Mora, F. (Hrsg.): Digital
Proceedings of the International Conference on Computing in Civil Engineering
(ICCC2005), American Society of Civil Engineers (ASCE), July 2005, ISBN 0-7844-0794-0
12. Firmenich, B., http://www.cademia.org, (2006).
13. SUN, JavaTM 2 Platform, Standard Edition, v 1.5, API Specification, Copyright 2004 Sun
Microsystems, Inc.
14. Olivier, A.H., „Consistent CAD-FEM Models on the Basis of Object Versions and Bind-
ings”, International Conference on Computing in Civil and Building Engineering XI,
Montreal 2006.
The Effects of the Internet on Scientific Publishing –
The Case of Construction IT Research
Bo-Christer Björk
1 Introduction
Scientific communication has gone through a number of technology changes, which
fundamentally have changed the border conditions for how the whole system works.
The invention of the printing press was of course the first, and IT and in particular the
Internet the second. Currently we are witnessing a very fast change to predominantly
electronic distribution of scientific journal articles. Yet the full potential of this
change has not been fully utilised, due to the lack of competition in the area of journal
publishing, and the unwillingness of the major publishers to change their currently
rather profitable subscription-based business models. In the early 1990’s scattered
groups of scientists started to experiment with a radical new model, nowadays called
Open Access, which means that the papers are available for free on the Internet, and
that the funding of the publishing operations is done either using voluntary work, as in Open Source development or Wikipedia, or, lately, using author charges.
Originally scientific journals were published by scientific societies as a service to
their members. Due to the rapid growth in the number of journals and papers during
the latter half of the 20th century, the publication process was largely taken over by
commercial publishers. Due to the enormous growth in scientific literature a network
of scientific libraries evolved to help academics find and retrieve interesting items,
supported by indexing services and inter-library loan procedures.
This process worked well until the mid nineteen nineties. The mix between pub-
licly funded libraries on one hand and commercial publishers and indexing services
on the other was optimal, given the technological border condition. The quick emer-
gence of the Internet, where academics were actually forerunners as users, radically
changed the situation. At the same time there has been a trend of steadily rising sub-
scription prices (“the serials crisis”) and mergers of publishers. Today one publisher
controls 20 % of the global market. In reaction, a new breed of publications emerged,
published by scientists motivated not by commercial interests, but by a wish to fulfil
the original aims of the free scientific publishing model, now using the Internet to
achieve instant, free and global access. The author of this paper belongs to this cate-
gory of idealists, and has since 1995 acted as the editor-in-chief of one such publica-
tion (the Electronic Journal of Information Technology in Construction). He also participated in the EU-funded SciX project, which aimed at studying the overall process and at establishing a subject-based repository for construction IT papers.
There is widespread consensus that the free availability of scientific publications in
full text on the web would be ideal for science. Results from a study carried out by
this author and his colleague [1] clearly indicated that scientists prefer downloading
papers from the web to walking over to a library. Also, web material that is readily
available, free-of-charge is preferred to that which is paid for or subscription based
(Figure 1).
Fig. 1. Some results from a web-survey of the reading and authoring habits of researchers in
construction management and construction IT [2]. Material, which is available free-of-charge
on the web, is the most popular means for accessing scientific publications.
There is, however, a serious debate about the cost of scientific publishing on the
web [3]. It is clearly in the interest of commercial publishers to claim that web pub-
lishing is almost as expensive as ordinary paper based publishing, in order to justify
the increasingly expensive subscriptions. Advocates of free publishing cite case ex-
amples of successful endeavours where costs have been markedly lower [4]. While
commercial publishers state that the publishing costs per article are from 4000 USD
upwards, it is interesting to note that the two major open access publishers, commer-
cially operating BioMedCentral and the non-profit Public Library of Science, charge
authors a price of between 1000 and 1500 USD for publishing an article. Recently a
number of major publishers (Springer, Blackwell, Oxford University Press) have
announced possibilities for authors to open up individual articles at prices in the range
of 2500-3000 USD.
It is not only the publishing itself that is becoming a battleground between commercial interests and idealistic scientists. Since the emergence of database tech-
nology in the 1960’s a number of commercial indexing services have emerged, which
libraries subscribe to. Traditionally these have relied on manual and/or highly structured input of items to be included, a costly and also selective (and thus discriminatory) process. Now scientists are building automated web search engines which use
the same web crawler techniques as used by popular tools, such as Google, and which
apply them to scientific publications published in formats such as PDF or postscript.
These are called harvesters and rely on the tagging of Open Access content using a particular standard (OAI). If technically successful, such engines can be run at very low cost and thus be made available at no cost. The combination of free search en-
gines and eprint repositories is providing what is called the green route to Open Ac-
cess. Currently around 15 % of journal papers are estimated to be available via this
route.
A repository for construction IT papers was set up as part of the EU-funded SciX
project (http://itc.scix.net/). Currently the repository houses some 1000+ papers, with
the bulk consisting of the proceedings of the CIB W78 conference series going as far
back as 1988. This was achieved via digitising the older proceedings. The experi-
ences concerning the setting up of the repository are described more in detail else-
where [5].
The overall experience with the ITC repository is mixed. Ideally agreements
should have been made with all major conference organisers in our domain for up-
loading their material, at least in retrospect. This was, however, not possible due to copyright restrictions, the ties between conference organisers and commercial publishers, fears of losing conference attendees or society members if papers were made
freely available, etc. As a concrete example, take this conference: once this author has signed the copyright agreement with Springer, he is still allowed to post a copy of the paper on his personal web pages (the publisher recommends waiting 12 months), but it would be illegal to post a copy of the paper to the ITC repository.
Due to problems like this, the repository has not reached the hoped-for critical mass. On the other hand, the technical platform built for the repository has success-
fully been used for running a number of repositories in other research areas. The pa-
pers in the repository are also easy to find via general search engines. For instance a
Google search with the following search terms: “Gielingh AEC reference model” will
show a link to the paper shown in figure 2.
Fig. 2. In the setting up of the ITC Open Access repository the conference series from CIB
W78 was scanned as far back as 1988. Some of the papers are important contributions to our
discipline which otherwise would be very difficult to get hold of.
3 Benchmarking ITcon
ITcon has recently been benchmarked against a group of journals in the field of con-
struction information technology [6], [7]. This sub-discipline numbers a few hundred
academics worldwide, mostly active in the architectural and civil engineering depart-
ments of universities as well as in a few government research institutes. It is a rela-
tively young field where speed of publication should be a very important factor, due
to the fast developments in the technology. Despite the fact that the field is relatively
small there are half-a-dozen peer reviewed journals specialised in the topic, most with
circulations in the hundreds rather than exceeding one thousand copies. In 2004 these
journals published 235 peer-reviewed articles. The benchmarking at this stage con-
centrated on factors which were readily available or could be calculated from journal
issues. For some of the factors, journals in the related field of construction manage-
ment, which often publish papers on construction IT, were also studied to get a wider
perspective. The following factors were studied:
The subscription prices are easily available from the journal web sites. The institu-
tional subscriptions to electronic versions are by far the most important and were
used. In order to make the results comparable the yearly subscription rates were di-
vided by the number of scientific articles. The price per article ranged from 7.1 to
33.3 euro (Figure 3). Two of the journals compared were open access journals.
Readership is one factor for which it is very difficult to obtain data. First, the number of subscribers, in particular institutional subscribers, does not equate to the number of readers. Second, most journals tend to keep information about the number of subscribers as a trade secret, since low numbers of subscribers might scare off potential submitting authors.
Society published journals tend on the average to have much lower prices. In eco-
nomics the price ratio, per article, between society journals and purely commercial
journals, is 1 to 4 [8]. In practice this means that commercial journals have often
opted for much smaller subscription bases where their overall profits are maximised.
Society journals often offer very advantageous individual subscription to members,
which tends to increase the readership. Consider for instance the above figure, in
which ECAM, CACIE and CME are published by big commercial publishers, and
JCCE by a Society. Also IJAC is essentially a journal published by the eCAADe
society, with its sister organisation on other continents.
Data on the downloads of published papers by readers could only be studied for
one of the journals. It would be a very useful yardstick to compare journals. For ITcon
the web download figures from the past three years were used. In order to make the
[Fig. 3 (bar chart): subscription price per article for the journals compared — 33.3, 26.1, 21.3, 14.1, 10.6, 8.6 and 7.1 euro, and 0.0 for the two open access journals]
data usable, downloads by web search engines and other non-human users were as far
as possible excluded (which resulted in a reduction of the figures by 74%). The
downloads of the full text PDFs were counted, since this would come closest to actual
readings. Over the three-year period each of the 120 published papers was on the
average downloaded 21.2 times per month (with a spread of 4.7–47.3). In addition to
the number of average downloads per month, the total readership for each article over
a longer period, as well as differences in level of readership between articles, are
interesting (Figure 4).
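The normalisation described above (robot traffic excluded, full-text downloads counted, then averaged per paper and per month on the web) amounts to simple arithmetic, illustrated below with made-up numbers; the figures do not reproduce the ITcon data.

def mean_monthly_downloads(raw_downloads, robot_share, months_online):
    # Average human full-text downloads per paper and per month on the web;
    # raw_downloads and months_online are per-paper lists of equal length.
    human = [d * (1.0 - robot_share) for d in raw_downloads]
    return sum(h / m for h, m in zip(human, months_online)) / len(raw_downloads)

# Hypothetical example: three papers, 74% of logged downloads caused by robots.
print(mean_monthly_downloads([2000, 900, 3100], robot_share=0.74,
                             months_online=[30, 12, 36]))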
Three of the journals are indexed in the Science Citation Index but with rather low
impact factors (0.219 – 0.678) and none of the journals in the whole sample is clearly
superior to the others in prestige or scientific quality. This is in contrast to many other
scientific areas, where there often is one journal with a very rigorous peer review and
low acceptance rate which is clearly superior in quality.
It is relatively straightforward to calculate the geographic spread of journal authors
since the affiliations of the authors are published with the articles. The actual analysis
was done on a country-by-country basis from articles published in 2001-2005. Thus
European authors had 28 % of authorships, North American 37 % and Asian 30 %. Europe, from where 28 % of the articles stem, has been divided into four regions (UK, Central
1 ECAM = Engineering, Construction and Architectural Management, CACIE = Computer
Aided Civil and Infrastructure Engineering, CME = Construction Management and Econom-
ics, CI = Construction Innovation, AIC = Automation in Construction, IJAC = International
Journal of Architectural Computing, JCCE = Journal of Computing in Civil Engineering, It-
con = Electronic Journal of Information Technology in Construction, IJDC=International
Journal of Design Computing.
[Fig. 4 (scatter plot): number of months on the web against total number of downloads per paper; the most downloaded paper is labelled Amor et al 2002]
Fig. 4. Total number of downloads per IT-con paper over a three-year period as a function of
the number of months on the web
The more precise figures for the individual journals show a wide variation [6]. For instance, JCCE had 67 % North
American authors and AIC 49 % Asian authors. ITcon and the Open Access mode of publishing have been embraced in particular by European authors (69 %), less so by North Americans (20 %), and only marginally by Asian authors (8 %).
The speed of publication (from submission to final publication of accepted papers)
is an important factor for submitting authors. For ITcon the full publication delays were calculated from available databases. For some journals complete or partial information could be gathered from the submission and acceptance dates posted with the articles, and the publication delay ranged from 7.6 to 21.8 months. For other journals this calculation was not possible. The figure for the IEEE journal Transac-
tions on Geoscience and Remote Sensing has been reported by Raney [9].
The recent study on open access publishing performed by the Kaufman-Wills
Group [10] provides statistics on acceptance rates for around 500 journals from dif-
ferent types of publishers, covering both subscription based and open access journals.
Thus the average acceptance rate for the subscription-based journals published by the
Association of Learned and Professional Society Publishers was 42%. The average
for open access journals indexed by the Directory of Open Access Journals (DOAJ)
was 64%, but if one excludes two large biomedical open access publishers (ISP and
BioMedCentral) the average was 55%. Construction Management and Economics has
made quite detailed statistics on submissions and acceptance rates available on its
web site [11]. Over the period 1992-2004 the acceptance rate was 51%. Also the
ASCE journal for Computing in Civil Engineering has recently reported its
[Fig. 5 (bar chart): publication delay in months — ITcon 6.7, CI 18, AIC 18.7]
Fig. 5. The speed of publication (from submission to final publication of accepted papers)
acceptance rate to be 47 % [12]. The overall acceptance rate for ITcon was calculated from the records and proved to be 55%, which is very close to the DOAJ average excluding the two biomedical publishers.
4 Conclusions
All in all the experience with ITcon has shown that it is possible to publish a peer-
reviewed journal which is on par with the other journals in its field in terms of scien-
tific quality, using an Open Source like operating model, which requires neither
subscriptions nor author charges. As the authorship study shows, ITcon has a globally well-balanced range of authors. ITcon outperforms its competitors in terms of
speed of publication. Concerning the total amount of readership it is impossible to
obtain comparable figures for other journals. The analysis of journal pricing does,
however, indicate that the pricing of some journals is so high that the number of sub-
scribers is likely to be low.
The one parameter where ITcon still lags many of its competitors is “prestige”, in terms of the acceptance of an ITcon article as an equally valuable item when comparing CVs for tenure purposes, research assessment exercises, etc. Here inclusion in the SCI, or
having a well known society or commercial publisher, still makes a difference. Only
time and more citations can in the long run remedy this situation.
References
1. Björk, B.-C., Turk, Z.: How Scientists Retrieve Publications: An Empirical Study of How
the Internet Is Overtaking Paper Media. Journal of Electronic Publishing: 6(2),(2000).
2. Björk, B-C., Turk, Z.: A Survey on the Impact of the Internet on Scientific Publishing in
Construction IT and Construction Management, ITcon Vol. 5, pp. 73–88 (2000).
http://www.itcon.org/
3. Tenopir, C., King, D.: Towards Electronic Journals, Realities for Scientists, librarians, and
Publishers, Special Libraries Association, Washington D.C. 2000.
4. Walker, T. J.: Free Internet Access to Traditional Journals, American Scientist, 86(5) Sept-
Oct 1998, pp. 463–471. http://www.sigmaxi.org/amsci/articles/98articles/walker.html
5. Martens, B., Turk, Z., Björk, B.-C.: The SciX Platform - Reaffirming the Role of Profes-
sional Societies in Scientific Information Exchange, EuropIA 2003 Conference, Istanbul,
Turkey.
6. Björk, B.-C., Turk, Z., Holmström, J.: ITcon - A longitudinal case study of an open access
scholarly journal. Electronic Journal of Information Technology in Construction, Vol 10.
pp. 349–371 (2005). http://www.itcon.org/
7. Björk, B.-C., Holmström, J (2006). Benchmarking scientific journals from the submitting
author’s viewpoint. Learned Publishing. Vol 19 No. 2, pp. 147–155.
8. Bergstrom, C. T., Bergstrom, T. C (2001). The economics of scholarly journal publishing.
http://octavia.zoology.washington.edu/publishing/
9. Raney, K.: Into the Glass Darkly. Journal of Electronic Publishing, 4(2) (1998).
http://www.press.umich.edu/jep/04-02/raney.html
10. Kaufman-Wills Group, The facts about Open Access, ALPSP, London, 2005.
11. Abudayyeh, O., DeYoung, A., Rasdorf, W., Melhem, H.: Research Publication Trends and Topics in Computing in Civil Engineering. Journal of Computing in Civil Engineering, 20(1) (2006) 2–12
12. CME: Home pages of the journal Construction Management and Economics. http://www.tandf.co.uk/journals/pdf/rcme_stats.pdf
Automated On-site Retrieval of Project Information
Ioannis K. Brilakis
1 Introduction
Field construction tasks like inspection, progress monitoring and others require access
to a wealth of project information (visual and textual). Currently, site engineers,
inspectors and other site personnel, while working on construction sites, have to
spend a lot of time in manually searching piles of papers, documents and drawings to
access the information needed for important decision-making tasks. For example,
when a site engineer tries to determine the sequence and method of assembling a steel
structure, information on the location of each steel member in the drawings must be
collected, as well as the nuts and bolts needed for each placement. The tolerances
must be reviewed to determine whether special instructions and techniques must be
used (i.e. for strict tolerance limits) and the schedule must be consulted to determine
the expected productivity and potential conflicts with other activities (e.g. for crane
usage).
All this information is usually scattered across different sources and often conflicts with expectations or other information, which makes the prompt and competent retrieval of all the relevant textual, visual or database-structured data even more important. However, manually searching for relevant information is a monotonous, time-consuming process, while manual classification [1], which does help speed up the
search process, only transfers that problem to an earlier stage. As a possible alternative to user-based retrieval, this paper builds on previous modeling, virtual design and collaboration research efforts (i.e. [2]) and presents a computer-vision-based approach that, instead of requiring browsing through detailed drawings and other paper-based media, can automatically retrieve design, specification and schedule information based on the camera's field of view and allow engineers to interact with it directly in digital format.
The computer vision perspective of this approach is based on a multi-feature
retrieval framework that the author has previously developed [3]. This framework
consists of complementary techniques that can recognize construction materials [4; 5]
and shapes [6] that, when augmented with temporal and/or spatial information, can
provide a robust recognition mechanism for construction-related objects on-site. For
example, automatically detecting at a certain date and time (temporal) that a linear
horizontal element (shape) made out of red-painted steel (material) is located on the
south-east section of the site (location) is in most cases sufficient information to narrow down the possible objects matching such a description to a small and easily
manageable number.
This paper initially presents previous work by the author that serves as the basis for the computer vision perspective of this research, and continues with the overall approach that was designed and the relationship between its various components. Conclusions and future work are then presented. At this stage, it is important to note that this work is a collaborative effort with the National Institute of Standards and Technology (NIST) in steel structure inspection, and therefore all case studies and examples are focused on steel erection.
2 Previous Work
The following two sub-sections present the findings of recent research efforts of the
author in construction site image classification based on the automatic recognition of
materials and shapes within the image content [3; 4; 5; 6], which is the basis for the
proposed on-site project information retrieval approach that will be presented in the
following sections. The purpose is to familiarize the reader with some of the main
concepts used in the mechanics of this research.
The objective of this research [4; 6] was to devise methods for automating the search
and retrieval of construction site related images. Traditional approaches were based
on manual classification of images which, considering the increasing volume of
pictures in construction and the usually large number of objects within the image
content, is a time-consuming and tedious task, frequently avoided by site engineers.
To solve this problem, the author investigated [7] using Content Based Image
Retrieval (CBIR) tools [8; 9; 10] from the fields of Image and Video Processing [11]
and Computer Vision [12]. The main concept of these tools is that entire images are
matched with other images based on their features (e.g. color, texture, structure).
This investigation revealed that CBIR was not directly applicable to this problem.
The objective of this research [6] was to enhance the performance of the previously presented material-based image classification approach by adding the capability of recognizing the shapes of construction objects within the image content.
The linearity and (if linear) orientation of the “object’s spine” of each cluster is
evaluated. Both are determined by computing the maximum cluster dimension (MCD)
and the maximum dimension along the perpendicular axis of MCD (PMCD) (Fig. 2).
These dimensions are then used to determine the linearity and orientation under three assumptions: (i) if MCD is significantly larger than PMCD, then the object is linear; (ii) if the object is linear, then the tangent of the MCD edge points represents its direction on the image plane, the object's "spine"; (iii) if the computed direction is within 45 degrees of the vertical/horizontal image axis, then the linear object is a column/beam,
respectively. This method was tested on the same collection of more than a thousand
images from several projects. The results showed that images can be successfully
classified according to the construction shapes visible within the image content.
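As an illustration of the shape test described above, the following minimal sketch (in Python) classifies one cluster of pixel coordinates using the MCD/PMCD comparison and the 45-degree rule; the function name, the 3:1 linearity ratio and the use of plain coordinate lists are assumptions made here for clarity, not details taken from the original implementation.

    import math

    def classify_cluster(points, linearity_ratio=3.0):
        # points: list of (x, y) pixel coordinates belonging to one cluster.
        # Maximum cluster dimension (MCD): the largest point-to-point distance.
        best = (points[0], points[0], 0.0)
        for i, p in enumerate(points):
            for q in points[i + 1:]:
                d = math.hypot(q[0] - p[0], q[1] - p[1])
                if d > best[2]:
                    best = (p, q, d)
        (ax, ay), (bx, by), mcd = best
        if mcd == 0.0:
            return "non-linear"
        # Spine direction (unit vector along the MCD edge) and its perpendicular.
        ux, uy = (bx - ax) / mcd, (by - ay) / mcd
        px, py = -uy, ux
        # PMCD: extent of the cluster along the axis perpendicular to the spine.
        proj = [(x - ax) * px + (y - ay) * py for x, y in points]
        pmcd = max(proj) - min(proj)
        if mcd < linearity_ratio * max(pmcd, 1e-9):
            return "non-linear"
        # Within 45 degrees of the vertical image axis -> column, otherwise beam.
        angle_from_vertical = math.degrees(math.atan2(abs(ux), abs(uy)))
        return "column" if angle_from_vertical <= 45.0 else "beam"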
The as-built objects are then recognized based on Euclidean distance matching.
Each attribute (texture response, shape directionality, etc) in the multi-dimensional
material and shape signature represents a different dimension of comparison. By
comparing the distance of each attribute of the extracted signatures with the
corresponding attributes of the object types in the 3D CAD model, the similarity of
each signature with each object type in the model can be represented mathematically.
The design object type with the highest similarity (least distance) is then selected to
represent the recognized object.
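The matching step can be sketched as follows, assuming each extracted signature and each object type in the CAD model has been reduced to a fixed-length numeric vector; the attribute values and type names below are purely hypothetical.

    import math

    def match_object_type(signature, type_signatures):
        # Pick the design object type whose reference signature lies closest
        # (smallest Euclidean distance) to the extracted as-built signature.
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(type_signatures, key=lambda t: distance(signature, type_signatures[t]))

    # Hypothetical two-attribute signatures: (material texture score, verticality).
    types = {"steel beam": [0.9, 0.1], "steel column": [0.9, 0.9], "concrete slab": [0.2, 0.1]}
    print(match_object_type([0.85, 0.15], types))   # -> steel beam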
The position where the camera was located on the site and the direction in which it
was facing are useful in narrowing down the possible construction objects that might
match the detected object and its material and shape information [3]. This is where
off-the-shelf GPS cameras can be really useful since the location and orientation
information that they provide is enough to determine the camera’s line-of-sight and
the corresponding viewing frustum. This information, along with a camera coordinate
system that is calibrated with the coordinate system used in creating the design of the
constructed facility, can then assist in more accurately matching with the design
objects that are expected to be in the camera’s view. Calibration is essential in this
case, since CAD designs typically use a local coordinate system.
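A plan-view version of this narrowing-down step is sketched below; it assumes the GPS camera's position and heading have already been calibrated into the design coordinate system, and the field-of-view angle, range limit and object coordinates are illustrative values, not parameters from the cited work.

    import math

    def objects_in_view(camera_xy, heading_deg, design_objects, fov_deg=60.0, max_range=50.0):
        # design_objects maps object id -> (x, y) centroid in the design coordinate system.
        cx, cy = camera_xy
        visible = []
        for obj_id, (x, y) in design_objects.items():
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist > max_range:
                continue
            bearing = math.degrees(math.atan2(dy, dx))
            # Smallest angular difference between the camera heading and the object bearing.
            off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
            if off_axis <= fov_deg / 2.0:
                visible.append(obj_id)
        return visible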
In this approach, the object attributes are enhanced with camera position and orientation information, and the Euclidean distance matching is repeated. The difference
in this step is that specific objects are sought instead of generic object types. For
example, while any steel beam is sufficient to determine the type of an as-built steel
beam object in the previous step, the specific steel beam that it corresponds to in the
design model is needed in this case. The CAD models used for these comparisons
were based on the CIS/2 standard that provides data structures for multiple levels of
detail ranging from frames and assemblies to nuts and bolts in the structural steel
domain (Fig. 5). The CIS/2 standard is a very effective modeling standard and was
successfully deployed on a mobile computing system at NIST [14].
Fig. 5. CIS/2 product models: (left) Structural frame of large, multistory building and (right)
Connection details with bolts [14]
After minimizing the number of possible matches, the next step is to provide the
user with the relevant design information needed. In this case, the information related
to each possible match is acquired from the model objects and isolated. This way, the
user need only browse through small subsets of information (e.g. a few drawings, a few specification entries, a segment of the schedule).
4 Conclusions
Designing and implementing a pattern recognition model that allows the identification of construction entities and materials visible in a camera's field of view at a given time was the basis for this ongoing research work. The long-term goal is to reduce the cost and effort currently needed for the search and retrieval of project information by using the automatically detected visual characteristics of project-related items to determine the possibly relevant information that the user needs. Thus, the innovative aspects of this research lie in the ability to automatically identify and retrieve project information that is important for decision-making in inspection and other on-site tasks. To achieve this, a new methodology was developed that allows rapid identification of construction objects and subsequent retrieval of relevant project information for field construction, inspection, and maintenance. The merit of its technical
approach lies in taking advantage of the latest developments in construction material
and object recognition to provide site personnel with automated access to both as-
built and as-designed project information. Automated retrieval of information can
also, for example, serve as an alerting mechanism that can compare the as-built and as-
designed information and notify the site (or office) personnel of any significant
deviations, like activities behind schedule and materials not meeting the specifications.
Reducing the human intervention required by this tedious and time-consuming process is also an important expected benefit of this work.
References
1. Abudayyeh, O.Y. (1997) "Audio/Visual Information in Construction Project Control,"
Journal of Advances in Engineering Software, Volume 28, Number 2, March 1997
2. Garcia, A. C. B., Kunz, J., Ekstrom, M. and Kiviniemi, A., “Building a project ontology
with extreme collaboration and virtual design and construction”, Advanced Engineering
Informatics, Vol. 18, No 2, 2004, pages 71-85.
3. Brilakis, I. and Soibelman, L. (2006) "Multi-Modal Image Retrieval from Construction
Databases and Model-Based Systems", Journal of Construction Engineering and
Management, American Society of Civil Engineers, in print
4. Brilakis, I., Soibelman, L. and Shinagawa, Y. (2005) "Material-Based Construction Site
Image Retrieval" Journal of Computing in Civil Engineering, American Society of Civil
Engineers, Volume 19, Issue 4, October 2005
5. Brilakis, I., Soibelman, L., and Shinagawa, Y. (2006) "Construction Site Image Retrieval
Based on Material Cluster Recognition", Journal of Advanced Engineering Informatics,
Elsevier Science, in print
6. Brilakis, I., Soibelman, L. (2006) "Shape-Based Retrieval of Construction Site
Photographs", Journal of Computing in Civil Engineering, in review
7. Brilakis, I. and Soibelman, L. (2005) "Content-Based Search Engines for Construction
Image Databases" Journal of Automation in Construction, Elsevier Science, Volume 14,
Issue 4, August 2005, Pages 537-550
8. Rui, Y., Huang, T.S., Ortega, M. and Mehrotra, S. (1998) “Relevance Feedback: A Power
Tool in Interactive Content-Based Image Retrieval”, IEEE Tran on Circuits and Systems
for Video Technology, Vol. 8, No. 5: 644-655
9. Natsev, A., Rastogi, R. and Shim, K. (1999) “Walrus: A Similarity Retrieval Algorithm for
Image Databases”, In Proc. ACM-SIGMOD Conf. On Management of Data (SIGMOD
’99), pages 395-406, Philadelphia, PA
10. Zhou, X.S. and Huang, T.S. (2001) “Comparing Discriminating Transformations and SVM
for learning during Multimedia Retrieval”, ACM Multimedia, Ottawa, Canada
11. Bovik, A. (2000) “Handbook of Image and Video Processing”. Academic Press, 1st edition
(2000) ISBN:0-12-119790-5
12. Forsyth, D., and Ponce, J. (2002) “Computer Vision - A modern approach”, Prentice Hall,
1st edition (August 14, 2002) ISBN: 0130851981
13. Shin, S. and Hryciw, R.D. (2004) "Wavelet Analysis of Soil Mass Images for Particle Size
Determination" Journal of Computing in Civil Engineering, Vol. 18, No. 1, January 2004,
pp. 19-27
14. Lipman, R. (2002). "Mobile 3D Visualization for Construction", Proceedings of the 19th
International Symposium on Automation and Robotics in Construction, 23-25 September
2002, Gaithersburg, MD
Intelligent Computing and Sensing for Active Safety on
Construction Sites
1 Introduction
According to the United States Bureau of Labor Statistics’ 2004 Census of Fatal Occu-
pational Injuries (CFOI) study, out of a total of 1,224 on-the-job fatalities that occurred
in the construction industry, accidents involving heavy equipment operation (e.g. transportation accidents and contact incidents with objects and equipment) represented
about 45% [1]. Clearly, attention to the safety issues surrounding heavy equipment
operation plays an important role in reducing fatalities. However, since most construction sites are cluttered with obstacles, and heavy equipment operation relies on human operators, it is virtually impossible to avoid the general lack of awareness of
work-site-related hazards and the relative unpredictability of work-site environments
[2]. With the growing awareness of the risks construction workers face, the demand for
automated safety features for heavy equipment operators has increased.
The main objective of the research presented here is to develop a framework and
efficient algorithms for obstacle avoidance and path planning which have the potential
not only to prevent collisions between heavy equipment vehicles and other on-site
objects, but also to allow autonomous heavy equipment to move to target positions
quickly without incident. A research prototype laser sensor mounted on heavy equip-
ment can monitor both moving and static objects in an obstacle-cluttered environment
[3] [4]. From such a sensor’s field of view, a real-time three-dimensional modeling
method can quickly extract the most pertinent spatial information of a job site,
enabling path planning and obstacle avoidance. Beyond generating an efficient and
effective real-time 3D modeling approach, the proposed framework is expected to
contribute to the development of active safety features for construction job sites.
First, the world model and the sensor’s field of view need to be divided into a 3D grid
system. When dividing the entire space into a 3D grid system, it is important to first
determine the grid size. If the tracking of small objects is required, a high resolution
grid map should be used. However, when higher resolution grids are used, the proc-
essing time is greater, and because noise becomes more prevalent, results are gener-
ally poorer. Therefore, the effective cell size should be chosen in accordance with the
local environment’s type and size, keeping in mind the incoming data processing
capability of the hardware.
After the grid map is built, the sensor range data can be plotted into the cells of the
grid. Each cell of the grid can have zero, single, or multiple range points. Once each
cell meets a certain threshold count of range points, its center is filled with the value 1
and can be called occupied. All other cells which have fewer range points than the
threshold count, such as zero-occupied or only one-occupied, are considered to hold
an extreme range value and fall into the category of noise. For example, if the threshold value for counting cells as occupied is three, only cells with at least three range points are considered occupied; cells with fewer than three range points are considered noise. In this case, occupied cells have the value 1, and noise cells have the value 0.
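A minimal sketch of this first-level filtering is given below, assuming the range data arrive as (x, y, z) points in metres; the cell size and threshold defaults simply echo the user-defined parameters mentioned in the text.

    from collections import defaultdict

    def build_occupancy_grid(range_points, cell_size=0.10, count_threshold=3):
        # Count range points per grid cell; a cell becomes occupied (value 1)
        # only when it reaches the threshold count, everything else is noise.
        counts = defaultdict(int)
        for x, y, z in range_points:
            cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
            counts[cell] += 1
        return {cell for cell, n in counts.items() if n >= count_threshold}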
A reliable way of reducing the number of points in cells without losing valuable
data is important for noise treatment and makes for faster image processing speed. If
the above-mentioned noise removal, which is based on counting range points, is con-
sidered the first level of noise removal, the second level of noise removal only deals
with occupied cells which have a value of 1. Second-level noise can happen when
occupied cells exist alone in the 3D space – cells that are actually empty but that pro-
ject a virtual image. To safely eliminate single-occupied noise cells, their surrounding
neighbor cells should be investigated. If a certain number of neighbor cells around
these cells are also occupied, the original value is kept as a value 1. If not, the original
value is rejected as noise. All the above parameters (grid size, range point threshold
value, and neighbor threshold value) are user input data. Many different sets of grid
mapping parameters are available; their variety helps users find modeling conditions
best-suited for the real environment.
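The second level of noise removal can be sketched in the same style, keeping an occupied cell only if enough of its 26 surrounding cells are also occupied; the default neighbour threshold of four matches the setting quoted later for the simulation, but the function itself is an illustrative reconstruction rather than the original code.

    def remove_isolated_cells(occupied, neighbor_threshold=4):
        # occupied: set of (i, j, k) cell indices produced by build_occupancy_grid.
        offsets = [(dx, dy, dz)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                   if (dx, dy, dz) != (0, 0, 0)]
        cleaned = set()
        for (i, j, k) in occupied:
            neighbors = sum((i + dx, j + dy, k + dz) in occupied for dx, dy, dz in offsets)
            if neighbors >= neighbor_threshold:
                cleaned.add((i, j, k))      # keep the original value 1
        return cleaned                       # isolated 'virtual image' cells are rejected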
A set of occupied cells can be made to represent one object by applying a cell clus-
tering method. In this research, a nearest neighbor clustering algorithm [7] was
adapted by following several steps. First, the positions of all occupied cells were compared with each other, and if the distance between two cells was less than a given threshold value, the two cells were assigned to the same cluster. Conversely, if the distance was larger than the threshold value, the two cells belonged to different clusters. This cell-to-cell comparison was iterated over all occupied cells and, as a result, all cells were assigned to the correct clusters.
After grouping occupied cells, cluster information such as the center of gravity
value of each cluster and the cluster size was determined. This cluster information is
valuable for tracking objects and for the path planning of objects because knowing the
center of gravity value and the cluster size is basic to calculating the conditions of
moving objects like velocity and acceleration vectors.
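The clustering and the subsequent extraction of cluster information can be sketched as a single-linkage grouping over the occupied cells; the distance threshold and the union-find bookkeeping are implementation choices assumed here for illustration, not details taken from reference [7].

    def cluster_cells(occupied, distance_threshold=2.0):
        cells = list(occupied)
        parent = list(range(len(cells)))        # each cell starts as its own cluster

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        # Compare every pair of occupied cells; merge the clusters of nearby cells.
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                d2 = sum((a - b) ** 2 for a, b in zip(cells[i], cells[j]))
                if d2 <= distance_threshold ** 2:
                    parent[find(i)] = find(j)

        groups = {}
        for i, cell in enumerate(cells):
            groups.setdefault(find(i), []).append(cell)

        # Cluster information: centre of gravity and size (number of cells).
        return [{"center_of_gravity": tuple(sum(c[k] for c in members) / len(members)
                                            for k in range(3)),
                 "size": len(members)}
                for members in groups.values()]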
4 Path Planning
The ultimate purpose of real-time obstacle detection and environmental modeling is to
plan a collision-free path under the real-world constraints of a job site, and the
planned path represents the shortest, safest, and most visible path [8] [9] [10]. The
first step of the proposed path planning algorithm is to determine a starting position,
an ending position, and interim path nodes within the static environment, with safety
margins established around static objects. Then the interim nodes are revised as the
dynamic object’s moving conditions are tracked according to a dynamic path tree
algorithm. This dynamic path tree algorithm uses dynamically allocated points
through real-time searching, sensing, and reasoning in the environment. This algo-
rithm is able to find the visible points of any local position in the environment and, from that data, can plan a collision-free path and motion trajectory by projecting an-
gles to partition the obstacle-space. By giving the node-with-no-obstacles state a higher priority, this algorithm chooses the lowest-cost state as the discrete goal and
keeps iterating this goal-oriented action until the mobile vehicle reaches its planned
destination.
Fig. 1. Task and interim node settings
Fig. 2. Created discrete paths
autonomous vehicle’s position and the moving object’s position at a certain time t are
determined. Then, the distance between the two positions are calculated to figure out
whether the autonomous vehicle’s path is influenced by the moving object. If the
distance is larger than a safety threshold, there is no danger of the vehicle colliding
with the moving object. However, if the distance is smaller than the threshold, one
more node should be added onto the map to avoid collision (Figure 3). After adding a
new node, a new path is created and designated as New_Path(Starting, New,
Dist_New). The previous path, Path 1(S, 1, Dist 1), is deleted from the set of discrete
paths and New_Path replaces Path 1. Once a new node is added, it is necessary to
repeat the entire path creation process, incorporating the new set of nodes. The proc-
ess of creating discrete paths and considering dynamic objects should be repeated
until all discrete paths become collision-free paths.
The final step of the algorithm is to calculate a shortest path. All possible paths
from the starting position to the goal position are considered and their total travel
distances are stored for comparison with each other. Finally, the shortest path for the
autonomous vehicle is determined from among all possible trajectories. This selected
path allows the autonomous vehicle to reach the target position in the least amount of
time without any collision, even within an obstacle-cluttered environment (Figure 4).
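The two final steps, revising a path around a tracked moving object and then selecting the shortest collision-free trajectory, can be sketched as follows; the constant-velocity prediction, the fixed detour offset and the node-level collision test are simplifications assumed here to keep the example compact, not the exact mechanics of the dynamic path tree algorithm.

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def make_collision_free(path, vehicle_speed, obj_pos, obj_velocity, safety_threshold,
                            detour_offset=(0.0, 2.0)):
        # If the vehicle would pass a node closer to the moving object than the
        # safety threshold, insert one extra node (the New_Path of the text).
        travelled = 0.0
        for a, b in zip(path, path[1:]):
            travelled += dist(a, b)
            t = travelled / vehicle_speed                      # time the vehicle reaches node b
            obj_at_t = (obj_pos[0] + obj_velocity[0] * t, obj_pos[1] + obj_velocity[1] * t)
            if dist(b, obj_at_t) < safety_threshold:
                i = path.index(b)
                new_node = (b[0] + detour_offset[0], b[1] + detour_offset[1])
                return path[:i] + [new_node] + path[i:]        # revised, collision-free path
        return path                                            # already collision-free

    def shortest_path(candidate_paths):
        # Among all collision-free candidate paths, keep the smallest total travel distance.
        return min(candidate_paths, key=lambda p: sum(dist(a, b) for a, b in zip(p, p[1:])))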
5 Simulation Results
With the proposed real-time 3D modeling and path planning algorithms providing the
virtual model environment, a computer simulation was generated using the C++ pro-
gramming language in Microsoft Visual Studio .NET 2003. This experimental envi-
ronment was constructed in the FSCAL. The experimental environment consisted of a
3D video range camera sensor (FlashLADAR), a static box, a moving wire-controlled
cart transporting a vertically mounted pipe, and a background wall.
The simulation held five basic assumptions: (1) Path planning is based on the two-
dimensional approach. (2) An autonomous mobile vehicle first plans its collision-free
paths based on a path planning algorithm in a static position, and then starts moving
with a constant forwarding velocity (cm/sec). (3) A moving cart transporting a pipe
moves with a constant forwarding velocity (cm/sec), and no acceleration. For the path
planning simulation, only four frames captured within 0.14 seconds are used to calcu-
late the moving object’s velocity. The autonomous vehicle waits until four frames are
captured to avoid measuring the initial acceleration of the moving object. 0.14 sec-
onds is also a short enough time span not to cause meaningless idling time of the
autonomous vehicle. (4) The 0.14-second time span is also enough time for the vehi-
cle to update image frames while it is moving to its target position. While the pro-
posed path planning algorithm allows the vehicle to update local image frames every
0.14 seconds, in the current simulation, it trusts the planned path without any new
image update while it is moving. (5) All frames are captured from a static sensor.
Four major types of data were saved from the 3D modeling process: occupancy
grid data, cluster information, a sensor position, and velocity vectors. These simula-
tion results were exported into Matlab software to show how well the saved data rep-
resented the local environment as a 3D image. All results were based on a 10cm oc-
cupancy grid size, on a three-point threshold for determining occupied cells, and on
occupied cells having four occupied neighbors to establish their validity.
First, velocity vectors of moving objects were calculated from frame captures using
the Flash LADAR. Next, an initial sensor position was set as a starting position for
the autonomous equipment. These data came from the results of the occupancy grid
processing. After the local conditions were considered, the goal position and the
safety threshold value were set. The simulation results showed that, when the
autonomous vehicle’s speed was low, the shortest path was influenced by the moving
object’s position, and as a result, a new node was incorporated into the shortest path
(Figure 5). However, when the autonomous vehicle moved at a high speed, the posi-
tion of the moving object did not intersect the shortest path; therefore, the shortest
path did not incorporate any new node (Figure 6).
Fig. 5. Results with 70 cm/sec – shortest path
Fig. 6. Results with 150 cm/sec – shortest path
Acknowledgements
This material is based in part upon work supported by the National Science Founda-
tion under Grant Number CMS 0409326. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not neces-
sarily reflect the views of the National Science Foundation.
References
1. BLS (Bureau of Labor Statistics). U.S. Department of Labor, Washington D.C.,
http://stats.bls.gov/iff/home.htm, Accessed on November 21, 2005.
2. Kim, C. Spatial Information Acquisition and Its Use for Infrastructure Operation and
Maintenance. Ph.D. Diss., Dept. of Civil Eng., The University of Texas at Austin (2004).
3. Gonzalez-Banos, H.H., Gordillo, J.L., Lin, D., Latombe, J.C., Sarmiento, A., and Tomasi,
C.: The Autonomous Observer: A Tool for Remote Experimentation in Robotics. Tele-
manipulator and Telepresence Technologies VI, November 1999, vol. 3840.
4. Gonzalez-Banos, H.H., Lee, C.Y., and Latombe, J.C.: Real-Time Combinatorial Tracking
of a Target Moving Unpredictably Among Obstacles. IEEE International Conference on
Robotics and Automation, Washington, DC (2002)
5. Teizer, J., Bosche, F., Caldas, C.H., Haas, C.T., and Liapi, K.A.: Real-Time, Three-
Dimensional Object Detection and Modeling in Construction. Proceedings of the 22nd In-
ternat. Symp. on Automation and Robotics in Construction (ISARC), Ferrara, Italy (2005)
6. Moravec, H. and Elfes, A.: High-resolution Maps from Wide-angle Sonar. Proc. of IEEE
Int. Conf. on Autonomous Equipments and Automation, 116-121, Washington, DC (1985)
7. Ertoz, L., Steinbach, M. and Kumar, V.: A New Shared Nearest Neighbor Clustering Algo-
rithm and its Applications. Workshop on Clustering High Dimensional Data and its Appli-
cations at 2nd SIAM International Conference on Data Mining (2002)
8. Soltani, A.R., Tawfik, H., Goulermas, J.Y., and Fernando, T.: Path Planning in Construc-
tion Sites: Performance Evaluation of the Dijkstra, A*, and GA Search Algorithms. Ad-
vanced Engineering Informatics, 16(4), 291-303 (2002)
9. Wan, T.R., Chen, H., and Earnshaw, R.A.: A Motion Constrained Dynamic Path Planning
Algorithm for Multi-Agent Simulations. Proc. of the 13th International Conference in
Central Europe on Computer Graphics, Plzen, Czech Republic (2005)
10. Law, K., Han, C., and Kunz, C.: A Distributed Object Component-based Approach to
Large-scale Engineering Systems and an Example Component Using Motion Planning
Techniques for Disabled Access Usability Analysis. Proc. of the 8th International Confer-
ence on Computing in Civil and Building Engineering. ASCE, Stanford, CA (2000)
GENE_ARCH: An Evolution-Based Generative Design
System for Sustainable Architecture
Luisa Caldas
1 Introduction
GENE_ARCH is an evolution-based Generative Design System (GDS) that uses adap-
tation to shape architectural form [1]. It was developed to help architects in the creation
of energy-efficient and sustainable architectural solutions, by using goal-oriented de-
sign, a method that allows the architect to set goals for a building's performance and have the computer search a given design space for architectural solutions that respond to those re-
quirements. The system uses a Pareto Genetic Algorithm as a search engine, and the
DOE2.1E building simulation software as the evaluation module (Fig. 1). Other exist-
ing GDS related to architecture include those by Shea [2] and Monks [3].
The software was initially tested within a test building with a simple geometry, with
similar box-like offices facing the four cardinal directions [4]. GENE_ARCH’s task
was to locate the best window dimension for each space and orientation. The problem
was set up in such a way that the optimal solutions were known, despite the considerable size of the solution space (over 16 million solutions).
Genetic Algorithm [5] and did not apply Pareto optimization, but a single fitness value.
The objective function used was annual energy consumption of the building, which
combined, even if as a simple average, both energy spent for space conditioning (heat-
ing, cooling and ventilating the building) and for illumination. Those are the two main
final energy uses in buildings, and are usually in conflict with each other, as solutions
that are more robust in terms of thermal performance - typically by reducing the num-
ber and size of openings in the building envelope - tend to score less well in terms of
capturing daylight, and vice-versa. The effectiveness of the building in capturing day-
light was measured by placing virtual photocells at two reference points in each room
of the building, and simulating a dimming artificial lighting system, which, at any
point in time, would provide just enough artificial light to make up for the difference
between the available daylight in the space (in lux), and the desirable lighting levels -
determined by the architect, and in this case set to 500 lux, the typical illumination
level recommended for office buildings. The simulations were done using real weather
data for selected sites, in TMY format, which represents a Typical Meteorological
Year based on statistical analysis of 30 years of actual measurements. Simulations
were carried out hourly, for a total of 8760 hours per simulation, performed over a com-
plete three-dimensional model of the building. This included a detailed geometrical
description of spaces, facades, roofs and other construction elements, building materi-
als, including their thermal and luminous properties, and much other information re-
garding not only architectural aspects, but also mechanical and electrical installations.
The results from the tests showed that, for a solution space of over 16 million, solu-
tions found by GENE_ARCH were within around 0.01% of the optimal.
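The dimming control described above can be expressed as a simple hourly rule; the 500 lux setpoint is the one stated in the text, while the sample daylight readings below are invented purely to show the calculation.

    def supplemental_light(daylight_lux, setpoint_lux=500):
        # The dimming system supplies only the shortfall between the desired
        # level and the available daylight, and never a negative amount.
        return max(0, setpoint_lux - daylight_lux)

    hourly_daylight = [0, 0, 120, 350, 620, 780, 510, 290, 60, 0]   # hypothetical lux readings
    print([supplemental_light(d) for d in hourly_daylight])
    # -> [500, 500, 380, 150, 0, 0, 0, 210, 440, 500]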
The objective of the following experiments was to test the software in a complex design context, assess methods for encoding design inten-
tions, and analyse the trade-offs reached by the system when dealing with conflicting
requirements. In this study, the overall building geometry and space layout were left
unchanged, and the system was applied solely to the generation of alternative façade
solutions. GENE-ARCH worked over a detailed three-dimensional description of the
building and used natural lighting and year-round energy performance as objective
functions to guide the generation of solutions. The experiments also researched the encoding of architectural design intentions into the system, using constraints derived from Siza's original design, which we considered able to capture some of the original
architectural intentions. Experiments using this generative system were performed on
three different geographical locations to test the algorithm’s capability to adapt solu-
tions to different climatic characteristics within the same language constraints, but
only results for Oporto are presented here.
Some of the more interesting results generated related to implicit trade-offs between
façade elements. These are relations that were not explicitly incorporated in the con-
straints (as were the compositional axes, for example), but, being performance-based,
emerged during the evolutionary process. An example is the trade-off between shading
and fenestration elements in the south-facing studio teaching rooms, where the exist-
ing deep overhangs (2 meters depth) forced the system to propose window sizes as
large as permitted by the constraints, in order to allow some daylight into the space
(see figure 2.8). In a subsequent experiment, when the system was able to change
overhang depth too, it did propose much shallower elements (60 cm), which could
still shade the high-level south sun, while simultaneously allowing into the space both
natural light and useful winter solar gains.
The east-facing 4th floor studio was another example of emerging implicit relations.
In Siza’s design, a large east-facing strip window illuminates most of the room, while
a much smaller south-facing window occupies the end wall (figure 2.7, left). The
morning sun makes the room overheat and is a cause of glare, as could be observed
during a visit to the building, where students glued large sheets of drawing paper to
the windows in order to have some comfort. Simultaneously, the room tends to be-
come too dark in the afternoon. GENE_ARCH detected this problem and proposed a
much larger south window, close to the upper bound of the constraints, and a small
east window just to illuminate the back of the room. Figure 2.7 compares daylight levels in the two solutions at 3 pm. GENE_ARCH's solution displays
daylight levels about three times higher than the existing one, while causing less dis-
comfort.
Finally, the single space that occupies the top floor, dedicated to life drawing
classes, provided another interesting case study. The system implicitly related the
design of the north-facing strip clerestory windows to that of the south-facing loggia.
The large clerestory window was reduced because it represented a significant heat
loss source, and it coincided, in terms of daylighting, with the area covered by the
loggia. The system simultaneously proposed a significant increase in the fenestrations
inside the loggia, since they were already shaded and had a convenient solar orienta-
tion. As for the northern clerestory, the system proposed it should be increased, as it is
the only light source of that side of the room (figure 2.3).
The results generated by this experiment were extremely interesting, as they related
to an actual building and proved that GENE_ARCH could indeed deal with the com-
plexity of a real case. Capturing the architect’s architectural intention into the genera-
tive design system becomes a major challenge. It was also interesting to notice that,
for the north façade, the system generated a solution that almost exactly resembled
that of Siza, apart from some ‘melodic’ variations in the original design.
2.3 Pareto Genetic Algorithms Applied to the Choice of Building Materials and
Respective Environmental Impact
This application also took into account the embodied energy of materials, that is, the energy spent to manufacture them. The reasoning was that, in terms of global greenhouse gas emissions, it might not make sense to save energy at the building level if more energy is being spent, even if at a remote location, to produce those materials.
In these experiments, Pareto Genetic Algorithms were used as the optimization
technique, as the problem involved conflicting design criteria. Pareto Genetic Algo-
rithms provide a frontier of solutions representing the best trade-offs for a given prob-
lem [8], instead of single, optimal solutions as more traditional methods do, often
based on sometimes arbitrary weighting factors assigned to each objective.
GENE_ARCH was given a test building and a library of building materials, including
thermal and luminous properties, typical costs/m2, and Global Warming Potential (GWP) expressed in kgCO2/kg. The three objective functions used were annual en-
ergy consumption of the building, initial cost of materials, and GWP. Experiments
generated well-defined, uniformly sampled Pareto fronts between the criteria consid-
ered, by using appropriate ranking and niching strategies [9]. Figure 3 shows the evo-
lution of an experiment along 200 generations, from an initial series of scattered points to a well-defined frontier of trade-offs, where each point represents a different
wall configuration. Results suggest this is a practical way for choosing construction
materials, and that new solutions emerge that may represent viable and energy-
efficient alternatives to those commonly used in construction.
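The notion of a Pareto frontier used here can be illustrated with a short sketch that extracts the non-dominated wall configurations from a set of already-evaluated candidates; the material names and objective values are hypothetical, and the ranking and niching strategies of the actual algorithm are not reproduced.

    def dominates(a, b):
        # a dominates b if it is no worse on every objective and strictly better on one
        # (all three objectives - energy, cost, GWP - are minimised).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        return {name: objs for name, objs in candidates.items()
                if not any(dominates(other, objs)
                           for o, other in candidates.items() if o != name)}

    # Hypothetical wall configurations: (annual energy [MWh], cost [$], GWP [kgCO2]).
    walls = {"brick cavity": (24.0, 9000, 5200),
             "insulated panel": (22.5, 11000, 6100),
             "rammed earth": (25.5, 7000, 2400),
             "uninsulated block": (28.0, 9500, 5600)}
    print(sorted(pareto_front(walls)))   # 'uninsulated block' is dominated and drops out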
Fig. 3. Evolution of the Pareto front between cost ($) and annual energy consumption (MWh) over 200 generations (gen 1, gen 100, gen 200)
In the fourth application, Pareto optimality was applied to Siza's building, using two
conflicting objectives: daylight use and thermal performance. Figure 4 shows the five
points (from a Pareto Micro Genetic Algorithm) that formed the final frontier. The top
image shows the best solution in terms of heating energy, which simultaneously
achieves the best possible daylight performance without degrading the thermal – a
characteristic of Pareto optimality. The bottom image shows the best performance for
daylight, with the best possible trade-off with heating. The other images show the
remaining points of the Pareto frontier, representing other possible trade-offs.
Fig. 5. Left: Problem schematics; Right: Some random geometries generated by GENE_ARCH
A problem faced was that the most immediate way GENE_ARCH found to reduce
energy consumption was to decrease the overall building size, to the minimum al-
lowed by the dimensional constraints of each room. In order to force solutions to be
within certain areas set by the architect, a system of penalty functions was introduced.
Penalties degraded the energy-based fitness function to an extent dependent on area
violation. However, this tended to confound the algorithm, since it was giving mixed
information to the GA, combining both energy performance and floor area in a single
fitness function. For that reason, another outcome measure was applied: Energy Use Intensity, which represented the amount of energy used per unit area. This approach
also had its limitations, as discussed in reference [11], but was the basis for the results
shown in figure 6. It is also interesting to notice how the more extreme geometries generated initially later stabilized into rather more compact and discrete ones.
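The two objective formulations discussed in this paragraph can be written out as follows; the linear penalty form and its coefficient are assumptions for illustration, and only the idea of degrading the energy fitness in proportion to the area violation, and of normalising energy by floor area, comes from the text.

    def penalised_fitness(annual_energy_mwh, floor_area_m2, required_area_m2, penalty_per_m2=0.5):
        # Energy-based fitness degraded to an extent dependent on the area violation.
        violation = max(0.0, required_area_m2 - floor_area_m2)
        return annual_energy_mwh + penalty_per_m2 * violation

    def energy_use_intensity(annual_energy_mwh, floor_area_m2):
        # Alternative outcome measure: energy used per unit of floor area.
        return annual_energy_mwh / floor_area_m2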
Fig. 6. GENE-ARCH’s generation of 3D architectural solutions for Oporto, using Energy Use
Intensity as fitness function: 1. 1st floor constraints; 2. Overall constraints, including roofs; 3, 4,
5. Partial views of generated solution within constraints. 6. SE view of final solution; 7. South
elevation of final solution; 9. West elevation.
The sixth application again concerns 3D shape generation, but this time responding to
conflicting objectives. To achieve this, Pareto Genetic Algorithms were applied once
more [11]. The two conflicting objective functions were maximizing daylighting, and
minimizing energy for heating the building, in the cold Chicago climate.
GENE_ARCH generated a uniformly sampled, continuous Pareto front, from which
seven points were visualized in terms of the proposed architectural solutions and
environmental performance (Fig. 7). It can be seen that the best solution in terms of
heating bases its strategy on creating a deep, compact volume which is surrounded, to
the south and partially to the west, by narrow, highly glazed spaces that act as green-
houses, collecting solar gains to heat up the main space. Although it is hard to daylight those deep areas, the savings in heating energy compensate for the spending on artificial lighting. In contrast, the best solution for lighting generates narrow spaces that are easy to light from the periphery and tend to face the sun's predominant direction, in
a flower-like configuration. However, it is also possible to notice the use of
south-facing sunspaces, highly glazed, like in the previous solution. The intermediate
points in the frontier represent other good trade-offs, and it is rather interesting to
notice how solutions tend to gradually ‘morph’ from solution 1 to 7.
Fig. 7. Pareto frontier for Chicago, with best solution for heating (1) visualized on the left, and
best solution for lighting (7) on the right; Both bottom images illustrate southeast views, show-
ing the highly glazed south-facing sunspaces generated by GENE_ARCH
Overall, these experiments suggest that GENE_ARCH can help architects generate solutions that are more sustainable and consume less energy. Part of the robustness of
the program comes from using, as the calculation engine for energy simulation, the software DOE2.1E, which is well respected in the field and is able to consider a very
wide range of building variables in its calculations. In terms of the search engine, the
standard GA has proven to be able to locate high quality designs in large solution
spaces. Since GA’s are heuristic procedures, and it is usually not possible to know the
optimal solution for the type of problems faced in architecture, it is difficult at this
stage to know what are the limits for the size of problems to be approached by
GENE_ARCH. Given reasonable solution spaces, the system seems to have facility in
solving problems like façade design, including openings geometry, materials and
shading elements, given relatively stable geometries for the building.
The generation of complete 3D architectural solutions, that is, of a complete build-
ing description, poses much more complex questions. First of all, there are issues of
representation of the architectural problem, in such a way that it both expresses the architect's design intentions and allows the system to manipulate them and generate new
solutions. Secondly, there are complex questions in terms of the method to evaluate
solutions, since the issues involved are not only energy-related, but include functional
and spatial characteristics and compliance with given requirements and intentions.
The on-going experiments with shape grammars suggest that the method may be too
limited to provide the necessary handles on complex three-dimensional problems,
suggesting the need for other paradigms.
Acknowledgements
This paper was developed with the support from project POCTI/AUR/42147/2001,
from Fundação para a Ciência e a Tecnologia, Portugal. Some of the graphical images
in figure 2 were developed with the collaboration of João Rocha.
References
1. Caldas, L.G.: An Evolution-Based Generative Design System: Using Adaptation to Shape
Architectural Form, Ph.D. Dissertation in Architecture: Building Technology, MIT (2001)
2. Shea, K. and Cagan J.: Generating Structural Essays from Languages of Discrete Struc-
tures, in: Gero, J. and Sudweeks, F., eds., Artificial Intelligence in Design 1998, Kluwer
Academic Publishers, London (1998) 365-404
3. Monks, M., Oh, B. and Dorsey, J.: Audioptimization: Goal based acoustic design, IEEE
Computer Graphics and Applications, Vol. 20 (3), (1998) 76-91
4. Caldas, L. and Norford, L.: Energy design optimization using a genetic algorithm. Automa-
tion in Construction, Vol. 11(2). Elsevier (2002) 173-184
5. Krishnakumar, K.: Micro-genetic algorithms for stationary and non-stationary function op-
timization, in Rodriguez, G. (ed.), Intelligent Control and Adaptive Systems, 7-8 Nov.,
Philadelphia. SPIE – The International Society for Optical Engineering (1989) 289-296
6. Caldas, L., Norford, L., and Rocha, J.: An Evolutionary Model for Sustainable Design,
Management of Environmental Quality: An Int. Journal, Vol. 14 (3), Emerald (2003) 383-
397
Mission Unaccomplished: Form and Behavior But No Function
Mark J. Clayton
1 Introduction
A review of architectural CAD research of the last twenty years reveals a quandary.
Although many of the ambitions and expectations of researchers have been achieved,
the expected promised land of high quality design has not been reached. The
quandary is revealed by comparing two papers written in the mid-1980s by influential researchers in architectural computing. In one paper, Don Greenberg expressed optimism that newly invented radiosity and ray tracing methods could lead
to great steps forward in design quality [1]. The computer would then enhance the
essential visual and graphic processes of design. Yessios rebutted the assertion in a
subsequent paper, warning that non-graphic information is critically necessary to
support engineering analysis [2]. He suggested that “… architectural modeling should
be a body of theory, methods, and operations which (a) facilitate the generation of
informationally complete architectural models and (b) allows them to behave
according to their distinct architectural properties and attributes when they are
operated upon.”
Interestingly, these two papers foreshadowed the themes of commercial CAD
development over the next two decades. Ray tracing and radiosity rendering are the
culmination of Greenberg’s dream of complete and accurate visual representations of
designs. In the late 1980’s and early 1990’s, software such as AutoCAD, 3D Studio,
and Microstation achieved great strides forward in rendering ability, steadily bringing
photorealism to the typical architectural firm. Answering Yessios’ critique, the late
A long thread of research has investigated a notion that there are three fundamental
kinds of representations that support design: those defining the intent for the artifact,
those defining the artifact itself, and those defining how the artifact performs. The
idea already appears in investigations that helped formulate artificial intelligence as a
field [6]. Simon stated that “Fulfillment of purpose or adaptation to a goal involves a
relation among three terms: the purpose or goal, the character of the artifact, and the
environment in which the artifact performs.” (p. 6).
Gero has elaborated and extended a similar theme in his notion of Function,
Behavior and Structure (FBS) [7, 8]. A design object is described by three kinds of
variables. Function variables describe what the object is for. Behavior variables
describe what the object does. Form variables describe what the object is, in terms of
components and their relationships. A designer derives behavior from structure and
ascribes function to behavior.
In the FBS theory, there are eight processes (or operations) in design. These
processes include the definition of function in terms of expected behavior, the
synthesis of structure, analysis that derives predicted behavior, evaluation that checks
whether the predicted behavior is as expected, and documentation. The final three
processes are reformulation of structure, behavior and function that constitute various
iterative loops in the design process.
This cognitive model of design has been examined in several empirical
observations of designers at work [9, 10].
The Virtual Product Model (VPM) was derived from a cognitive model of design rather than a product model.
While most product modeling research has attempted to describe exhaustively the
physical, economic, and production characteristics of design products, the VPM
derived from a theory of the patterns of thought that produce a design. It provided an
object-oriented representation of the conceptual objects supporting design cognition.
The objects for representing a design in the VPM were derived from three related but
independent hierarchies: form objects, function objects, and behavior objects. In the
VPM, the form was defined as the geometry and materials. AutoCAD was used as
the form modeler. Function was defined as the requirements, intents, and purpose of
the design. Behavior was defined to be the performance of the design artifact. These
root objects were related through fundamental operations (methods) of design
cognition: interpret, which mapped function objects to the design form; predict,
which mapped form objects to behavior objects; and assess, which compared behaviors to the functions to determine whether the design was satisfactory.
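A minimal object-oriented sketch of these three root objects and the interpret / predict / assess operations is shown below; the class and method names follow the terms used in the text, while the attributes and the toy heat-loss "engine" are invented solely to make the example executable.

    class Form:
        # Pure geometry and materials, as produced by the CAD modeller.
        def __init__(self, name, attributes):
            self.name, self.attributes = name, attributes

    class Function:
        # Requirement or intent: an acceptable range for some behavior.
        def __init__(self, behavior_name, low, high):
            self.behavior_name, self.low, self.high = behavior_name, low, high

    class Behavior:
        # A performance value derived from a form.
        def __init__(self, name, value):
            self.name, self.value = name, value

    def interpret(form, function):
        # A designer ascribes a function to a form; meaning is not inherent in the form.
        return (form, function)

    def predict(form, engine):
        # Derive behavior from form using a domain analysis engine.
        return Behavior(engine.__name__, engine(form))

    def assess(behavior, function):
        # Check whether the predicted behavior satisfies the ascribed function.
        return function.low <= behavior.value <= function.high

    def heat_loss_kw(form):                      # toy 'engineering' engine
        return 0.2 * form.attributes.get("glazing_area_m2", 0.0)

    office = Form("south office", {"glazing_area_m2": 12.0})
    comfort = Function("heat_loss_kw", 0.0, 3.0)
    interpret(office, comfort)
    print(assess(predict(office, heat_loss_kw), comfort))   # True: 2.4 kW is acceptable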
The software adopted a hermeneutic philosophical stance that declares that the
meaning in designed objects is ascribed by people rather than being inherent in the
object [13]. The interpret method in the software implemented a key capability. It
allowed one or more designers to apply engineering expertise from a specific domain
to the pure form model by identifying and classifying the features of the design form
that affect performance in that domain. Thus, in theory an architect could work freely
in a graphic environment and then pass the model to an engineer who would interpret
the CAD graphic model into an active engineering model that could derive
performance. To test the generality of the ideas, four example performance models
and supporting “interpretation” interfaces were constructed: energy analysis, cost
analysis, building code analysis, and spatial function analysis. Each interpretation
defined subclasses of function and behavior that could plug into the VPM framework
and respond appropriately to method calls. The process of developing the various
interpretations was the iterative creation of object hierarchies of function and behavior
and careful consideration of where in the hierarchies a particular concept should be
located. The development process was exploratory and heuristic; more recent
research has formulated principles for the development of an object hierarchy for
function [14].
The VPM produced model-based evidence in support of the form, function, and
behavior (or function, behavior, structure) theory of design. It showed that the theory
was sufficiently complete to permit working, usable software that appeared to closely
conform to natural design cognitive processes. A handful of testers used the software
to analyze three designs for a small medical facility and produced results that were in some ways better than those obtained with manual methods. Although the evidence did
not prove that the theory is accurate in describing natural processes, the evidence did
not disprove that the theory is accurate. In the field of artificial intelligence, such a
level of proof is generally accepted as significant.
In conjunction with the normal CAD modeling operations for defining the geometry
of the design, interpret, predict and assess produced extensive object hierarchies of
form, function and behavior. The user interface allowed one to follow the relations of
a form to a function and to a behavior as a triad of objects addressing an engineering
domain. A successful design resulted in the resolution of form, function, and
behavior objects into stable, balanced triads in which the behavior, computed
automatically from the form, fell into acceptable ranges defined by function objects.
An unexpected side effect of the software was that, as a user manipulated the CAD
model, interpreted it into various engineering models, predicted the performance, and
assessed the behavior against functions, the software constructed elaborate hierarchies
of forms, functions, and behaviors. These constituted the “virtual” product model;
rather than a predetermined product model schema established as a rigid class
hierarchy of building components that the user explicitly instantiated, the VPM
invented new combinations of forms, functions, and behaviors on the fly in response
to the designers’ actions. There were no components in the usual sense of walls,
columns, doors, and windows, but merely geometric forms that had a declared
function of being a wall, column, door, or window from whatever conventional or
conflicting sets of definitions necessary to support the engineering interpretation. A
Virtual Component object aggregated and managed the relations among a form object
and various functions and behaviors.
Although the possible list of functions and behaviors was necessarily finite, the
software achieved extensibility by providing a mechanism for very late binding and
polymorphism that allowed the addition of new functions and behaviors by
downloading applications over the Internet. Even if no prewritten software existed, a
development interface allowed a software developer to add rudimentary functions or
behaviors that required human editing and inspection to determine whether the design
satisfied the functions. However, the software lacked a user interface for defining new
functions.
A concrete example can explain the utility of the interpretation capability. Using
the VPM, one could use AutoCAD to draw a cylindrical tube. This tube could be
interpreted as a structural column, or it could be interpreted as a pipe for the conveyance of fluids, or it could be interpreted as a handrail. Actually, it could be simultaneously
interpreted to be all three. One could easily draw non-vertical walls, or walls with
sloped sides, or columns with a star-shaped cross section, or any other clever or novel
architectural form and then declare the intent for that form in terms related to
performance. The software did not constrain inventiveness with respect to the
building form as do CAD systems (or BIM systems) that rely upon predefined
components that unify form and function. With the VPM, there was no need for a stair
tool, or a roof tool, or a wall tool that confounded form and function and constrained
the designer to conventional shapes, materials, parameters and modeling sequences.
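The cylindrical tube example can be used to sketch how such a Virtual Component might aggregate one form with several simultaneously ascribed function/behavior triads; the class layout and the numeric values below are assumptions for illustration only.

    class VirtualComponent:
        # Aggregates one geometric form with the function/behavior pairs that
        # different engineering interpretations attach to it on the fly.
        def __init__(self, form_name):
            self.form_name = form_name
            self.triads = []                 # (domain, behavior value, (low, high))

        def add_interpretation(self, domain, behavior_value, acceptable_range):
            self.triads.append((domain, behavior_value, acceptable_range))

        def is_resolved(self):
            # Satisfactory when every computed behavior falls inside the range
            # defined by the corresponding function object.
            return all(low <= value <= high for _, value, (low, high) in self.triads)

    tube = VirtualComponent("cylindrical tube")
    tube.add_interpretation("structure (column load, kN)", 180.0, (0.0, 250.0))
    tube.add_interpretation("plumbing (flow, l/s)", 4.0, (2.0, 10.0))
    tube.add_interpretation("accessibility (rail height, m)", 0.9, (0.85, 1.0))
    print(tube.is_resolved())               # True: all three triads are balanced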
Some architectural practice theory endorses this alternative way of conceiving the
building design process and the roles of designers. A method called “problem
seeking” distinguishes between the architectural programmers who are responsible for
developing the description of needs and requirements (called a “brief” in the United
Kingdom) and design architects, who synthesize the proposed solution and represent
it so that it can be evaluated [15]. The argument is that the design architects are overly
permissive towards changing the program to match the beautiful forms that they
invent. The programmers should have sufficient authority to establish a complete,
thoughtful, and relatively fixed program to which the design architects must comply.
Through a process akin to knowledge engineering, specialists in programming, if
given independence and authority, can work closely with the architect’s client to
establish a clear and complete statement of the requirements.
Similarly, design architects have become increasingly reliant upon consultants and
engineers to analyze and predict the expected performance of the building design. In
the United States, it is common to have a design team that includes forty or fifty
consultants in addition to the design architects.
Perhaps the design professions are trifurcating into function experts, form experts, and
behavior experts. The concept of integrated but distinguished function models, form
models, and behavior models neatly mirrors the kinds of talents among designers and
the division of labor on design projects. In the future, the industry could reorganize its
design professions into specialists in each of these three areas. A function architect
would work with the client to define explicitly the requirements for the project. A
form architect would prepare a CAD model to depict a solution alternative. The
behavior architect would use software tools to predict the performance of the solution.
The function architect would then reenter the process to verify whether functions have
been satisfied and if not, to consider altering the function definitions and reinitiate the
process.
Clearly, all three of these kinds of architects must document their work. Thus,
documentation is a simultaneous, ancillary activity that parallels the design process.
As described, this new process appears to be a waterfall process that is highly
sequential and departmentalized. Probably an iterative process would be better and
would allow the emergence of functions during design and the emergence of forms
during analysis. A team of function architect, form architect, and behavior architect
working synchronistically may be better than individuals working sequentially.
This argument suggests that future software must integrate tools for documenting
function. Once the function representations are integrated, the software can truly
support the cognitive processes of design. As BIM thoroughly represents form and
provides strong connections to simulation software that can predict behavior, the
addition of function would create a complete tool for supporting design cognition.
Building Information Modeling plus function (BIM+Fun) could provide support
for a cognitive model of design rather than merely the outward product of design.
Such software might enable profound transformation of the design industries and even
contribute to better design.
References
1. Greenberg, D. P.: Computer Graphics and Visualization. In: Pipes, A. (eds.): Computer-
Aided Architectural Design Futures. Butterworth Scientific, Ltd., London (1986) 63-67.
2. Yessios, C.: What has yet to be CAD. In: Turner, J. (ed.) Architectural Education,
Research and Practice in the Next Decade. Association for Computer Aided Design in
Architecture (1986) 29-36
3. Björk, B.-C.: Basic structure of a proposed building product model, Computer-Aided
Design, Vol. 21, No. 2, March. Butterworth Scientific, Ltd., London (1989) 71-78.
4. Willems, P.H.: A Meta-Topology for Product Modeling. In Proceedings: Computers in
Building W74 + W78 Seminar: Conceptual Modelling of Buildings. Lund, Sweden, (1988)
213-221.
5. Bedell, J. R., Kohler, N.: A Hierarchical Model for Building Applications. In: Flemming,
U., Van Wyk, S. (eds.): CAAD Futures ’93. Elsevier Science Publishers B.V., North
Holland (1993). 423-435.
6. Simon, H. A.: The Sciences of the Artificial. The M.I.T. Press, Cambridge, MA. (1969).
7. Gero, J. S.: Design prototypes: a knowledge representation schema for design, AI
Magazine, Vol. 11, No. 4. (1990) 26-36.
8. Gero, J. S.: The role of function-behavior-structure models in design. In Computing in
civil engineering, vol. 1. American Society of Civil Engineers, New York. (1995) 294 -
301.
9. Gero, J. S., Kannengiesser, U.: The situated function–behaviour–structure framework, Design Studies Vol. 25, No. 4. (2004) 373-391.
10. McNeill, T., Gero, J. S., Warren, J.: Understanding conceptual electronic design using protocol analysis, Research in Engineering Design Vol. 10. (1998) 129-140.
11. Clayton, M. J., Kunz, J. C., Fischer, M. A.: Rapid conceptual design evaluation using a
virtual product model, Engineering Applications of Artificial Intelligence, Vol. 9, No. 4
Elsevier Science Publishers B.V. North Holland. (1996) 439-451.
12. Clayton, M. J., Teicholz, P., Fischer, M., Kunz, J.: Virtual components consisting of form,
function and behavior. Automation in construction, Vol. 8. Elsevier Science Publishers
B.V. North Holland. (1999) 351-367.
13. Winograd, T. and Flores, F.: Understanding Computers and Cognition: A New Foundation for Design. Addison-Wesley Publishing Company, Reading, Massachusetts (1986).
14. Kitamura, Y., Kashiwase, M., Fuse, M. and Mizoguchi, R.: Deployment of an ontological framework of functional design knowledge. Advanced Engineering Informatics, Vol. 18, No. 2. (2004) 115-127.
15. Peña, W., Parshall, S., Kelly, K.: Problem Seeking -- An Architectural Programming
Primer, 3rd edn. AIA Press, Washington (1987).
The Value of Visual 4D Planning in the UK Construction Industry
N. Dawood and S. Sikka
1 Introduction
This study is a collaborative research project between the Centre for Construction
Innovation & Research at the University of Teesside and Architectural3D. The aim of
this study is to deliver a set of industry based 4D performance measures and to
identify how project performance can be improved by the utilisation of 4D planning.
Visual 4D planning is a technique that combines 3D CAD models with construction activities (time) and has proven to be more beneficial than traditional tools. With 4D models, project participants can effectively visualise, analyse, and communicate problems regarding the sequential, spatial, and temporal aspects of construction schedules. As a consequence, more robust schedules can be generated, reducing rework and improving productivity. Currently, several research prototypes and commercial software packages have the ability to generate 4D models as a tool for analysing, visualising, and communicating project schedules.
However, the potential value and benefits of such systems have not been identified.
This has contributed to a slow intake of such technologies in the industry.
The industry-based Key Performance Indicators (KPIs) developed by the Department of Trade and Industry (DTI) sponsored Construction Best Practice Programme are too generic and do not reflect the value of deploying IT systems for construction planning, and in particular 4D planning. The objective of this research study is to overcome this limitation of the generalised set of KPIs by developing a set of 4D-based KPIs at project level for the industry. Information Technology applications are
progressing at a pace and their influence on working practice can be noticed in almost
every aspect of the industry. The potential of IT applications is significant in terms of
improving organisation performance, management practices, communication, and
overall productivity. 4D planning allows project planners to visualise and rehearse
construction progress in 3D at any time during the construction process. According to
Dawood et al. (2002), using 4D planning, participants in a project can effectively visualise and analyse problems concerning the sequencing of the spatial and temporal aspects of the construction time schedule. The thrust for improved planning efficiency and visualisation methodology has resulted in the development of 4D planning.
The Construction Industry Institute (CII) conducted research into the use of three-dimensional computer models in the industrial process and commercial power sectors of the AEC (architectural, engineering and construction) industry from 1993 to 1995 (Griffis et al.,
1995). Major conclusions of the CII research are that the benefits of using a 3D
technology include reduction in interference problems; improved visualisation;
reduction in rework; enhancement in engineering accuracy and improved jobsite
communications. Songer (1998) carried out a study to establish the benefits and
appropriate application of 3D CAD for scheduling construction projects. Songer
(1998) has demonstrated that the use of 3D-CAD and walk-thru technologies during
planning stage can assist in enhancing the scheduling process by reducing the number
of missing activities, missing relationships between various activities, invalid
relationships in the schedule and resource fluctuation for complex construction
processes. Center for Integrated Facility Engineering (CIFE) research group at
Stanford University has documented the applications and benefits of 3D and 4D
modelling in their CIFE technical reports (Koo and Fischer 1998; Haymaker and Fischer 2001; Staub-French and Fischer 2001). These reports discuss the benefits of 3D and 4D modelling by considering individual projects separately. The application
of Product Model and Fourth Dimension (PM4D) approach at Helsinki University of
Technology Auditorium Hall 600 (HUT-600) project in Finland has demonstrated the
benefits of 4D modelling approach in achieving higher efficiency; better design and
quality, and early generation of reliable budget for the project (Kam et al. 2003).
The above studies lack well-established metrics that would allow the benefits of 4D planning to be quantified at project level. In the absence of well-defined
measures at the project level, the priority of this research project is to establish a set of
key performance indicators that will reflect the influence of 4D applications on
construction projects. This will assist in justification of investments in advanced
technologies in the industry. The remainder of the paper will discuss the research
methodology adopted and research findings.
2 Research Methodology
The methodology comprises three interrelated phases:
• Identification of performance measures through a literature review and the authors' experience in the application of 4D.
• Conducting semi-structured interviews with project managers/planners to
establish and prioritise the performance measures.
• Data collection to quantify the identified performance measures.
Three major construction projects in London (currently under construction, with a combined value of £230 million) were selected for study and data collection. Project managers and construction planners from the three projects were selected for interviews to identify and prioritise the 4D KPIs. A semi-structured interview technique was used to elicit the project managers' and construction planners' viewpoints on the key performance indicators at project level. The semi-structured interviews used a methodological procedure known as the
Delphi technique for data collection. This technique is ideal for modelling real world
phenomena that involve a range of viewpoints and for which there is little established
quantitative evidence (Hinks & McNay 1999). The subsequent sections of the paper
describe the process of identifying KPIs on the basis of semi-structured interviews
conducted with project managers, ranking of 4D KPIs and research findings.
Measure: Time
Definition: It can be defined as the percentage of times a project is delivered on or ahead of schedule. The timely completion of a project measures performance against the scheduled duration and is often used to better understand current construction performance. The schedule performance index (earned value approach) has been identified to monitor schedule variance.
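A minimal illustration of the schedule performance index mentioned above, assuming the standard earned-value quantities EV (earned value) and PV (planned value), which the paper does not define explicitly; the figures are hypothetical:

def schedule_performance_index(earned_value: float, planned_value: float) -> float:
    """SPI = EV / PV; values below 1.0 indicate the schedule is slipping."""
    return earned_value / planned_value

# Hypothetical figures: work worth 1.8M GBP completed against 2.0M GBP planned to date.
print(schedule_performance_index(1_800_000, 2_000_000))   # 0.9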
industry were also used for the identification of performance measures. Project
managers and construction planners from three construction projects were invited for
interviews. Table 1 shows a brief definition of the identified KPIs.
The first task for the interviewees was to identify and rank the performance measures using a four-point Likert scale. The second task was to identify the
information required to quantify each measure. Their input was considered to be
critical in the success of this research. The concept behind conducting semi-structured
interviews was to evaluate how managers and planners perceive the importance of
performance measures. This will assist in the identification of industry based
performance measure that can be used to quantify the value of 4D planning. The
interview included both open and closed questions to gain a broad perspective on
actual and perceived benefits of 4D planning. Due consideration has been given to the sources from which data is to be collected, whether quantitatively or qualitatively. So far, ten semi-structured interviews have been conducted with senior construction planners, and the research team intends to interview ten more. This will assist in gathering more substantial evidence about
KPIs.
Interviewees were asked to rank the identified KPIs using a four-point Likert scale. For the prioritisation process, each KPI can be graded on a scale of 1 to 4 (where 1 = not important, 2 = fairly important, 3 = important and 4 = very important) to measure the importance of each performance measure. The benefits of 4D planning will be quantified on the basis of
prioritised KPIs. The performance measures will be further classified in qualitative
terms (rating on a scale) and quantitative terms (measurement units).
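A minimal sketch of this kind of Likert-scale aggregation, assuming hypothetical interview responses (the KPI names and scores below are illustrative, not the study's data):

# Hypothetical 1-4 Likert responses per KPI from the interviewed planners.
responses = {
    "Time":    [4, 4, 3, 4],
    "Cost":    [4, 3, 3, 4],
    "Quality": [3, 3, 2, 3],
    "Safety":  [2, 3, 2, 2],
}

def average_percentage(scores, scale_max=4):
    """Average score expressed as a percentage of the maximum Likert grade."""
    return 100.0 * sum(scores) / (len(scores) * scale_max)

ranking = sorted(((kpi, average_percentage(s)) for kpi, s in responses.items()),
                 key=lambda item: item[1], reverse=True)
for kpi, pct in ranking:
    print(f"{kpi}: {pct:.0f}%")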
[Fig. 1. Ranking of the performance measures in descending order by average percentage value (measures include time, cost, quality, safety, productivity, client satisfaction, communication and planning efficiency; vertical axis 0–40%)]
Using the responses from the four-point Likert scale, the average percentage value for each of the performance measures was calculated. Figure 1 presents the ranking of the performance measures in descending order on the basis of the views of the interviewees.
This paper reports on the first stage of the research project. The current and future
research activities will include:
• Continuing the interview process to further confirm the 4D KPIs and method of
data collection.
• Establishing a methodology for data collection and quantification of the KPI indices for the three identified construction projects.
• Benchmarking the KPI indices against industry norms and identifying the improvements in construction processes resulting from the application of 4D planning.
• Identifying the role of supply chain management in the development and updating of construction schedules for 4D planning. From the main contractor's viewpoint, 4D is unable to bring any confirmed value compared with their own planning system. Interviews with project managers have revealed that main contractors and trade contractors hold varying views on the use of 4D planning on a construction project. The concerns at the moment are the availability of information, the time used in collecting it, and the cost attached to implementing the 4D technology. All the stakeholders agreed that an early deployment of 4D brings a great deal of transparency that helps to resolve conflicts among the various trades during the preconstruction phase.
6 Conclusions
References
1. Al-Meshekeh, H.S., Langford, A.: Conflict management and construction project
effectiveness: A review of the literature and development of a theoretical framework. J.
Construction. Procurement. 5(1) (1999) 58-75
2. Albert, C., Ada C.: Key Performance Indicators for Measuring Construction Success.
Benchmarking: An International Journal. Vol.11. (2004) 203-221
3. Bassioni, A.H., Price, A.D., Hassan, T.M.: Performance Measurement in Construction.
Journal of Management in Engineering. Vol. 20. (2004) 42-50
4. Chan, A.P.C. Determining of project success in the construction industry of Hong Kong.
PhD thesis. University of South Australia, Australia (1996)
5. Chan, A.P.C., David, S., Edmond, W.M.L.: Framework of Success Criteria for
Design/Build Projects Journal of Management in Engineering. Vol. 18. No 3. (2002)
120-128
6. Construction Best Practice Program- Key Performance Indicators (CBPP-KPI-2004),
(available at http://www.dti.gov.uk/construction/kpi/index.htm)
7. Dawood, N., Eknarin, S., Zaki, M., Hobbs, B.: 4D Visualisation Development: Real Life
Case Studies. Proceedings of CIB w78 Conference, Aarhus, Denmark (2002) 53-60.
8. Egan, J. Sir.: Rethinking Construction: The Report of the Construction Task Force to the
Deputy Prime Minister. Department of the Environment, Transport and the Regions,
Norwich (1998)
9. Griffis, Hogan, Lee: An Analysis of the Impacts of Using Three-Dimensional Computer
Models in the Management of Construction. Construction Industry Institute. Research
Report (1995) 106-11
10. Haymaker, J., Fischer, M.: Challenges and Benefits of 4D Modelling on the Walt Disney
Concert Hall Project. Working Paper 64, CIFE, Stanford University, Stanford, CA (2001)
11. Hinks, J., McNay: The creation of a management-by-variance tool for facilities
management performance assessment. Management Facilities, Vol.17, No. 1-2, (1999)
31-53
12. Kam, C., Fischer, M., Hanninen, R., Karjalainen, A., Laitinen, J.: The Product Model And
Fourth Dimension Project. IT Con Vol. 8. (2003) 137-165
13. Kaplan, R.S., Norton, D.P.: The Balanced Scorecard -- Measures that Drive Performance.
Harvard Business Review, Vol. 70, No. 1 (1992) 47-54
14. Koo, B., Fischer, M.: Feasibility Study of 4D CAD in Commercial Construction. CIFE
Technical Report 118. (1998)
15. Latham, M. Sir.: Constructing the Team: Final Report of the Government/Industry Review
of Procurement and Contractual Arrangements in the UK Construction Industry. HMSO,
London (1994)
16. Li, H., Irani, Z., Love, P.: The IT Performance Evaluation in the Construction Industry.
Proceedings of the 33rd Hawaii International Conference on System Science (2000)
17. Naoum, S. G.: Critical analysis of time and cost of management and traditional contracts.
J. Construction Management, 120(4), (1994) 687-705
18. Robert, F.C., Raja, R.A., Dar, A.: Management’s Perception of Key Performance
Indicators for Construction. Journal of Construction Engineering & Management, Vol.
129, No.2, (2002) 142-151.
19. Songer, A.: Emerging Technologies in Construction: Integrated Information Processes for
the 21st Century. Technical Report, Colorado Advanced Software Institute, Colorado State
University, Fort Collins, CO 80523-1873 (1998)
20. Staub-French, S., Fischer, M.: Industrial Case Study of Electronic Design, Cost and
Schedule Integration. CIFE Technical Report 122. (2001)
21. Yin, R. K.: Case study research design and method. 2nd edition. Sage publication Inc, CA
(1994)
Approximating Phenomenological Space
Christian Derix
1 Introduction
Bill Mitchell published the Logic of Architecture in 1990 [1] and thus laid some of
the most influential foundations for computational design in architecture. He did
make it clear that he would ‘treat design primarily as a matter of formal composition
[…] in order to produce beauty’. Beauty for Mitchell was an expression of the func-
tional fit of (visual and geometric1) elements and subsystems to the program of the
building.
The computer as a design tool has since been perceived either as an optimization tool for functional aspects of the building or as a graphic pattern generator, where the pattern represents architecture or a subset of it (the façade) and the algorithm the design process.
This representation of architecture in computational design is no surprise if one re-
gards the history of architectural representation. With few exceptions, architecture has
been represented as geometric projections. Metric descriptions of buildings and
spaces make sense for visual impressions and construction information, and thus
enhance imagination of space. But as Robin Evans would argue, it only reflects one
dimension or expression of space and it rarely translates into the built object [2].
1 It is extraordinary to find that in his book about the Logic of Architecture only drawings are found, from which the rules for the grammars and vocabulary are extracted. No photographs, diagrams or other representations are included.
Often, architects tend to design on the basis of their tools or media, not on the basis
of actual space or the construction of it. The tools used in the medium of projective
representation are non-dynamic inert objects like ruler and pen and especially the
drawing board, forming part of a dynamic system with the designer. According to
Marshall McLuhan, every medium contains another medium [3]. If the drawing
board, ruler and pen contained the medium of the projective geometry, then what
medium does the computer contain in the case of architectural design? Why do com-
putational designers emulate the same medium of projective representation like the
first cast iron bridges mimicked wooden constructions? Shouldn’t the computer with
its capacity to represent data dynamically via any interpretation and its immense
processing capacity be used to describe dynamic relations of data? The computer as a
medium should therefore contain non-static representations of space – phenomena of
spatial and social nature, Gestalten.
2 Phenomenal Space
‘When, in a given bedroom, you change the position of the bed, can you say you are
changing rooms, or else what?’ -Georges Perec [4]
While Mitchell follows Chomsky’s structural linguistic model, where meaning can
be produced given a rigid syntax and a well-considered vocabulary, Peter Eisenman
and other architectural theorists like Jenks advocated Derrida’s linguistics of deferred
meaning, where syntax and vocabulary are ephemeral, representing a function of
meaning [5].
That embodies a direct echo of the Gestalt theory and of course initiated the call for
complexity theory as a role model for architectural design. The medium for designing
such complex architecture is supposed to be the computer. One should be able to
simulate any kind of system through programmed algorithms.
Gestalt theory as much as Derrida’s language model or complexity theory all try to
describe the idea of the whole being greater than the sum of its parts [6]. This whole
as an emergent phenomenon organizes the perception of its parts and their construc-
tion rules. By perception, the observer’s perception is intended.
If one stands outside a house, a wall forms part of the delimiting shell; standing on the inside, say within a room, the same wall becomes a commodity to hang things off.
The perceived phenomenon of ‘house’ or ‘room’ makes the same wall appear to us as
different elements within its context.
Maturana [7] and Luhmann [8] argued that through structural coupling of systems, communicative and perceptual domains would emerge that have their lead distinctions (Leitdifferenz), which define those domains and organize all possible instances that can occur. Those distinctions can only be binary when seen from within one of those domains (i.e., lawfulness is the Leitdifferenz for the domain of the judiciary). The gestalt psychologists' equivalent to domains were ordering 'schemata', or the learnt context within which one perceives the role of a part of a system [9].
In Wittgenstein’s words: ‘The form of the object consists through the possibilities
of its appearances (seines Vorkommens) via relationships of instances (Sachverhalte).
[…] There is no “thing in itself” (Ding an sich)’ [10].
The Gestalt psychologists' most striking example was the Phi-phenomenon, used to explain the idea of a phenomenon: given two light sources in a dark room, switching them on and off in sequence with a precise distance between them does not create the sensation of seeing two light bulbs being switched on and off in a row, but the perception of 'movement' or a line [9]. Enter the cinema.
Properties of objects, be they architectural or other, don't rest within the object itself but form part of a phenomenal whole that we observe and interpret. Thus parts and
their expression through an ordering phenomenon are dependent on the context they
occur in and the intention of the observer.
Additionally, the configuration of the parts of the occurrence is important to inter-
pretation and perception. The Phi-phenomenon would not work if the light bulbs were
too distant or the sequential switching too slow or too fast. They would appear as
either two points or a line with direction. Meaning or quality doesn't reside in the objects but in their relationships and the observer. Relationships and the ordering principles of phenomena are equivalent to topologies. If the topology of an organizing system changes, so does its gestalt. Perec wanted to question that condition.
Heinz von Foerster states that sensual stimuli don't convey qualities but changing quantities [11]. Quantities of sensual stimuli are computed via neuro-physiological configurations – neuronal patterns: lots of simple gates (neurons) computing differences to previous or later patterns. Interpretation or meaning is distributed over the network of neurons and occurs when a change of pattern is generated via stimuli.
Arnheim would concur with von Foerster’s description when he said that ‘space
between things turns out not to look simply empty’ [12].
Distributed representation should therefore afford the representation of phenomena.
The architecture of the computer does just that: patterns of differences of simple logi-
cal gates generating, via organizational rules, various types of representation. The computing of architecture should allow the designer to generate phenomena of architecture – distributed representations of space, rather than just projective and geometric patterns2.
There are few proponents in the design world who would argue that one can either
identify drivers for phenomena of space or even incorporate catalysts for such quali-
ties into generative computer code.
The paradigm of systems and complexity theory has been adapted in concept think-
ing by some practising architects in the past but it has hardly ever translated into their
design process (with rare exceptions like Tschumi's Parc de la Villette), let alone
been implemented via computational design.
2
Miranda puts it as follows [13]: ‘Electronic digital computation is built upon processes of
accumulations of electric charges and their transformation, on to which, as Claude Shannon
found in the late 1930s, it is possible to map the rules and logic syntax of Boolean algebra.
From here, […], it is possible to transfer into Boolean algebras any phenomenon describable
in terms of logic, and that, according to the project of natural sciences, accounts for about
everything.’
In this section, I would like to look only at some examples of computational designers who have attempted to generate emergent phenomena in architecture.
Within such a discussion, one cannot omit the two towering inventors of analogue
computing in architecture, Gaudi and Frei Otto. Both introduced the architectural
design world to the notion of representing form not through projective geometry but
through natural parallel computation. The drawings were necessary strictly for the builders and for visualization, not for finding form and space. All the hallmarks of gestalt theory were present in their work, since the elements that compute the edges, surfaces and spaces produced an emergent whole that led the observer to think of it as
architectural expression – a phenomenon nowhere to be found in the description of
the system that produced it.
Although Christopher Alexander [14], Christian Norberg-Schulz [9] and Rudolf
Arnheim [12] managed to decode some architectural phenomena it took another 20
years before Bill Hillier showed through computational simulation how an architec-
tural phenomenon could be quantified and expressed through simple algorithms3. In
his seminal book ‘The Social Logic of Space’, Hillier attempts to disclose correla-
tions between configurations of shapes on plan and how those configurations influ-
ence the occupants’ perception of space and, subsequently, how those patterns influence actions [15].
One of his key concepts is the justified graph or depth map. He used graph representations to show that a building is perceived differently from one space to another: the understanding of adjacency changes, and so does the perception of distances throughout the building. His graphs represent graphically this change of perception, or the phenomenon of perceived distance, not metrically but through topology.
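A minimal sketch of the depth-map idea, assuming a small hand-made room adjacency graph (the rooms and connections are invented for illustration; this is not Hillier's data or software):

from collections import deque

# Hypothetical adjacency graph: which spaces open directly onto which.
adjacency = {
    "entrance": ["hall"],
    "hall":     ["entrance", "kitchen", "living"],
    "kitchen":  ["hall"],
    "living":   ["hall", "study"],
    "study":    ["living"],
}

def justified_depths(graph, root):
    """Topological (step) depth of every space as seen from a chosen root space."""
    depths, queue = {root: 0}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in depths:
                depths[nbr] = depths[node] + 1
                queue.append(nbr)
    return depths

# The same building 'looks' different depending on where the observer is embedded:
print(justified_depths(adjacency, "entrance"))
print(justified_depths(adjacency, "study"))

The two printed depth maps differ, which is exactly the heterogeneous, location-dependent representation described above.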
Steadman and March had already used graph visualization to generate topological
representations of buildings but limited themselves to objective space configurations
where a graph of a building would remain homogeneous from any location – the ex-
ternal observer [16]. Hillier opened the way to heterogeneous graph representations
dependent on location in buildings – the internal or embedded observer.
Via axial lines analysis and convex shapes in the plan, he could demonstrate per-
ceived hierarchies of urban tissue. This type of analysis (later done via Benedikt’s
concept of the isovist [17]) expressed the integration of spatial locations via visual
connectivity. If a location was better connected than another, pedestrians were probabilistically more likely to end up in the better connected location. No further as-
sumptions were needed to understand space as configurational or topological mecha-
nisms (‘Space is the Machine’) [18]. The elements of the system – locations – and the
mechanism of analysis – isovists – are distributed and don’t describe the observed
phenomenon.
While Hillier managed to give architecture an insight into distributed representa-
tion to understand second-order phenomena, he never tried to understand if his meth-
ods could be used to generate gestalt.4
In computational design it was notably John Frazer who advocated a post-
structuralist distributed and computed representation of architectural space [20].
3
Robin Evans shortly after also shows the relationship between architectural lay-outs and
social hierarchies and relationships [2].
4
Lately, a few generative applications based on Visual Graph Analysis (VGA) are being deve-
loped, i.e. Kraemer & Kunze ‘Design Code’ [19].
5
Apart from those early proponents of distributed representation of phenomena in architecture,
there have been a few more individuals who attempt to design spatial phenomena through
computation (and I don’t mean the hordes of flashy algorithmic patterns). A few notable ones
are Pablo Miranda’s swarm architectures at the Interactive Institute Sweden, Lars Spuybroek
of Nox Architects, individuals at the CAAD chair of ETH Zurich and Kristi Shea at TU Mu-
nich to name but a few.
sets on the basis of their differences is the key axiom of gestalt theory. In complexity
theory this represents the occurrence of emergent phenomena based on the interaction
of simple elements.
Thus, three steps in the application and development of SOMs will be described
below. At the end, I will attempt a forecast of the next development on this approach.
For the first application of the SOM, a representation of space was to be found that
would be rooted in Euclidean space but otherwise as free of assumptions as possi-
ble. This tool was intended to help understand alternative descriptions of spatial
qualities [24].
The sets of signals to train the network with were composed purely of three dimen-
sional vertices. Those vertices were taken from a CAD model describing a site in
London and stored in an array. No inferences towards higher geometric entities like
line, face or volume were given. The topology of the network consisted of a general
three dimensional orthogonal neighbourhood. The learning function was a general Hebb rule:

    w_ij(t+1) = w_ij(t) + k_ij(t) [x – w_ij(t)]

where w_ij represents the weight of the node at topological position (i, j) in the network, x the input signal and k_ij the learning rate, which depends on the node's position within the neighbourhood of the winning node [23].
The network ‘learned’ by adjusting the volume it occupied at any point to a new
input volume. To do so, it had to distribute the difference from present shape to input
space to all its nodes. As the nodes are constrained by a topology, the nodes would
distribute within the input volume and thus describe potential sub-volumina of the
input space.
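A minimal NumPy sketch of such a three-dimensional SOM update step, with an assumed grid size, learning rate and Gaussian neighbourhood (these choices are illustrative and not those of the original implementation):

import numpy as np

rng = np.random.default_rng(0)
grid = (6, 6, 6)                                   # topological positions (i, j, k)
weights = rng.random(grid + (3,))                  # one 3D position per node
coords = np.stack(np.meshgrid(*[np.arange(n) for n in grid], indexing="ij"), axis=-1)

def som_step(weights, x, rate=0.1, sigma=1.5):
    """One training step: pull the winning node and its topological
    neighbourhood towards the input vertex x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    winner = np.unravel_index(np.argmin(dists), dists.shape)
    # neighbourhood factor decays with topological distance to the winner
    topo = np.linalg.norm(coords - np.array(winner), axis=-1)
    k = rate * np.exp(-(topo ** 2) / (2 * sigma ** 2))
    return weights + k[..., None] * (x - weights)

# train on vertices taken from a (hypothetical) site model
site_vertices = rng.random((500, 3)) * np.array([100.0, 80.0, 15.0])
for x in site_vertices:
    weights = som_step(weights, x)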
Fig. 1. a – vertices of site model; b – network adaptation; c – description of network space via
an implicit surface and d – the same input space as generally described
The resulting network structure, consisting of point nodes and lines for synaptic connections, was visualized through an implicit surface (marching cubes) algorithm. This seemed to be a good choice since implicit surfaces indicate the boundary be-
tween densities of elements. Therefore, the surface outlines the perceptive fields of
the network’s clusters and distinguishes the outer boundary towards the environment.
That boundary coincides with what is called the probability density, which describes
the probability with which a signal can occur within an area of the total input space.
Since the intention was to allow the network to select its own input from the site,
each node’s search radius was limited by its relationship to its topological neighbours.
This bias ensured that the network would be able to collect new points from the site
according to its previous learning performances.
This SOM experiment was successful in the sense that it not only implemented all the salient properties of complex systems and their underlying mechanism of distributed representation of contextual stimuli but also led to some unexpected outcomes:
Firstly, although it established an isomorphic relationship between the signal space
and its own structure, it also highlighted the differences between objects through its
shape and location on site. It mapped out spaces of differences between generally
perceived geometric objects, pointing towards Arnheim’s statement that ‘space be-
tween things turns out not to look simply empty’ [12].
Secondly, an unpredictable directionality of movement could be observed. I sup-
pose it has something to do with the bias on the search space of the nodes but that
doesn’t necessarily explain why it would tend towards one direction rather than an-
other if the general density of new signals surrounding the network is approximately
even. Thus, locations with ‘richer’ features would be discerned from ‘less interesting’
locations.
Further, since signal spaces depend on the structural make-up of the network after
a previous learning phase, it could be argued that the learning or perception capacity
of this SOM is body or structure dependent. Heinz von Foerster said that ‘change of
shape’ causes a ‘change of perception’ and vice versa, generating ever new descrip-
tions of reality [11].
Fig. 2. An expression of the SOM in the site model at the end of a training period
One of the key problems in achieving a good match between the dimension of the network and the dimension of the signal space is that, in many applications, the size of the signal space is not known in advance, especially if the signal space is dynamic.
Hence, a growing neural gas algorithm was implemented as a variation of the stan-
dard SOM to account for fluctuations of density in the signal space [25]. The topology
of the network would grow according to occurrences of signals in space.
This approach has only just been started but promises to help in understanding the relationship between events and spatial locations. It could be envisaged that cellular automata states or agent models serve as dynamic signals.
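A minimal sketch of a Fritzke-style growing neural gas loop of the kind alluded to here, with assumed parameter values; removal of isolated nodes is omitted for brevity, and this is not the authors' implementation:

import numpy as np

def growing_neural_gas(signals, max_nodes=50, lam=100, eps_b=0.05, eps_n=0.006,
                       alpha=0.5, decay=0.995, max_age=50, seed=0):
    """Grow a network of 3D nodes whose topology follows signal density."""
    rng = np.random.default_rng(seed)
    nodes = [signals[i].astype(float) for i in rng.choice(len(signals), 2, replace=False)]
    errors = [0.0, 0.0]
    edges = {}                                   # frozenset({i, j}) -> age
    for step, x in enumerate(signals, start=1):
        d = [float(np.linalg.norm(x - w)) for w in nodes]
        s1, s2 = (int(i) for i in np.argsort(d)[:2])
        for e in list(edges):                    # age the edges touching the winner
            if s1 in e:
                edges[e] += 1
        edges[frozenset((s1, s2))] = 0           # (re)connect the two closest nodes
        errors[s1] += d[s1] ** 2
        nodes[s1] += eps_b * (x - nodes[s1])     # move winner towards the signal...
        for e in edges:
            if s1 in e:                          # ...and its topological neighbours
                j = next(iter(e - {s1}))
                nodes[j] += eps_n * (x - nodes[j])
        edges = {e: a for e, a in edges.items() if a <= max_age}
        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))           # node with the largest accumulated error
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda j: errors[j])
                nodes.append((nodes[q] + nodes[f]) / 2.0)
                r = len(nodes) - 1
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, r))] = 0
                edges[frozenset((f, r))] = 0
                errors[q] *= alpha
                errors[f] *= alpha
                errors.append(errors[q])
        errors = [err * decay for err in errors]
    return np.array(nodes), edges

# e.g. a stand-in for dynamic signals such as cellular automata states or agent positions
points = np.random.default_rng(1).random((2000, 3))
net_nodes, net_edges = growing_neural_gas(points)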
Another new problem posed itself from the first SOM implementation. The iso-
morphic representations of space produce interesting alternative descriptions of the
qualities of locations on site. The differences could be observed over generations.
However, it would seem helpful to understand if recurring or general patterns of de-
scriptions occur. This would indicate not just differences of feature configurations but
also point towards spaces that share non-explicit qualities.
To this end, a dot product SOM was trained reminiscent of Kohonen’s examples of
taxonomies of animals [23].
The input consisted as previously of three dimensional points for randomly config-
ured Euclidean spaces. On the basis of the differences between the total coordinates,
the map organized the input spaces into categories [26].
Although the input was highly reduced in complexity, the results were very success-
ful, since the observer could understand the perceived commonalities between the
inputs in each category and the differences to other emerging categories according to
semantic and spatial phenomena. The SOM had mapped the input spaces inadvertently
into binary distinctions of ‘narrow’ and ‘wide’, ‘long’ and ‘short’, ‘high’ and ‘low’.
Fig. 3. Left: Kohonen’s animal categories; Right: emergent categories of phenomenal descriptions of space
On one occasion, a model of several spatial SOMs based on the first applications was built that would generate design proposals for volumetric building layouts. The building was conceived of not as rooms but rather as activities, and the commonalities between the described activities would generate spatial diagrams as suggestive building layouts, where each activity was represented by a SOM [27].
Fig. 4. Multiple SOMs reading each other as input to generate spatial categories of activities
While this type of generative use of self-organizing maps was interesting since the
networks would try to group themselves into volumes according to given features, it
does not use the SOMs to ‘find’ implicit relationships between features.
The next steps to be taken to test the capacity of SOMs or other neural networks to
disclose configurations of phenomena should be twofold:
necessary to achieve certain phenomena.6 Such an approach could for example map
out the configuration necessary between light sources and time delay to produce the
gestaltist Phi-phenomenon.
The more advanced and ambitious application of feature mapping in architectural design would entail artificial software designers which feed off databases of learned input mappings.
The above-mentioned project of multiple SOMs generating volumetric diagrams according to activity features should first be trained with a large cross-section of spatial configurations that contain given activities7. This would then lead to a cogni-
tive artificial designer who aids the architect who trained the network. The samples
chosen to train the networks would ensure that designs stay personalized but can also
lead to common public databases (standards) that produce a vast amount of similar
designs.
6 Conclusion
The SOMs discussed briefly above all produced a step towards the understanding of
architectural space and phenomena as configurations of features, some dynamic some
stable. The qualities mapped so far are of low complexity whereas the generative
approaches have been a little more ambitious.
I hope to build (with the help of my students at the University of East London) a model in the near future that would help the designer to find indications as to what features, and what magnitudes of those features, might give rise to certain phenomena.
In the meantime, other methods for distributed representation of space should also continue to be explored, as the SOM is by its very nature a good method but by no means the only one.
References
1. Mitchell, W. J.: The Logic of Architecture. Design, Computation, and Cognition. MIT
Press, Cambridge, Massachusetts (1990)
2. Evans, R.: Translations from Drawing to Building and Other Essays. Architectural Asso-
ciation Publications, London (1997)
3. McLuhan, M.: Understanding Media. The Extensions of Man. Routledge, London and
New York (1964)
4. Perec, G.: Species of Spaces and Other Pieces. Editions Galilee, Paris (1974)
6
Mixing immaterial features to create dynamics within space is reminiscent of Yves Klein’s
‘air-architectures’. Klein used elements like heat, light and sound in order to influence how
people would occupy spaces [28].
7
Professor Lidia Diappi at the Politecnico di Milano has trained SOMs with changes in urban
land-use patterns. The resulting differences were used as a basis for transition functions of
cellular automata to predict future changes [29].
5. Jenks, C.: The Architecture of the Jumping Universe, Academy, London & NY (1995).
Second Edition Wiley (1997)
6. Cilliers, P.: Complexity and Postmodernism. Routledge, London (1998)
7. Maturana, H. and Varela, F.: The Tree of Knowledge. Goldmann, Munich (1987)
8. Kneer and Nassehi: Niklas Luhmanns Theorie sozialer Systeme. Wilhelm Finkel Verlag,
Munich (1993)
9. Norberg-Schulz, C.: Intentions in Architecture. MIT Press, Cambridge, (1965)
10. Wittgenstein, L.: Tractatus Logico-Philosophicus. Routledge, London (1922)
11. von Foerster, H.: Understanding Understanding. Essays on Cybernetics and Cognition.
Springer, New York (2003)
12. Arnheim, R.: Dynamics of Architectural Form. University of California Press, Berkley and
Los Angeles (1977)
13. Miranda Carranza, P.: Out of Control: The media of architecture, cybernetics and design.
In: Lloyd Thomas, K.: Material Matters. Routledge, London (forthcoming 2006)
14. Alexander, C.: Notes on the Synthesis of Form. Harvard University Press, Cambridge,
Mass (1967)
15. Hillier, B. and Hanson, J.: The Social Logic of Space. Cambridge University Press, Avon
(1984)
16. March, L. and Steadman, P.: The Geometry of Environment. MIT Press, Cambridge, Mass
(1974)
17. Benedikt, M.L.: To take hold of Space: Isovists and Isovist Fields. In: Environment and
Planning B 6 (1979)
18. Hillier, B.: Space is the Machine. Cambridge University Press, Cambridge (1996)
19. Kraemer, J. and Kunze, J. O.: Design Code. Diploma Thesis, TU Berlin (2005)
20. Frazer, J.: An Evolutionary Architecture. Architectural Association, London (1995)
21. Negroponte, N.: The Architecture Machine. MIT Press, Cambridge Mass (1970)
22. Coates, P. S.: New Modelling for Design: The Growth Machine. In: Architects Journal
(AJ) Supplement, London, (28 June 1989) 50-57
23. Kohonen, T.: Self-Organizing Maps, Springer, Heidelberg (1995)
24. Derix, C.: Self-Organizing Space. Master of Science Thesis, University of East London
(2001)
25. Coates, P., Derix, C., Lau, T., Parvin, T. and Pussepp, R.: Topological Approximations
for Spatial Representations. In proceedings: Generative Arts Conference, Milan (2005)
26. Coates, P., Derix, C. and Benoudjit, A.: Human Perception and Space Classification. In
proceedings: Generative Arts Conference, Milan (2004)
27. Derix, C. and Ireland, T.: An Analysis of the Poly-Dimensionality of Living. In
proceedings: eCAADe Conference, Graz (2003)
28. Noever, P.: Yves Klein: Air Architecture. Hatje Cantz Publishers (2004)
29. Diappi, L., Bolchi, P. and Franzini, L.: The Urban Sprawl Dynamics: does a Neural Net-
work understand the spatial logic better than a Cellular Automaton?. In proceedings: 42nd
ERSA Congress, Dortmund (2002)
KnowPrice2: Intelligent Cost Estimation for Construction Projects
B. Domer, B. Raphael, and S. Saitta
1 Introduction
Correct estimation of construction costs is one of the most important factors for project success. Risks vary with the contractual context in which estimates or price bids are presented. Table 1 discusses three standard configurations.
Decisions that have the biggest impact on project success (that is, related to cost)
are taken in early project phases. Cost control over project duration can only prevent
an established budget from getting out of control. It cannot correct fundamental errors
made in the beginning.
At the start of a project, many details are unknown and cost estimators have to
make several assumptions based on their experience with similar projects in the past.
Construction managers are not only interested in accurate estimates, but also in the
level of risk associated with estimates. A systematic methodology for performing this
task is desirable. This paper presents a methodology for cost estimation that has been
developed and implemented in a software package called KnowPrice2. The primary
objective of this paper is to compare the methodology with current practices in the
industry, rather than to provide a detailed description of the approach, which is given
in [10].
Fig. 1. Relationship between cost and the possibility to influence them in different project
phases [1]
The Swiss Research centre for Rationalization in Building and Civil Engineering
(CRB) proposes methodologies such as the costs by element method [7]. Cost data of
construction elements is provided and updated each year, but it does not account for
regional variation of prices or building types. This approach needs a very detailed
breakdown of all building elements and is therefore not the method of choice for
quick estimates.
Data on existing buildings have been collected as well. Germany's BKI, the association of chambers of architects, publishes each year a well-structured and exhaustive catalogue of project costs [8]. The structure divides buildings into different types, gives examples for each type and provides standard deviations for price data. Regional data is included as well, since prices vary locally.
Although it is very tempting to use this data collection for Swiss projects, construction is not yet as "globalized" as other industries. As discussed previously, data collections of other countries cannot be re-used without serious cleaning and adaptation.
3 KnowPrice2 Strategy
3.1 Background
KnowPrice2 is a software package for cost estimation that was developed by the
Swiss Federal Institute of Technology (EPFL) in collaboration with an industrial
partner, Tekhne management SA. It links case data with a unique approach for estab-
lishing construction project budgets. The employed methodology differs significantly
from other database approaches such as [9], since case-based reasoning strategies are
combined with relationships between variables that are discovered by data mining.
3.2 Methodology
The total project cost is estimated using knowledge from different sources. Knowledge
sources include generic domain knowledge in the form of rules, cases consisting of
data related to past projects and relationships that are discovered by mining past pro-
ject data. Depending on what data is known about the current project, the appropriate type
of knowledge is used. For example, if all the variables that are needed for applying
the rules are available, generic domain knowledge is used. Otherwise, relationships
that are discovered by data mining are used. Only when relevant relationships are not
available, case data is used.
Generic domain knowledge contains equations for computing costs of building ele-
ments by summing up costs of components or using unit costs. Each case contains
characteristics of a building which includes information such as types, quantities and
costs of elements. Data mining aims to discover relationships between building char-
acteristics and costs. Rules of the following form are discovered through this process:

    IF symbolic_variable EQUALS value THEN
        cost_variable = c1*q1 + c2*q2 + c3*q3 + ... + cn*qn + b     (1)

where q1, q2, etc. are quantities and c1, c2, ..., cn and b are coefficients that are determined
by regression using relevant case data. The data mining process is guided by the
knowledge of dependencies which is provided by domain experts. A dependency
relationship indicates that certain symbolic and quantitative variables might influence
a cost variable. The exact relationship between variables is determined by the data
mining module through analysing case data. More details of the data mining tech-
nique are presented elsewhere [10].
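A minimal sketch of rule discovery of the form of Eq. (1), assuming a small set of invented past cases and ordinary least-squares regression (the real data mining module, described in [10], is more elaborate):

import numpy as np

# Hypothetical past cases: symbolic context, quantities q1/q2, observed cost.
cases = [
    ("residential", 1200.0, 300.0, 910_000.0),
    ("residential", 1500.0, 420.0, 1_180_000.0),
    ("residential",  900.0, 250.0,  700_000.0),
    ("office",      2000.0, 500.0, 1_650_000.0),
]

def discover_rule(cases, symbolic_value):
    """Fit cost = c1*q1 + c2*q2 + b on the cases matching the symbolic condition."""
    rows = [(q1, q2, cost) for sym, q1, q2, cost in cases if sym == symbolic_value]
    A = np.array([[q1, q2, 1.0] for q1, q2, _ in rows])
    y = np.array([cost for *_, cost in rows])
    c1, c2, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return c1, c2, b

c1, c2, b = discover_rule(cases, "residential")
estimate = c1 * 1100.0 + c2 * 280.0 + b   # apply the discovered rule to a new project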
The CBR module selects relevant cases to a new project using a similarity metric.
By default, a similarity metric that gives equal importance to all the input variables is
used. In addition, users can define their own similarity metrics by specifying relevant
variables and weights. Selected cases are used to estimate the variations in the values
of independent variables.
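A minimal sketch of a weighted similarity metric of the kind described, with invented variable names and weights (KnowPrice2's actual metric is not reproduced here):

def similarity(new_project, case, weights):
    """Weighted average of normalised closeness over the variables both records share."""
    score, total = 0.0, 0.0
    for var, w in weights.items():
        if var in new_project and var in case:
            a, b = new_project[var], case[var]
            score += w * (1.0 - abs(a - b) / max(abs(a), abs(b), 1e-9))
            total += w
    return score / total if total else 0.0

weights = {"gross_floor_area": 2.0, "volume": 1.0, "n_storeys": 1.0}   # user-defined
new_project = {"gross_floor_area": 3200.0, "volume": 11000.0, "n_storeys": 4}
case = {"gross_floor_area": 3000.0, "volume": 10500.0, "n_storeys": 5}
print(similarity(new_project, case, weights))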
Steps involved in the application of the cost estimation methodology are the fol-
lowing:
1. Users input known data related to a new project
2. The system creates a method for computing the total cost using generic rules and
relationships that are discovered by data mining
3. Variations in the values of independent variables are determined from similar past
cases and are represented as probability density functions (PDF).
4. Monte Carlo simulation is carried out for obtaining the probability distribution of
total cost using PDFs of independent variables.
Since the methodology computes the PDF of the total cost instead of a determinis-
tic value, information such as the likelihood of an estimate exceeding a certain value
is also available. This permits choosing the bid price at an acceptable level of risk.
This is not possible using conventional deterministic cost estimation.
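A minimal Monte Carlo sketch of step 4, assuming normal PDFs for two independent variables and a deliberately simple cost model (all figures are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Variations estimated from similar past cases, represented here as normal PDFs.
unit_cost = rng.normal(loc=2400.0, scale=150.0, size=n)    # cost per m2
floor_area = rng.normal(loc=3200.0, scale=100.0, size=n)   # m2

total_cost = unit_cost * floor_area                         # simplistic cost model

budget = 8_000_000.0
p_exceed = float((total_cost > budget).mean())
print(f"P(total cost > {budget:,.0f}) = {p_exceed:.2%}")
print("5th / 50th / 95th percentiles:", np.percentile(total_cost, [5, 50, 95]))

The probability of exceeding a candidate bid price, read directly from this distribution, is what allows the bid to be chosen at an acceptable level of risk.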
The knowledge base is structured into five modules, namely, Generic domain knowl-
edge, Dependencies, Cases, Ontology and Discovered knowledge.
The first module “Generic domain knowledge” implements rules to achieve the ob-
jective. Here, the objective is to compute the overall building cost, obtained by the
summation of building cost classifications (BCC).
The second module “Dependencies” describes dependencies between a) the de-
pendent variable and b) the influencing variable. This module is, from the industrial
partner’s point of view, one of the major achievements of KnowPrice2. Whereas in
other programs the user has to describe relationships in a deterministic way, Know-
Price2 employs a different approach. In most cases, cost estimators cannot provide the
deterministic expressions to relate building characteristics with BCCs directly. It is
much easier to indicate that there is a relationship without giving precise values. The
major achievement of Knowprice2 is that it evaluates the relationship between vari-
ables using existing data and data mining techniques [10].
Cases are input in a semi-automatic procedure using an Excel spreadsheet in which
data is grouped into building characteristics such as surfaces, volumes, etc. and costs,
structured according to the Swiss cost management system [5]. The same interface allows management of cases.
The ontology module contains the decomposition hierarchy for organising vari-
ables, the data type of each variable, default values of variables and possible values of
symbolic variables.
The module “discovered knowledge”, contains a decision tree that organises all the
relationships that are discovered by data mining.
3.4 Challenges
Collaboration between EPFL and Tekhne has been very close which means that the
project did not suffer from major problems. One main challenge was to adapt the
Dependencies module such that it can treat non-deterministic relations.
The second challenge was (and still is) to provide data for testing. In a first effort, a
database with past projects of Tekhne has been created. This database has been used
to test the basic functionality of KnowPrice2; it is not, however, sufficient for fine-tuning the software.
4 Advantages of KnowPrice2
From the industrial partner’s point of view, KnowPrice2 offers several advantages:
The approach used for cost estimation no longer depends on personal experience alone but links it with intelligent computational techniques to support the user. The methods employed have a sound scientific basis and thereby increase the client's confidence in budgets.
Costs can be estimated with incomplete project data (that is, when not all project details are known). It is up to the user to increase the degree of precision by increasing the number and variety of cases.
Cases can be entered via Excel spreadsheets, which is very convenient for the user. Values for descriptive variables are proposed to guide user input. The amount of data entered depends on the information available and is not pre-fixed by the software.
The final cost is presented as a price range with associated probabilities. The client can choose the degree of risk he would like to take: either going for a low budget with a rather high risk of exceeding the estimate, or announcing a higher budget with a lower risk.
Costs can be related to similar previous projects and can thus be justified.
5 Future Work
The quality of the results depends highly on the number and variety of cases entered. So far, only cases of the industrial partner (Tekhne) have been entered. KnowPrice2 works correctly so far, but definitely needs more data for proper testing.
Even though building cost classifications are defined in codes, they are not unam-
biguous. They leave space for interpretation and each user might apply them differ-
ently. The effects of this have not yet been examined.
KnowPrice2 has to be tested under real project conditions. This test might reveal
necessities for changes in the program itself and necessary user interface adaptation.
Acknowledgements
This research is funded by the Swiss Commission for Technology and Innovation
(CTI) and Tekhne management SA, Lausanne.
References
1. Büttner, Otto: Kostenplanung von Gebäuden. PhD Thesis, University of Stuttgart (1972)
2. DIN 277: Grundflächen und Rauminhalte von Bauwerken im Hochbau. Beuth Verlag
(2005)
3. SIA 416: Flächen und Volumen von Gebäuden. SIA Verlag, Zürich (2003)
4. DIN 276: Kosten im Hochbau. Beuth Verlag (1993)
5. CRB (Ed.): Building cost classification. CRB, Zürich (2001)
6. CEEC (Ed.): Le code européen pour la planification des coûts, CEEC (2004)
7. CRB (Ed.): Die Elementmethode – Informationen für den Anwender. CRB, Zürich (1995)
8. BKI (Ed.): Baukosten Teil 1: Statistische Kostenkennwerte für Gebäude. BKI, Stuttgart
(2005)
9. Schafer, M., Wicki, P.: Baukosten Datenbank “BK-tool 2.0”, Diploma thesis, HTL Luzern
(2003)
10. Raphael, B., et al.: Incremental development of CBR strategies for computing project
cost probabilities, submitted to Advanced Engineering Informatics, 2006.
11. Campi, A., von Büren, C.: Bauen in der Schweiz – Handbuch für Architekten und
Ingenieure, Birkhäuser Verlag (2005)
RFID in the Built Environment: Buried Asset Locating Systems
K. Dziadak, B. Kumar, and J. Sommerville
School of the Built and Natural Environment, Glasgow Caledonian University,
Glasgow G4 0BA, Scotland
1 Introduction
Building services and hidden infrastructure, i.e. buried pipes and supply lines, carry
vital services such as water, gas, electricity and communications. In doing so, they
create what may be perceived as a hidden map of underground infrastructure.
In the all too common event of damage being occasioned to these services, the
rupture brings about widespread disruption and significant ‘upstream’ and
‘downstream’ losses. Digging in the ground without knowledge of where the buried
assets lie could isolate a whole community from emergency services such as fire,
police and ambulance, as well as from water, gas and electricity services. It is not
only dangerous for people who are directly affected by the damage but also for
workers who are digging, for example, near the gas pipes without knowing their
specific location (Dial-Before-You-Dig, 2005).
Various methods are used to pinpoint the location of buried assets. Some of these
approaches utilise destructive methods, such as soil borings, test pits, hand
excavation, and vacuum excavation. There are also geophysical methods, which are
non-destructive: these involve the use of waves or fields, such as seismic waves,
magnetic fields, electric fields, temperature fields, nuclear methods and gas detection,
to locate underground assets (Statement of need, 1999).
The most effective geophysical method is Ground Penetrating Radar (GPR). This
technique has the capability to identify metal assets but is not able to give accurate
data about the depth of the object, which is important information for utility
companies (Olheoft, 2004). GPR has been used for pipe location with varying
success, partly because radar requires a high-frequency carrier to be injected into the
soil. The higher the frequency is, the greater the resolution of the image. However,
high-frequency radio waves are more readily absorbed by soil. Also, high-frequency
operation raises the cost of the associated electronics (GTI, 2005). This system is also
likely to be affected by other metallic objects in close proximity to the asset being
sought.
Another widely used method of locating underground infrastructure is Radio-detection, which is based on the principle of low-frequency electromagnetic radiation; this reduces the cost of the electronics and improves depth of penetration. The technique is, however, unable to detect non-metallic buried plastic, water, gas and clay drainage
pipes (Radio-detection, 2003). Combining Radio-detection with GPR opens up the
possibility of locating non-metallic pipes (Stratascan, 2005). However, the technique
becomes complicated and expensive.
All of the above methods are useful in varying degrees and each of them has its
benefits but none gives the degree of accuracy required by SUSIEPHONE and UK
legislation e.g. the New Roads and Street Works Act 1991, the Traffic Management
Act 2004 and Codes of Practice. Unfortunately, thus far none of these methods is
able to provide accurate and comprehensive data on the location of non-metallic
buried pipes (ITRC, 2003). The shortcomings of the above methods are summarized
below:
3.1 Phase 1
This phase determined an appropriate RFID tag, antennae and reader configuration
which would give accurate depth and location indications at up to, and including,
2.0m below surface level. It will result in indications as to the size and shape of
antenna which can achieve the required depth and accuracy.
A depth of 2m was set as a target in Phase 1. Most existing pipes are located at depths between 0.5 and 3m below the ground. A second reason is the constraint imposed by the specific RFID devices used and the operating frequency that we are allowed to work at.
These tests were run to determine the greatest signal reception range between the
antennae and the tags. The best results are summarized in Table 5 below.
[Fig. 1. Antenna in vertical orientation, with signal reception shells along axes V1–V4]
[Fig. 2. Antenna in horizontal orientation, with signal reception shells along axes H1–H4]
In Figure 1 the antenna was positioned vertically. There are two sizes of shells;
bigger shells lie on axes V1 and V2 and smaller on V3 and V4. The reason for this is
the size of the antenna: the larger the antenna, the greater the capture of the magnetic
field/signal generated by the tag.
Figure 2 shows the antenna in horizontal orientation. The description is similar to
the one given in Figure 1. Again we can observe two sizes of the shells which show
the reception range of the signal in this orientation.
Fig. 3. Superimposed reception shells
Figure 3 indicates the combined reception shells for both orientations. It is clear
that the antenna is capable of directionally locating the tag. This directional
capability allows us to eliminate spurious signals and so concentrate on the desired
signal from the tag i.e. the larger signals can be attenuated.
At this stage of the field trials each of the antennae and each of the tags was successfully tested. Tests were carried out at progressively greater depths until the required 2m depth was achieved.
An implicit part of the investigation was to ascertain the extent to which soil conditions could affect reception of the reading signal.
For completeness we carried out and compared tests when:
• the separation between the tag and antenna was only soil (Figure 4)
• half of the distance was in soil and the other half was in air (Figure 5)
Fig. 4. Test arrangement with the antenna-to-tag separation entirely in soil
Fig. 5. Test arrangement with half of the antenna-to-tag separation in air and half in soil
These tests showed that the reading distance in the presence of soil is only 3% shorter than that achieved in ideal conditions (Table 4). However, in the United Kingdom there are six general types of soil: clay, sand, silt, peat, chalk, and loam, all of which have their own characteristics. The most important properties of soil are hydraulic conductivity, soil moisture retention and pathways of water movement (Jarvis, 2004), and it is possible that different soil conditions/types can affect the performance and accuracy of the system.
Parameters such as the operating frequency, tag size and type (active or passive) and antenna size and shape can affect the performance characteristics of the system and therefore the maximum depth at which the tag can be read. This is why, during this phase, our target was to vary the tag and antenna specifications in order to find the best combination of the two.
In the first phase the efficacy of the RFID location system was proven, enabling us
to move to the second phase.
4 Future Work
Future work will focus on Phase 2 of the research, which is presented below.
4.1 Phase 2
With the principles of the location system proven in Phase 1, Phase 2 focuses on the following steps:
The Location Operating System (LOS) was created to facilitate the connection between the data captured during the field work and its later processing/configuration. A general overview of the operation of the system and its components is presented in Figure 6 below.
The LOS scheme is divided into two parts: components which are geared towards
Capturing Buried Asset Data (CBAD) and a system for Processing Buried Asset Data
(PBAD).
The first part contains components that will help users to capture the data from the field. The latitude and longitude data will be captured using a Global Positioning System Device (GPSD), while the depth of the buried asset will be ascertained using the RF tags, antennae and reader. All this information will be captured on a waterproof, portable computer – a Tablet/PC.
In the second part the data from the Tablet/PC will be sent and stored in the Buried
Asset Information (BAI) system: the data will be processed to allow user visualization
of buried assets using the Digital National Framework (DNF) compliant Topographic
Map overlay. When processed, the necessary/required information about the
underground services will be stored in the Ordnance Survey (OS) DNF format.
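As a rough illustration of the hand-over from data capture (CBAD) to data processing (PBAD), the following minimal Python sketch shows how a captured record could combine the GPS-derived position with the RFID-derived depth before being passed to the processing side. It is illustrative only and not part of the LOS implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BuriedAssetRecord:
    """One capture from the field: GPS position plus RFID-derived depth."""
    tag_id: str          # identifier read from the RF tag on the buried asset
    latitude: float      # from the GPS device (GPSD), WGS84 degrees
    longitude: float
    depth_m: float       # derived from the tag/antenna/reader measurement
    asset_type: str      # e.g. "non-metallic water pipe"

def export_for_processing(records):
    """Serialize captured records for transfer from the Tablet/PC to the
    Buried Asset Information (BAI) system. Illustrative JSON is used here;
    the actual system stores processed data in the OS DNF format."""
    return json.dumps([asdict(r) for r in records], indent=2)

if __name__ == "__main__":
    capture = BuriedAssetRecord("TAG-0042", 55.8609, -4.2514, 1.8,
                                "non-metallic water pipe")
    print(export_for_processing([capture]))
```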
5 Conclusions
The most significant results achieved at this stage of the research project are that:
1.) The air tests allowed us to identify the ideal combination of antennae and tags. These tests also allowed us to establish reception shells and expected reception ranges. These ranges facilitated expansion of the testing into appropriate site conditions.
2.) The underground tests enabled us to establish reception at a range of depths through one soil type. As the tests progressed we were able to receive a signal at the target depth outlined in Phase 1 (2m). We also found that soil characteristics (i.e. saturation, soil type, etc.) may not have an adverse effect on signal reception.
These early results are encouraging and seem to indicate that an answer to identifying non-metallic buried assets does lie in the use of RFID technology. Although there is no single solution to the problems concerning utility services, RFID may well be able to address the part of the problem related to locating buried assets.
As stated earlier, a considerable amount of development work is still to be done to
arrive at a fully operational system. A successful beginning has at least been made.
The next step will focus on improving the accuracy of the reception range. Further tests will also be carried out, varying the soil conditions, the types of pipe and the surface layers.
RFID technology is becoming ubiquitous: as the RFID systems become more
widespread, the technology itself becomes smaller and cheaper. The proliferation of
RFID systems suggests that it will be all pervasive, and there is no doubt that RFID is
set to have a tremendous impact on all major industries.
References
1. Business Benefits from Radio Frequency Identification (RFID), SYMBOL, 2004
http://www.symbol.com/products/whitepapers/rfid_business_benefits.html
2. Capacitive Tomography for Locating Buried Plastic Pipe, Gas Technology Institute
(GTI), Feb. 2005. http://www.gastechnology.org/webroot/app/xn/xd.aspx?it=enweb&xd=
4reportspubs%5C4_8focus%5Ccapacitivetomography.xml
New Opportunities for IT Research in Construction
Chuck Eastman
1 Introduction
3D parametric modeling and rich attribute handling are making increasing inroads
into standard construction practice, worldwide. This fundamental change of building
representation from one that relies on human readability to a machine readable build-
ing representation, opens broad new opportunities for enhancing design and construc-
tion in ways that have been dreamed about over the last two decades. The new
technology and its facilitated processes are called Building Information Modeling, or
BIM. BIM design tools provide a few direct and obvious benefits. These are based on
a single integrated representation from which all drawings and reports are guaranteed
to be consistent, and the easy catching of spatial conflicts and other forms of geomet-
rical errors. Even the first step of realizing these basic BIM capabilities requires new
practices regarding design development and coordination between design teams.
Many other benefits are available, such as integrated feedback from analy-
sis/simulation and production planning tools. BIM allows tools for structural, energy,
costing, lighting, acoustic, airflow, pedestrian movement and other analyses to be
more tightly integrated with design activities, moving these tools from a long loop
iteration to one that can be used repetitively to fine tune architectural design to better
achieve complex mixes of intentions. Many of these capabilities have been outlined in
the research literature for decades and they now have the potential to be realized [1].
It is unclear whether there is a single integrated model from which all abstractions
are derived, or more likely whether there is a federation of associated representations
that are consistent internally and at their points of interaction. Issues of representation
and model abstraction have not been resolved in manufacturing research [2] and will
continue in construction. Issues of the specification of abstraction for new modeling
tools, such as CFD fluid flows, and automating them, are open research issues that
may be discussed during this workshop.
The definition of a construction-level building model is a complicated undertaking,
requiring the definition and management of millions of component objects. Most
reputed BIM efforts for architectural use actually only partially define the building in
machine readable form, carrying out most architectural detailing using drawn sections
based on the older drafting technology. This mixed approach facilitates the transition
from drawing to modeling and reduces the functional and performance requirements
of the software. We should all be aware however of this limitation and the hugely
varying scale of knowledge embedded in various so-called building models. Are the
following building elements defined as fully machine readable information: fresh
water and waste water piping? electrical network layouts? reinforcing bars and
meshes? window detailing? ceiling systems? interior trimwork? A fully compliant
building model allows detailed bills of material for all these components and provides
the option for their offsite fabrication. Current architectural building models promoted
as examples of BIM generally fail at the detail level.
Other systems have been developed to begin addressing the fabrication-level de-
tailing of building systems, for steel, concrete, precast, electrical, piping and other
systems. These systems embed different knowledge than that carried in architectural systems. The two different types of system have articulated the different levels of information traditionally carried in design documents and shop-level documents.
Resolution of the level-of-detail problem requires the extension of current BIM
tools to support the definition and parametric layout of the components making up all
building subsystems and assemblies. This is being undertaken incrementally, with all
the current BIM design tools providing parametric objects to different levels of detail.
2 Design Expertise
Parametric building objects in BIM tools encapsulate design knowledge and expertise.
The embedded knowledge facilitates definition and automatic editing. It distinguishes a BIM design tool from a more general parametric modeling tool, such as Solidworks®, CATIA® or Autodesk Inventor®. The most basic parametric capability of
an architectural BIM tool is the layout of walls, doors and windows. All BIM tools
allow easy placement and editing of wall segments and insertions of doors, windows
and other openings. Some systems maintain the topological relations between walls
and respond to changes to the walls they butt into. Most of the wall objects used in
BIM design tools also incorporate layers of construction as a set of ordered parame-
ters of the wall defined in a vertical section, with a structural core, optional insulation,
and layers of finishing on both sides, to obtain a built-up wall and implicit material
quantities, based on the wall area. ArchiCad® and Revit® support varying section
properties along the vertical section, allowing horizontal variation of construction and
finishes. Changes in the horizontal direction require a change in the wall element. We
can say that the wall model incorporates this level of design knowledge.
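As a rough sketch of this level of embedded knowledge (illustrative only; it does not reflect the internal implementation of any particular BIM tool), a wall object carrying an ordered set of construction layers can derive implicit material quantities from its area:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    material: str
    thickness_m: float   # layer thickness in the wall's vertical section

@dataclass
class Wall:
    length_m: float
    height_m: float
    layers: list         # ordered: finish, optional insulation, structural core, finish

    @property
    def area_m2(self):
        return self.length_m * self.height_m

    def material_quantities(self):
        """Implicit material take-off: volume per material, based on the wall area."""
        quantities = {}
        for layer in self.layers:
            quantities[layer.material] = (quantities.get(layer.material, 0.0)
                                          + self.area_m2 * layer.thickness_m)
        return quantities

wall = Wall(6.0, 2.7, [
    Layer("gypsum board", 0.0125),
    Layer("mineral wool insulation", 0.10),
    Layer("concrete core", 0.20),
    Layer("gypsum board", 0.0125),
])
print(wall.material_quantities())   # m3 of each material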
BIM tool users will encounter the limitations of this level of definition of a wall if
they try to design outside this limited conceptualization. Often building walls have
small regions with different finishes or internal composition. External walls are com-
monly made up of such regions, as shown in Figure 1. To my knowledge, only one BIM design tool supports such segments, with each geometrically defined region consisting of its own set of layers and finishes. Built-in wall regions allow walls to have mixed composition and define another level of architectural design expertise for walls.
Fig. 1. Example walls that are not easily represented in current parametric models. They each
show mixes of materials and properties that will not be reflected in costs or energy assessments.
Other potential limitations include the definition of the connection conditions be-
tween heterogeneous walls. Do they incorporate a connection detail, beyond a drawn
section, that potentially has cost, energy, acoustic and other properties? See the sec-
tion example in Figure 2. Can I define the detail for the wall-joint in such a way that
thermal or acoustic properties can be assigned to the connection? Can I control whether the interior wall runs through the insulation? Other cases include the automatic management of walls on sloped surfaces, the updating of non-vertical sloped walls, and other uncommon cases. I emphasize these examples because walls are universal components of BIM design tools, but the levels of design knowledge they are based on vary widely. If the capability is not included, a designer must resort to
manual definition of details and components, without parametric modeling updates,
eliminating easy editing.
Fig. 2. Three walls coming together and the detail that affects energy, sound and other behaviors
Software programmers with a few architectural advisors have determined the vo-
cabulary of shapes and behavior that are supported by the common BIM tools, de-
termining to a significant degree the working vocabulary readily available to architec-
tural designers. There is no deep study of cases, definition of best practices, direct
industry input or careful review of various codes. I summarize the main issues:
1. software companies develop products with only limited involvement of end-
users, relying on existing platforms and internal expertise for embedding exper-
tise in their systems. They do not clearly distinguish construction domain ex-
pertise from software development expertise;
2. products initially often only poorly meet the requirements of the end users,
and require iterative extension and modification. While this provides a context that
facilitates feedback, it is inefficient for both software developers and end users;
3. on the other hand, if an advanced-level product is introduced, end users are
typically naive and attempt to use the product in an evolutionary way, trying
to make it fit older practices. A gap exists between where users are and where
they will be, say, three years in the future.
There are many approaches that can address these issues, including the develop-
ment of standards regarding object behavior, the organization of panels or consortia
to define the needed behavior in BIM tools, and more generally, the definition of
procedures for specifying the knowledge to be embedded in AEC design and engi-
neering tools.
The author has been responsible for an industry-wide endeavor for the maintenance
and refinement of the CIS/2 product model for steel fabrication [3], for an industry
consortium-led project that specified fabrication-level design software for precast
concrete fabrication, and now another consortium for the specification of an engineering product for reinforced (cast-in-place) concrete. The software for precast concrete
was developed out of an open bid for proposals to software companies, with 12 sub-
missions and 6 months of evaluation prior to selection. This process and the resulting
product have been widely reported [4],[5],[6]. The current work to develop an ad-
vanced system for reinforced concrete design and construction involves a consortium
of interested engineering and construction companies and a pre-selected software
developer.
The challenge in these activities has been to form a broad-based industry consor-
tium, capture the knowledge and expertise of experienced designers and engineers,
resolve stylistic differences, and to specify in an implementable format the functional-
ity and behavior the software is to encapsulate. Based on our experiences, my associ-
ates and I have evolved the following general guidelines:
1. jointly define the range of product types and corresponding companies with in-
house expertise in the design and fabrication of those product types; this group
becomes the technical team;
2. model the processes used in the design/fabrication of each product type, captur-
ing the process and information needed/used/generated in each, along with re-
quired external resources involving information exchange;
3. for each product type, define the functionality of an idealized system, characteriz-
ing in broad terms how the system would operate;
4. select (if necessary) and learn the existing functionality of the platform system, so
that gaps in functionality from the system capabilities upward can be defined;
5. identify the needed system objects -- beams, walls, columns, connections, etc. that
are needed to define the components of the system in each type of product; some
of the objects may be abstract and transfer their components to other objects, in
part or whole;
6. resolve overlaps among different product types and define the combined behavior
for the set of target objects;
7. for each object, identify (and gain agreement) on the object’s initial definition,
including all control parameters;
8. for each object, identify in detail the possible changes to its context; for each of
the identified changes, define the desired update behavior;
9. identify basic types of drawings and other reports needed; for each, identify de-
sired automatic layout;
10. as the system is implemented, undertake testing to determine if the functionality is
correct and that the various embedded cases have been properly identified.
In each case, it is necessary to resolve conflicting requirements among the team
members. It is also necessary to translate the system requirements into an incremental
sequence of discrete functional capabilities that can serve as software development
steps and be implemented. For the precast concrete specification effort, the first four
steps took 18 months. The last six steps took 30 months, with about a year of that time
being software development time. The specification contained 626 development items
in 31 distinct areas. The precast concrete process is now completed, with an effective
commercial product.
Universities should be undertaking industrial development for multiple reasons,
among them being:
- to develop operational knowledge about problem domains as they are in reality, rather than as presented in textbooks;
- to develop strong working relationships with innovative organizations and
help them adopt innovative technologies or concepts;
- to use the application of research as a springboard to undertake supporting re-
search in related areas.
Our work with the North American precast concrete industry has led to the devel-
opment of a set of new technologies to support expert knowledge capture and utiliza-
tion. For step #2 above, we developed process modeling tools that allow full capture
of different corporate processes and their merging into a single integrated data model
supporting implementation [7],[8]. This work led to new ways to define and extend
the integration of process models with product models. For steps #7 and #8, we de-
veloped a notation for representing parametric behavior [9], so that the technical team
could effectively communicate complex behaviors.
An example of the behaviors we were dealing with is shown in Figures 3 and 4, showing the definition of pocket connections for double tee members. The process described moves from a general condition regarding one or more related system objects and definition of their parameters. Then their behavior in response to different external conditions is identified.
Fig. 3. The precast elements to be modeled, example of the pocket connections, a portion of the parameters involved
Fig. 4. Examples of the design rules developed for the precast concrete design tool
Those who participated in the consortium-led product development process have
been very satisfied with the result. End-users participating in the specification have
developed an early but sophisticated view of what the system should do. While they
are not always satisfied with a 90 percent solution, they are well down the path of re-
organizing company processes to take advantage of the new technology by the time it
is released. The software company involved has developed products faster and with
less expense than using traditional practices. Note: This work was undertaken col-
laboratively with Rafael Sacks and Ghang Lee, who deserve much of the credit for its
success.
I offer several points growing out of these efforts:
1. while the current effort to capture domain expertise and transfer it to com-
puter systems for production use is very visible at this time in history, the
definition and translation of this knowledge will be on-going and evolution-
ary; people will continue to learn by doing, requiring later translation to
knowledge embedded software; further development of methods for making
this transfer is needed;
2. the tools for representing and communicating desired parametric object and assembly behavior are weak and not well developed. We relied on a specialized form of story-boarding. I could imagine animation tools with reverse code generation (examples exist in robotics programming [10]) allowing the desired behavior to be programmed by direct manipulation of objects, for example;
3. an underlying issue in the development of knowledge embedded design tools
is the definition, representation, and refining of processes. The design of de-
sign processes is a fundamental issue in the development of advanced soft-
ware for engineering and design; the old process modeling tools, such as IDEF0 (which grew out of SADT in the 1960s), are terribly outmoded and
should be replaced by methods that are more structurally integrated in
how local actions lead to improvements in the overall score. These latter cases often are the result of analysis/simulation applications that apply models of behavior to determine the metric. The metric alone, however, is of only limited benefit without additional knowledge that gives insight into how operations change the metric.
(3) Other design information can be defined as rules or metrics, for example for security issues or for costs. The rules and metrics can provide sub-goals for the design. In these cases a design can be assessed as to whether or not it satisfies the goal. These may be simply local tests that have no way to be summarized (such as safety), while others may have a relation to an overall score (elimination of blind corners in heavy pedestrian circulation routes affects circulation efficiency). The check may be quantitative, for example an area requirement. Alternatively, it may be qualitative, dealing with multi-dimensional issues, such as the acoustics of a space. In problem solving such rules were called “means-ends analysis”.
(4) A stronger level of design knowledge is rules that can be embedded into generative procedures. In these cases, testing of the design is not necessary; it is guaranteed by the generation process. That is, a set of operators exists that embeds the goal within them. Newell calls this method “induction”. This is the manner of implementation of design knowledge within a parametric modeling tool, which relies on parametric objects that are self-adjusting to their context yet are guaranteed to update in a manner that maintains the desired design rules.
When an external input requires that the layout change, the changes are made
automatically, or the system reports that the change led to conditions where the
embedded rules cannot be satisfied. An example is the automated connection
details found in some structural detailing applications such as Tekla Struc-
tures® and Design Data’s SDS/2®.
Each of these levels of knowledge suggests different methods of information delivery to designers or engineers. The methods for making knowledge available have different implementations. For the first method, help systems are most easily implemented using a Help toolkit, such as Microsoft’s, which is compatible with almost
all CAD system environments. However, richer toolkits are possible. How can one
build a context sensitive knowledge base that can work with different design tools?
How can a program decipher the design intention within a design context, in order to
provide desired information? Development of a case-based design information system
platform is an important research (and possibly business) endeavor.
The second kind of knowledge is typically embedded in analysis and simulation
tools. Support for iterative use of such tools and keeping track of multiple runs is
important in practical use. However, I will not focus on this kind of knowledge appli-
cation; I assume it will be extensively addressed in other parts of this workshop.
The third type of knowledge application involves developing the equivalent of a
spell and grammar checker for particular building types, structural or other systems,
or even design styles. Such building assessment tools will have to be implemented on
some software platform. One part of the platform is the building model representa-
tion. Here, we rely on a public format that is open and accessible to all BIM design
tools. In this case, the public standard building representation is Industry Foundation
Class (IFC) [24]. IFC may not have the data required to carry out certain checks and these may require temporary extensions through property sets, and later extensions to the IFC schema. The second aspect of the platform is the environment that reads in the building model data and provides the software environment to support calculating properties not directly stored and developing tests to assess the base or derived properties. Several rule-checking platforms exist, such as Solibri (see: http://www.solibri.com/services/public/main/main.php) and EDM Model Server (see: http://www.epmtech.jotne.com/products/index.html).
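As a minimal sketch of such a “grammar check” running over an open building model, the following uses the IfcOpenShell Python library to flag doors narrower than a required clear width. The rule, the 800 mm threshold and the assumption that the project length unit is millimetres are illustrative choices, not a standard check.

```python
import ifcopenshell

def check_door_widths(path, min_width=800.0):
    """Flag doors whose OverallWidth falls below a required clear width.

    Assumes the project length unit is millimetres; a production checker
    would read the IfcUnitAssignment instead of assuming this.
    """
    model = ifcopenshell.open(path)
    violations = []
    for door in model.by_type("IfcDoor"):
        width = door.OverallWidth          # may be None if not modeled
        if width is not None and width < min_width:
            violations.append((door.GlobalId, width))
    return violations

if __name__ == "__main__":
    for guid, width in check_door_widths("building.ifc"):   # hypothetical file
        print(f"Door {guid}: width {width} is below the required minimum")
```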
For the fourth method of information usage, parametric models of building ele-
ments and systems provide a rich toolkit for defining generative design tools that can
respond to their context. Currently parametric models are not portable, but can only
be implemented for a particular parametric design tool. Today, the technology does
not exist to define cross-platform parametric models. Further research is needed be-
fore such a production undertaking is warranted.
All four methods of information-capture can be applied to BIM design environ-
ments. They augment the notion of a digital design workbench. We expect that all
designers and engineers will increasingly work at such workbenches from now on.
Two lines of study are embedded in this discussion. One deals with the abstract
study of the power of different kinds of information in solving problems. What are
the abstract classifications and what is their essential structure? I have proposed a
classification based on problem-solving theory. The second line of study is to identify effective ways to deliver particular classes of design knowledge. I have outlined methods for the four types of information. This suggests that a science of information delivery in design is possible, built upon the classic knowledge of problem solving.
Last, we have the exercise of packaging and delivering the design information to end-
users. This will become a major enterprise, a replacement for the current generation of
material embedded in paper-based and electronic books.
4 Conclusion
The development of machine readable building models, first at the design, then at the
construction stages, is leading to major changes in how we design and fabricate
buildings. These transitions will impact all parts of the construction industry and lead
to major restructuring of the industry, I believe. The transition provides a rare oppor-
tunity for strong collaboration between schools and practitioners. We are at the start
of an exciting era of building IT research.
References
1. Eastman C, Building Product Models: Computer Environments Supporting Design and
Construction, CRC Press, Boca Raton, FL 1999; Chapter 2.
2. Li WD, Lu WF, Fuh JYH, Wong YS, Collaborative Computer-aided design – research and
development status, Computer-Aided Design, 37:9, (August, 2005), 931-940.
3. Eastman C., F. Wang, S-J You, D. Yang Deployment of An AEC Industry Sector Product
Model, Computer-Aided Design 37:11(2005), pp. 1214–1228 .
4. Eastman C, Lee G, Sacks R, Development of a knowledge-rich CAD system for the North
American precast concrete industry, in: K. Klinger (Ed.), ACADIA 22 (Indianapolis, IN,
2003) 208-215.
5. Lee G, Eastman C, Sacks R, Wessman R, Development of an intelligent 3D parametric
modeling system for the North American precast concrete industry: Phase II, in: ISARC -
21st International Symposium on Automation and Robotics in Construction (NIST, Jeju,
Korea, 2004) 700-705.
6. Eastman, C. M., R. Sacks, and G. Lee (2003). The development and implementation of an
advanced IT strategy for the North American Precast Concrete Industry. ITcon Interna-
tional Journal of IT in Construction, 8, 247-262. http://www.itcon.org/
7. Eastman C, Lee G, and Sacks R, (2002) A new formal and analytical approach to model-
ing engineering project information processes, in: CIB W78 Aarhus, Denmark, 125-132.
8. Sacks R, Eastman C, and Lee G, Process model perspectives on management and engi-
neering procedures in the North American Precast/Prestressed Concrete Industry, the
ASCE Journal of Construction Engineering and Management, 130 (2004) pp. 206-215.
9. Lee G, Rafael Sacks , Eastman C, Specifying Parametric Building Object Behavior (BOB)
for a Building Information Modeling System, Automation in Construction (in press)
10. Bolmsjö, G, Programming robot systems for arc welding in small series production, Ro-
botics & Computer-Integrated Manufacturing. 5(2/3):199-205, 1989.
11. Eastman CM, Parker DS, Jeng TS, Managing the Integrity of Design Data Generated by
Multiple Applications: The Theory and Practice of Patching, Research in Engineering De-
sign, (9:1997) pp. 125-145.
12. Schmidt C , Kastens U, Implementation of visual languages using pattern-based specifica-
tions, Software—Practice & Experience, 33:15 (December 2003): 1471-1505
13. Ramsey CG , Sleeper HR, and Hoke JR Architectural Graphic Standards, Tenth Edition
Wiley, NY (2000).
14. Neufert E & G, Architects' Data, Blackwell Publishing, 2002.
15. Kilbert CJ, Sustainable Construction: Green Building Design and Delivery, John Wiley &
Sons, NY (2005).
16. Butler RB, Architectural Engineering Design -- Structural Systems McGraw-Hill, NY
(2002)
17. Kobus RL, Skaggs RL, Bobrow M, Building Type Basics for Healthcare Facilities, John
Wiley & Sons, Inc.NY (2000).
18. College of Architecture, Georgia Institute of Technology, Conducting Effective Court-
house Visits, General Services Administration, Public Building Service and Administra-
tive Office of the U.S. Courts, 2003.
19. PCI 2004 PCI Design Handbook : Precast And Prestressed Concrete 6th Edition, Pre-
cast/Prestressed Concrete Institute, Chicago
Infrastructure Development in the Knowledge City
Tamer E. El-Diraby
1 Introduction
Two features are believed to dominate the design of civil infrastructure in the 21st
century: consideration for impacts on sustainable development and analysis of infra-
structure interdependency. Parallel to that, computer based systems are evolving from
focusing on data interoperability and information sharing into knowledge manage-
ment. In fact, the city of tomorrow is shaped as a knowledge city that promotes a progressive and integrated knowledge culture. The main characteristics of a modern
knowledge city include [1]:
• Knowledge-based goods and services;
• Provision of instruments to make knowledge accessible to citizens;
• Provision of dependable and cost competitive access to infrastructure to sup-
port economic activity;
• An urban design and architecture that incorporates new technologies; and
• Responsive and creative public services.
It can be argued that the design of infrastructure in the 21st century requires the de-
ployment of an effective knowledge management system with two core components:
1. Theoretical Components: a shared knowledge model (ontology) of interde-
pendency and sustainability knowledge.
2. Implementation Components: the computer systems, inter-organizational
protocols and government policies that use the knowledge model.
The promise of implementing a common knowledge management system is envi-
sioned to allow stakeholders to work so closely that there are interoperable computer
systems that allow partner A to seamlessly access the corporate data of partner B,
manipulate certain aspects of their designs, and send a message: “we have changed
the schedule of your activity K or the design of your product M to achieve more
1 Academic models of such open cooperation started about 5-8 years ago (Hammer, 2001). Lately, some industries have started implementing these models (see for example: Fischer and Rehm, 2004).
The architecture provides access to information at three levels: corporate level (full
access to corporate information), partner level (access to certain information in part-
ner organizations) and public level (free access to a limited set of information).
Orthogonal to these levels, information can be viewed in portals dedicated to prod-
ucts, processes and actors. All technical details of the infrastructure system being
designed or managed will show up on the product view. Workflow and business
transactions will show on the process view. Status, interest, and tasks of stakeholders
will be shown in the actor view. Proper links will be made to relevant policies, codes
and regulations that have bearing on any of these views.
(Figure: the corporate domain (proprietary ontologies), the partners domain (interdependency domain) and the public domain (infrastructure ontologies), linked through a merger module)
3 Ontology Development
This section summarizes the progress made so far in ontology development. The
ontologies were developed in OWL (using Protégé) with the axioms molded in
SWRL (Semantic Web Rule Language).
material, security, performance index that relates to the IP performance and surround-
ing conditions attributes (see Figure 2).
The IPD-Ontology was created based on review of existing information modeling
efforts in the various infrastructure domains (water, wastewater, electricity, telecom-
munication, and gas). IPD-Onto reused existing taxonomies whenever possible and
created an upper level classification common to all products. IPD-Onto is currently
implemented in OWL and contains around 1,200 concepts and relationships.
In this regard, it is worth noting that several initiatives for interoperability in the in-
frastructure product realm have been attempted (e.g. LandXML [2], SDSFIE [3],
MultiSpeak [4], etc…). Nevertheless, these models lacked: 1) The ability to represent
knowledge rather than data in a domain, 2) Interoperability among various infrastruc-
ture domains due to their industry-specificity and, 3) Object orientation and its associ-
ated benefits in information modeling. Other more application-oriented initiatives
focused on the data interoperability between CAD and GIS for specific use case sce-
narios requiring their interaction [5].
Taxonomy: IPD-Onto is divided into two distinct ontologies. IPD-Onto Lite is con-
sidered as the common ontology that is shared among the process and actor ontolo-
gies. It contains only those concepts that need to be consistently defined among other
ontologies. Currently IPD-Onto Lite contains 132 concepts. It identifies 3 distinct
product groups under which any particular infrastructure product must fall. The sector
group identifies the main infrastructure sectors (water, wastewater, gas, etc…) The
functional group identifies 7 main functions that any infrastructure product must serve
(transportation, protection, tracing, control, storage, access, pumping). The compositional group identifies whether the product is a simple product (pipe, valve, fitting)
or a compound product (made up of more than one simple product) (water line,
bridge, culvert). The notion of composition is not absolute and depends on the domain
and setting considered (hence the need for categorization concepts at the root level).
For example, in the infrastructure asset management domain a pump would be con-
sidered a simple product while in the domain of pump design it would be considered a
complex product. Two concepts were central to the ontological model in this regard:
attributes (as they present characteristics that fully describe any product) and con-
straints (as they present concepts that impact all aspects relating to a product). Other
concepts like techniques and measures are also extensively utilized in the model.
Relationships: Taxonomical relationships are in the form of is-a relations (e.g. Elec-
tricSwitch is-a ControlProduct). Non taxonomical relations relate different concepts
together through a semantic construct for the relation. Some of the upper-level rela-
tions in IPD-Onto Lite include:
• InfrastructureProduct has_attribute InfrProductAttribute
• InfrastructureProduct has_technique InfrProductTechnique
• InfrastructureProduct has_constraint Constraint
• InfrProductAttribute has_domain Domain
Ontological modeling allows for creating taxonomies of relationships. As such, the
following 4 relationships are considered to fall under a class hierarchy of descending
abstraction (has, has_technique, has_method, has_repairmethod).
Axioms: Axioms serve to model sentences that are always true in a domain. They are
used to model knowledge that cannot be represented by concepts and relationships.
Axioms can be very useful in inferring new knowledge. Examples of some axioms
(and their equivalent in first order logic) defined in IPD-Onto Full include:
• PVC pipes have an attribute of high resistance to aggressive soils:
∀ x (Pipe(x) ^ has_MaterialType(x, PVC)) ⊃ has_SoilResistance(x, High)
• Steel pipes have an attribute of high strength:
∀ x (Pipe(x) ^ has_MaterialType(x, Steel)) ⊃ has_Strength(x, High)
• Fiber optic cables that do not have a casing are likely to be damaged during construction:
∀ x,σ,t (FiberOpticCable(x) ^ hasCasing(x, None) ^ ExcavationProcess(σ) ^ Occurs(σ, t)) ⊃ holds(has_attribute(x, damaged), t)
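As a plain-Python analogue of the first axiom above (illustrative only; in IPD-Onto Full the axiom is expressed in SWRL, roughly Pipe(?x) ^ has_MaterialType(?x, PVC) -> has_SoilResistance(?x, High), and evaluated by an OWL reasoner rather than by hand-written code), the inference of new knowledge can be pictured as a simple forward-chaining step over product records:

```python
def apply_pvc_soil_resistance_axiom(products):
    """Python rendering of: forall x (Pipe(x) ^ has_MaterialType(x, PVC))
    -> has_SoilResistance(x, High). Adds the inferred attribute in place."""
    for p in products:
        if p.get("concept") == "Pipe" and p.get("has_MaterialType") == "PVC":
            p["has_SoilResistance"] = "High"   # knowledge inferred, not asserted
    return products

pipes = [
    {"id": "P1", "concept": "Pipe", "has_MaterialType": "PVC"},
    {"id": "P2", "concept": "Pipe", "has_MaterialType": "Steel"},
]
print(apply_pvc_soil_resistance_axiom(pipes))
```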
This ontology captures process knowledge. A process has a life cycle (expressed in a set of phases) including conceptualization (capturing the requirements, identifying the constraints), planning (who will do what at which time)2, development (alternative development and evaluation), and implementation (development of the final output). In
addition to phasing, a full description of each process will require linkage to other
concepts, such as actors (the people and organizations) involved in the process, roles
(responsibilities of each actor), constraints (rules, codes, environmental conditions)
and the supporting mechanisms (theories, best practice, technologies) that support the
execution of the process. The ontological model of processes is perceived to be an
extension of the basic IDEF0 model (input, output, constraints and mechanisms).
2 Please notice that even the Planning Process has a planning phase of who will do what to develop the plan.
The proposed ontology has the following main concepts/domains (each is the root of a
taxonomy): Entity (including Project, Process, Product, Actor, and Resource),
Mechanism and Constraint. Any project (e.g. renovation of a street, construction of
a new street, a new transit system) produces a set of products (e.g. new lanes, new
bridge, dedicated lanes, transit tracks, new traffic patterns, and signals). Each of these
products has a set of possible design options. The options are developed through a set
of interlocked processes, where actors (e.g. design firms, Dept. of Transportation)
make decisions (e.g. set project objectives, develop options, configure options, and
approve an option). Each option has a set of impacts on various sustainability ele-
ments, such as health hazards, increased user cost, negative impacts on local business,
and enhancement to traffic flow. These elements include stakeholders (Actor), such as
a business or community group, or basic environmental elements, such as air, water,
and soil. For each of these impacts, a set of strategies could be used to reduce any
negative consequences on the impacted elements.
The ontological representation of the highway sustainability management process is at the intersection of this ontology and the aforementioned process ontology. Each Sus-
tainability process consists of two major phases: planning and management. Each
phase is subdivided into sub-processes.
For example, the Sustainability planning process encompasses five major sub proc-
esses: Analysis of existing elements process, Impact & risk identification process,
Impact & risk assessment process, Impact & risk mitigation process, and Code/policy
enforcement process. On the other hand, three themes of sustainability: Natural envi-
ronment, Society and Economy, have to be taken into account during any Sustainabil-
ity process. Therefore, a matrix is formed with the columns representing the three
themes and the rows representing the two phases. The first-level sub-processes of the highway sustainability optimization process are shown in the matrix in Figure 4. Each
Planning process includes the following sub-processes: analysis of existing systems,
identification of risks, risk assessment, development of risk mitigation tools, and code
compliance check. Each management process includes two sub processes: develop-
ment of risk/impact controls and evaluation process. For instance, the Analysis of
existing natural environment elements process is at the intersection of the analysis of
existing elements process and natural environment sustainability process. This is
because it covers both domains of knowledge: looking at existing conditions (in con-
trast to future/suggested conditions) and only considering the environmental aspects
of these conditions (in contrast to social and economic aspects).
4 Implementation
be represented in XML that will abide by the XML-Constraint schema. The constraint
file is then used to generate the necessary constraint checking code using the Ar-
cObjects programming language. The designer of a new utility system can consis-
tently check the proposed route of his/her utility throughout the design process against
any number of constraints that are shared and made explicit by other utility companies
or regulating bodies.
The primary use-case of the system assumes the following process flow (see
Figure 3). The designer of a new utility system uploads a new design to the system in
either CAD or GIS format. The system will start resolving semantic differences be-
tween the uploaded data and that of existing utilities in the street. Examples of seman-
tic inconsistencies include layer, attribute and value naming (e.g. the uploaded data
might refer to a ‘Gas_Pipe’ whereas the OO geo-data model uses ‘GasLine’). The
semantic matching is made possible by the Infrastructure Product Ontology running at
the back-end, but nonetheless the user is prompted to confirm semantic matching.
This semantic conflict resolution is similar to that performed by [7] in the context of
collaborative editing of design documents.
After all semantic differences are resolved, the existing geospatial utility data is
appended with the new design. The user selects which subset of constraints to check
for, based on the spatial constraint model. For example, the user may want to check
the design against ‘hard’ constraints first to ensure that all minimum clearance re-
quirements are satisfied and then check ‘advisory’ constraints to know how the design
may be improved. Alternatively the user may want to select those constraints that
have to do with Telecom infrastructure or those that are related to maintenance issues,
etc. Based on the selected constraint subset, the GIS system invokes a series of spatial
queries that are stored in the spatial constraint knowledgebase in XML format. The
output of this process is a violated constraint list that registers all constraints that were
violated by the proposed design.
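A simplified sketch of this checking step is given below, with plain Python standing in for the ArcObjects spatial queries; the constraint names, categories and clearance values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    name: str
    category: str          # e.g. "hard" (minimum clearance) or "advisory"
    min_clearance_m: float
    owner: str             # agency that made the constraint explicit

def check_design(route_clearances, constraints, categories):
    """Return the violated-constraint list for the selected constraint subset.

    route_clearances maps an existing utility owner to the closest distance
    (in metres) between its assets and the proposed route, as would be
    produced by the GIS spatial queries.
    """
    violated = []
    for c in constraints:
        if c.category not in categories:
            continue                       # the user selected only a subset to check
        clearance = route_clearances.get(c.owner)
        if clearance is not None and clearance < c.min_clearance_m:
            violated.append((c.name, c.owner, clearance))
    return violated

constraints = [
    Constraint("GasLine horizontal clearance", "hard", 0.6, "GasCo"),
    Constraint("Telecom duct advisory offset", "advisory", 1.0, "TelecomCo"),
]
print(check_design({"GasCo": 0.4, "TelecomCo": 1.5}, constraints, {"hard"}))
```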
The user can amend the design accordingly until it is ready for final submission af-
ter which other affected parties (agencies that have utilities within the ROW) are
notified. These agencies can then view the proposed new design using the system and
invoke any subset of constraints to check the quality of the design against the knowl-
edge base. The system allows for approvals and comments to be communicated
among the collaborators to expedite the design coordination process. The collabora-
tive web portal eliminates current practices of drawing exchange and review cycles
that create bottlenecks in the design process.
A prototype portal for integrating work processes across different organizations has
been implemented. The portal aims at integrating these processes based on knowledge
flow. i.e. a consolidated process structure is created by matching (in a semi-automated
fashion) closely aligned activities of the collaborating organizations. The following
main steps are included in the implementation (see Figure 4):
1. present processes: the user of the portal can use the proposed process ontology (in a drag-and-drop fashion) to build the structure of their processes. If the user prefers not to use the proposed ontology, they can upload and use their own ontology to represent their processes. If the user does not want to use an ontology to present their processes, they are requested to fill out a simple table of the main tasks and their related actors and products before they document their process. The table is then transformed into a small ontological model using Formal Concept Analysis (a minimal sketch of this transformation is given after this list).
2. ontology merger: a separate module is then invoked to provide interoperability
between the different ontologies of all collaborating organizations (see next section).
3. Establish collaborative process: the portal sorts out similarities in the different
organization’s processes. A user (called the coordinating officer) can use these simi-
larities in developing a common process. Basically, the coordinating officer can ac-
cess all the processes and drag-and-drop any activity from any organization into the
combined process. The combined process can show the flow of information between
different stakeholders. It can also show: who is involved in the project at which time,
what products (or parts of products) are being designed at which time and by whom,
and what attributes (of products) are being considered at which time?
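The following is the minimal sketch referred to in step 1 above: a small task table is turned into formal concepts (closed sets of tasks sharing attributes) by a brute-force version of Formal Concept Analysis. The tasks and attributes are invented examples; the portal would work with much richer tables.

```python
from itertools import combinations

# Hypothetical cross-table: each main task mapped to the actors/products it involves.
context = {
    "ReviewDrawings": {"Designer", "DrawingSet"},
    "ApproveDesign":  {"Owner", "DrawingSet"},
    "IssuePermit":    {"Municipality", "DrawingSet"},
}

def common_attrs(tasks):
    """Attributes shared by every task in the set (FCA derivation on tasks)."""
    sets = [context[t] for t in tasks]
    return set.intersection(*sets) if sets else set.union(*context.values())

def tasks_with(attrs):
    """Tasks that carry all of the given attributes (FCA derivation on attributes)."""
    return {t for t, a in context.items() if attrs <= a}

# Enumerate formal concepts: closed (task set, attribute set) pairs.
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        extent = tasks_with(common_attrs(set(combo)))   # closure of the task subset
        intent = common_attrs(extent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "share", sorted(intent))
```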
5 Ongoing/Future Work
Actor ontology: given that many actors are going to be involved in the exchange of
knowledge during the collaborative processing of infrastructure design, a substantial
flow of information is expected. Furthermore, the consideration of sustainability adds
a substantially new domain of knowledge, with very subjective and conflicting con-
tents. This ontology will attempt to link the roles and responsibilities of various actors
(including the general public) to their information needs. An agent-based system will
then be implemented to filter relevant information to interested actors based on their
References
1. Ergazakis, K., Metaxiotis, K., and Psarras, J. (2004). “Towards knowledge cities: concep-
tual analysis and success stories”, J. of Knowledge Management, Vol. 8, No.5.
2. LandXML. http://www.landxml.org Accessed July 2005
3. SDSFIE. Spatial Data Standard for facilities, infrastructure, & environment – Data Model &
Structure, U.S. Army CADD/GIS Technology Center, 2002
4. MultiSpeak. http://www.multispeak.org/whatisit.php Accessed July 2005
5. Peachavanish, R., Karimi, H. A., Akinci, B. and Boukamp, F. “An ontological engineering
approach for integrating CAD and GIS in support of infrastructure management”, Ad-
vanced Engineering Informatics, Vol. 20, No 1, 2006, pages 71-88.
6. CII-Construction Industry Institute. (1997). “Pre-Project Planning Handbook,” University
of Texas at Austin.
7. Gu, N., Xu, J., Wu, X., Yang J. and Ye, W., “Ontology based semantic conflicts resolution
in collaborative editing of design documents”, Advanced Engineering Informatics, Vol. 19,
No 2, 2005, pages 103-112.
8. Rouane, M., Petko V., Houari S., and Marianne H. Merging Conceptual Hierarchies Using
Concept Lattices. MechAnisms for SPEcialization, Generalization and inHerItance Work-
shop (MASPEGHI) at ECOOP 2004, Oslo, Norway, June 15, 2004.
Formalizing Construction Knowledge for Concurrent
Performance-Based Design
Martin Fischer
Abstract. The capability to represent design solutions with product models has
increased significantly in recent years. Correspondingly the formalization of de-
sign methods has progressed for several traditional design disciplines, making
the multi-disciplinary design process increasingly performance and computer-
based. A similar formalization of construction concepts is needed so that con-
struction professionals can participate as a discipline contributing to the model-
based design of a facility and its development processes and organization. This
paper presents research that aims at formalizing construction concepts to make
them self-aware in the context of virtual computer models of facilities and their
construction schedules and organizations. It also describes a research method
that has been developed at the Center for Integrated Facility Engineering at
Stanford University to address the challenge of carrying out scientifically sound
research in a project-based industry like construction.
1 Introduction
Virtual Design and Construction (VDC) methods are enabling project teams to con-
sider more design versions from more perspectives than possible with purely human
and process-based integration methods [1]. Advancements in product modeling (or
building information modeling (BIM)) methods [2], [3], [4], information exchange
standards [5], [6], [7], and formalizations of discipline-specific analysis methods [8],
[9], [10], [11] now allow many different disciplines (e.g., structural and mechanical
engineers) to have their concerns included in the early phases of a project [12], [13].
As a consequence, performance-based design supported by product models is becom-
ing state-of-the-art practice [1] (Fig. 1). The number of performance criteria that can
be analyzed from product models continues to increase and now include some archi-
tectural, many structural, mechanical (energy), acoustical, lighting, and other con-
cerns. These VDC methods are enabling multi-disciplinary design teams to consider
more performance criteria from more disciplines and life-cycle phases than possible
with traditional, document-based practice. They contribute greatly towards better
coordinated designs [14] and to creating Pareto-optimal designs [15] that are typically
more sustainable than designs created by the traditional design process that involves
design disciplines sequentially. In most cases several related product models form the
basis of this performance-based design [16], [17]. These models also support the
reuse of knowledge from project to project [18].
Fig. 1. Tools for analysis and visualization integrated through shared product models are
emerging as cornerstones of integrated, performance-based, life-cycle focused facility design.
The figure illustrates the current capabilities and offerings of mechanical design firm Granlund
in Helsinki, Finland, Figure from [1].
A promise of virtual design and construction is that not only the traditional design
disciplines, but also downstream disciplines (e.g., construction) can contribute to
improve the design of a facility in a timely and effective manner. It supports an ex-
pansion of the concept of performance-based design from a traditional focus on the
physical form of a facility and its predicted behaviors during facility operations (e.g.,
the performance of the structural system during an earthquake) to the concurrent de-
sign of a project’s product (i.e., the facility itself) and the organizations and processes
that define, make, and use it. The construction perspective is an important perspective
to consider in this expanded performance concept of facility design. It considers the
constructibility and therefore the economy in monetary, environmental, and social
costs of a particular facility design and includes the performance-based design of the
virtual and physical construction processes in the context of the facility’s lifecycle.
However, construction knowledge has not yet been formalized to the extent neces-
sary to consider construction input explicitly in the information models and systems
used to represent and analyze the concerns of the various design disciplines in prac-
tice. Furthermore, a conceptual limitation of the modeling and analysis approaches
used for the concerns of traditional design disciplines is that the underlying represen-
tation is typically a 3D product model. However, the explicit consideration of con-
struction concerns in a performance-based design process requires not only the
formalization of a wide range of construction knowledge to support computer-based
analyses of productivity, safety, workflow, and other concerns, but also the addition
of the time dimension to the 3D product model, since the time dimension is a critical
factor in the consideration of construction concerns early in the design of a facility.
early project phases. To help overcome this difficulty, constructibility knowledge has been organized according to levels of detail of design decisions and the timing of constructibility input [28], [29].
Fig. 2. Social integration process widely used in today’s practice to improve the constructibility
of projects. Professionals need to understand a wide range of project information by interpret-
ing documents, make performance predictions in their minds, and share the predictions verbally
and with sketches with the other professionals.
Using these formal knowledge representation methods, a next, longer-term step to-
wards large scale integration is for each element to “see” what affects its design and
behavior. An “element” can be a physical item like a wall, a process like an activity,
or an organizational actor like a company.
For example, a self-aware scaffold would recognize when the facility design has
changed and check whether its design needs to change, or an activity in such a model
would recognize when its sequence relationships to other activities have changed and
compute the impact of the revised activity sequence on its production rate. Note that
these self-aware elements are aware of what affects their own design and behavior,
but do not need to be aware of the impacts changes in their design and parameters
have on other elements. For example, the self-aware scaffold is not aware of the
schedule impact of a change to its design. It knows only whether its own design works in the context of the facility design and schedule. The self-aware schedule would
compute the schedule impact of a change in the scaffold design. The self-aware activ-
ity knows how its production rate is affected by, among other things, the activity
sequence it is part of, but does not know the overall cost impact of the change in pro-
duction rate. A self-aware cost element would figure out the cost impact.
It is important that construction and facility elements are made self-aware in this
manner, i.e., each element knows what affects its design and not what effect the de-
sign of a particular element has on other elements, to enable the flexible use of these
elements for facility design and to make the maintenance of the knowledge encapsu-
lated in these elements manageable. The knowledge encapsulated in self-aware ele-
ments that focus on the computation of the impact of their design on other project
elements would be difficult to maintain since the knowledge base needed would typi-
cally come from many disciplines and the nature and magnitude of the impact cannot
always be predicted a priori. For example, the cost impact of a sequence change may
depend on other aspects of a construction schedule, e.g., access conditions to the site,
which the activity cannot know about, but a cost element could include in its knowl-
edge base. In my experience, maintaining, for example, the knowledge about the possible impacts of a change in activity sequence quickly becomes intractable, because many conditions affect the types and magnitudes of the impacts such a change has on other project elements and their performance in the context of the overall design of a facility and its organizations and processes.
To the extent to which self-aware “virtual elements” can be formalized and imple-
mented as computational models and methods, the resulting computer model of the
design of a project becomes intelligent and can actively support the concurrent efforts
of the various construction disciplines (architects, structural engineers, builders, etc.)
to integrate their concerns and information with everyone else’s concerns and infor-
mation. Such self-aware elements would also enable a pull-driven method for design,
which should be more productive than the prevalent current push-driven design meth-
ods. For example, a construction activity that knows what building elements it is
building and that knows what resources it consumes can react automatically to
changes in the design of its building elements or to changed resource availability. It
can automatically adjust its duration, its timing, its relationship to other activities, etc.
and make this updated information available for other analyses, which can then be
carried out when they are needed. In contrast, a push-driven design method would
calculate the impacts of a design change just in case, regardless of whether a project
stakeholder actually needs that information at the time.
Such a self-aware activity can support a construction team much more proactively
and quickly with insights into the impact of changes and changed conditions than an
activity that can only gain self-awareness through human interpretation. It is challeng-
ing, however, to formalize and validate the concepts needed for construction due to
the large-scale integration needed and due to the unique nature and context of each
project. The challenge is to find the appropriate level of formalization so that the conceptual model is general enough to apply in a number of situations, yet powerful enough to provide a useful level of intelligence or self-awareness in a specific situation on a construction project.
For example, the work in my research group has focused on formalizing the following
construction-related concerns. This work is extending the conceptual basis of virtual
construction elements to make them more intelligent and self-aware in the context of
a project design:
more disciplines early in project design and throughout project development and help
project teams develop integrated design solutions that perform better for more per-
formance criteria. The result should be a seamless process of sustainable design, con-
struction, and use of facilities. Significant research is still needed to formalize these
self-aware construction elements in the context of design solutions of the many disci-
plines involved in a facility project and the economic, environmental, and social life-
cycle context of facilities. Therefore, the second part of this paper describes the
research method developed at CIFE in support of such research efforts. The goal of
the method is to help researchers achieve research contributions that are scientifically
sound and practically relevant and applicable in the experience-based, anecdotally-
focused construction industry.
literature with predictions and insights from experts and with descriptions, explana-
tions, or predictions from models developed from the observations, theory, and expert
opinions. To support this research process, we have developed the ‘horseshoe’ re-
search method at CIFE. The method supports researchers in building on the experien-
tial knowledge and anecdotal evidence that can be gathered on construction sites in
the context of existing theory and expert knowledge to carry out practically-relevant
and scientifically-sound research.
We call the method the ‘CIFE horseshoe research method’ (Fig. 3). Given the unique
combination of large-scale integration challenges on construction projects, the method
cannot guarantee full repeatability of a research effort by different research teams
addressing the same topic, but we have tried to make the method as explicit and rep-
licable as possible given the nature of the domain studied. We have found that stu-
dents who work with this method progress more quickly to defensible research results
and can understand each other's work more easily, quickly, and fully.
While one can enter the steps in the horseshoe diagram showing the research proc-
ess in Fig. 3 at any step, I will describe the method from the upper left corner around
to the lower left corner. Throughout the remainder of Section 3 I will use an example
from a recent research project in my group to illustrate the steps of this method – the
development of a geometry-based construction process modeling method motivated
by our experience in applying 4D models to plan the construction of part of Disney’s
California Adventure® theme park [73].
research project that starts with a problem definition that is too vague and too broad
will probably not yield a productive research process and a strong research result
because the criteria for success are not clear.
[Fig. 3 (diagram): Problem in Practice (metrics/criteria, observed scope/domain); Theoretical Limitations (point of departure); Intuition; Research Questions (testable?); Research Tasks (incl. testing); Research Results (evidence? generality, power); Contributions to Knowledge; Intellectual Merit; Practical Significance]
to knowledge this statement offers many more specific dimensions along which
evidence for the existence of the problem can be sought and for which a general solu-
tion (approach, method) can be formalized. For example, does this problem manifest
itself for all lagoon projects, or for all construction activities whose scope of work can
be represented with one or a few 3D CAD objects, but requires many activities to
complete?
Fig. 4. Snapshot of a parametrically generated 4D model of the lagoon for the example project
in this section [73]
My experience is that the precise articulation of a specific problem for a few spe-
cific projects is the fastest way to develop a precise problem statement that allows the
researchers to look for relevant points of departure, formulate a sound research
method and plan, and test their results.
3.3 Intuition
This is the least formalized step in the CIFE horseshoe research process. However,
typically, an intuition is needed about how the problem could be addressed in a gen-
eral, i.e., project-independent way. For example, for the research that eventually
resulted from the lagoon case, the researcher’s intuition was that discrete event simu-
lation methods combined with geometric transformation mechanisms that could auto-
matically generate the appropriate level of detail in the 3D model to match the desired
level of detail in the schedule combined with a formalization of scheduling knowl-
edge might yield a novel approach to 4D modeling that would solve the identified
problem.
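The flavor of this intuition, though not the method of [73] itself, can be sketched in a few lines of Python: carve one large CAD object into zones that match the level of detail wanted in the schedule and derive one activity per zone from a production rate. The rectangular footprint, zone counts, and the 250 m² per day rate are illustrative assumptions.

```python
# Toy illustration of the intuition only: subdivide one large 3D object into
# zones matching the schedule's level of detail and generate activities from
# them. The real geometry-based method [73] is far richer; all numbers here
# are assumptions.

def subdivide(length_m, width_m, zones):
    """Split a rectangular footprint into equal construction zones."""
    zone_len = length_m / zones
    return [{"zone": i + 1, "area_m2": zone_len * width_m} for i in range(zones)]

def activities_from_zones(zones, rate_m2_per_day=250.0):
    """One excavation activity per zone, with duration from an assumed rate."""
    acts, start = [], 0.0
    for z in zones:
        dur = z["area_m2"] / rate_m2_per_day
        acts.append({"name": f"Excavate lagoon zone {z['zone']}",
                     "start_day": round(start, 1),
                     "duration_days": round(dur, 1)})
        start += dur                      # assume simple sequential work
    return acts

# Coarse master-schedule view (2 zones) versus a more detailed 4D view (8 zones).
for n in (2, 8):
    print(f"--- {n} zones ---")
    for act in activities_from_zones(subdivide(120.0, 40.0, n)):
        print(act)
```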
Only one good question is needed to make a research effort worthwhile. In my ex-
perience, it also takes significant effort to develop a good research question or ques-
tions. Typically, initial research questions are too broad and the researchers have too
many questions to make a successful research project possible in a reasonable time-
frame. A key criterion for a good research question is whether it is testable. That is, as soon as a research question is formulated, the researcher should think about how
evidence for the generality and the power of an answer (solution) to the question
could be found or developed. Hence, it is critical that the research question(s) relate
directly to the domain established in the research problem statement (to set up a future
claim for the generality of the formalized solution) and to the metrics used to quantify
the problem (to set up a future claim for the power of the formalized solution). Ques-
tions that can be answered with a ‘yes’ or ‘no’ answer are typically not good research
questions. Many research questions start with ‘what’ or ‘how’. A further challenge in
formulating a research question is that it must be possible to know when one has an-
swered the question, i.e., when the research is done. Any research project will create
the foundation for new research, of course, but it is vitally important that a specific
scope of work is identified in the question(s) for a particular research project. Finally,
a research question needs to be formulated in a way that makes any finding an interesting answer. If not, the researcher might set himself up for failure or
might anticipate (hope for) a specific result, which will likely cloud the researcher’s
objectivity and make the research biased towards a specific outcome, i.e., the re-
searcher may look for, and therefore find, evidence for particular phenomena.
Depending on the research problem, the research tasks may include further literature
study, interviews, surveys, case studies, observations of practice, participation in
ongoing construction projects, ontology development, implementation of software
prototypes, etc. to develop the research results and contributions to theory. An
important consideration for research planning is the number of test cases (or interview
or survey subjects) that will be needed to be able to argue for the generality of the
research result(s). Testing of the research results is typically the most critical research
task, and the researcher should consider and refine the test plan often during the re-
search. A good test plan needs to be transparent, i.e., it needs to be plausible that an-
other researcher would have come to the same conclusions if she had done the same
test. Common test methods for the formalization of knowledge and methods for con-
struction tasks include:
Variation studies on retrospective cases. Essentially all research projects that for-
malize new concepts and methods carry out validation studies based on retrospective
or past cases. These studies are typically done on the computer in the lab and involve varying input parameters to test that the formalized concepts and the implemented software prototype perform in a technically sound way.
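In code, such a variation study often amounts to a parameter sweep with invariant checks. The sketch below is only an illustration: the one-line "prototype" stands in for the implemented research prototype, and the parameter ranges and invariants are assumptions.

```python
# Sketch of a retrospective variation study: sweep input parameters of a
# prototype and check invariants that any technically sound result must
# satisfy. The prototype function is a stand-in, not a real system.
from itertools import product

def prototype_schedule(crew_size, scope_m3, rate_m3_per_day_per_worker=5.0):
    """Stand-in for the research prototype under test."""
    return {"duration_days": scope_m3 / (crew_size * rate_m3_per_day_per_worker)}

def run_variation_study():
    failures = []
    for crew, scope in product([4, 8, 16], [500.0, 2000.0, 8000.0]):
        out = prototype_schedule(crew, scope)
        doubled = prototype_schedule(crew * 2, scope)
        # Invariants: durations are positive and shrink when the crew grows.
        if out["duration_days"] <= 0:
            failures.append((crew, scope, "non-positive duration"))
        if doubled["duration_days"] >= out["duration_days"]:
            failures.append((crew, scope, "no speed-up from a larger crew"))
    return failures

print(run_variation_study() or "all variations passed")
```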
Asking an expert panel. To validate the results of a research project, the researcher
could show the results to experts and ask them whether the approach and results make sense and whether the important concepts in the domain are covered adequately. A better
method is to give the experts several problems that can be addressed (solved) with the
research results and ask them for their predictions of the solutions to these problems,
then apply the method developed from the research results and compare the expert
predictions with the predictions of the new method [80].
Charrette tests. The researcher develops a test that is representative of the engineer-
ing task(s) the research is trying to improve, but that practitioners or other researchers
can carry out in an hour or two. Typically, a charrette test contrasts the performance
of a traditional or typical method of performing the tasks with the method enabled by
the concepts formalized in the research [81]. Ideally, a charrette test compares the
output and process performance of the two methods and therefore sheds light on
the impact newly developed concepts have on the execution of engineering tasks. The
advantage of a charrette test is that it allows researchers to isolate certain factors that
are difficult to isolate in practice. The disadvantage is that tasks rarely happen in prac-
tice in the way in which they are conceptualized in the charrette test. Nevertheless,
charrette tests can provide excellent evidence for the power and generality of the
research contributions because the researcher can design the tests so that the metrics
for power identified in the problem statement are addressed and so that the generality
of the research results can be tested (e.g., for how many project phases, stakeholders,
building types, levels of detail, etc. a contribution applies). A difficult aspect of char-
rette tests is that a ‘gold standard’ (i.e., the correct output of a task or right way for
doing a task) for test case tasks needs to be established. In some cases, this can only
be done by consulting experts and taking their opinion as the gold standard. In other
cases, a gold standard, i.e., the correct result of the test case tasks, can be calculated. It
is usually quite easy to study process performance differences between the two meth-
ods with a charrette test. Typical process performance metrics include the durations to
complete the tasks (or the number of tasks completed in a certain timeframe), the
number of issues, concerns, or criteria that could be considered, the number of alter-
natives that could be developed, the number of stakeholders that could be included,
Prospective or intervention case studies. The researcher (or someone else) applies
the formalized concepts to a real, ongoing construction project and, if necessary, sug-
gests changes to the project plan or design based on the insights from the application
of the newly developed concepts. The project manager then implements or rejects the
suggestions and the project or process outcome is observed. While anecdotal in na-
ture, prospective cases demonstrate the value of formalized concepts vividly, and it is
usually seen as convincing evidence of the power of the research contributions if the
concepts work at the scale and under the time and organizational pressures of an on-
going project and if seasoned practitioners pay attention to the results generated from
the application of the new concepts to a situation in practice [77].
For the case example research project, the researcher conducted – among other test
cases – retrospective test cases using the construction of parts other than the lagoon of
Disney's California Adventure®, such as the construction planning of the Seafood
Restaurant. The resulting schedules and 4D models were then reviewed by a construc-
tion expert from Disney's project team who had built that part of the park. The expert
assessed whether the resulting schedule was realistic and considered the important
constraints. Additional test criteria included, e.g., comparisons of the time needed to
generate a detailed construction schedule and 4D model from a 3D model and a master schedule using the geometric construction process modeling method (implemented
as a computer prototype) with the manual approach used on the actual construction
project.
The research results need to answer the research questions, of course. Most impor-
tantly, the results include evidence from the tests for the generality and power of the
research contributions using the metrics from the problem definition. For the example
research project, the results included the specification of the geometric construction
process modeling method, the implementation of a computer prototype based on the
method, and the results and insights from the tests.
The research contributions, i.e., the tested and validated results of the research, extend
prior work and contribute new concepts to theory. They become a new foundation for
further research and improved practice. The main contribution to knowledge of the
case example research is the formalization of a geometric construction process
modeling method. The method has become a basis to study additional construction
scheduling topics, such as studying the tradeoff between schedule uncertainty and
flexibility and the planning of temporary structures.
The research method asks researchers to address the practical significance of the
planned or completed research. The practical significance or impact must be ex-
plained with the same metrics used to define the problem. Even though Fig. 3 shows
this step as the last step, research projects often start here, i.e., with a vision for the
desired impact of the research or a vision of how engineers should be able to carry out a
particular task. The remaining steps of the research method are then tailored to sup-
port the vision. The practical significance of the case example research project lies,
e.g., in allowing construction engineers to base their work and their analyses on the
same 3D models and project schedule information as other disciplines. They can, in
this way, participate more effectively in the concurrent, performance-based design of
facilities.
The research method works well for the formalization of new construction (and other)
knowledge and methods to embed the newly formalized knowledge in facility defini-
tion, design, construction, commissioning, and operations phases. It forces the
researcher to develop a scope of work and research plan that is manageable and ex-
ecutable and leads to scientifically defensible and practically relevant results. The
researcher should, however, in a brief section embed the particular research effort in
the larger picture of theory and practice surrounding the research topic. For the exam-
ple project, this included a short discussion of concurrent engineering of the facility
and its construction schedule at different levels of detail.
This research method works best when it is used in all phases of a research project,
i.e., to define and select the focus of a research effort, to design a particular research
effort, to manage it, and to report on it. In all phases, the researcher should advance
the thinking on all steps as far as possible and proceed to the next research task based
on the ‘maximum anxiety principle’ [82], i.e., tackle the task that has the greatest
uncertainty or risk or lack of definition. I have found that a commonly used research
method like the method presented in this paper is particularly important for construc-
tion research, which still has a young research tradition.
For research efforts that aim mainly at deepening the understanding of current
practice, in-depth case studies [83] are usually more appropriate than the presented
method; see [84] for an example of the application of this type of research method.
construction and operations) are often missed. Formalizing and integrating construc-
tion concerns into the facility design have proven particularly difficult. While it will
be some time before construction project teams will be able to consider all the eco-
nomic, environmental, and social concerns for all disciplines and stakeholders for all
parts of a construction project throughout all the phases, it is important that better
visualization, integration, and automation methods become available for engineering
practice to improve the lifecycle performance of facilities. For the foreseeable future,
these integration methods will blend formal computational models with visualizations
and human cognition to leverage the expertise of humans and take advantage of exist-
ing and emerging computational modeling, simulation, and visualization tools. This
paper has summarized underlying methods to represent this knowledge and make it
available for concurrent engineering efforts of facilities and presented areas where
such formalizations have been developed or are being researched.
As more formalizations become available, research will be needed to find mecha-
nisms to integrate them across disciplinary concerns, lifecycle phases, and levels of
detail of the facility, its development processes, and organizations. Practitioners who
have focused on formalizing their knowledge and the knowledge of their firm for
integrated and automated application are already able to capitalize on efficiency gains
from the consistent and rapid application of this knowledge base at the right time in a
construction project and are seeing opportunities to expand their involvement in ear-
lier and later project phases. For example, the mechanical design firm Granlund has
seen the opportunities to participate in the project definition and facility management
phases increase significantly with the formalization of its knowledge base and with
the use of product models to support the lifecycle of buildings [1]. The price for these
opportunities is, however, the allocation of significant resources (about 5 to 10% of its
staff in Granlund’s case) to research and development activities, i.e., shifting repeti-
tive design work to the development of formal computer-based methods.
Extensions of the presented work include expanding the joint consideration of
economic, environmental, and social performance goals and including the operations
and maintenance phase in the continued development of methods to address this very
large-scale integration problem. The goal is to make the construction phase much
more efficient than it is today and enable students and professionals to understand the
large-scale integration problem they face when attempting to satisfy the concerns
from all the stakeholders and contribute to methods that balance facility lifecycle
costs and uses, maintenance costs, expected facility life, security costs, global, re-
gional, and local concerns relative to impacts on energy, water, air, etc. Finally, tools
to design appropriate integration mechanisms and methods to innovate on projects are
needed to improve how we integrate the many stakeholder and lifecycle concerns from project to project, and thereby better the lives of these stakeholders and the lives of their peers and children.
Acknowledgements
I am most indebted to my colleague at CIFE, John Kunz, for the many discussions and
work sessions that have led to the formalization of the research method presented in
this paper. I also thank the CIFE students for their inspiring work and the CIFE mem-
bers for their support of the environment that has made the presented work possible.
References
1. Hänninen, R.: Building Lifecycle Performance Management and Integrated Design Proc-
esses: How to Benefit from Building Information Models and Interoperability in Perform-
ance Management. Invited Presentation Watson Seminar Series Stanford Univ. (2006)
2. Eastman, C.: General Purpose Building Description Systems. Computer Aided Design 8(1)
(1976) 17–26
3. Bjork, B.C.: Basic structure of a proposed building product model. Computer Aided De-
sign 21(2) (1989) 71–78
4. Eastman, C.M., Siabiris, A.: Generic building product model incorporating building type
information. Automation in Construction 3(4) (1995) 283–304
5. Karola, A., Lahtela, H., Hänninen, R., Hitchcock, R., Chen, Q.Y., Dajka, S., Hagstrom, K.:
BSPro COM-Server - Interoperability between software tools using industrial foundation
classes. Energy and Buildings 34(9) (2002) 901–907
6. Lee, K., Chin, S., Kim, J.: A core system for design information management using indus-
try foundation classes. Computer-Aided Civil and Infrastructure Eng. 18(4) (2003)
286–298
7. Eastman, C., Wang, F., You, S.J., Yang, D.: Deployment of an AEC industry sector prod-
uct model. Computer Aided Design 37(12) (2005) 1214–1228
8. Rivard, H., Bedard, C., Ha, K.H., Fazio, P.: Shared conceptual model for the building en-
velope design process. Bldg. & Env. 34(2) (1999) 175–187
9. O'Sullivan, D.T.J., Keane, M.M., Kelliher, D., Hitchcock, R.J.: Improving building opera-
tion by tracking performance metrics throughout the building lifecycle (BLC). Energy and
Buildings 36(11) (2004) 1075–1090
10. Shea, K., Aish, R., Gourtovaia, M.: Towards integrated performance-driven generative de-
sign tools. Automation in Construction 14(2) (2005) 253–264
11. Mora, R., Rivard, H., Bedard, C.: Computer representation to support conceptual structural
design within a building architectural context. J. Comput. Civ. Eng. 20(2) (2006) 76–87
12. Howard, H.C., Levitt, R.E., Paulson, B.C., Pohl, J.G., Tatum, C.B.: Computer integration:
Reducing fragmentation in AEC industry. J. Comput. Civ. Eng. 3(1) (1989) 18–32
13. Rivard, H., Fenves, S.J.: Representation for conceptual design of buildings. J. Comput.
Civ. Eng. 14(3) (2000) 151–159
14. Hegazy, T., Zaneldin, E., Grierson, D.: Improving design coordination for building pro-
jects. I: Information model. J. Constr. Eng. & Mgt. 127(4) (2001) 322–329
15. Gero, J.S., Louis, S.J.: Improving Pareto optimal designs using genetic algorithms. Micro-
computers in Civ. Eng. 10(4) (1995) 239–47
16. Turk, Z.: Phenomenological foundations of conceptual product modelling in architecture,
engineering and construction. AI in Eng. 15(2) (2001) 83–92
17. Kam C., Fischer M., Hänninen R., Karjalainen A., Laitinen J.: The product model and
Fourth Dimension project. ITCon 8 (2003) 137–166
18. Demian, P., Fruchter, R.: Measuring relevance in support of design reuse from archives of
building product models. J. Comput. Civ. Eng. 19(2) (2005) 119–136
19. Russell, J.S., Gugel, J.G., Radtke, M.W.: Comparative analysis of three constructibility
approaches. J. Constr. Eng. & Mgt. 120(1) (1994) 180–195
20. O’Connor, J.T.: Impacts of Constructibility Improvement. J. Constr. Eng. & Mgt. 111(4)
(1985) 404–410
21. Tatum, C.B.: Improving Constructibility During Conceptual Planning. J. Constr. Eng. &
Mgt. 113(2) (1987) 191–207
22. Boeke, E.H. Jr.: Design for constructibility. A contractor's view. Concrete Constr. 35(2)
(1990) 3p
23. Constructibility and constructibility programs: White paper. J. Constr. Eng. & Mgt. 117(1)
(1991) 67–89
24. O'Connor, J.T., Miller, S.J.: Constructibility Programs: Method for Assessment and
Benchmarking. J. Performance of Constructed Facilities 8(1) (1994) 46–64
25. Glavinich, T.E.: Improving constructibility during design phase. J. Arch. Eng. 1(2) (1995)
73–76
26. Fisher, D.J., Anderson, S.D., Rahman, S.P.: Integrating constructibility tools into construc-
tibility review process. J. Constr. Eng. & Mgt. 126(2) (2000) 89–96
27. Pocock, J.B., Kuennen, S.T., Gambatese, J., Rauschkolb, J.: Constructibility state of prac-
tice report. J. Constr. Eng. & Mgt. 132(4) (2006) 373–383
28. Fischer, M., Tatum, C.B.: Characteristics of Design-Relevant Constructibility Knowledge.
J. Constr. Eng. & Mgt. 123(3) (1997) 253–260
29. Pulaski, M.H., Horman, M.J.: Organizing constructibility knowledge for design. J. Constr.
Eng. & Mgt. 131(8) (2005) 911–919
30. Paulson, B.C.: Interactive Graphics for Simulating Construction Operations. J. Constr.
Div. 104(1) (1978) 69–76
31. Cleveland, A.B. Jr.: Real-time animation of construction activities. Constr. Congr. I - Ex-
cellence in the Constructed Project (1989) 238–243
32. Retik, A., Warszawski, A., Banai, A.: Use of computer graphics as a scheduling tool.
Bldg. & Env. 25(2) (1990) 133–142
33. Fischer, M., Liston, K., Schwegler, B.R.: Interactive 4D Project Management System. 2nd
Civ. Eng. Conf. in the Asian Region (2001) 367–372
34. Fischer, M., Haymaker, J., Liston, K.: Benefits of 3D and 4D Models for Facility Manag-
ers and AEC Service Providers. 4D CAD and Visualization in Construction - Develop-
ments and Applications Issa, R.R.A., Flood, I., O'Brien, W. (eds.) Balkema (2003) 1–32
35. Chau, K.W., Anson, M., Zhang, J.P.: Four-dimensional visualization of construction
scheduling and site utilization. J. Constr. Eng. & Mgt. 130(4) (2004) 598–60
36. Kamat, V.R., Martinez, J.C.: Comparison of simulation-driven construction operations
visualization and 4D CAD. Winter Simulation Conf. 2 (2002) 1765–1770
37. Haymaker, J., Fischer, M.: 4D Modeling on the Walt Disney Concert Hall. tec21 38
(2001) 7–12
38. Fischer, M.: The Benefits of Virtual Building Tools. Civ. Eng. 73(8) (2003) 60–67
39. Liston, K., Fischer, M., Winograd, T.: Focused Sharing of Information for Multidiscipli-
nary Decision Making by Project Teams. ITCon 6 (2001) 69–81
40. Khanzode, A., Fischer, M., and Reed, D.: Case Study of the Implementation of the Lean
Project Delivery System (LPDS) using Virtual Building Technologies on a large Health-
care Project. 13th Annual Conf. of the Int. Group for Lean Constr. (2005) 153–160
41. Rischmoller, L., Alarcon, L.F., Koskela, L.: Improving value generation in the design
process of industrial projects using CAVT. J. Mgt. in Eng. 22(2) (2006) 52–60
42. Hendrickson, C., Zozaya-Gorostiza, C., Rehak, D., Baracco-Miller, E., Lim, P.: Expert
System for Construction Planning. Comput. Civ. Eng. 1(4) (1987) 253–269
43. Fisher, D.J., Rajan, N.: Automated constructibility analysis of work-zone traffic-control
planning. J. Constr. Eng. & Mgt. 122(1) (1996) 36–43
44. Poon, J.: Development of an expert system modelling the construction process. J. Constr.
Research 5(1) (2004) 125–138
45. Darwiche, A., Levitt, R., Hayes-Roth, B.: OARPLAN: Generating Project Plans by Rea-
soning about Objects, Actions and Resources. AI EDAM, 2(3) (1988) 169–181
46. Cherneff, J., Logcher, R., Sriram, D.: Integrating CAD with construction-schedule genera-
tion. J. Comput. Civ. Eng. 5(1) (1991) 64–84
47. Fischer, M.A.: Automating Constructibility Reasoning with a Geometrical and Topologi-
cal Project Model. Comput. Syst. in Eng. 4(2-3) (1993) 179–192.
48. Chevallier, N., Russell, A.D.: Automated schedule generation. Canad. J. Civ. Eng. 25(6)
(1998) 1059–1077
49. Nakasuka, S., Yoshida, T.: Dynamic scheduling system utilizing machine learning as a
knowledge acquisition tool. Int. J. of Production Research 30(2) (1992) 411–431
50. Skibniewski, M., Arciszewski, T., Lueprasert, K.: Constructibility analysis: Machine
learning approach. J. Comput. Civ. Eng. 2(1) (1997) 8–16
51. Brilakis, I., Soibelman, L., Shinagawa, Y.: Material-based construction site image re-
trieval. J. Comput. Civ. Eng. 19(4) (2005) 341–355
52. Schmitt, G., Engeli, M., Kurmann, D., Faltings, B., Monier, S.: Multi-agent interaction in a
complex virtual design environment. AI Communications 9(2) (1996) 74–78
53. Schnellenbach-Held, M., Geibig, O.: Intelligent agents in civil engineering. Int. Conf. on
Comput. Civ. Eng. (2005) 989–998
54. Ito, K.: Utilization of 3-D graphical simulation with object-oriented product model for
building construction process. Congr. on Comput. in Civ. Eng. (1988) 73–78
55. Froese, T.M., Paulson, B.C. Jr.: Integrating project management systems through shared
object-oriented project models. Int. Conf. on Applications of AI in Eng. (1992) 69–85
56. Froese, T.: Models of construction process information. J. Comput. Civ. Eng. 10(3) (1996)
183–193
57. Stuurstraat, N., Tolman, F.: Product modeling approach to building knowledge integration.
Automation in Construction 8(3) (1999) 269–75
58. Halfawy, M., Froese, T.: Building integrated architecture/engineering/construction sys-
tems using smart objects: Methodology and implementation. J. Comput. Civ. Eng. 19(2)
(2005) 172–181
59. Shen, Z., Issa, R.A., O'Brien, W., Flood, I.: A trade construction knowledge module to en-
able use of design component data in project management. Int. Conf. on Comput. Civ.
Eng. (2005) 1595–1604
60. Lee, S.H., Pena-Mora, F., Park, M.: Dynamic planning and control methodology for stra-
tegic and operational construction project management. Automation in Construction 15(1)
(2006) 84–97
61. Ugwu, O.O., Anumba, C.J., Thorpe, A.: Ontological foundations for agent support in con-
structibility assessment of steel structures - A case study. Automation in Construction
14(1) (2005) 99–114
62. Udaipurwala, A.H., Russell, A.D.: Hierarchical clustering for interpretation of spatial con-
figuration. Constr. Research Congr. - Broadening Perspectives (2005) 1137–1147
63. Anumba, C.J., Baldwin, A.N., Bouchlaghem, D., Prasad, B., Cutting-Decelle, A.F., Dufau,
J., Mommessin, M.: Integrating concurrent engineering concepts in a steelwork construc-
tion project. Conc. Eng. Research & Applications 8(3) (2000) 199–212
64. Navon, R., Shapira, A., Shechori, Y.: Automated rebar constructibility diagnosis. J.
Constr. Eng. & Mgt. 126(5) (2000) 389–397
65. Milberg, C., Tommelein, I.: Role of Tolerances and Process Capability Data In Product
and Process Design Integration. Construction Research Congr. - Winds of Change: Integra-
tion and Innovation in Construction (2003) 795–802
66. Clayton, M.J., Kunz, J.C., Fischer, M.A.: Rapid Conceptual Design Evaluation Using a
Virtual Product Model. Eng. Applications of AI 9(4) (1996) 439–451
67. Clayton, M.J., Teicholz, P., Fischer, M., and Kunz, J.: Virtual components consisting of
form, function, and behavior. Automation in Construction 8 (1999) 351–367
68. O'Brien, W.J.: Capacity Costing Approaches for Construction Supply-Chain Management.
Ph.D. Thesis, Stanford Univ. (1998)
69. Fischer, M.A., Aalami, F.: Scheduling with Computer-Interpretable Construction Method
Models. J. Constr. Eng. & Mgt. 122(4) (1996) 337–347
70. Luiten, G.T., Tolman, F., Fischer, M.A.: Project-modelling in AEC to integrate design and
construction. Computers in Industry 35(1) (1998) 13–29
71. Akinci, B., Fischer, M., Kunz, J.: Automated Generation of Work Spaces Required by
Construction Activities. J. Constr. Eng. & Mgt. 128(4) (2002) 306–315
72. Staub-French, S., Fischer, M., Kunz, J., and Paulson, B.: A generic feature-driven activity-
based cost estimation process. Advanced Eng. Informatics 17(1) (2003) 23–39
73. Akbas, R.: Geometry Based Modeling and Simulation of Construction Processes. Ph.D.
Thesis, Stanford Univ. (2003)
74. Koo, B.: Formalizing Construction Sequencing Constraints for the Rapid Generation of
Scheduling Alternatives. Ph.D. Thesis, Stanford Univ. (2003)
75. Haymaker J., Kunz, J., Suter, B., Fischer, M.: Perspectors: composable, reusable reasoning
modules to construct an engineering view from other engineering views. Advanced Eng.
Informatics 18(1) (2004) 49–67
76. Kiviniemi A.: Product Model Based Requirements Management. Ph.D. Thesis, Stanford
Univ. (2005)
77. Kam, C.: Dynamic Decision Breakdown Structure: Ontology, Methodology, and Frame-
work For Information Management In Support Of Decision-Enabling Tasks in the Build-
ing Industry. Ph.D. Thesis, Stanford Univ. (2005)
78. Reinhardt, J., Akinci, B., Garrett, J.H.: Navigational models for computer supported pro-
ject Management tasks on construction sites. J. Comput. Civ. Eng. 18(4) (2004) 281–290
79. Hammad, A., Garrett, J.H. Jr., Karimi, H.A.: Mobile Infrastructure Management Support
System Considering Location and Task Awareness. Towards a Vision for Information
Technology in Civ. Eng. Conf. (2003) 157–166
80. Christiansen, T.: Modeling Efficiency and Effectiveness of Coordination in Engineering
Design Teams. Ph.D. Thesis, Stanford Univ. (1993)
81. Clayton, J., Fischer, M., Teicholz, P., Kunz, J.: The Charrette Testing Method for CAD
Research, Applied Research in Architecture and Planning 2 Hershberger, R., Kihl, M.
(eds.) (1996) 83–91
82. Kunz, J.: Concurrent Knowledge Systems Engineering. Working Paper 5, CIFE, Stanford
Univ. (1989)
83. Yin R.: Applications of Case Study Research. 2nd Ed. Sage Publications (1994)
84. Hampson K.: Technology Strategy and Competitive Performance: A Study of Bridge Con-
struction. Ph.D. Thesis, Stanford Univ. (1993)
Next Generation Artificial Neural Networks and Their
Application to Civil Engineering
Ian Flood
Abstract. The aims of this paper are: to stimulate interest within the civil engi-
neering research community for developing the next generation of applied arti-
ficial neural networks; to identify what the next generation of devices needs to
achieve; and to provide direction in terms of how their development may pro-
ceed. An analysis of the current situation indicates that progress in the devel-
opment of this technology has largely stagnated. Suggestions are made for
achieving the above goals based on the use of genetic algorithms and related
techniques. It is noted that this approach will require the design of some very
sophisticated genetic coding mechanisms in order to develop the required
higher-order network structures, and may utilize development mechanisms ob-
served in nature such as growth, self-organization, and multi-stage objective
functions. The capabilities of such an approach and the way in which they can
be achieved are explored in reference to the truck weigh-in-motion problem.
1 Introduction
Civil engineering, as with many disciplines, is fraught with problems that have defied
solution using conventional computational techniques, but can often be solved by
people with appropriate training and expertise. Examples include determining legal
compliance of designs from drawings and specifications; identifying constructability
problems from the design of a building; and measuring construction progress from
site images. Automated methods of performing such tasks would help reduce design
and construction costs, and improve or validate the efficacy of design and construc-
tion decision making. Classical artificial intelligence has targeted this class of prob-
lems by attempting to capture the essence of human cognition at the highest level,
although progress has been frustratingly slow. This disappointment helped revive
interest in computational devices that emulate the operation of the brain at the neu-
ronal level (albeit in a highly abstract form) with the intent of achieving higher level
cognitive skills as an emergent property.
Indeed, within the civil engineering discipline, artificial neural networks appear
from publication statistics to be one of the great successes of computing. In the
ASCE Journal of Computing, for example, over 12% of papers published from 1995
to 2005 (54 out of 445) have used the term “neural” as part of their title [1]. Further-
more, the distribution of these publications by year (see Figure 1) indicates that there
has been no decline in interest over this period. The citation indices similarly con-
firm the popularity of artificial neural networks: for example, according to the ISI
Web of Knowledge [18] and summarized in Table 1, three of the top five most fre-
quently cited articles from all issues of the ASCE Journal of Computing are on artifi-
cial neural networks, including the first and second placed articles in this ranking.
This enthusiasm and the diversity of applications reported across all fields of civil
engineering make this technology difficult to ignore.
[Figure: bar chart of Number of Publications per Year, 1995 to 2005]
Fig. 1. Distribution of articles in the Journal of Computing using the terms: (1) “neural” and (2)
“genetic algorithm(s)” or “GA(s)” in their title [1].
Table 1. The five most frequently cited articles in the Journal of Computing in Civil Engineer-
ing [18]
Article Title                                                     Number of Citations
Neural Networks in Civil Engineering, Parts I and II [5]          131
A comparison of progress with the most popular computational model, the general
purpose electronic digital computer, reinforces this view that progress in the rate of
development of artificial neural networks has been slow. The initial development of
artificial neural networks dates back to the mid-1950s [15], whereas electronic digital computing is only about a decade or so older, with the first stored-program machine (the Small Scale Experimental Machine, SSEM) operating in 1948. Given this,
it might be expected that the two technologies would have reached a similar state of
development. However, since its inception, the electronic digital computer has
evolved steadily from a device comprising just a couple of thousand primary process-
ing units (switches or transistors) into one comprising billions organized into a
sophisticated structure of higher-order functional subsystems. Artificial neural
networks, on the other hand, have failed to advance beyond simple applications that
rarely require more than a few hundred primary processing units (neurons in this case)
arranged with almost no higher-order structuring.
This point is illustrated in Figure 2 which compares complexity for various compu-
tational systems, including digital computers, artificial neural networks, and the brains
of various animals. The chart measures complexity in the simple terms of the number
of primary processing units that can be usefully employed by the system, and is scaled
logarithmically with a range from around 300 units to 100 billion units. Using this
simple measure, we can compare today’s most complex integrated circuit to the brain
of a rabbit (each comprising on the order of 10^9 primary processing units), while artificial neural networks have progressed little further than the brain of the nematode (comprising on the order of just a few hundred primary processing units). In fact, the
vast majority of artificial neural network applications in civil engineering employ no
more than a few tens of neurons. While there are examples of artificial neural net-
works in civil engineering that make useful employment of several hundreds of neu-
rons (such as the CGM network for modeling transient-heat flow in buildings [7]) or
even thousands of neurons (such as the WIM network for truck Weigh-in-Motion [8]),
the additional complexity in these devices is there simply to provide greater precision
in results, not greater functionality.
Arguably, the number of primary processing units that can be employed usefully in
a given system provides an overly simplistic measure of comparative complexity.
After all, a neuron in an artificial neural network is usually a much more complicated
processing device than a transistor. Likewise, a biological neuron is far more compli-
cated than an artificial neuron. Also, it is likely that significant aspects of the compu-
tational mechanisms underlying biological neural networks are yet to be discovered
and could be dependent on key processes that operate well below the level of individ-
ual neurons (see Bullock [2] for example). That said, the comparison of Figure 2
clearly demonstrates two important and related points: (i) that applied artificial neural
networks, unlike digital computers, have failed to advance very far over their history;
and (ii) that, according to the biological model, artificial neural networks have a great
potential yet to be realized.
An obvious question at this stage is: if development has been so limited in the application of artificial neural networks to civil engineering, why has there been such a high and sustained level of interest in the technology? The answer is that
artificial neural networks, notwithstanding their currently rudimentary form, are very
good at solving vector mapping problems that are non-linear in form and comprise a
fixed set of independent variables, a common class of problems in engineering. In
this context, they frequently provide more accurate solutions than the alternative
modeling techniques (such as multi-variate nonlinear regression analysis), and do not
require the user to have a good understanding of the basic shape of the function being
modeled. Still, solving direct vector mapping problems is no more than a primitive
first step in the application of artificial neural networks if we dare aspire to the com-
putational capabilities of the brain.
Not surprisingly, the biological model suggests that an increase in cognitive skills can
be achieved by moving towards networks of greater complexity. Brain size alone is a
poor indicator of intelligence of a species since larger organisms require more brain
capacity for basic monitoring and control of the body; otherwise, we would have to
conclude that the blue whale is the most intelligent species, with a brain mass of around 6
kg. The ratio of brain size to body size is also not a particularly accurate indicator of
intelligence since the required brain capacity for basic monitoring and control of the
body does not increase linearly with body size. This has led to the development of the
so called encephalization quotient (EQ) as an indicator of intelligence in a species,
being the ratio of the actual brain size of an organism to its expected brain size needed
for basic monitoring and control of the body (see Jerison [10] for example). Even this
measure can lead to some unexpected results in the ranking of species, and so it has
been proposed that a measure of residual brain capacity (such as the difference be-
tween actual brain size and expected brain size for a species) is a better indicator of
intelligence.
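For concreteness, the quotient itself is simple to compute. The sketch below uses Jerison's often-quoted mammalian fit for the expected brain mass (E ≈ 0.12·P^(2/3), masses in grams); that constant and the rounded species masses are assumptions for illustration only.

```python
# Rough illustration of the encephalization quotient (EQ). The allometric fit
# E = 0.12 * P**(2/3) (grams) follows Jerison's commonly quoted mammalian
# relation; the constant and the rounded masses below are assumptions.

def expected_brain_g(body_g, k=0.12, exponent=2.0 / 3.0):
    return k * body_g ** exponent

def eq(brain_g, body_g):
    return brain_g / expected_brain_g(body_g)

species = {                      # (brain mass g, body mass g), rounded
    "human":      (1350, 65000),
    "blue whale": (6000, 100000000),
    "rabbit":     (11, 2500),
}
for name, (brain, body) in species.items():
    print(f"{name:10s} EQ ~ {eq(brain, body):5.2f}")
```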
It could be argued that this search for a good indicator of intelligence is biased
since we keep refining the method of measurement until we find one that places hu-
mans at the top of the scale. However, this should not be a problem here since our
goal is to achieve greater human-like intelligence in our computational devices, and
therefore we are not so much concerned with ranking as we are with the predictors of
rank. Accordingly, these metrics indicate that an increase in cognitive capability
requires an increase in the size of the neural network. It also seems that an increase in
the number of neurons alone is not sufficient to provide greater cognitive skills, but
that the neurons must also be formed into a structured system of higher-order func-
tional units such as is found in the visual system [16].
The barrier in the development of applied artificial neural networks has not been an
inability to construct systems comprising very large numbers of neurons. Networks
comprising many millions of neurons are certainly feasible with today’s technology
whether they be implemented in hardware or as software simulations. Rather, the
problem, at least in part, is knowing how to organize very large numbers of neurons
into appropriate higher-order network structures. (Note, the term “higher-order struc-
tures” is used here to refer to the physical organization of a network, and should not
be confused with the term “higher-order neural networks” which is commonly used in
the literature to refer to devices composed of Sigma-Pi neurons.) The argument of
this paper is that there is much that can be done to advance the scope of application
and utility of artificial neural networks by focusing research on the development of
higher-order structures.
Higher-order neural network structures are composed of discrete network units, each of which performs some sub-function and may contain any number of neurons. The
way in which these units are organized defines the higher-order structure of the net-
work and its collective functional behavior. The boundary of a unit may be identified
by the mode of operation of the neurons and of their connections and/or by the con-
nectivity of the neurons (the topology of the connections). In this sense, we are iden-
tifying patterns in the overall organization of the neural network.
[Figure key: discrete neural network units]
Figure 3(a) shows, as an example, a double array of neurons of different types and
connections that form an underlying repetitive pattern. The repeating elements of the
pattern mark the boundaries of the individual units, each of which will presumably
perform a similar sub-function. Figure 3(b) shows a similar situation but with the
connectivity identifying the boundaries of the elements. In either case, the definition
of the boundaries of repeating units can be ambiguous. For example, referring to Fig-
ure 3, by moving the boundaries of the units down by two neurons in the arrays we
are still able to identify collections of neurons and connections that are repetitive.
Furthermore, the boundaries of higher-order units may range from very distinct to
vague. This is illustrated in Figure 4 which shows three units defined by their
For versatility in application, the structuring of a network will often have to allow for
a variable format in the configuration of the input data. This is to allow the network
to function in an environment where there is positional, temporal, and/or stochastic
variance in the presentation of a problem at the network’s inputs. Positional variance
occurs when there is no fixed mapping of data sources to input neurons. Obviously a
complete randomization of data input locations cannot work, so there must be some spatial organization of the data; however, this need only be a relative
positioning not an absolute positioning. Figures 7(b) to 7(d) illustrate the different
types of positional variance that can occur for a two-dimensional spatially distributed
set of inputs. Moreover, with the exception of rotation, all forms of positional vari-
ance can occur for a one-dimensionally distributed set of inputs. Next, temporal vari-
ance can occur when a series of values are input to one or more neurons over time. It
can result from, for example, the use of arbitrary starting points in input data streams
(translation), differences in the rate of data flow (scaling), and/or gradual shifts in the
nature of a problem over time (distortion). Finally, stochastic variance, illustrated in
Figure 7(e), is a corruption in the values of the data presented at the input neurons and
can result from signal noise and/or missing data values.
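A short NumPy sketch of these kinds of variance applied to a one-dimensional input stream follows; the signal shape, noise level, and missing-data rate are arbitrary choices for illustration.

```python
# Positional/temporal and stochastic variance applied to a 1-D input stream
# (e.g. a strain envelope). Purely illustrative; shapes and levels are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.exp(-((t - 0.5) / 0.1) ** 2)                     # nominal envelope

translated = np.roll(clean, 40)                             # translation (shift)
scaled = np.interp(t, t * 1.5, clean, left=0.0, right=0.0)  # time-wise scaling
noisy = clean + rng.normal(0.0, 0.05, t.size)               # stochastic noise
missing = clean.copy()
missing[rng.random(t.size) < 0.1] = np.nan                  # missing values

for name, signal in [("translated", translated), ("scaled", scaled),
                     ("noisy", noisy), ("missing", missing)]:
    print(name, "peak at index", int(np.nanargmax(signal)))
```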
[Figure: higher-order structuring of network units: concurrent units, serial units, recursive structuring; arrows indicate data flow]
The function performed by a neural network is not just dependent on its structure,
but also on the mode of operation of its neurons and their connections. This proposal
places no restrictions on these lower level operating modes. Studies might consider
anything from neurons that act as simple logic gates (effectively making the network
operate as a digital circuit) through to pulse frequency coded units. In addition, con-
nections may apply simple weights to transmitted values or some more complicated
function (see Flood and Kartam [6] for a classification of the alternative modes of
neuron operation). Similarly, processes operating both below and above the neuronal
level may be considered.
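As a small illustration of two such operating modes applied to the same connectivity, the sketch below contrasts a hard-threshold neuron behaving as a logic gate with a conventional weighted-sum sigmoid neuron; the weights, bias, and threshold are arbitrary values.

```python
# Two possible neuron operating modes for the same two-input connectivity:
# a hard-threshold "logic gate" neuron and a weighted-sum sigmoid neuron.
# Weights, bias, and threshold are arbitrary illustrative values.
import math

def gate_neuron(inputs, weights, threshold):
    """Hard threshold: the network behaves like a digital circuit."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def sigmoid_neuron(inputs, weights, bias):
    """Smooth weighted sum: the usual continuous-valued neuron."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

x = [1, 0]
print("AND-gate mode:", gate_neuron(x, [1, 1], threshold=2))            # -> 0
print("sigmoid mode :", round(sigmoid_neuron(x, [2.0, 2.0], -3.0), 3))
```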
the neural network in a single evolutionary process, would have a series of intermedi-
ate objectives that prescribe for very simple versions of the problem through progres-
sively more complicated and complete versions of the problem, each of which must
be solved in turn. As each intermediate objective is solved, the resultant neural net-
work codes would be used as the basis for solving the next stage in the problem. This
would allow for development of primitive structures that are seminal in the develop-
ment of a more complete solution.
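A skeletal sketch of what such a multi-stage objective run might look like is given below; the genetic representation, mutation operator, and the two stage fitness functions are placeholders rather than a proposal for the sophisticated coding mechanisms discussed above.

```python
# Skeletal sketch of a multi-stage objective function: evolve a population
# against progressively harder versions of a problem, seeding each stage with
# the previous stage's survivors. Representation and fitnesses are placeholders.
import random

random.seed(1)

def mutate(code):
    i = random.randrange(len(code))
    return code[:i] + [code[i] + random.gauss(0.0, 0.2)] + code[i + 1:]

def evolve(population, fitness, generations=100, keep=10):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(len(population) - keep)]
    population.sort(key=fitness, reverse=True)
    return population

stage_fitnesses = [                                    # placeholder objectives
    lambda c: -abs(sum(c) - 1.0),                      # stage 1: simple target
    lambda c: -abs(sum(c) - 1.0) - abs(c[0] - c[-1]),  # stage 2: adds a constraint
]

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(40)]
for stage, fit in enumerate(stage_fitnesses, start=1):
    population = evolve(population, fit)               # survivors seed next stage
    print(f"stage {stage}: best fitness {fit(population[0]):.3f}")
```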
Finally, the method of development might be designed to evolve automatic learn-
ing responses in a network when subjected to input data, including learning of con-
nection weights and self-organization of the connection structure. In the longer term,
other processing mechanisms operating at levels below and above that of the neuron
may be considered, particularly as we come to understand these processes from bio-
logical studies.
Figure 8 shows the existing hand-crafted structure for the network. The first unit was
trained to determine the type of truck crossing the bridge according to the U.S. Fed-
eral Highway Administration (FHWA) system of classification (based on axle con-
figuration and spacing). The input to this network was a sequence of noise filtered
strain readings measured at a fixed location on a girder of a single span simply sup-
ported bridge. Following this network is a set of parallel units arranged into clusters
of three. Each cluster was dedicated to determining the attributes of a specific truck
type, and was selected based on the truck type output from the truck classifier unit.
The units in each cluster were trained to estimate truck axle spacings, truck axle
loads, and truck velocity from the filtered strain readings. The individual units used a
form of radial Gaussian Basis Function neurons developed using a supervised training
algorithm. A complete description of the development and performance of this neural
network can be found in Gagarin et al. [8].
[Figure: strain readings (input) feed a Truck Classifier unit; output: truck class]
Fig. 8. Existing Hand-Crafted Higher-Order Neural Network Structure for the Truck Weigh-in-Motion Problem
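The topology of Fig. 8 can be rendered schematically in code as below; the classifier rule, truck classes, and per-class estimates are stand-ins, not the radial Gaussian basis function units described in Gagarin et al. [8].

```python
# Schematic rendering of the Fig. 8 structure: a classifier unit routes the
# strain record to a per-class cluster of three estimator units (axle
# spacings, axle loads, velocity). All units below are illustrative stubs.

def truck_classifier(strain_readings):
    """Stand-in for the FHWA-class classifier unit; returns a class id."""
    return 9 if len(strain_readings) > 100 else 5          # placeholder rule

def make_cluster(truck_class):
    """Three per-class estimator units, here returning placeholder values."""
    return {
        "axle_spacings_m": lambda s: [4.3, 10.5, 1.2, 1.2],
        "axle_loads_kN":   lambda s: [55.0, 80.0, 80.0, 70.0, 70.0],
        "velocity":        lambda s: 3000.0 / len(s),
    }

clusters = {c: make_cluster(c) for c in (5, 9)}   # one cluster per truck type

def weigh_in_motion(strain_readings):
    truck_class = truck_classifier(strain_readings)
    cluster = clusters[truck_class]
    return {"class": truck_class,
            **{name: unit(strain_readings) for name, unit in cluster.items()}}

print(weigh_in_motion([0.0] * 150))
```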
This problem poses some real challenges to the development of a valid neural net-
work based solution. In particular, the input data stream (the strain readings gener-
ated when a truck crosses the bridge) does not follow a fixed format for the following
reasons:
1. The passage of other vehicles and the dynamic response of the bridge to traffic
can add noise to the data stream, as illustrated in Figure 9(a). Noise, as such,
makes the shape of the strain envelope less distinct and therefore more difficult
to characterize.
2. Noise in the strain readings also makes it difficult to determine where the truck
crossing event starts in the input data stream. Again, this is illustrated in Figure
9(a) where it can be seen that the first strain reading resulting from the truck
crossing event is not easily determined. Adopting alternative starting points in
the data stream as such represents a time-wise translation of the strain envelope.
3. The velocity of the truck crossing the bridge may range from just a few miles per
hour to over 80 miles per hour, causing time-wise scaling of the data envelope as
shown in Figure 9(b).
4. A truck may accelerate or decelerate while crossing the bridge, causing time-
wise distortion of the data envelope as shown in Figure 9(c).
5. There are several other factors that affect the form of the input data stream but
will be deferred to later studies – these include the effects of events such as si-
multaneous truck crossings and lane changes.
A second point of interest for this problem is that the data input to the neural net-
work can be handled as a time series of values presented to a single neuron or concen-
trated cluster of neurons (analogous to the way hearing functions) rather than as a
vector of parallel inputs of fixed size as considered in the original study. This is illus-
trated in Figure 10 in which the parallel input represents the approach adopted in the
original study and the serial input represents the approach proposed here. The advan-
tage of the serial approach is that it does not require predetermination of the number
of elements (strain readings) in an input data set. However, it does create a challenge
for the operation of the neural network since it now has to integrate inputted data over
time. If significant ambiguity in the problem arises from, for example, deceleration of
a truck as it crosses a bridge, then this may be alleviated by sampling strain at two or
more different locations along the length of the bridge, and using separate parallel
input neurons to the network for each of these data streams.
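The difference between the two input modes can be sketched as follows; the leaky integrator standing in for the network's internal state is an illustrative assumption, not the proposed architecture.

```python
# Parallel input (fixed-size vector, original study) versus serial input
# (one value at a time, integrated over time). The leaky integrator is only
# a stand-in for whatever internal state the network would evolve.

def parallel_input(readings, size=200):
    """Original approach: pad or truncate to a predetermined vector length."""
    return list(readings[:size]) + [0.0] * max(0, size - len(readings))

def serial_input(readings, decay=0.95):
    """Proposed approach: present one value at a time; the network must keep
    an internal state that integrates the stream."""
    state = 0.0
    for value in readings:
        state = decay * state + (1.0 - decay) * value
    return state

stream = [0.0] * 20 + [0.5, 1.0, 1.5, 1.0, 0.5] + [0.0] * 20
print(len(parallel_input(stream)), "fixed parallel inputs; serial state =",
      round(serial_input(stream), 4))
```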
A third point of interest for this problem is that the number of values output from
the network will vary depending on the type of truck crossing the bridge. That is, the
number of axle spacings and axle loads that need to be predicted for any truck cross-
ing event depends on the number of axles on the truck. Outputs might therefore be treated as a series of values generated at a single neuron; as a selected value at a single neuron, where another input defines which axle is of interest; or as a spatially oriented vector of values across an array of neurons whose length is variable, for example.
This weigh-in-motion study is still in an early stage of development, but it serves
to illustrate the sorts of issues that the next generation of artificial neural networks
will need to tackle. In summary, these are: a need for a sophisticated genetic coding
system that can develop very large and highly structured network organizations (per-
haps introducing the concepts of growth algorithms, fractal analysis, chaos theory,
and multi-stage objective functions); an ability to handle unfixed formatting of input
data (resulting from, for example, translation, rotation, and scaling either in a spatial
or temporal framework); an ability to handle serial input data streams and the conse-
quential need for the network to integrate these values over time; and an ability to
handle variable output data structures.
With the lessons learned from this study, work will advance to more complicated
problems, including dynamic decision-making for industrialized custom home design
(the problem here being to make sure that all manufacturing resources are used effec-
tively within a dynamic and uncertain market), and damage evaluation in structures
determined from their dynamic response to signals such as acoustic pulses.
Fig. 9. Example Factors Affecting the Format of the Strain Data Envelope: (a) impact of noise on the input data stream, (b) impact of velocity on the input data stream (time-wise scaling), and (c) impact of acceleration on the input data stream (time-wise distortion); each panel plots strain against time
Fig. 10. Serial input (new approach): strain readings t1, t2, t3, …, tn presented over time to a single input neuron (strain vs. time)
4 Conclusions
For many years, artificial neural networks have enjoyed a significant and sustained
level of interest in computer-based civil engineering research. They have provided a
convenient and often highly accurate solution to problems within all branches of civil
engineering. The concern raised here, however, is that the extent of this application
has rarely ventured beyond rudimentary problems such as simple function modeling
and pattern classification, using single unit neural network structures. Yet, biological
neural systems suggest a far greater potential than this. In order to realize this poten-
tial, researchers must take on the challenge of developing networks that are vastly more complex, in both size and structure, than those developed to date, approaching the rich higher-order structuring and behavior of biological neural
systems. Promising approaches to the development of these structures are genetic
algorithms and related methods. While these will require the design of sophisticated
genetic coding mechanisms, the potential payoff is considerable in terms of broaden-
ing the scope of application of neural computing to civil engineering. Research is, of
course, a very long way from being able to replicate human cognitive skills using
artificial neural networks, but the decision to take the next tentative step towards this
goal is long overdue.
References
1. ASCE: Research Library. At: http://ascelibrary.aip.org/. (2006)
2. Bullock, T.H., Bennett, M.V.L., Johnston, D., Josephson, R., Marder, E., Fields R.D.: The
Neuron Doctrine, Redux. Science, 310. (2005) 791-793
3. Cigizoglu, H.K., Tolun, S., and Öztürk, A.: Evolutionary Artificial Neural Networks in
Hydrological Forecasting. In: proceedings of the World Water and Environmental Re-
sources Congress 2003. Eds: Paul Bizier & Paul DeBarry. 118. ASCE. (2003) 149
4. Fahlman, S.E., and Lebiere, C.: The Cascade-Correlation Learning Architecture. Rep.
CMU-CS-90-100. Carnegie Mellon University, Pittsburgh, PA. (1990)
5. Flood, I., and Kartam, N.: Neural Networks in Civil Engineering, I: Principles and Under-
standing, II: Systems and Application. In: Journal of Computing in Civil Engineering, 8,
(2). ASCE. (1994) 131-162
6. Flood, I., and Kartam, N: Systems. In: Artificial Neural Networks for Civil Engineers:
Fundamentals and Applications. Eds: Kartam, N., Flood, I., & Garrett, J.H. Jr. ASCE.
(1997) 19-43
7. Flood, I., Issa, R.R.A., and Abi Shdid, C.: Simulating the Thermal Behavior of Buildings
Using ANN-Based Coarse-Grain Modeling. In: Journal of Computing in Civil Engineer-
ing, 18, (3). ASCE. (2004) 207-214
8. Gagarin, N., Flood, I. and Albrecht, P.: Computing Truck Attributes with Artificial Neural
Networks. In: Journal of Computing in Civil Engineering, 8, (2). ASCE. (1994) 179-200
9. Goldberg, D.E., and Kuo, C.H.: Genetic Algorithms in Pipeline Optimization. In: Journal
of Computing in Civil Engineering, 1, (2). ASCE. (1987) 128-141.
10. Jerison, H. J.: The Evolution of Intelligence. In: Handbook of Intelligence. Ed: Sternberg,
R. J. Cambridge University Press. (2000)
11. Karunanithi, N., Grenney, W.J., Whitley, D., and Bovee, K.: Neural Networks for River
Flow Prediction. In: Journal of Computing in Civil Engineering, 8 (2). (1994) 201-220
12. Kohonen, T.: Self-organization and associative memory. 3rd edn. Springer-Verlag,
Berlin. (1989)
13. Koumousis, V.K., and Georgiou, P.G.: Genetic Algorithms in Discrete Optimization of
Steel Truss Roofs. In: Journal of Computing in Civil Engineering, 8 (3). ASCE. (1994)
309-325
14. Moses, F., and Kriss, M.: Weigh-in-Motion Instrumentation. Report FHWA/RD-78/81.
Federal Highway Administration, McLean, VA. (1978)
15. Rosenblatt, F.: The Perceptron: A Probabilistic Model for Information Storage and Or-
ganization in the Brain. In: Psychological Review, 65, (6). Cornell Aeronautical Labora-
tory. (1958) 386-408
16. Sirosh, J. and Miikkulainen, R.: Topographic Receptive Fields and Patterned Lateral In-
teraction in a Self-Organizing Model of the Primary Visual Cortex. In: Neural Computa-
tion, 9. (1997) 577-594
17. Szewczyk, Z.P. and Hajela, P.: Damage Detection in Structures Based on Feature-
Sensitive Neural Networks. In: Journal of Computing in Civil Engineering, 8 (2). ASCE.
(1994) 163-178
18. Thomson Corporation: ISI Web of Knowledge, Web of Science. At: http://portal.
isiknowledge.com/. (2006)
Evolutionary Generation of Implicative Fuzzy Rules for
Design Knowledge Representation
M. Freischlad, M. Schnellenbach-Held, and T. Pullmann
1 Introduction
The output fuzzy set of a conjunctive fuzzy rule base is the set of all possible values for Y given the situation X. By means of defuzzification a crisp value for Y can be derived. The output fuzzy set of an implicative fuzzy rule base represents an upper bound on the possible values in V, according to the knowledge considered within the rule base. The determination of a crisp output value by means of defuzzification is not suitable in general; often the incorporation of further knowledge is required. However, in combination with subsequent search mechanisms the elimination of impossible values can be very useful.
Within the scope of knowledge-based decision support, different areas of application for the two types of rule bases can be stated. Depending on the structure of the knowledge to be represented, the two reasoning mechanisms have advantages and disadvantages. One advantage of conjunctive fuzzy rule bases is the possibility of directly obtaining a crisp output value. To ensure reliability, a complete rule base and the consideration of all influence parameters are necessary. For the support of more complex decisions, implicative rule bases are advantageous. In this case, not all influence parameters have to be considered in the rule base. On the one hand this leads to a
reduced specificity of output sets. On the other hand the size of the rule base is re-
duced significantly. Hence the interpretability is increased.
3.1 Specificity
F_{SP} = 1 - \frac{1}{N_{CB}} \sum_{i=1}^{N_{CB}} \frac{\int \mu_i(y)\, dy}{\int dy} \qquad (1)

where μ_i(y) is the membership function of the output fuzzy set on the ith case example. The higher the specificity, the more values of the output variable are (partly)
excluded and consequently the more valuable is the rule base in terms of finding a
crisp output value.
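A minimal numerical reading of the specificity measure, assuming the integrals in eq. (1) are approximated by sums on a discrete grid of the output variable (the grid, the membership functions and the helper name `specificity` are illustrative and not part of the original REM implementation):

```python
import numpy as np

def specificity(output_sets, y_grid):
    """Mean specificity F_SP over all case examples (cf. eq. (1)), approximating
    the integrals by a simple sum on a discrete grid of the output variable."""
    dy = float(y_grid[1] - y_grid[0])
    domain = dy * len(y_grid)                     # ~ integral of dy over the output domain
    rel_areas = [float(np.sum(mu)) * dy / domain for mu in output_sets]
    return 1.0 - float(np.mean(rel_areas))

# Two hypothetical output fuzzy sets on a beam-height axis (illustrative values only).
y = np.linspace(20.0, 100.0, 161)                 # hb in cm
narrow = np.exp(-((y - 35.0) / 3.0) ** 2)         # highly specific set (most values excluded)
wide   = np.exp(-((y - 60.0) / 20.0) ** 2)        # unspecific set
print(specificity([narrow, wide], y))
```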
3.2 Consistency
By increasing the specificity, the rule base might become inconsistent with the case knowledge; that is, the output value of a case example may be rated a (highly) impossible value. The mean consistency of the rule base with the underlying case knowledge is derived by
F_{CS} = \frac{1}{N_{CB}} \sum_{i=1}^{N_{CB}} \mu_i(y_i) \qquad (2)

where y_i is the output value of the ith case example and μ_i(y_i) is the corresponding degree of membership to the output fuzzy set of case i.
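Analogously, a small sketch of the consistency measure of eq. (2), evaluating each case's observed output value in the fuzzy set computed for that case; the grid-based evaluation and the example values are assumptions made only for illustration.

```python
import numpy as np

def consistency(output_sets, case_outputs, y_grid):
    """Mean consistency F_CS (cf. eq. (2)): membership of each case's observed
    output value in the fuzzy set produced by the rule base for that case."""
    memberships = []
    for mu, y_i in zip(output_sets, case_outputs):
        idx = int(np.argmin(np.abs(y_grid - y_i)))   # nearest grid point to y_i
        memberships.append(float(mu[idx]))
    return float(np.mean(memberships))

# Reusing the illustrative sets from the specificity sketch above.
y = np.linspace(20.0, 100.0, 161)
narrow = np.exp(-((y - 35.0) / 3.0) ** 2)
wide   = np.exp(-((y - 60.0) / 20.0) ** 2)
print(consistency([narrow, wide], [35.0, 42.0], y))
```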
3.3 Congruency
An implicative FRB should only fire in those areas of the input domain that are covered by the knowledge contained in the case base. In order to determine the congruency, two parameters are defined: the fuzzy coverage of the input domain U (x ∈ U) by the case base, μ_CB(x), and the coverage by the rule base, μ_RB(x). The coverage of U by the case base is derived by
with

\mu_{C,i}(x) = \prod_{m=1}^{M} \exp\!\left[ -\left( \frac{x_m - x_{m,i}}{\sigma_m} \right)^{2} \right] \qquad (4)
where x_{m,i} is the value of the mth input variable of case i and σ_m is the predefined fuzziness of input variable m. By means of this fuzziness the degree of coverage in the neighbourhood of a case can be adapted.
The coverage of U by the rule base is derived by

\mu_{RB}(x) = \max_{j} \mu_{R,j}(x), \quad j = 1, 2, \ldots, N_R \qquad (5)
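The coverage computations can be sketched as follows. The rule-base coverage follows eq. (5) directly, whereas aggregating the per-case Gaussians of eq. (4) by a maximum is an assumption made here by analogy with eq. (5); the example cases, rules and fuzziness values are likewise illustrative.

```python
import numpy as np

def case_coverage(x, cases, sigma):
    """Fuzzy coverage of an input point x by the case base: the Gaussian product of
    eq. (4) per case, aggregated by a max over all cases (assumed aggregation)."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    per_case = [np.prod(np.exp(-(((x - np.asarray(c, float)) / sigma) ** 2))) for c in cases]
    return float(max(per_case))

def rule_coverage(x, rule_memberships):
    """Coverage of x by the rule base, eq. (5): max over the rules' firing degrees."""
    return float(max(mu_r(x) for mu_r in rule_memberships))

# Illustrative case base: (lb, ls, q) triples and their predefined fuzziness.
cases = [(5.0, 5.0, 3.0), (5.0, 5.0, 9.0), (10.0, 7.0, 6.0)]
sigma = (1.0, 1.0, 2.0)
print(case_coverage((8.0, 6.0, 5.0), cases, sigma))   # weakly covered situation
print(case_coverage((5.0, 5.0, 4.0), cases, sigma))   # well covered situation

# Two invented rule membership functions, depending here only on the beam span lb.
rules = [lambda x: np.exp(-((x[0] - 5.0) / 2.0) ** 2),
         lambda x: np.exp(-((x[0] - 10.0) / 2.0) ** 2)]
print(rule_coverage((8.0, 6.0, 5.0), rules))
```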
The overall accuracy fitness F_AC of the rule base is composed of the presented measures:

F_{AC} = \frac{a_{SP} \cdot F_{SP} + a_{CS} \cdot F_{CS} + a_{CG} \cdot F_{CG}}{a_{SP} + a_{CS} + a_{CG}} \qquad (7)
By means of the weight parameters a_SP (specificity), a_CS (consistency) and a_CG (congruency), the optimization process can be adapted in order to increase a preferred quality.
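The weighted aggregation of eq. (7) reduces to a few lines; with the unit weights used later in the evaluation it is simply the mean of the three measures (the numeric values below are the ones reported in Section 4, and the function name is illustrative).

```python
def accuracy_fitness(f_sp, f_cs, f_cg, a_sp=1.0, a_cs=1.0, a_cg=1.0):
    """Weighted aggregation of specificity, consistency and congruency into F_AC (eq. (7))."""
    return (a_sp * f_sp + a_cs * f_cs + a_cg * f_cg) / (a_sp + a_cs + a_cg)

# With a_SP = a_CS = a_CG = 1.0 this is the plain mean of the three measures.
print(accuracy_fitness(0.586, 0.956, 0.978))
```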
Besides this objective, the REM algorithm takes into account the demands on the interpretability of the rule base, represented by the number of rules [4]. The overall fitness of an individual is determined by the Pareto-rank-based approach presented by Fonseca and Fleming [6].
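A compact sketch of the Pareto-rank idea attributed to Fonseca and Fleming [6], in which an individual's rank is one plus the number of individuals that dominate it; the two-objective fitness vectors below (accuracy fitness, negated number of rules) are invented purely for illustration.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives to be maximised here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_ranks(population):
    """Rank of each individual: 1 + number of individuals in the population dominating it."""
    return [1 + sum(dominates(other, ind) for other in population if other is not ind)
            for ind in population]

# Illustrative two-objective fitness vectors: (accuracy fitness, -number of rules).
population = [(0.82, -6), (0.79, -4), (0.75, -8), (0.84, -9)]
print(pareto_ranks(population))   # non-dominated individuals receive rank 1
```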
4 Evaluation
The developed approach was evaluated by applying it to a real world data set for the determination of the beam height in a beam-slab system. The input variables are the span length of the beam lb, the span length of the slab ls and the live load on the slab q. The output variable is the beam height hb.
To demonstrate the advantages of implicative FRBs the goal was to find a rule base
considering only the major influence parameters for this design decision. In order to
simulate knowledge incompleteness only case examples covering parts of the input
domain were chosen. The fuzzy coverage by the case base is shown in figure 2a.
The REM algorithm was run for 300 generations; the population size was set to
200 individuals. The maximum number of rules was set to 8 rules. The weight pa-
rameters of the accuracy fitness components were set to aSP = aCS = aCG = 1.0.
The Pareto-optimal rule base consisting of 6 rules is presented in further detail. Figure 2b shows the coverage of the input domain by this rule base. It is obvious that hardly any rule fires significantly outside the fuzzy support of the case base, as confirmed by the value of the congruency measure FCG = 0.978.
Fig. 2. Coverage of the input domain (a) by the case base and (b) by the rule base
Figure 3 shows the output fuzzy sets for three situations: S1(lb = 5 m, ls = 5m),
S2(lb = 10 m, ls = 7 m) and S3(lb = 8 m, ls = 6 m). For S1 a highly specific fuzzy set is
obtained (fig. 3a). This set is in accordance with the case examples C1(lb = 5 m, ls = 5
m, q = 3 kN/m², hb = 35 cm) and C2(lb = 5 m, ls = 5 m, q = 9 kN/m², hb = 42 cm). The
fuzzy set obtained for S2 is less specific (fig. 3b), representing well the larger range of the optimal beam height (hb = 65 ~ 82 cm) for typical values of the live load (q = 3 ~ 9 kN/m²).
For S3 hardly any value of hb is ruled out (fig. 3c). There was no case highly similar to this situation in the case base. Consequently almost no information can be provided and a nearly unrestricted search for a suitable output value has to be performed. Assuming a solution for this situation is found, the rule base can be extended based on this newly discovered knowledge.
The mean specificity for all case examples is FSP = 0.586. The generated rule base
is highly consistent with the underlying case base (FCS = 0.956).
Fig. 3. Output fuzzy sets for (a) lb/ls = 5m/5m, (b) lb/ls = 10m/7m and (c) lb/ls = 8m/6m
the knowledge base. Learned rule bases are incorporated within further optimization
tasks as described in section 5.1.
Every time a project is completed, the project data base and, subsequently, the case
base is updated. If necessary, the rule base is revised or extended.
6 Conclusions
In this paper a genetic programming based approach for the data-driven generation of
implicative fuzzy rules was presented. Three measures for the evaluation of implica-
tive fuzzy rule bases were proposed. The evaluation of the developed REM algorithm
on a real world problem has shown that the generated fuzzy rule bases fulfill the de-
mands on the specificity and the consistency with the knowledge of the underlying
case base.
The application of REM within a machine learning environment for evolutionary
design and optimization of complex structural systems was presented. The authors are currently investigating the impact of the proposed approach on the exploration of innovative design solutions.
References
1. Schnellenbach-Held, M., Freischlad, M.: Fuzzy Rule Based Models for Slab System De-
sign. In: Schnellenbach-Held, M., Denk, H. (eds.): Advances in Intelligent Computing in
Engineering. Proceedings of 9th International EG-ICE Workshop, VDI-Fortschritt-Berichte,
Darmstadt (2002) 134-143
2. Dubois, D., Prade, H., Ughetto, L.: A new perspective on reasoning with fuzzy rules. Inter-
national Journal of Intelligent Systems, V. 18 N. 5 (2003) 541-567
3. Cordón, O., Gomide, F., Herrera, F., Hoffmann, F., and Magdalena, L.: Ten years of genetic
fuzzy systems: current framework and new trends. Fuzzy Sets and Systems, Vol. 141, Issue
1 (2004) 5-31
4. Freischlad, M., Schnellenbach-Held, M.: A machine learning approach for the support of
preliminary structural design. Advanced Engineering Informatics, Vol. 19, No 4 (2005)
281-287
5. Jin, Y., von Seelen, W., Sendhoff, B.: On Generating FC3 Fuzzy Rule Systems from Data
Using Evolution Strategies. IEEE Transactions on Systems, Man and Cybernetics, Part B
29(4) (1999) 829-845
6. Fonseca, C. M., Fleming, P. J.: Genetic algorithms for multiobjective optimization: Formu-
lation, discussion and generalization. In: Forrest, S. (ed.): Genetic Algorithms: Proceedings
of the Fifth International Conference. San Mateo, CA: Morgan Kaufmann (1993) 416-423
7. Pullmann, T., Skolicki, Z., Freischlad, M., Arciszewski, T., De Jong, K., Schnellenbach-
Held, M.: Structural Design of Reinforced Concrete Tall Buildings: Evolutionary Computa-
tion Approach Using Fuzzy Sets. In: Ciftcioglu, Ö., Dado, E. (eds.): Intelligent Computing
in Engineering, Foundation of Design Research SOON (2003) 53-61
Emerging Information and Communication
Technologies and the Discipline of Project Information
Management
Thomas Froese
Dept. of Civil Engineering, 6250 Applied Science Lane, University of British Columbia,
Vancouver, Canada
[email protected]
1 Introduction
Recently, a design meeting was held for a local architecture, engineering, and
construction (AEC) project. The project is typical of many such projects, except that
it involves renovations and additions to one of the city’s landmark buildings, situated
on a stunning site, originally designed by one of its leading architects. The
discussion focused on a cardboard model of the renovations. Although 3D
computer models would offer significantly better visualization functionality—which
is clearly vital for this project—and would also offer numerous other benefits, no such
models have been developed. Why?
This simple anecdote is representative of one of the fundamental issues for the
development of information and communication technologies (ICT) in AEC—how to
transfer the technology to industrial practice. The answer to the question of why this
project has not adopted technology that is readily available and undeniably beneficial
is complex, but one part of the answer seems clear: it is not because of significant
problems with the technology itself. In order to succeed in transferring new ICT to
practice, then, the ICT research and development community must address issues that
go beyond the technology, and address the corresponding changes to work practices.
This paper suggests that an effective approach to address this challenge is through the
development of the discipline of project information management (PIM).
Information and communication have always been important to AEC projects, yet
approaches for managing information have generally been informal and ad-hoc. We
have argued that the AEC industry would benefit from the development of PIM as a
well-understood and formalized sub-discipline of project management. Not only
would this benefit projects in general, but it seems to be a necessary pre-condition to
the successful implementation of more complex emerging ICT such as Building
Information Models (BIM). In this paper, we consider PIM from a somewhat
different perspective, and suggest that ICT researchers and developers could use a
formalized PIM process as the primary tool for building the bridge from technology
development to industrial practice.
This paper first presents a summary of our current thinking about a formalized sub-
discipline of PIM (portions reported earlier in [5]-[8]) (as a research initiative, this
represents early, conceptual work with validation and implementation effort yet to
occur). The paper then discusses how elements of this approach to PIM could support the transfer of any new ICT to improve its chances of successful adoption in industrial practice. The framework encourages a common approach to PIM, such that the more widely it is adopted, the higher the benefit for all.
A number of efforts have been carried out within construction IT research related
to information management practices. For example, Björk [3] defines a formal model
for information handling in construction processes. Turk [15] explored the
relationships between information flows and construction process workflows, and
makes a distinction between base processes (the main value adding activities) and
glue processes (that make sure that the materials and information can flow between
the base processes) [16]. Mak [12] describes a paradigm shift in information
management that focuses mainly on Internet-based information technologies. Betts [2] includes much work on the role of information technologies in the management of construction, with an emphasis on strategic management of the firm. These and
other works have much to offer in the area of information management practices.
This paper, however, takes a fairly specific perspective that has not been widely
addressed: i.e., the development of specific information management practices as they
relate to the management of individual construction projects in the context of
emerging ICT.
We contend that improved PIM could improve performance on any construction
project today. (Although we have no data to prove this statement, we suggest it is
axiomatic that, first, information and communication is critical to the performance of
AEC projects and, second, that any aspect of work processes can be improved through
improved management practices. This leaves open the issue of how best to improve
PIM and what level of PIM is appropriate). Yet improved PIM becomes much more
significant as projects adopt more advanced, emerging ICT, such as building
information models (BIMs). This is because of the increased complexity, required
skills, and work tasks. Indeed, we contend that a careful consideration of how
information management practices could adopt new ICT provides an essential bridge
to move new ICT from development into industrial practice.
Fig. 1 illustrates the relationships between Project Management and PIM, including a
comparison with quality management. In current practice, Quality Management, as a
body of knowledge, can be thought of as a subset of both project management and
production technologies. ISO 9001 [10] provides standardized specifications for
quality management systems. For any individual AEC project, ISO 9001 is
implemented in the Quality Management System, as documented by the Quality
Manual. One component of the Quality Manual is a collection of individual Method
Statements.
In a similar fashion, we suggest that PIM is a distinct sub-discipline of both Project
Management and ICT (i.e., it represents the management of the project’s information
and ICT). Within the general body of knowledge of PIM, one possible way of
implementing PIM is to develop a PIM protocol that provides a set of specifications
for PIM systems. On an AEC project, the PIM protocol is implemented in the
project’s information management system (i.e., the socio-technical system that
includes the people, work practices, technical systems, etc.—not just the software
systems), which is documented in the Information Management Plan/Manual. This
plan contains (among other things) a collection of specific information management method statements.
Fig. 1. Relationships between Project Management, Information and Communication Technologies (ICT), Production Technologies, and an individual AEC project
For each of the project elements to which we are applying our information
management processes, there are a number of different elements of an information
system that must be considered:
• Information: Foremost, we must consider the information involved in each of
the project elements. First, the process should assess the significant
information input requirements for each element, determining the type of
information required for carrying out the tasks, the information
communicated in the transactions, or the requirements for integration issues.
With traditional information technologies, information requirements
generally correspond to specific paper or electronic documents. With
building information models and other newer information technologies,
however, information requirements can involve access to specific data
sources (such as specific application data files or shared databases) that do
not correspond to traditional documents. Second, we must assess tool
requirements by determining the key software applications used in carrying
out tasks, communication technologies used for transactions, or standards
used to support integration. Third, we must assess the significant
information outputs produced by each task. This typically corresponds to
information required as inputs to other tasks. After analysis, these results
should be formalized in the information systems plan as the information
required as inputs for each task, and the information that each task must
commit to producing.
• Resources: the information management process should analyze the
requirements, investigate alternatives, and design specific solutions for all
related resources. These include hardware, software, networking and other
infrastructure, human resources, authority, and third party (contracted)
resources.
• Work methods and roles: the solution must focus not only on technical
solutions, but equally on the corresponding work processes, roles and
responsibilities to put the information system to proper use.
• Performance metrics, specified objectives, and quality of service standards:
the information systems plan should include the specification of specific
performance metrics that can be assessed during the project and used to
specify and monitor information systems objectives and standards of service
quality.
• Knowledge and training: the information systems solution will require
certain levels of expertise and know-how of people within the project
organization. This may well require training of project personnel.
• Communications: implementing the information systems plan will require
various communications relating to the information system itself, such as
making people aware of the plan, training opportunities, procedures, etc.
• Support: information system solutions often have high support
requirements, which should be incorporated as part of the information
management plan.
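One hedged way to picture how such an information management plan entry might be recorded for a single project element is a simple data structure; the class and field names below are purely hypothetical and are not drawn from any published PIM protocol.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskInformationSpec:
    """One project element's entry in a (hypothetical) information management plan."""
    element: str                                                   # task, transaction or integration issue
    information_inputs: List[str] = field(default_factory=list)   # required documents or data sources
    tools: List[str] = field(default_factory=list)                 # key software and communication technologies
    information_outputs: List[str] = field(default_factory=list)  # commitments to other tasks
    resources: List[str] = field(default_factory=list)             # hardware, networking, people, third parties
    roles: List[str] = field(default_factory=list)                 # work methods, roles and responsibilities
    performance_metrics: List[str] = field(default_factory=list)   # objectives and quality-of-service standards
    training: List[str] = field(default_factory=list)              # knowledge and training needs
    support: List[str] = field(default_factory=list)               # support requirements

design_review = TaskInformationSpec(
    element="Structural design review",
    information_inputs=["architectural model view", "load assumptions"],
    tools=["model viewer", "issue tracker"],
    information_outputs=["review comments", "approved structural model"],
    roles=["project information officer", "structural lead"],
    performance_metrics=["turnaround time per review cycle"],
)
print(design_review.element, len(design_review.information_inputs))
```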
Solutions should be sought that meet the general project objectives of cost, time,
scope, etc. However, there are a number of objectives that are more specific to the
information system that should be taken into account:
• System performance is of primary concern, including issues such as
efficiency, capacity, functionality, scalability, etc.
• Reliability, security, and risks form critical objectives for information
systems.
• Satisfaction of external constraints: we have placed the emphasis on the project perspective, but the information management must also be responsive to a number of external influences. Of particular significance is alignment with organizational strategies and information management solutions, including appropriate degrees of centralized vs. decentralized information management. Other external influences include client or regulatory requirements and industry standards.
• Life-cycle issues should be considered. These include both the life cycle of
the information (how to ensure adequate longevity to the project data), and
of the information system (e.g., life-cycle cost analysis of hardware and
software).
• Interoperability is a key objective for many aspects of the information system.
Clearly, the success of the transfer depends upon more than just the technical
characteristics of the solution. In particular, there are significant non-technical
challenges facing technology transfer of ICT in the AEC industry. For example, there
is very little research capacity relative to the size of the industry, and perhaps even
less ICT development capacity. The culture of the industry doesn’t generally place a
high value on ICT innovation, while the short-term, dynamic, virtual organizations
that make up AEC projects eschew significant investment in developing large-scale
ICT solutions and lead to high fragmentation. Some of the non-technical elements
required for the successful transfer of any new ICT solutions include the following:
• Development of the entire socio-technical or “soft” system. E.g., for a new
piece of software, the work practices required to use the software, the roles
and responsibilities involved, training requirements, expected benefits and
assessment techniques, etc.
• Interface with other systems. How the proposed system interacts with other
socio-technical systems (e.g., planning the interface between design systems
and costing systems).
• Communication of the solutions from the system providers to potential end
users.
Developers, preferably working along with the industry users, are well positioned
to develop these elements of the overall solution. Indeed, technology transfer issues
are frequently considered as part of research and development efforts. Current efforts,
however, are relatively ad hoc, since there is little in the way of standard practice.
A PIM process as described above could improve this situation. It would provide a
comprehensive structure and methodology for developing the complete socio-
technical solution of new ICT systems, including interface issues. Furthermore, by
using the same PIM framework that project teams use to develop their own PIM
programs, the compatibility and communication of the overall solution would be
improved, making it easier for industrial users to understand and adapt new ICT
solutions. The common structure would also improve the reusability of new ICT
systems between projects.
8 Conclusion
Project Information Management, as a formal sub-discipline of project management,
could improve the performance of AEC projects. In addition, a formalized PIM
process could provide the unifying structure to support ICT developers in addressing
the wide range of issues involved in adopting new ICT systems into industrial
practice, and in easing the process for industrial users to learn about and incorporate
the new ICT. This paper has presented a conceptual framework for a formalized PIM process and discussed its potential for supporting ICT technology transfer. In future
work, we expect to further develop the approach and carry out implementation and
validation work.
References
1. Arciszewski, T., Smith, I., & Melhem, H. Progress Report, ASCE Global Center of Exce-
llence in Comp, http://www.asceglobalcenter.org/ProgressReports/ProgressReport.pdf
Accessed April 28, 2006.
2. Betts, M. (Ed). Strategic Management of I.T. in Construction, Blackwell Science, 1999.
3. Björk, B-C. “A formalised model of the information and materials handling activities in
the construction process,” Construction Innovation, 2(3), pp. 133-149.
4. Ducq, Y., Chen, D. and Vallespir, B., “Interoperability in enterprise modelling:
requirements and roadmap” Advanced Engineering Informatics, Vol. 18, No 4, 2004,
pages 193-204.
5. Froese, T., “Help Wanted: Project Information Officer”, 5th European Conf on Product
and Process Modelling in the Building and Construction Industry, Istanbul, Turkey, Sept.
8-10, 2004.
6. Froese, T., “Impact of Emerging Information Technology on Information Management”,
International Conference on Computing in Civil Engineering, ASCE, Cancun, Mexico,
Paper #8890, July 12-15, 2005.
7. Froese, T. “Information Management for Construction,” 4th International Workshop on
Construction Information Technology in Education, Dresden, Germany, K. Menzel (Ed.),
Institute for Construction Informatics, Technische Universität Dresden, Germany, ISBN
3-86005-479-1, CIB Publication 303, pp. 7-16. Jul 18, 2005.
8. Froese, T. “Project Information Management for Construction: Organizational
Configurations”, Submitted to ASCE/CIB Leadership conference, Bahamas, May, 2006.
9. Haymaker, J., Kam, C. and Fischer, M. “A Methodology to Plan, Communicate and
Control Multidisciplinary Design Processes”, CIB W78 22nd Conference on Information
Technology in Construction, Dresden, Germany, R.Scherer, P. Katranuschkov, S.-E.
Schapke (Eds.), Institute for Construction Informatics, Technische Universität Dresden,
Germany, ISBN 3-86005-478-3, CIB Publication 304, pp. 75-82, Jul 19-21, 2005.
10. ISO. ISO 9001:2000. Quality management systems - Requirements. ISO, Switzerland,
2000.
11. Keller, M. and Scherer, R.J. “Use of Business Process Modules for Construction Project
Management”, CIB W78 22nd Conference on Information Technology in Construction,
Dresden, Germany, R.Scherer, P. Katranuschkov, S.-E. Schapke (Eds.), Institute for
Construction Informatics, Technische Universität Dresden, Germany, ISBN 3-86005-
478-3, CIB Publication 304, pp. 91-96, Jul 19-21, 2005.
12. Mak, S. “A model of information management for construction using information
technology,” Automation in Construction 10, pp. 257-263.
13. PMI. A Guide to the Project Management Body of Knowledge (PMBOK Guide), 2000
Edition, Project Management Institute: Newtown Square, PA, USA.
14. Rebolj D and Menzel K., “Another step towards a virtual university in construction IT,”
ITcon Vol. 9, pg. 257-266, http://www.itcon.org/2004/17
15. Turk, Z. “Communication Workflow Approach to CIC,” Computing in Civil and Building
Engineering, ASCE, pp. 1094-1101.
16. Turk, Z. "What Is Construction Information Technology," Proceedings AEC2000, Informacni technologie ve stavebnictvi 2000, Praha, CD-ROM.
FIATECH: Capital Projects Technology Roadmapping Initiative, FIATECH, Austin, USA (2004).
17. Waroonkun, T., Stewart, R. and Mohamed, S. ”Factors Affecting Technology Transfer
Performance: Evidence from Thailand,” 1st International Construction Specialty
Conference, Calgary, Alberta, Canada May 23-26, 2006.
The FishbowlTM: Degrees of Engagement in
Global Teamwork
Renate Fruchter
1 Introduction
The challenge addressed in the study presented in this paper is to improve project-
based learning (PBL) in cross-disciplinary, global teamwork based on role modeling
methods by creating an innovative computer mediated learning experience. Architec-
ture, engineering, and management students engaged in PBL courses typically apply
the knowledge acquired in discipline centric classes to produce a product, but do not
necessarily know how to acquire the cross-disciplinary communication competences
that professional experts exercise in real life projects. These competences include
exploration of alternatives to solve problems, inquiry, probing the boundaries between
disciplines, and negotiation.
This goal is rooted both in market needs and a desire for innovation in AEC educa-
tion. The globalization of economic activity is perhaps one of the defining constants
of the rapidly changing market place. Increased competitive pressures shorten project
lead times and use of concurrent engineering in cross-functional teams. The availabil-
ity of communication technologies enables these cross-functional teams to be often
geographically distributed. It is interesting to observe that the social context of
multi-stakeholder project teams and the advances in technology have increased the
complexity of projects. Today, AEC projects are characterized not only by cross-
disciplinary, multi-stakeholder teams, and the social context, but also by the global
aspect of teams driven by diverse cultural perspectives, and the access to large vol-
umes of digital information.
The mission of the Project Based Learning Laboratory (PBL Lab) at Stanford is to
prepare the next generation of AEC professionals who know how to team up with
professionals from other disciplines worldwide and leverage the advantages of inno-
vative collaboration technologies to produce higher quality products faster, more economically, and in a more environmentally friendly way. The goal is for these students to become
leaders in global teamwork. The objective is a sustained effort in an integrated re-
search and curriculum to develop, test, deploy, and assess radically new collaboration
technologies, workspaces, processes, learning and teamwork models that support
cross-disciplinary, geographically distributed teams.
This is accomplished through an authentic project-based learning (PBL) experience and an innovative information and communication technology (ICT) infrastructure. The PBL Lab
serves as a home for the PBL learning experience and a testbed to study the impact of
ICT in global teamwork and learning [13]. The AEC Global Teamwork course estab-
lished at Stanford in 1993 and run in collaboration with universities worldwide [4],
offers an authentic PBL teamwork learning experience that enables students to iden-
tify discipline and cross-discipline objectives and thereby develop know-why knowl-
edge in an interdisciplinary context. They exercise the theory and knowledge acquired
in traditional discipline courses, i.e., know-what and know-how. The role of each information technology as a mediator for communication and cooperation within cross-disciplinary teams is justified and determined to support the diverse: (1) modes of
learning and interaction over time and space, (2) needs to capture, share, and reuse
information and knowledge, and (3) types of interactions among participants.
The paper presents the rationale and points of departure for this study, and discusses:
• The FishbowlTM learning interaction experience as a pedagogical method de-
signed to support effective knowledge transfer of communication skills from pro-
fessionals to students.
• An information and communication technology augmented (ICT) workspace that
(1) supports the communicative event among professionals and global learners,
(2) promotes a better conceptual understanding of the professional practice, and
(3) allows capture, transfer, and reuse of knowledge created during the
FishbowlTM sessions.
• The assessment of the FishbowlTM learning experience, ICT augmented work-
space affordances, their impact on learning, as well as the FishbowlTM as a tech-
nology transfer conduit.
the problem is not only situated [10][11][13][20] (3) but also identified and owned
by the students. The FishbowlTM was designed to test this hypothesis. It was designed
as a pedagogic intervention in the form of a learning interaction experience to support
the knowledge transfer of communication skills from professionals to students. This
learning interaction experience improves the students’ cross-disciplinary, collabora-
tive teamwork competences through learning by doing [3]. The learning interaction
experience takes place in a geographically distributed multi-cultural global PBL set-
ting, i.e., the AEC Global Teamwork course. The paper explores how students learn
and interact in such a global FishbowlTM PBL setting.
ICT augmented workspace for PBL. The second hypothesis asserts that a primary
source of information behind design decisions is embedded within the verbal conver-
sation among project members. Capturing these conversations is difficult because the
information exchange is unstructured and spontaneous. In addition, discourse is often
multimodal. It is common to augment speech with sketches and gestures. Au-
dio/video media can record these activities, but lack an efficient semantic indexing
mechanism. The objective of the developed ICT augmented workspace is to improve
and support the process of knowledge creation, capture, transfer, and reuse of the
learning interaction experience interventions. The designed and deployed information
and communication technology (ICT) augmented workspace: (1) supports the com-
municative event among professionals and learners during the creative process of
concept generation and development, (2) promotes a better conceptual understanding
of the professional practice based on cognitive apprenticeship and legitimate periph-
eral learning [13] principles, and (3) allows capture of the act, i.e., the creative dis-
course and problem solving sketching activity. Consequently the communicative act
becomes a multimedia learning artifact. Such multimedia learning artifacts are used
by learners for further reflection and knowledge reuse both in terms of product, i.e.,
solutions and ideas proposed by the professionals, as well as their communication
process.
The third hypothesis asserts that any ICT that engages multiple participants in
communicative events and tasks in an interactive workspace will determine (1) spe-
cific interaction zones, where participants engage in coordinated action in the physi-
cal workspace, and (2) degrees of engagement of the participants during the dis-
course. To explore this hypothesis we focus on:
• how to capture with high fidelity, and low overhead to the team members, the
knowledge experience that constitutes conceptual design generated during informal
events such as brainstorming or project review sessions?
• what interaction zones and degrees of engagement emerge in the proposed ICT
augmented workspace, and
• how the FishbowlTM learning interaction experience is supported and impacted by the affordances of the ICT and the configuration of the interactive workspace in
both collocated and geographically distributed learning environments? [6]
The theoretical points of departure for this study include: learning theory, interac-
tion design and analysis, design theory and methodology, knowledge management,
and human computer interaction.
The design of the AEC Global Teamwork course and ICT workspace are grounded
in cognitive and situative learning theory. The cognitive perspective characterizes
learning in terms of growth of conceptual understanding and general strategies of
thinking and understanding [3]. The situative perspective shifts the focus of analysis
from individual behavior and cognition to larger systems that include individual
agents interacting with each other and with other subsystems in the environment
[10][11]. Situative principles characterize learning in terms of more effective partici-
pation in practices of inquiry and discourse that include constructing meanings of
concepts and uses of skills. Teamwork, specifically cross-disciplinary learning, is key
to the design of the AEC Global Teamwork course. Students engage with team mem-
bers to determine the role of discipline-specific knowledge in a cross-disciplinary
project-centered environment, as well as to exercise newly acquired theoretical
knowledge. It is through cross-disciplinary interaction that the team becomes a com-
munity of practitioners--the mastery of knowledge and skill requires individuals to
move towards full participation in the socio-cultural practices of a larger AEC com-
munity. The negotiation of language and culture is equally important to the learning
process--through participation in a community of AEC practitioners, the students are learning how to create discourse that requires constructing meanings of concepts and
uses of skills.
This research builds on Donald Schon’s concept of the reflective practitioner para-
digm of design [18]. Schön defines the process of tackling unique design problems as
knowing-in-action. To Schön, design is an action-oriented activity. However, when
knowing-in-action breaks down, the designer consciously transitions to acts of reflec-
tion, termed reflection-in-action. Schön argues that, whereas action-oriented knowl-
edge is often tacit and difficult to express or convey, what can be captured is reflec-
tion-in-action. This concept was expanded into a reflection-in-interaction framework
to formalize the process that occurs during collaborative team meetings [9]. The act of
reflection-in-action is viewed as a step in the knowledge creation and capture of the
“knowledge life cycle” [8] – “creation, capture, indexing, storing, finding, understand-
ing, and re-using knowledge.” Knowledge that is created through dialogue among
practitioners, or between mentors and learners represents an instance of what
Nonaka’s knowledge creation cycle calls “socialization, and externalization of tacit
knowledge.” [14]. The design of the Fishbowl TM and the ICT augmented workspace
build on these constructs of the knowledge lifecycle and the “socialization, externali-
zation, combination, and internalization” cycle of knowledge transfer.
The human computer interaction (HCI) scenario-based design approach [15] is
used as a methodology to study the current state-of-practice, describe how people use
technology and analyze how technology can support and improve their activities. The
process begins with an analysis of current practice using problem scenarios that are
transformed into activity scenarios, information scenarios and interaction scenarios.
The final stage is prototyping and evaluation based on the interaction scenarios. The
process as a whole from problem scenarios to prototype development is iterative.
4.1 RECALLTM
RECALLTM builds on Donald Schon’s concept of the reflective practitioner. [17] [18]
[19] It is a drawing application written in Java that captures and indexes the discourse
and each individual action on the drawing surface. The drawing application synchro-
nizes with audio/video capture and encoding through a client-server architecture. Once
the session is complete, the drawing and video information is automatically indexed
and published on a Web server that allows for distributed and synchronized playback
of the drawing session and audio/video from anywhere at any time. In addition, the user is able to navigate through the session: by selecting individual drawing elements the user can jump to the part of interest. Fig. 1 illustrates the RECALLTM graphic user interface during real-time production, that is, during a communicative session. The par-
ticipants can create free hand sketches or import CAD images and annotate them dur-
ing their discourse. They have a color pallet, and a “tracing paper” metaphor that en-
ables them to re-use the CAD image and create multiple sketches on top of it. The right
side bar contains the existing digital sketch pad pages that enable quick flipping or
navigation through these pages. At the end of the brainstorming session the participants
exit and RECALLTM automatically indexes the sketch, verbal discourse and video.
This session can be posted on the RECALLTM server for future interactive replay,
sharing with geographically distributed team members, or knowledge re-use in other
future projects. RECALLTM provides an interactive replay of sessions. The user can
interactively select any portion of the sketch and RECALLTM will replay from that
point on by streaming the sketch and audio/video in real time over the net.
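RECALLTM itself is a patented Java application; purely to illustrate the indexing idea described above (time-stamping each drawing action so that selecting a stroke later yields a replay offset into the synchronised audio/video), a minimal Python sketch with invented names might look like this.

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IndexedSession:
    """Minimal sketch of time-indexed sketching: each stroke stores the elapsed time
    at which it was drawn, so selecting a stroke later yields a playback offset into
    the synchronised audio/video recording."""
    start: float = field(default_factory=time.time)
    strokes: List[Tuple[float, list]] = field(default_factory=list)  # (offset_s, points)

    def add_stroke(self, points):
        """Record one drawing element together with its time offset from session start."""
        self.strokes.append((time.time() - self.start, points))

    def replay_offset(self, stroke_index):
        """Offset (seconds) at which playback should resume for a selected stroke."""
        return self.strokes[stroke_index][0]

session = IndexedSession()
session.add_stroke([(0, 0), (10, 5)])       # e.g. a beam outline
session.add_stroke([(10, 5), (10, 20)])     # e.g. a column annotation
print(round(session.replay_offset(1), 3))   # where to start replaying audio/video
```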
The RECALLTM technology patented by Stanford University is aimed at improving
the performance and cost of knowledge capture, sharing and re-use. It enables seam-
less, automatic, real-time video-audio-sketch indexing, Web publishing, sharing and
interactive, on-demand streaming of rich multimedia Web content. It has been used in
the AEC Global Teamwork course since 1999, as well as deployed in industry pilot
settings to support:
• solo brainstorming, where a project team member is by him/her self and has a
“conversation with the evolving artifact,” as Donald Schon would say, using a Ta-
bletPC augmented with RECALLTM and then sharing his/her thoughts with the rest
of the team by publishing the session on the RECALLTM server.
• team brainstorming and project review sessions, using a SmartBoard augmented
with RECALLTM
• best practice knowledge capture, where senior experts in a company, such as designers, engineers, and builders, capture their expertise during project problem solving
sessions for the benefit of the corporation.
4.2 VSeeTM
Digital videoconferencing holds great promise to reduce travel cost and time for
geographically distributed team members. Nevertheless, collocated face-to-face team
meetings are still the most efficient and effective. One of the reasons is rooted in the
multitude of cues that the participants leverage in a collocated face-to-face team meet-
ing – such as seeing all participants at the same time, building a sense of community
and shared understanding of the tasks and workspace, attention retention of partici-
pants, engagement, gesture language used to augment product descriptions or ideas.
In order to support and strengthen the geographically distributed teamwork through
videoconferencing each participant should be able to see all participants concurrently
at low cost and low set-up overhead.
The VSeeTM technology addresses this need. It allows dozens of students to take a
class from different locations. The conceptual usage model of the VSeeTM is that all
participants can be seen and heard with minimal latency at all times. Unlike voice-
activated switching in standard commercial solutions, the VSeeTM allows the user to
decide at whom to look. Unlike FORUM [12], the VSeeTM does not require a student
to explicitly request the audio channel before he/she can be heard. The experience in
the AEC Global Teamwork course as well as the findings of [12] [16] suggests that
keeping all channels open all the time is essential in creating spontaneous and lively
dialogs. The VSeeTM end point is implemented as a plug-in for Internet Explorer.
Each student requires a web camera and a high-speed computer network connection.
Students can see the instructor, mentors, and all the other students in a video grid on
their computers. The mentors or instructor can see the students on a desktop com-
puter or projected near life-size on a wall-size display. [1] [2].
For many years all the remote sites and PBL Lab participants were able to share data and use a teleconference bridge as a high quality audio channel (Fig. 2). Nevertheless, visibility was not available in any multipoint session, since NetMeeting videoconferencing allows only point-to-point video transmission and other commercial solutions were costly and offered only voice-switched video and no data sharing. Fig. 2 shows the FishbowlTM ICT augmented workspace setting before the introduction of VSeeTM. VSeeTM offered an important and valuable channel that allowed all participants to be visible and created a sense of persistent presence. Fig. 3 illustrates the setting of the workspace after the introduction of VSeeTM.

Fig. 2. PBL Lab FishbowlTM ICT augmented workspace before the introduction of VSeeTM: SmartBoard running RECALLTM and NetMeeting for application sharing

Fig. 3. Stanford FishbowlTM ICT augmented workspace with VSeeTM. The left SmartBoard runs RECALLTM and NetMeeting with application sharing to allow Stanford and remote participants to sketch and annotate. The right SmartBoard runs eight VSeeTM video streams.
The PBL Lab and the global learning workspaces create a network of hubs in which
learners interact. This provides an innovative testbed to study the impact of collabora-
tion technologies on teamwork, workspace, and engagement of learners. The data for
this study was collected in the following way:
• indexed and synchronized sketch and discourse activities were captured through
RECALLTM
• interactions, movement and use of collaboration technology within the PBL Lab
workspace was captured with the video camera (Fig. 4)
• interaction and engagement of remote students was captured through a screen cap-
ture application that recorded all the concurrent VSeeTM video streams for parallel
analysis of interaction and engagement of all students at all sites.
A temporal analysis of the data was performed. It integrated the information about
the speech-acts, discourse, movements, use of collaboration technologies and work-
space in the PBL Lab. The result was a temporal spreadsheet with the following ru-
brics: (1) time stamp, (2) verbal discourse transcript, (3) mark-up of discourse tran-
script with the specific speech-acts (e.g., Question, Explanation, Negotiation, Explo-
ration, etc.) (4) video snapshots of the PBL Lab configuration showing the partici-
pants movement in the physical space over time, (5) RECALLTM snapshots of key
sketch actions, (6) screen snapshots of concurrent Virtual Auditorium streams of
remote sites showing the participants movements in their physical space over time, (7)
screen layout of applications on remote PCs, (8) field notes, and (9) data analysis
observations.
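As a simple sketch of how one row of such a temporal spreadsheet could be held in code (the field names paraphrase the nine rubrics and are illustrative, not the original coding scheme), consider:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemporalRecord:
    """One row of the temporal analysis spreadsheet described above (illustrative fields)."""
    time_stamp: str
    transcript: str
    speech_act: str                      # e.g. Question, Explanation, Negotiation, Exploration
    lab_snapshot: Optional[str] = None   # video frame of the PBL Lab configuration
    recall_snapshot: Optional[str] = None          # key sketch action captured in RECALL
    remote_streams_snapshot: Optional[str] = None  # concurrent video streams of remote sites
    remote_screen_layout: Optional[str] = None     # application layout on remote PCs
    field_notes: str = ""
    observations: str = ""

row = TemporalRecord("00:12:37", "Could the transfer beam be shallower here?",
                     "Question", lab_snapshot="lab_frame_0757.png")
print(row.time_stamp, row.speech_act)
```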
5 FishbowlTM Affordances
The analysis of the FishbowlTM learning experience and ICT augmented workspace
affordances was aimed to better understand the nature of engagement during the
communicative events, the sharing of these workspaces, and the impact of the col-
laboration technologies. The analysis revealed three interaction zones and degrees of
engagement, as well as and how the participants are using the collaboration technol-
ogy and other artifacts to best explore and convey their ideas. The three interaction
zones and corresponding degrees of engagement are (Fig. 4):
Zone 1 or Action Zone is defined as the action zone, since it is in this zone that most speech-acts, interactions among participants and the digital content creation take place, as the participants annotate, sketch, explore, explain, and propose ideas.
Fig. 4. Zones of interaction and degrees of engagement in the FishbowlTM ICT augmented
workspace of networked global hubs
disposition. They were sitting at tables in front of Tablet PCs in small groups (1-4
students per location) and had continuous control of their workspace and digital con-
tent that was shared. In such small group setting the collocated participants have a
more confined space to control and negotiate vs. the PBL workspace that accommo-
dates larger groups (e.g., 15-25 participants). The participants in zone 3 are students
who work on similar projects as the ones in the FishbowlTM. They participate in the
session as active observers. This provides valuable peer-to-peer learning and legiti-
mate peripheral participation opportunities [Lave & Wenger, 1991]. Zone 3 students
engage in the FishbowlTM discussion by suggesting ideas related to the problem at
hand, or posing questions about the proposed solutions in the FishbowlTM by the men-
tors that they can relate to their own problems in their projects.
The observations and data collected from all the participants, i.e., learners and in-
dustry mentors, indicated that the value of new processes mediated by the FishbowlTM
experience and the ICT - RECALLTM augmented workspace resides in the fact that it
brings together and engages all participants (stakeholders) in:
• effective collaboration;
• rapid building of common ground. One of the effective actions was shared common ground building: having each participant share his/her priorities and challenges upfront made the session more effective;
• providing input from different discipline perspectives by having all the team mem-
bers/ stakeholders present and contribute in real time ideas and solutions;
• enabling participants to manipulate, annotate and sketch on images, and interact
with the project material that is discussed and track the discussion;
• joint exploration of ideas and concepts,
• effective identification of problems and key issues, and
• joint problem solving.
It is important to note that the FishbowlTM is not only a valuable experience during
the project meeting session, but the captured RECALLTM rich multi-modal and mul-
timedia content, i.e., sketch, discourse, video, serves as a learning resource over time.
The session content is re-used in multiple ways: (1) by the students or project mem-
bers in the FishbowlTM project team, as they replay specific parts of the proposed
solution details, (2) by the students or project members who were in zone 3 and find
relevant ideas that they can use in their own projects, and (3) future generations of
students or project members who have access to this rich learning resource leveraging
knowledge from past projects.
The option to use RECALLTM not only for project team meeting but also for asyn-
chronous communication of ideas, issues, and solutions was adopted. Over time the
participants started to feel comfortable with the technology and interact with the digi-
tal content, contributing ideas, identifying issues, engaging in collaborative problem
solving, and proposing solutions in RECALLTM.
With the introduction of VSeeTM we observed a significant improvement in com-
munication and a strong sense of community building. This was caused by the fact that
all sites were visible to each other. This enabled all participants to build a common
ground understanding regarding the local conditions and configuration. In addition, all
participants were able to observe the attention retention and level of engagement and
respond to it. The temporal data analysis provided evidence that the concurrent video
streams create a visibility environment that leads to new behaviors such as socializing acts: participants expressed happiness at seeing each other, some decided to sing in front of the camera during breaks, creating a sense of presence and friendship, and others pointed their cameras at views from their windows to share impressions of the local weather (snow, sunshine) or at their pets at home (since some of the meetings were at very late hours for the European partners).
The comparison of the behaviors of the collocated PBL students vs. the remote students shows a "spatial buffer effect", that is, the remote students felt more relaxed and free to move around, even leaving their workspace for short periods of time, whereas the collocated PBL students who were in the same room with the mentors behaved just as students would in a classroom. Consequently, even though the VSee video
technology is based on the virtual auditorium metaphor of a lecture hall with large
numbers of students visible to the instructor, the behavior of the students will be dif-
ferent and more relaxed than in a collocated context. When remote students need to
discuss an issue, they feel free to have a private conversation. Such spontaneous pri-
vate conversations are harder for the collocated participants in the PBL Lab where the
mentors and instructor are. At that point the remote students’ attention and level of
engagement in the general discussion is very low. In addition, this private conversa-
tion is visible to all sites, and the mentors can intervene and ask if there is a need for
clarifications.
From a usability point of view, the participants indicated that RECALLTM was easy
to use in a professional environment. RECALLTM enables the participants to create
rich content and share it with the rest of the team, adding more value to the process,
and to come to the meeting prepared with questions and issues. Furthermore, because
RECALLTM is a server-based technology, team members can interact over the net,
which facilitates rapid and timely content sharing and provides the rich contextual ra-
tionale behind proposals or decisions. Fig. 5 shows a snapshot from the meeting in
which two of the consultants identify a specific issue and explore alternative solutions
annotating a preliminary CAD image. Note how seamless the RECALLTM on the
SmartBoard is, as one of the participants is drawing with his finger directly on the
interactive display.
The participants expressed a desire for every project review meeting to be held in
the FishbowlTM format using the ICT augmented workspace and RECALLTM – since it
helps to see the results emerge, engage and work through the process. As a participant
observed “this experience allowed us to accomplish in three hours what would take in
a typical project three months. It was amazing through how much material the team
went over in such a short time. We spend time in substantial problem solving.” More
than that, the process changed from sequential to concurrent interactions that provide
timely input from other disciplines.
Fig. 5. FishbowlTM ICT augmented workspace in use during an industry project review
review the sessions and provided the company with a testbed environment to experi-
ment with valuable knowledge capture in context, i.e., specific problems solved in the
context of a project by corporate experts that can be reused in future projects. Fig.6
illustrates snapshots from such a distributed session from this project.
Fig. 6. FishbowlTM and ICT augmented workspaces in use during an industry pilot project at
Obayashi Corporation in Japan showing a distributed problem solving session. The Obayashi
Headquarters in Tokyo had a SmartBoard with RECALLTM and the Mizunami construction site
had a Tablet PC with RECALLTM.
7 Conclusions
The paper described the FishbowlTM as an engaging learning experience in support of
global teamwork. It offers a pedagogical intervention method designed to facilitate
knowledge transfer from industry mentors to students. It describes the interactive
learning and ICT augmented workspace developed in support of creative, collabora-
tive, synchronous, geographically distributed project meeting sessions. The PBL Lab
offered a testbed for data collection and analysis of the impact of technology on be-
havior, learning, and team dynamics, as well as a tech transfer conduit of innovative
ICT and teamwork processes from the research lab to industry.
This paper presents three additional take away messages.
1. The act becomes the artifact. FishbowlTM ICT workspace augmented by
RECALLTM mediates the creative concept generation process and supports the
real-time capture of the communicative act in its original context including the
dialogues, sketches, and annotated CAD/image artifacts. The RECALLTM cap-
tured communicative act becomes a multimedia learning and knowledge transfer
artifact. Such multimedia learning artifacts are used by learners for further reflec-
tion and knowledge reuse both in terms of product, i.e., solutions and ideas pro-
posed by the mentors, as well as their collaboration process. This concept is not
only valuable in an academic environment, but offers a model for corporations to
capitalize on their core competence through knowledge capture and reuse.
2. There are times when the process is more valuable than the product. Often it is
the successful new process that learners or corporations want to transfer or repeat,
Acknowledgements. This study was partially sponsored by the PBL Lab at Stan-
ford, and the Wallenberg Global Learning Network (WGLN II). The author would
like to thank PG&E, Obayashi Corporation, and all the PBL Lab academic and indus-
try partners for their engagement and support. Last but not least, the author would like
to thank the VSee Lab Inc. for the support of the Global Teamwork program at Stan-
ford. For a complete list visit http://pbl.stanford.edu
References
1. Chen, M. “Design of a Virtual Auditorium,” Proceedings of ACM Multimedia. (2001)
2. Chen, M., “A Low-Latency Lip-Synchronized Videoconferencing System,” Proceedings of
ACM Conference on Human Factors and Computing Systems, (2003).
Animations and Simulations of Engineering Software

R. Robert Gajewski
Abstract. The main objective of the paper is to present the state of the art in the
field of engineering software instruction and training. There are various ap-
proaches to teaching someone how to use an application. In the simplest, illus-
trative approach, training attempts to illustrate each screen and describe each
task, which is hardly possible in the case of complicated CAD/CAE software. An
alternative is the exploration approach, in which the user can be asked during
training to look at various functions. In many opinions, the most effective way to
learn software is the scenario-based approach. Major types of authoring tools used
for software simulations are described and discussed. Finally, some open questions
are raised concerning the costs of e-Learning and the so-called Civil Engi-
neering Crisis.
1 Introduction
The types of content people are willing to have online are, in the majority of cases, soft-
ware instructions [18]. More and more online learning is related to Information Tech-
nology and also to engineering applications [8], [9], [10]. Nearly all organizations
need to provide training to software users, and the Web is one of the most effective
tools for doing so. What we are nowadays desperately looking for are multimedia-based
materials [4], [5], [15], [17] - software animations and simulations. Most of the online
content available is still "page-turning courseware" (several years ago this asynchronous
kind of e-Learning acquired the name "page-turner") and often does not yield the
type of results possible from online learning. Web Based Training (WBT) is one of
the most popular methods for instructing how to use software applications. One of the
main aspects of WBT is software simulation.
The first objective of the paper is to acquaint the reader with scenario- and simulation-based
e-Learning. Simulation-based e-Learning (SIMBEL) is much more effective than
classical asynchronous or scenario-based e-Learning (SbeL) [13]. The simplest defini-
tion of SIMBEL is "learning by doing". Software simulations, soft skills simulations
(role-play, sales process simulations, business modelling) and hard skills simulations
(troubleshooting, diagnostics, simulating physical systems) will soon be the crucial
part of future e-Learning environments [1]. Online animations and simulations are
treated as the next big wave in training. In the past, simulations were extremely expensive.
Nowadays the tools used to prepare software animations are cheaper, and this approach is
gaining more interest.
The second objective is to show the potential of Intelligent Tutoring Systems (ITS),
which can be used in software instruction and training. Computers have been used in
education for more than 30 years. Computer Based Training (CBT) and Computer
Assisted Instruction (CAI) systems were the first attempts to teach using computers.
Instruction was not individualized to learners' needs. CBT and CAI do not provide the
personalized help and attention that could be received from a human tutor. Intelli-
gent Tutoring Systems (ITS) and also Adaptive Training Systems (ATS) offer flexi-
bility in the presentation of material and the ability to meet individual students' needs.
They are very effective in increasing students' performance and motivation.
[Figure: simulation levels (screen capture, point & click, data input, multiple paths, full simulation) plotted against realism and ease of construction]
There are various approaches to teaching someone how to use an application
[11], namely the illustrative, exploration and scenario-based approaches. The last one, in
many opinions, is the most effective way to learn software. The scenario-based approach
uses three fundamental interaction modes: show, teach and try [11] (see Fig. 2), but
there are many arguments against it. One of them says that it is difficult to find mean-
ingful and realistic activities for some types of software training and to develop sce-
narios which are directly related to specific jobs. Show modules are passive elements
in which the simulation level is a screen capture or point-and-click. Usually teach mod-
ules are point-and-click or data input simulations. Finally, try modules require data
input, full simulations or multiple paths. More detailed information about SIMBEL
and interaction modes can be found in [13] and [11].
[Fig. 2: interaction modes (Show me, Teach me, Let me try), ordered by the level of guidance provided]
4 Concluding Remarks
One of the major conclusions concerns costs. The average ratio for creating
e-Learning content (development time versus finished hours) is 200. In the case of
e-Learning simulations this ratio can be even higher - between 700 and 1300. The
only chance to lower this factor is the use of appropriate authoring software that enables
authors to create training content, to prepare simulations as tutorials, assessments and demos,
and to produce high-quality documentation.
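To make these ratios concrete, the short calculation below applies them to a hypothetical amount of finished instruction; the ratios are those quoted above, while the finished-hours figure is an arbitrary example.

```python
# Back-of-the-envelope reading of the development ratios quoted in the text.
finished_hours = 0.5  # e.g. a 30-minute tutorial (illustrative value only)
for ratio in (200, 700, 1300):
    print(f"ratio {ratio}: {finished_hours * ratio:.0f} development hours")
# ratio 200: 100 development hours
# ratio 700: 350 development hours
# ratio 1300: 650 development hours
```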
Intelligent Tutoring Systems could have a positive impact on the solution of the "Civil
Engineering Crisis" [2] in the field of computing. In many cases engineers are reluc-
tant to use computer programs because they are very complicated. Moreover, engi-
neers who concentrate on clicking, dragging and dropping treat software as a black
box, which can lead to incorrect use of the software. Intelligent Tutoring Systems can help
in crossing these barriers.
The question "how to make e-Learning interesting" [20] is still open. One of the
biggest chances to do this is interactive software simulation as part of an ITS. One particu-
lar animated tutorial cannot exactly fit one particular user, but it is much more ef-
fective than a classical printed tutorial. There is a risk that very complicated tutorials
can be as complicated as the programs themselves, but it is easier to limit tutorial complexity to a
certain level.
References
1. Aldrich, C.: Simulations and the future of learning: an innovative (and perhaps revolution-
ary) approach to e-learning, Pfeiffer (2003)
2. Arciszewski T., Civil Engineering Crisis, Leadership and Management in Engineering,
6(1), 26-30 (2006)
3. Chapman, B.: Online Simulations 2005: A Knowledge-Base of Custom Developers, Simu-
lation Courses and Simulation Authoring Tools, Brandon Hall Research (2005)
4. Clark, R.C., Lyons, C.: Graphics for Learning: Proven Guidelines for Planning, Designing,
and Evaluating Visuals in Training Materials, Pfeiffer (2004)
5. Clark, R.C.: e-Learning and the Science of Instruction: Proven Guidelines for Consumers
and Designers of Multimedia Learning, Pfeiffer (2002)
6. Estabrook, S.: Making the Most of Software Simulations, Learning Circuits (2004)
7. Feldstein, M.: Desperately Seeking Software Simulations, eLearn Magazine (2004)
8. Gajewski, R .R., Morozowski, M.: Multimedialny podręcznik programu InRoads, In: III
Sympozjum Ksztalcenie na Odległosc – Metody i Narzedzia (2005) 21-28 [in Polish]
9. Gajewski, R.R.: Czy i jak uczyc oprogramowania - narzędzia tworzenia animacji do symu-
lacji oprogramowania i szkolen, In: Rozwój e-edukacji w ekonomicznym szkolnictwie
wyzszym (2004) 191-203 [in Polish]
10. Gajewski, R.R.: Simulations and Animations for CAD/CAE Software, The 10th Interna-
tional Conference on Computer Supported Cooperative Work in Design, (CSCWD2006),
May 3-5 (2006) Nanjing, P.R. China, to be published
11. Karrer, A., Laser, A., Martin, L.S.: Instruction and Feed-back Models for Software Train-
ing, Learning Circuits (2002)
12. Karrer, A., Laser, A., Martin, L.S.: Simulation Levels in Software Training, Learning Cir-
cuits (2001)
13. Kindley, R.: The power of simulation-based e-learning (SIMBEL), eLearning Developers
Journal, 17 September (2002)
14. Paris, M.: Authoring Interactive Software Simulations for e-Learning, In: Proceedings
of the 3rd IEEE International Conference on Advanced Learning Technologies ICALT'03
(2003)
15. Mayer, R.E.: Multimedia Learning, Cambridge University Press (2001)
16. Musslewhite, C.: Simulation classification system, Learning Circuits (2003)
17. Schecter, T., Fair, B., Managing the Matrix: Using Multimedia in Distance Learning Pro-
jects, Sidebars, British Columbia Institute of Technology, Learning Resources Unit
18. Shank, P.: Software Show N’ S/Tell, Learning Circuits (2003)
19. Simmons T.M.: ArchiCAD v. 7.0. Step by step tutorial, Graphisoft (2001)
20. Wilson, E.: How to make e-Learning Interesting, The Sydney Morning Herald, 24 Sep-
tember (2002)
Sensor Data Driven Proactive Management
of Infrastructure Systems
1 Introduction
The U.S. infrastructure is a trillion dollar investment, defined broadly to include road
systems and bridges, water distribution systems, water treatment plants, power
distribution systems, telecommunication network systems, commercial and industrial
facilities, etc. In spite of the enormous investments made in these systems and their
importance to the US economy, we (as in government, industry and academia) are not
being very good stewards of this infrastructure. The American Society of Civil
Engineers (ASCE) recently announced the overall “grades” for our infrastructure
systems as being a “D” [ASCE 2005]. By their nature, infrastructure systems are
large-scale, networked systems with physical components that may be themselves
networks of systems and whose health, due to use, environment, and abuse, can
significantly deteriorate. Because of expense and growing local demand, these
systems have expanded over decades in a more or less ad hoc fashion. Because of
their size and highly interconnected nature, the operating conditions of the overall
network are difficult to assess from local data. Often, local actions may give rise to
unexpected global behavior.
There are numerous examples of where a lack of knowledge of the actual condition
or state of an infrastructure system has led to failure of the system, or inefficient
On August 14, 2003, in less than 8 minutes, a blackout, twice as large as any in US
history, affecting 250 power plants and 62 Gigawatts of generating capacity, left over
50 million people in the Northeast and Canada without electrical power [Mansueti
2004].
“Before it was over, three people were dead. One and a half million
people in northern Ohio had no running water for two days. Twelve
airports closed in eight states and one Canadian province. The estimated
economic damage was $4.5-$10 billion.” [Mansueti 2004]
In a more recent and local event for the authors, a section of an overpass on
Interstate 70 near Washington, PA collapsed onto the Interstate and several vehicles
collided with, or were struck by, the debris [Grata 2006]. The bridge had been
recently inspected and given a rating of 4 out of 10, which indicates that it is
structurally deficient, but was not in danger of imminent failure. Experts at the scene
believe that the likely cause was a combination of “age, wear and tear in the structure,
a history of being hit by trucks and very recently, another hit” [Grata 2006]. Extensive
corrosion damage to the pre-stressing cables and reinforcing bars in the concrete
beam is believed to have contributed to the failure; the extent of this damage was not
detected during visual inspection. Fortunately, nobody was killed in this incident.
However, Interstate 70 was closed for several days. “At this time of year, about
40,000 vehicles a day travel that section of I-70, about 25 percent of them trucks”
[Grata 2006].
Recent technological developments make it feasible and practical to address the
security, continuous monitoring, and rational and sustainable management of critical
infrastructure systems, such as those described in the previous paragraphs, using
information and communication technology (ICT). A primary enabling technology is
provided by sensor networks. These create tremendous opportunities to instrument
distributed physical infrastructures with a large number of autonomous,
heterogeneous, inexpensive sensors that locally process the measurements and
Construction is the first phase in the facility delivery process where ICT devices like
sensors can actually be deployed and used. However, the data collected during this
phase and downstream phases can also inform the upstream phases of planning and
design. During construction, thousands of activities are conducted by hundreds of
people to place a large amount of material and to configure hundreds of engineered-
to-order components of a facility or component of an infrastructure system. There
exist numerous opportunities for material to be misplaced or for material quality to be
affected. As such, there are a significant number of sensing opportunities on a
construction site. Integrated microchips with multiple sensing functions, local storage,
and integrated processing and communications can be used to track temperature,
strain and acceleration on structural components [Sohn 1999; Tanner 2003; Sohn
2003]. Rewritable RFID tags can be used to track the minute-by-minute movements
of equipment, material and engineered-to-order parts [Akinci 2004; Song 2004].
Embedded concrete maturity meters can be used to track the developing strength of
cast-in-place concrete [Ansari 1999; Collins 2004; Gordon 2003]. Wide-area sensors,
such as laser scanners, can be used to scan the physical locations of many components
at the same time and help determine that they are properly placed and aligned [Stone
2004; Cheok 2000; Akinci 2004]. These sensor systems will have various forms of
power supply; some will be wired to an emerging power grid, some will scavenge
power from ambient vibrations [Sodano 2004], while others will be powered by
battery or RF energy [Churchill 2003]. Sensors within this system will also need to
detect their location and associate location with the measured values [Shaffer 2003;
Tseng 2004; Brooks 2003]. These systems will also need to be self-monitoring and
able to detect and correct errors in their sensed quantities before they are reported
[Tanner 2003]. As these devices will be deployed in harsh environments, they must be
robust and able to withstand rugged, dirty, and humid conditions. To be used on
construction projects, these systems must also be cost-effective, preferably from a
first-cost perspective, but definitely from a system’s life-cycle perspective. Finally,
these systems must be accessible and usable by humans at the home office, field
office and workface [Reinhardt 2005].
The operation phase of a facility presents the greatest need for sensing and probably the
most challenging context. From a management perspective, it is this phase that
consumes the most energy and leads to air emissions. The loads on the facility (e.g.,
wind, live loads, snow), the performance of that facility, and the condition of the
various components of that facility need to be monitored so as to permit its efficient
and safe operation and to allow for efficient facility maintenance and management.
The use of sensing to control the operation of the mechanical and electrical systems in
a facility is not a new problem. Many companies manufacture and sell building
control systems. However, the extensive use of sensing to determine the loads,
performance, and condition of the structural and other functional elements is less
common. For example, the integrity of roof and wall systems is paramount for
maintaining a healthy facility and keeping water from entering the building envelope.
Sensors could be deployed throughout a building façade and roof system to act as
sentinels looking for penetrating moisture. Depending on the grid size of the sensor
network, the system could simply locate the existence of a leak or more accurately
pinpoint the specific location of the leak so that a very efficient and cost-effective
maintenance process can be conducted. All too often in built facilities, small
problems that could have been easily and cheaply fixed if detected early grow into
costly and extensive maintenance efforts. If the actual loads experienced by a facility
(from the environment and from usage) are tracked, the performance of the structure
can be much better understood. Such knowledge could also significantly inform
future design activities about the actual nature of the loads likely to be experienced.
The challenges for deploying sensing during operation are: large scale; long time-
frame; and harsh environment.
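The "sentinel" idea above can be illustrated with a minimal sketch; the grid layout, moisture threshold, and reporting format below are hypothetical, and the point is only that the grid resolution governs whether a leak is merely detected or pinpointed.

```python
# Toy leak report from a grid of moisture sensors on a roof or facade.
def leak_report(readings, threshold=0.75):
    """readings: dict mapping (row, col) grid cell -> normalized moisture level."""
    wet_cells = [cell for cell, level in readings.items() if level >= threshold]
    if not wet_cells:
        return {"leak_detected": False, "cells": []}
    # With a coarse grid this only says "a leak exists somewhere in these cells";
    # with a fine grid the same logic pinpoints the location for targeted repair.
    return {"leak_detected": True, "cells": sorted(wet_cells)}

# usage: a coarse 3x3 grid with one saturated cell
readings = {(r, c): 0.2 for r in range(3) for c in range(3)}
readings[(0, 2)] = 0.9
print(leak_report(readings))   # {'leak_detected': True, 'cells': [(0, 2)]}
```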
In addition to the 2003 North American Blackout and the I-70 bridge collapse
described in Section 1, several additional examples further illustrate the opportunity
to deploy ICT to improve the sustainable and secure operation of these systems.
Water Main Failures. Another example of a significant need for more condition
information is the problem of water main breaks throughout the US and other
countries. Water main breaks wreak extensive havoc on local residents and
businesses. For example, a major break to a 36 inch (approx. 91.5 cm) diameter water
main occurred in Pittsburgh in 2005, next to a dense collection of highrise buildings
containing many businesses and condominiums [Lara-Cinisomo 2005]. The break
caused floods in many of these buildings basements, leading to a loss of power and
water to all residents, leading to businesses being closed and residents being displaced
for several days [Lara-Cinisomo 2005]. According to Feiner and Rajani, “Direct
inspection of all water mains is often prohibitively laborious and expensive”
[Feiner 2002]. According to the GAO, “In the United States, about 54,000 community
water systems supply most of the nation’s drinking water and about 16,000
wastewater treatment systems provide sewer service” [GAO 2004]. In this same
report, a survey of water utilities indicates that one third of them stated that about
20% of their pipelines needed to be replaced. The GAO also states that “utilities
reported that collecting accurate data about their assets provides a better
understanding of their maintenance, rehabilitation, and replacement needs, which
helps utility managers make better investment decisions” [GAO 2004]. This need for
more information about water mains represents a significant opportunity for cost-
effective applications of ICT.
Residential Sustainable Operation. There is a significant potential to improve the
control and management of residential energy use using existing ICT. Networkable
temperature sensors, energy meters and switches continue to decline in price and
improve in sophistication, and could be integrated into energy monitoring and control
systems that inform residents of how and where energy is being used and provide
automation of many actions to affect their consumption [Williams 2006].
Unfortunately, only 27% of homes in the US had programmable thermostats in 2001,
a basic step in energy management, and homeowners by and large still have very little
information on energy use beyond their monthly meter reading [Williams 2006].
Delivering and deploying such ICT-based monitoring and control systems in the
future can likely save money for homeowners as well as reduce energy use, as US
consumers spend about US$1,400 per year on home energy, a significant incentive to
improve home energy utilization [Williams 2006].
Sustainable Transportation Infrastructure. In the US, there are over 600,000 bridges
in the National Bridge Inventory that must be inspected at least every two years. The
current bridge inspection and management process utilizes a paper form-based
condition rating method, whereby the rating of the entire bridge is based on the
conditions of all of the elements, no matter the relative importance of the elements.
ICT can be used to more efficiently and effectively construct infrastructure systems,
monitor the performance of these systems (if one maintains accurate performance
records, they should also inform future design decisions), assist in the management of
the entire set of assets that must be managed within an allocated budget, and assist in
the operation of these systems. Improved life-cycle management tends to lead to fewer
major renovation and reconstruction activities, leading to more sustainable outcomes.
As an example of the kind of transformation that can occur, consider the 1106 km
of the Portuguese system of toll roads operated by Brisa (the main part of the whole
Portuguese network), which recently adopted an aggressive policy to deploy and use
ICT in their operations over the entire toll road system [Bento 2004]. 57% of their
tolls are collected electronically, currently using RFID tags. A 1-8 Gb/s network covers
the entire road system and is used for data transmission, voice over IP (VOIP), office
applications, distributed data storage, and roadway telematics (namely the
streaming of digital video generated by some 450 CCTV cameras along the motorway
network). They use this ICT infrastructure to assist in the coordination of roadside
operations, roadway incident monitoring, toll collection, and roadway incident/
condition monitoring. Brisa has seen significant and demonstrable economic benefit
from their effective deployment of ICT. Thanks to a number of efficiency gains realized
after they intensified their ICT usage, their operating margin nears 75%
(the world's highest amongst listed toll road operators). This is one reason why the
company maintains their current level of ICT spending at around 1.5% of their
operating revenues. As a matter of illustration, that compares with some 4% spent in
road maintenance [Bento 2004].
To avoid costly failures and provide a 21st century infrastructure, the United States
and other governments must build their critical infrastructures with a "nervous
system" that collects and feeds data to places in the system that interpret it and allow
better decision making. The Center for Sensed Critical Infrastructure Research (CenSCIR) will perform research that will motivate the need
for, provide design guidance for, and provide the clear justifications for implementers,
operators and future designers of critical infrastructure systems to provide sensor
data-driven awareness of the usage and condition of their systems (both for
components and the entire network), and proactive, intelligent decision support and
control of these critical infrastructure systems over their lifetime.
The mission of this center is to: (1) perform research with industry and government
partners to develop a thorough understanding of the data and decision support needs
(human and autonomous control) in a variety of infrastructure contexts and the
economic implications of delivering such support; (2) perform research on sensor
devices, data models, data interpretation techniques, system behavior models, or
decision support frameworks that addresses the needs of a specific critical
infrastructure system; (3) perform research needed to develop sensor-data driven
vertical decision support systems for specific critical infrastructure systems or
components, from the sensors needed, to the data models used, to the algorithms and
models applied to interpret that data, to the decision support needed to assist in the
construction, operation and maintenance of an infrastructure system; (4) validate these
systems using combinations of laboratory testbeds, actual infrastructure systems and
simulations; (5) perform research to explore the common aspects of the developed
vertical systems to determine if common approaches or tools may be developed from
these more specific solutions; (6) explore new network theories concerning the value
of the information provided by sensors distributed throughout a critical infrastructure
network, how that information is best converted into decisions (centrally or locally),
whether this decision-making knowledge must be specified or can be learned, and
whether that information is able to improve the reliability, stability and quality of
service of critical infrastructure systems; and (7) perform research to develop
frameworks that assist developers of such sensor data-driven decision support systems
to take advantage of the knowledge gained by this center when creating new vertical
decision support systems for critical infrastructure contexts.
There are many research questions that will need to be addressed to achieve this
CenSCIR vision. The set of research questions includes:
What information needs to be measured about a system, and where and when it
should be measured?
What types of sensors are needed, but do not exist in a form usable for infrastructure
applications? How are they powered for long periods of time? How do they remain
functional for the long lives of most infrastructure systems?
How should this extremely large amount of information be represented, stored,
managed and exchanged and who is responsible for the stewardship of this data?
How does one predict global behavior, and more importantly the onset of an
incident, from localized sensor information, and what inference algorithms are needed
to infer the state of the system without having centralized knowledge of the network?
What types of action need to, and can, be taken to prevent abnormal behavior?
How does one translate the mass of information collected from the distributed
sensor network into intelligent decision support that actually helps the operators of
these systems make the best possible decision under the circumstances?
What are the economic conditions that make such an approach to delivering a
“nervous system” for an infrastructure system economically viable? Does it make
sense to use such a system only at certain times when problems are anticipated, such
as at the end of life as in the I-70 bridge example? What can we do at construction
time and what can wait until later?
These and many other questions need to be answered for the vision of CenSCIR to
be achieved.
While specific technologies are becoming available for effective on-line gathering
of real-time data in such systems and the case for their use is being made, it remains
critically important to develop methods for differentiating between the information
necessary to make decisions in order to prevent undesired performance, on the one hand,
and excessive data gathering, on the other. Such methods are not available, and
this vacuum creates a major obstacle to enhanced performance of our critical
infrastructures, and it creates a fantastic opportunity for research.
The first project, being conducted by Gordon, Akinci and Garrett described in
Section 0, explores the need for, and mechanisms to deliver, sensor-based
construction inspection planning. The second project, being conducted by Wang,
Akinci and Garrett and described in Section 0, explores ways to more efficiently map
between data models in an evolving modeling context by making the most use of
previously acquired mapping knowledge. The third and final project, being
conducted by Singhvi, Matthews and Garrett and described in Section 0, explores a
utility maximization-based approach for sustainably controlling infrastructure
systems based on sensor information and knowledge of users’ and operators’ utility
functions.
4.1.1 Need
Construction inspectors need inspection planning assistance to ensure that inspection
goals are properly specified, to identify and search among inspection alternatives, and
to help intelligently deploy inspection resources. While reporting on best practice
quality management strategies in the construction industry, the Commission on
Engineering and Technical Systems (CETS) stated that inspection planning is needed
to assure effective inspections, and that current practices for developing and verifying
requirements for inspection are error-prone and can result in missed inspections
[CETS 1991]. The need for inspection planning on construction sites, and the
problems associated with poor specification and implementation of inspection
requirements motivate the development of approaches to help plan inspections for
construction projects.
Requirements for a formal representation of goals and plans for inspection have
been established to address the need for inspection planning assistance. According to
these requirements, construction inspection planning requires formal reasoning to
develop and reduce inspection planning spaces. Development of inspection planning
spaces requires development of inspection goals and feasible inspection plans that
may be applied to address these inspection goals. Formally developed inspection
goals specify the inspection action, building element, and property to be inspected;
contain zero or many constraints on how they are to be addressed; are known to be
applicable to projected weather conditions; and are refined to the lowest level of detail
needed to support development of inspection plans. Formally developed inspection
plans specify the method and corresponding sets of activities that are required to
address inspection goals, and can be evaluated to determine if they meet constraints
associated with inspection goals, and to determine how different inspection plans
compare to each other for a given goal. Reduction of inspection planning spaces
requires formal reasoning to simplify sets of inspection plans and search among sets
of inspection plans that may be used to address the set of inspection goals selected for
a construction site.
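A minimal sketch of how formally represented inspection goals and plans of the kind just described might be encoded as data structures follows; the field names, classes, and example values are illustrative assumptions, not the project's actual formalism.

```python
# Illustrative data structures for inspection goals and plans (hypothetical).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InspectionGoal:
    action: str                          # e.g. "test"
    building_element: str                # e.g. "slab S1"
    inspected_property: str              # e.g. "workability"
    constraints: List[str] = field(default_factory=list)         # zero or many, e.g. "cost < 500 USD"
    weather_conditions: List[str] = field(default_factory=list)  # conditions in which the goal applies
    refines_to: Optional["InspectionGoal"] = None                # refinement to a more detailed goal

@dataclass
class InspectionPlan:
    goal: InspectionGoal
    method: str                          # e.g. "ASTM C143"
    activities: List[str]                # activities required to address the goal
    estimated_cost: float

    def satisfies_constraints(self) -> bool:
        # placeholder: a real system would evaluate each constraint expression
        return True

# usage: a workability goal refined to a slump goal, addressed by one plan
workability = InspectionGoal("test", "slab S1", "workability", weather_conditions=["any"])
workability.refines_to = InspectionGoal("test", "slab S1", "slump")
plan = InspectionPlan(workability.refines_to, "ASTM C143",
                      ["sample fresh concrete", "perform slump test"], 120.0)
print(plan.satisfies_constraints())
```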
Inputs needed for these processes are detailed project models, including product
and process information. This information, and knowledge of construction site
conditions, prior inspections, and inspection knowledge, can be reasoned with to
develop goals and constraints for inspection. Project models can also be reasoned
Table 1. Inspection goals drawn from specifications for the project in the motivating example
are directly mapped to inspection methods by the specifications, and indicate if their existence
is weather-dependent
testing. Concrete strength was specified to be inspected per ASTM C39 by taking
samples of concrete as it was delivered to the site and then subjecting these samples to
destructive compressive strength tests in a laboratory setting. Concrete was also
specified to be tested daily for concrete workability by testing concrete slump per
ASTM C143.
The specifications for this project partially address two main requirements for
inspection planning: inspection goal development and inspection plan development.
In terms of inspection goal development, (1) the specifications identify the properties
to be inspected for components that are composed of concrete; (2) they specify the
weather conditions in which the inspection goals are applicable; and (3) they contain
informal mappings between measurable properties (e.g. slump) and the qualitative
properties that these measurable properties can address (e.g. workability). Further
inspection goal development concepts, not addressed by these specifications, are
limits on the performance of inspection plans used to address these goals, such as the
cost of addressing an inspection goal. In terms of inspection plan development, the
specifications for this project contain direct mappings of inspection goals (i.e.
concrete slump) to the inspection methods (i.e., ASTM C143) that can be used to
address the goals. Because the specifications contained direct mappings among
inspection goals and single inspection methods, the specifications did not permit a
detailed comparison of different possible inspection methods for inspection goals.
Considering all possible methods for inspection goals as an alternative to ad hoc
links between inspection goals and inspection methods opens up the possibility of
improving the inspection planning process by enabling identification of alternatives
that may be better suited for given contexts. For example, although the concrete
maturity method was not mentioned as a possible method to test concrete strength, it
has been recommended for schedule acceleration applications [Cable 1998]. Despite
the potential for process improvement by considering multiple inspection methods, it
is relatively difficult to search among the large number of inspection methods.
ASTM, for example, has standardized approximately 5,000 inspection methods. In
the motivating example, the slab and six columns each had three inspection goals per
component. Reviewing each of the 5,000 inspection methods to determine all of the
methods that can address each of the inspection goals will result in 105,000
applicability tests. However, by classifying the inspection methods according to
material, the search space is reduced to 3,000 applicability tests. By further
classifying these inspection methods according to the inspected property, the search
space is reduced to 147 applicability tests. This demonstrates the effectiveness of
reasoning with more than one classification facet to limit the search for applicable
inspection methods.
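The counting argument above can be illustrated with a small sketch of faceted filtering; the classes and the toy method library are hypothetical, but the logic is the same: each classification facet removes non-matching methods before any applicability test has to be run.

```python
# Faceted filtering of inspection methods (illustrative data, not the ASTM catalogue).
from dataclasses import dataclass

@dataclass(frozen=True)
class Method:
    standard: str              # e.g. "ASTM C143"
    material: str              # e.g. "concrete"
    inspected_property: str    # e.g. "slump"

@dataclass(frozen=True)
class Goal:
    component: str             # e.g. "slab S1"
    material: str
    inspected_property: str

def applicability_tests(goals, methods, facets):
    """Count goal-method pairs that still need an applicability check
    after filtering the method library on the given classification facets."""
    tests = 0
    for g in goals:
        candidates = [m for m in methods
                      if all(getattr(m, f) == getattr(g, f) for f in facets)]
        tests += len(candidates)
    return tests

# usage with a toy library of three methods and one goal
methods = [Method("ASTM C143", "concrete", "slump"),
           Method("ASTM C39", "concrete", "compressive strength"),
           Method("ASTM E8", "steel", "tensile strength")]
goals = [Goal("slab S1", "concrete", "slump")]
print(applicability_tests(goals, methods, facets=()))                       # 3: no classification
print(applicability_tests(goals, methods, facets=("material",)))            # 2: by material
print(applicability_tests(goals, methods, facets=("material", "inspected_property")))  # 1: both facets
```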
Once applicable inspection methods are identified, these may be reasoned with to
develop and compare inspection plans. The number and cost of resources must be
reasoned with to support cost-based comparisons of inspection plans. Some
inspection methods, such as the concrete maturity method, require the use of sensors
that are embedded on or within components. Such methods specify how to reason
with component geometry or material quantities to determine the number of
measurements or sensors that are needed to inspect a given component. Hence, it is
necessary to reason about how to distribute sensors within a component and where to
embed them to make necessary measurements.
The motivating case demonstrates concepts needed to develop inspection goals and
inspection plans for construction sites. For example, some inspection goals are only
applicable in particular weather conditions (e.g. cold weather conditions); some goals
can be refined to more detailed inspection goals (e.g. a concrete workability
inspection goal can be refined to a concrete slump inspection goal); and inspection
goals can be addressed by one or more inspection methods. It also demonstrates the
need for increased formalism to support consideration of multiple possible inspection
methods and their associated resource distributions.
prototype, a user imports a project model that has been saved in .ifc format and parsed
into Java classes; loads files that describe inspection goal, method, and resource
libraries; and loads files that describe specifications that have been represented in
.xml format. These specifications are reasoned with to begin the inspection goal
development process.
Research in individual inspections, such as the work described in [Bungey and
Millard 1996], has identified criteria, such as resource cost, time, and accuracy, which
may be used to compare inspection plans. Such evaluation criteria can guide search
of available inspection plans. For example, Table 2 shows results of using such a cost
function with search methods based on random search, genetic search, and simulated
annealing, in comparison to exhaustive search.
Fig. 1. Inspection planning process develops inspection goals and plans, and searches for plans
to recommend from among the feasible plans that have been developed
Table 2. Comparison of search algorithms to exhaustive search for plan selection
While applying the formalism results in large inspection planning spaces, these spaces
can in fact be searched to identify relatively good inspection plans within a short
amount of search time.
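As a rough illustration of one of the stochastic search methods mentioned above, the sketch below applies simulated annealing to select one plan per inspection goal under a simple additive cost function; the plan costs are randomly generated and this is not the authors' implementation. In the actual planning problem, constraints and shared resources couple the choices across goals, which is what makes such heuristic search worthwhile compared with exhaustive enumeration.

```python
# Simulated annealing over plan selections (hypothetical costs, minimal sketch).
import math
import random

def total_cost(selection, cost):
    """selection[g] is the index of the plan chosen for goal g."""
    return sum(cost[g][p] for g, p in enumerate(selection))

def simulated_annealing(cost, iters=5000, t0=10.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    selection = [rng.randrange(len(plans)) for plans in cost]   # random initial plan per goal
    best = list(selection)
    t = t0
    for _ in range(iters):
        g = rng.randrange(len(cost))              # perturb the plan chosen for one goal
        candidate = list(selection)
        candidate[g] = rng.randrange(len(cost[g]))
        delta = total_cost(candidate, cost) - total_cost(selection, cost)
        if delta <= 0 or rng.random() < math.exp(-delta / t):   # accept improvements, sometimes worsenings
            selection = candidate
            if total_cost(selection, cost) < total_cost(best, cost):
                best = list(selection)
        t *= alpha                                # cool the temperature
    return best, total_cost(best, cost)

# usage: 21 goals (as in the motivating example), each with a few feasible plans
rng = random.Random(1)
cost = [[rng.uniform(50, 500) for _ in range(rng.randint(2, 7))] for _ in range(21)]
best_plans, best_cost = simulated_annealing(cost)
print(best_plans, round(best_cost, 1))
```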
4.2.1 Need
The need for data exchange between computer applications has existed for decades.
Data exchange among applications involves translating data, which is modeled
specifically for one application, into data that can be understood by another
application. It requires that the target data model represents the source data as
accurately and completely as possible to minimize data loss during exchange [Fagin
et al., 2003]. This requirement arises in many domains, where independent
applications do not necessarily adopt the same data model (or schema). As we collect
more and more data about the various components, subsystems, and systems related
to critical infrastructure, this need for data exchange among models will only become
more severe.
The Architecture, Engineering, and Construction (AEC) industry is recognized as a
multi-disciplinary and multi-participant industry. Data created in the AEC domain
include 2D and 3D drawings, contracts, specifications, standards, reports, etc. It is
created by users like owners, designers, constructors and inspectors, through many
different domain-specific applications. To enable interoperability between different
software systems, data must be exchanged between multiple users or applications by
some public data exchange standards such as Industry Foundation Classes (IFC) [IAI
2003a], ifcXML [IAI 2003b], CIMsteel [Eureka 2004], AEX [FIATECH 2004], etc.
Accordingly, there is a long-standing requirement to match different data schemas
(e.g., a task specific schema or a public data exchange standard). However, manual
matching of data models is time-consuming, error-prone and tedious work. Given the
rapidly growing number and scale of data models used in today’s applications, manual
matching is becoming a much harder task. In addition, the challenges associated with
model matching become even more pronounced when a source or a target model is
changing frequently, which often happens in the real world. For example, in the last
three years, the IFC data exchange standard has undergone two major updates,
Release 2x and 2x2, but most IFC-capable commercial software only supports
Release 2.0 or, worse, only Release 1.5 [Steinmann 2004].
It has been demonstrated that a computer (IT)-enabled process can provide further
help in this matching process. Some prior studies (e.g., [Doan et al., 2000, Madhavan
et al., 2001, Mitra et al., 2000, Li & Clifton, 2000]) could perform a considerable part
of schema matching automatically and output satisfactory results under certain
circumstances. However, because of the complexity of applying human knowledge,
there are two research areas that most of the prior studies do not address: 1) how to re-
use previously existing matches; and 2) how to use domain specific knowledge
[Rahm and Bernstein 2001].
4.2.2 Approach
In this research, we developed an approach that partially addresses the above issues to
improve the data model matching process, by utilizing prior matching correspondence
and constraints introduced by domain knowledge in a certain AEC domain (e.g., the
Building Commissioning domain) where both the source and the target models are
changing frequently.
Figure 2 illustrates the overall procedure of the proposed research, which aims at
matching a Building Commissioning data model (i.e., BC data model [Akin et al.
2003]) to different releases of the IFC data exchange standard.
[Figure 2: the BC Data Model is related to IFC R2x by (1) existing matches; (2) version differences relate IFC R2x to IFC R2x2; (3) new matches from the BC Data Model to IFC R2x2 are generated]
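A minimal sketch of the reuse idea in Figure 2 follows: compose the existing BC-to-IFC R2x matches with the R2x-to-R2x2 version differences to propose candidate matches to the new release, flagging entities that need human review. The entity names and the diff representation are hypothetical and do not reproduce the actual BC or IFC schemas.

```python
# Propagating prior schema matches through a version difference (illustrative).
def propagate_matches(existing, version_diff):
    """existing: BC entity -> IFC R2x entity
    version_diff: IFC R2x entity -> IFC R2x2 entity (None if removed in the new release)"""
    proposed, needs_review = {}, []
    for bc_entity, r2x_entity in existing.items():
        r2x2_entity = version_diff.get(r2x_entity, r2x_entity)  # unchanged if not in the diff
        if r2x2_entity is None:
            needs_review.append(bc_entity)       # target entity dropped; match must be redone by hand
        else:
            proposed[bc_entity] = r2x2_entity
    return proposed, needs_review

# usage with made-up entities
existing = {"BC_Sensor": "IfcSensorType", "BC_Zone": "IfcZone", "BC_Report": "IfcDocument"}
version_diff = {"IfcDocument": "IfcDocumentInformation", "IfcObsoleteThing": None}
print(propagate_matches(existing, version_diff))
# ({'BC_Sensor': 'IfcSensorType', 'BC_Zone': 'IfcZone', 'BC_Report': 'IfcDocumentInformation'}, [])
```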
4.3.1 Need
Meeting human preferences for comfort, safety and privacy is a major factor in the
success of many civil infrastructure projects. Currently, these factors are taken into
account by incorporating available standards during decision making. In building
operation, occupants' comfort is measured using standards like those from ASHRAE
[ASHRAE 1980]. However, most of the standards represent approximations of these
preferences; in reality, preferences are unique to each individual and are often location-
and time-dependent. The inability to integrate the seemingly important factor of
human comfort in building operation is due to the lack of a framework to quantitatively
4.3.2 Approach
Building operation is a complex activity, where building operators and occupants
continuously interact with each other. Typically, the interaction is passive with little
communication between the operator and the occupants. Historically, maximizing
occupant comfort and minimizing energy costs have always been two primary
objectives of intelligent building operation [Finley Jr. M. R. 1991; Flax B. 1991].
The trade-off between meeting occupant preferences for indoor environmental
conditions and reducing energy usage leads to a difficult optimization problem, and
this optimization can be thought of as decision-theoretic: the objective is to
minimize the expected cost of building operation and to maximize the occupants'
expected comfort. The challenges in developing such a balanced control strategy are
threefold. First, we need to identify the preferences of individual occupants in indoor
environments continuously, as preferences change over time and as the occupants
move in the building. The second challenge is to gather information about the
immediate indoor and outdoor environment of the occupants. The third challenge is to
optimize the trade-off between meeting occupants’ preferences and reducing energy
usage.
The proposed decision-theoretic approach described in [Singhvi 2005] optimally
trades off occupants' lighting preferences against energy usage by solving a multi-
criterion optimization problem. We show that, given the utility functions of individual
occupants and the operation cost, the proposed coordinated illumination approach
can efficiently optimize the tradeoff between meeting occupant preferences and energy
usage. Our control strategy integrates individual occupants' preferences and the actual
state of the indoor and outdoor environment through a network of wireless sensors. The
control strategies quantify the trade-off between meeting occupant preferences and
the corresponding energy utilization. While decision-theoretic optimization provides a
powerful, flexible, and principled approach for such systems, the quality of the
resulting solution is completely dependent on the accuracy of the underlying utility
function. To extend the coordinated illumination approach, we have developed a
utility elicitation approach that addresses the needs and requirements of an intelligent
building system. The main contributions of this work are:
• The development of a formal decision theoretic framework for integrating
occupants’ preferences in building operations;
• The development of an efficient coordinated illumination algorithm for
optimizing the tradeoff between meeting occupants’ preferences and
reducing energy consumption;
• The development of a principled utility elicitation technique using minimal
interaction and partial information provided by the occupants; and
• The extension of our approach to optimally exploit external light sources and
optimally exploit spatial and temporal correlation for sensor scheduling.
Here, s is the indoor environment state, defined as the vector of the various parameters
s_i involved in defining the state. Φ(s) is the occupant's utility function, representing
the preference for comfort in state s; φ_i are the sub-utility functions and w_i the
associated weights in the utility function. The operating cost utility function is defined
as Ψ, which decreases monotonically with the energy expended.
A building has multiple occupants with varying preferences and hence with
varying utility functions Φ_1, Φ_2, ..., Φ_m. When considering occupant preferences and
the operating cost, the goal is to trade off Ψ against the aggregate occupant utility
Σ_{i=1}^{m} Φ_i. A common technique of 'scalarization' is then employed to solve this
multi-criterion optimization problem by defining a system utility function U(s) as follows:

U(s) = Σ_{i=1}^{m} Φ_i(s) + γ · Ψ(s)    (2)

s* = argmax_s U(s)
Note that the solution space is exponential in the number of actuatable state
variables, and enumeration of all possible states is impossible. Singhvi et al.
[Singhvi 2005] present an efficient algorithm to solve this exponential
maximization problem by exploiting the zoning principle in lighting design (i.e., not all
lights affect all spaces).
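A minimal sketch of the scalarized objective in Eq. (2) on a toy lighting problem follows; the zone layout, utility shapes, and additive light model are assumptions made for illustration, and the brute-force enumeration shown here is exactly what the zoning-based algorithm of Singhvi et al. avoids on realistic problem sizes.

```python
# Toy scalarized utility maximization over lamp intensity settings.
from itertools import product

LEVELS = range(0, 11)            # each lamp has ten intensity steps plus off
ZONES = {0: (0, 1), 1: (1, 2)}   # zone -> lamps that light it (hypothetical layout)
PREFERRED = {0: 14, 1: 8}        # each zone's occupant prefers this illumination level

def occupant_utility(zone, s):
    illum = sum(s[lamp] for lamp in ZONES[zone])   # simple additive light model
    return -abs(illum - PREFERRED[zone])           # peak utility at the preferred level

def operating_cost_utility(s):
    return -sum(s)                                 # decreases monotonically with energy expended

def system_utility(s, gamma=0.4):                  # gamma as in the experiments reported below
    return sum(occupant_utility(z, s) for z in ZONES) + gamma * operating_cost_utility(s)

# s* = argmax_s U(s), by brute force over the tiny three-lamp state space
best = max(product(LEVELS, repeat=3), key=system_utility)
print(best, system_utility(best))
```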
The success of such a decision support system depends on how well the utility
function Φ represents the preferences of the occupants. The main challenge is to
estimate/elicit the shape of the sub-utility functions φ_i (as defined in Eq. 1) and the
associated weights w_i for each occupant. In Section 0 we propose an approach for a
preference elicitation system that would estimate the utility functions using partial
information from the users.
4.3.3 Results
We implemented the control strategy in a test bed at Carnegie Mellon University. We
created the test bed (shown in Figure 3) to emulate the real situation at a smaller scale
to test our lighting control strategy. Our test bed consists of twelve MICA2 motes and
ten 60 Watt table lamps, arranged on a 146 in. by 30 in. table. The lamps are actuated
by the X10 system, which wirelessly communicates with a single desktop PC, and
uses power lines for controlling the lamps. Each lamp can be actuated to produce ten
different light intensities. The lamps are arranged in a triangular pattern which
corresponds to the zoning concept. Each of the seven triangular zones is affected by
three lamps, and each lamp affects up to three regions. The motes are distributed over
the different zones, and communicate with the base station over an ad hoc wireless
network. We added 5 wall lamps to this test bed that acted as a source of external
light.
We tested various scenarios in the test bed; a brief summary of the detailed results
presented in [Singhvi 2005] is:
• For our setup, at γ = 0.4 (see Eq 2), the system saves about 30% of energy
while allowing minimal loss of occupant utility;
• Coordinated lighting approaches do significantly better than a typical greedy
algorithmic approach of actuating the lamps;
5 Closure
The U.S. infrastructure is a trillion dollar investment, and in spite of the enormous
investments made in these systems and their importance to the US economy, we (as in
government, industry and academia) are not being very good stewards of this
infrastructure. To avoid costly failures and provide a 21st century infrastructure, the
United States and other governments must build their critical infrastructures with a
"nervous system" that collects and feeds data to places in the system that interpret it
and allow better decision making.
In late 2005, the Center for Sensed Critical Infrastructure Research was created at
Carnegie Mellon, building on components of the research activities in the departments
of Electrical and Computer Engineering, Civil and Environmental Engineering,
Engineering and Public Policy, in the College of Engineering, and the departments of
Computer Science and Architecture. CenSCIR will perform research that will clarify
the need for, provide design guidance for, and provide the clear justifications for the
stewards of critical infrastructure systems to provide sensor data-driven awareness of
the usage and condition of their systems and proactive, intelligent decision support
and control of these critical infrastructure systems over their lifetime.
The three projects presented in this paper begin to address some of the research
questions identified in Section 0. The first project related to inspection planning
explores one approach to addressing the question: What information needs to be
measured about a system, and where and when it should be measured? There are
many additional issues that need to be addressed, but the need for formal
representation and reasoning is apparent from this project and stochastic search
processes appear to be feasible for supporting planning for sensor-based inspection.
The second project related to semi-automated data model matching explores the
research question: How does one manage, model, mine and exchange the large
amount of data that will be generated by sensed infrastructure? This project illustrates
how an evolving data modeling and exchange situation, which is the case in long-
lived infrastructure applications, can be provided automated support that makes
effective use of the mapping knowledge that has already been acquired. Over the
infrastructure system life-cycle, such as system will be able to evolve as the data
models evolve. The third project, related to utility maximization based on a sensed
environment and knowledge of user and operator utility functions, addresses the
research question: How does one translate the mass of information collected from the
distributed sensor network into intelligent decision support that actually helps the
operators of these systems make the best possible decision under the circumstances?
The project explores the ways in which such sensor-based lighting control might be
achieved with a dense set of environment sensors and an efficient utility optimization
approach.
We must commit to deploy ICT during the construction phase, and utilize it over
the whole life cycle of these infrastructure systems. This approach should allow us to
have full life-cycle visibility of infrastructure, and enable us to move beyond the
current “crisis mode” management paradigm that is pervasive within public agencies.
While more robust networking and device technologies will need to be created to
make ICT systems that deliver service over the lifetime of physical infrastructure
assets (e.g., on the order of 50 years), this is a worthy goal and one that will have far-
reaching cost, performance, and sustainability implications.
Acknowledgements
The project presented in Section 0 is funded by a grant from the National Science
Foundation, CMS #0121549. NSF’s support is gratefully acknowledged. Any
opinions, findings, conclusions or recommendations presented in this paper are those
of the authors and do not necessarily reflect the views of the National Science
Foundation. The project described in Section 0 is sponsored by the National Institute
for Standards and Technology. The support from, and interactions with, Kent Reed at
NIST is also gratefully acknowledged. Finally, the project presented in Section 0 is
sponsored by Pennsylvania Infrastructure Technology Alliance and PITA’s support is
also gratefully acknowledged.
References
Akin, O. Turkaslan-Bulbul, M.T. Gursel, I. Garrett, Jr. J.H. Akinci, B. Wang H. (2004)
Embedded Commissioning for Building Design. In: Proceedings of European Conference on
Product and Process Modeling in the Building and Construction Industry, 8-10 September,
Istanbul, Turkey.
Akinci, B., Ergen, E., Haas, C., Caldas, C., Song, J., Wood, C.R., Wadephul, J. (2004) “Field
Trials of RFID Technology for Tracking Fabricated Pipe,” FIATECH Smart Chips Report,
FIATECH, Austin, TX, February 25, 2004, http://www.fiatech.org/links.htm#smart.
Amor, A.W., Ge, C.W. (2002) Mapping IFC Versions. In: Proc of the EC-PPM Conference on
eWork and eBusiness in AEC, Portoroz, Slovenia, 9-11 September, pp.373-377.
Ansari, F., Luke, A., Dong, Y., Maher A. (1999) Development of Maturity Protocol for
Construction of NJDOT Concrete Structures, FHWA. Final Project Report, NJ 2001-017.
ASCE Infrastructure Report Card, http://www.asce.org/reportcard/2005. March 9, 2005.
ASHRAE (1980). American Society of Heating Refrigeration and Air Conditioning Engineers,
Atlanta.
Bento, J. (2004) ICT@Brisa. Why does it work?, unpublished presentation made as part of the
2004 International Association for Bridge and Structural Engineering Symposium, Shanghai.
Berringer, F. “Large Oil Spill in Alaska Went Undetected for Days” New York Times, March
16, 2006.
Brooks, R.R.; Ramanathan, P.; Sayeed, A.M. (2003). “Distributed target classification and
tracking in sensor networks,” Proceedings of the IEEE, 91(8):1163-71
Bungey, J.H. and Millard, S.G. (1996) "Testing of concrete in structures." Chapman and Hall.
Cable, J. (1998) Using NDT to Reduce Traffic Delays in Concrete Paving 1998 Transportation
Conference Proceedings.
Cheok, G.S., Lipman, R.R., Witzgall, C., Bernal, J., and Stone, W.C. (2000) “Field
Demonstration of Laser Scanning for Excavation Measurement,” Proceedings of the 17th
International Symposium on Automation in Robotics in Construction, Taipei, Taiwan.
Churchill, D.L., Hamel, M.J., and Townsend, C.P. (2003) “Strain energy harvesting for
wireless sensor networks,” Proceedings of the SPIE - The International Society for Optical
Engineering (SPIE-Int. Soc. Opt. Eng) 5055, pp. 319-27.
Collins, V.A. (2004) Evaluation and Comparison of Commercially Available Maturity
Measurement Systems, unpublished MS Thesis, Carnegie Mellon Department of Civil and
Environmental Engineering, Pittsburgh, PA.
Commission on Engineering and Technical Systems (CETS). Inspection and Other Strategies
for Assuring Quality in Government Construction. National Academy Press (1991).
Demers, Cornelia E.; Gregory, Rita A.; and Upton, Mark N. (2002) “Cost at Element Level”
Journal of Infrastructure Systems Volume 8, Issue 4, pp. 115-121.
Doan AH., Domingos P., Halevy A. (2001) Reconciling schemas of disparate data sources: a
machine-learning approach. In: Proc ACM SIGMOD Conf. pp.509-520
Erol, K., Nau, D., and Hendler, J. (1994) "UMCP: A Sound and Complete Planning Procedure
for Hierarchical Task-Network Planning". In AIPS-94, Chicago, June.
The Eureka CIMsteel Project (2004). CIMsteel Integration Standards. Last accessed Nov 2004.
http://www.cae.civil.leeds.ac.uk/past/cimsteel/cimsteel.htm
Fagin, R., Kolaitis P.G. and Popa, L. (2003) Data Exchange: Getting to the Core. Proceedings
of the 22nd ACM SIGMOD-SIGACT-SIGART symposium on Principles of database
systems, pp90-101, San Diego, California.
Feiner, Y. and B. Rajani. (2002) "Forecasting Variations and Trends in Water-Main Breaks."
ASCE Journal of Infrastructure Systems, Vol. 8, No. 4, pp. 122-131
FIATECH (2004) Automating Equipment Information Exchange. Last accessed Nov 2004.
http://www.fiatech.org/projects/idim/aex.htm.
Finley Jr. M. R., A. K., and R. Nbogni (1991). "Survey of intelligent building concepts." IEEE
Communication Magazine.
Flax B., M. (1991). "Intelligent buildings." IEEE Communication Magazine.
Keeney, R., L., Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value
Trade-offs. New York, Wiley
General Accounting Office (2004) "WATER INFRASTRUCTURE Comprehensive Asset
Management Has Potential to Help Utilities Better Identify Needs and Plan Future
Investments." Report No. GAO-04-461.
Gordon, C., Boukamp, F., Huber, D., Latimer, E., Park, K., and Akinci, B. (2003) “Combining
Reality Capture Technologies for Construction Defect Detection: A Case Study,” in
Proceedings of EuropIA International Conference, pg. 99-108, Istanbul, Turkey.
Grata, J. (2005) "Road salt, hits from trucks likely led to bridge collapse." Pittsburgh Post-
Gazette, December 29, 2005.
International Alliance for Interoperability (2003a) Industry Foundation Classes, Last accessed
Nov 2004. http://www.iai-international.org
International Alliance for Interoperability (2003b) ifcXML Project. Last accessed Nov 2004.
http://www.iai-na.org/.
Lara-Cinisomo, V. (2005) "Water main break stymies Downtown business." Pittsburgh
Business Times, August 17, 2005.
Li, W., Clifton, C. (2000) SemInt: a tool for identifying attribute correspondences in
heterogeneous databases using neural network. Data Knowledge Engineering 33(1):49-84.
Madhavan, J., Bernstein, P.A. and Rahm, E. (2001) Generic Schema Matching with Cupid. In:
Proc the 27th VLDB Conference, Roma, Italy, 2001.
Malakooti, B. (2000). "Ranking and Screening Multiple Criteria Alternatives with Partial
Information and use of Ordinal and Cardinal Strength of Preferences." IEEE Trans. on
Systems, Man, and Cybernetics Part A 30: 355-369.
Mansueti, L. (2004) "Is Our Power Grid More Reliable One Year After the Blackout?"
Conservation Update September-October 2004 Issue, U.S. Department of Energy - Energy
Efficiency and Renewable Energy.
Mitra P., Wiederhold G. and Kersten M. (2000) A graph-oriented model for articulation of
ontology interdependencies. In: Proc Extending Database Technologies, Lecture Notes in
Computer Science, vol. 1777. Springer, Berlin Heidelberg New York, 2000, pp. 86-100
Rahm, E. and Bernstein P. A. (2001) A survey of approaches to automatic schema matching.
The VLDB Journal, 10, 334-350.
Reinhardt, J., B. Akinci, and J. H. Garrett, Jr. (2005) “A Navigational Model for Providing
Customized Representations for Effective and Efficient Interaction with Mobile Computing
Solutions on Construction Sites,” J. of Computing in Civil Engineering, 19(2).
Sacerdoti, E.D. (1977) “A Structure for Plans and Behavior.” American Elsevier, NY, NY.
Shaffer, J., and Siewiorek, D.P., Zhuang, W., Yeh, C.-H., Droegehorn, O., Toh, C.-K., Arabnia,
H.R. (2003) “Locator@CMU a wireless location system for a large scale 802.11b network,”
International Conference on Wireless Networks - ICWN'03, CSREA Press, 666 pp. 61-65.
Singhvi, V., Krause A., Guestrin, C., Garrett, J., Matthews, S., (2005). Intelligent Light Control
using Sensor Networks. SenSys, San Diego.
Sodano, H.A., Park, G., Inman, D.J. (2004). “Estimation of electric charge output for
piezoelectric energy harvesting,” Strain (British Soc. Strain Meas), 40(2): 49-58.
Sohn, H., Dzwonczyk, M., Straser, E.G., Kiremidjian, A.S., Law, K.H. and Meng, T. (1999)
“An Experimental Study of Temperature Effects on Modal Parameters of the Alamosa
Canyon Bridge,” Earthquake Engineering and Structural Dynamics, 28: 879-897.
Sohn, H. Park, G., Wait, J.R., Limback, N.P., Farrar, C.R. (2003) “Wavelet-Based Active
Sensing for Delamination Detection in Composite Structures,” Smart Matls. and Structures,
13(1):153-160
Song, J., Haas, C., Caldas, C., Ergen, E., Akinci, B., Wood, C.R., and Wadephul, J. (2004)
“Field Trials of RFID Technology for Tracking Fabricated Pipe -Phase II.” FIATECH Smart
Chips Report, FIATECH, Austin, TX, June 30, 2004.
Steinmann, R. (2004) International Overview of IFC-Implementation Activities. Last accessed
Nov 2004. http://www.iai.fhm.edu/ImplementationOverview.htm.
Stone, W.C. (2004) Performance Analysis of Next-Generation LADAR for Manufacturing,
Construction, and Mobility, NISTIR 7117, Nat. Inst. of Stds. and Tech., Gaithersburg, MD.
Tanner, N.A., Wait, J.R., Farrar, C.R., and Sohn, H. (2003) “Structural Health Monitoring
using Modular Wireless Sensors,” J. of Intelligent Matls., Sys. and Structures, 14(1), 43-56.
Tseng, Y.C., Kuo, S.P., and Lee, H.W. (2004) “Location Tracking in a Wireless Sensor
Network by Mobile Agents and Its Data Fusion Strategies” Computer J., 47(4): 448-460.
Uzarski, D.R.; Tonyan, T.D.; and Maser, K.R. (1989) “Facility and Component Inspection
Technology Concepts: Potential Use in U.S. Army Maintenance Management.” USCERL
Technical Report M-90/91.
Williams, E, Matthews, H.S., Breton, M., Brady, T., and M. Yao, (2006) Use of a Computer-
Based System to Manage and Measure Energy Consumption in the Home, Proceedings of
2006 IEEE International Symposium on Electronics and the Environment.
Understanding Situated Design Computing and
Constructive Memory: Newton, Mach, Einstein and
Quantum Mechanics
John S. Gero
Key Centre of Design Computing and Cognition, University of Sydney, NSW 2006, Australia
[email protected],au
1 Introduction
Design computing is the area of computing that deals with designing and designs.
Designs and their representations have been the focus of considerable research that
has resulted in a variety of representation models and tools. Designing involves the
development and refinement of requirements and approaches, the synthesis of designs
as well as the emergence of new concepts from what has already been partially de-
signed. In this view, designing is not a subset of problem solving; rather, problem solving is a
subset of designing. This interaction between partial designs and the design process
has been called “reflection” [1]. Reflection is the general term used to describe a des-
ignerly behavior that allows a designer to “see” what they have done differently from what
was intended at the time it was done. Current concepts in design computing make the
modeling of this conception of designing very difficult. Other designerly behavior is
equally difficult to model using current concepts of design computing. For example:
how is it that two designers, when given the same set of requirements, produce quite
different designs? Why is it that a designer, when confronted with the same set of re-
quirements at a later time, does not simply reproduce the previous design for those
requirements? How is it that designers can commence designing before the require-
ments are fully specified? It is claimed that it is precisely these behaviors that distin-
guish designing from problem-solving. Most models of design conflate it with
problem solving [2], [3], [4] and are unable to represent the activities that produce
these behaviors [5]. As a consequence our computational models of designing and the
computational support tools for designing do not adequately match the behavior of
designers and are insufficiently effective. This can be seen in the paucity of tools for
the early stages of designing, where all the critical decisions are taken.
This paucity of tools for the most significant and influential parts of designing is
not due to a lack of ingenuity in constructing such tools, it is due to a lack of knowl-
edge about what such tools should embody. This paper claims that the concepts that
provide the foundations for situated design computing and hence situated design com-
puting itself provide the knowledge for the development of a new class of tools, tools
that have the capacity to support these early stages of designing.
A new approach is needed. Situated design computing is a new paradigm for design
computing that draws its inspiration from situated cognition in cognitive science [6].
It is claimed that it can be used to model designing more successfully than previous
approaches. It has the capacity to form the basis of a model that can represent and
explain much of designerly behavior. In particular it is claimed it can model [7], [8]:
• how a designer can commence designing before all the requirements have
been specified
• how two designers presented with the same specifications produce different
designs
• how the same designer confronted with the same requirements produces a
different design to the previous one, and
• reflection, i.e., how a designer can change their design trajectory during the ac-
tivity of designing.
As a consequence situated design computing warrants further investigation. Situ-
ated design computing is founded on three concepts that are new to design computing:
acquisition of knowledge through interaction, constructive memory and the situation.
The first concept draws its distinction from the source of the knowledge rather than
the techniques used in its acquisition. The second introduces a novel notion of mem-
ory that conceives of memory primarily as a process rather than the current view of
memory as a “thing in a location”. The third introduces the notion of a gestalt view of
the designer that influences what the designer “sees”.
1.2 Interaction
Fig. 1. The same image has different encodings (a) and (b) that depend on the individuals who
created them rather than on any objective knowledge
Index need not be Explicit, It can be Constructed from the Query. Take the
query: find an object with symmetry. There is no need for there to be an index “sym-
metry” in the memory. The system can use its experiences about symmetry to deter-
mine whether it can construct symmetry in objects in the memory. This concept al-
lows for the querying of a memory system with queries it was not designed to answer
at the time the original memories were laid down. This is significant in designing as
there is evidence that designers change the trajectory of their designs during the proc-
ess of designing and introduce new intentions based on what they “see” in their partial
designs, intentions that were not listed at the outset of the process. There are fundamental
issues here that are not addressed by fixed index systems. For a novel query to be
answered, it first needs to be interpreted by the memory system using the experi-
ences it has that might bear on the query.
Index Changed by its Use. If the same query is made multiple times, the response
to the query should become faster, irrespective of whether it is a constructed index or
not; this is a very simple example of how an index is changed by its use. A more pro-
found and useful example of this phenomenon is when a memory is used to construct
another, later, memory: a new index is created that connects these two memories such
that when either is used again the other is associated with it.
Content Changed by its Use. A trivial example is the case above about symmetry:
the index can now become part of the memory. A less trivial example would be the
case where an experience is used in constructing a new memory. The experience is
changed by having a link to the newly constructed memory. The experience is no
longer the same experience it was before it was used to construct the new memory.
That experience can no longer be recalled without its role in the construction of a new
memory being part of it.
Memory Structure Changed by its Use. In the example above, not only does the
content change but also the structure of the memory. The link between the experience
and the new memory changes the structure of the memory system itself such that the
way these memories can be used is changed. Later experiences can change what was
experienced before. The content of a constructive memory system is non-monotonic.
Memories can be Constructed to Fulfill the Need to Have a Memory. Take the
case where a finite element analysis is carried out and both the cost of processing and
the result are passed on to the design team leader. The team leader may query whether
the analysis was cost effective. There is no memory of this but they can construct a
memory in response. Later, if asked whether the analysis was cost effective they can
respond directly through a recall-like process. If that query had not been asked earlier
there would be no memory to respond to the later query.
Memories are a Function of the Interactions Occurring at the Time and Place of
the Need to Have a Memory. Take the example of the cost effectiveness query
above. If, at the same time as that query is being made, another member of the design
team states that his experience with the analysis group is that they always overesti-
mate their costs, this changes the design team leader’s construction of the memory to
take account of some discounting of the cost provided.
One way to conceptualize a constructive memory system is as a global, continu-
ously learning, associative system, where all later memories have the potential to
include and affect all earlier memories while earlier memories affect later memories.
The notion of past memories as fixed entities has to be modified such that all of the
past can only be viewed “through the lens of the present”, and the present is an encap-
sulation of the past. This brings us to the notion of a “situation”.
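As an illustration only (not a published implementation), the toy store below captures three of these properties in a few lines: an index can be constructed at query time, using one memory to construct another links the two, and both the content and the structure of the store therefore change with use. All names and features are hypothetical.

```python
# Toy sketch of a constructive memory store in which querying and use change
# both the content and the structure of the memory.
class ConstructiveMemory:
    def __init__(self):
        self.experiences = {}          # name -> set of features
        self.links = {}                # name -> set of associated names
        self.indexes = {}              # constructed indexes: feature -> set of names

    def add(self, name, features):
        self.experiences[name] = set(features)
        self.links.setdefault(name, set())

    def query(self, feature):
        """Answer a query the store was not necessarily designed for: if no index for
        the feature exists, construct one from the stored experiences, then reuse it."""
        if feature not in self.indexes:
            hits = {n for n, feats in self.experiences.items() if feature in feats}
            self.indexes[feature] = hits           # the index is now part of the memory
        return self.indexes[feature]

    def construct_from(self, source, name, features):
        """A new memory built from an old one links the two, so recalling either
        now also brings up the other: content and structure change with use."""
        self.add(name, features)
        self.links[source].add(name)
        self.links[name].add(source)

mem = ConstructiveMemory()
mem.add("facade_sketch", {"blocks", "symmetry"})
print(mem.query("symmetry"))                       # index constructed at query time
mem.construct_from("facade_sketch", "axis_reading", {"horizontal_axis"})
print(mem.links["facade_sketch"])                  # earlier memory changed by later use
```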
1.4 Situatedness
The third concept that provides one of the foundations of situated design computing is
situatedness. This is the notion that a designer works within a world of their own
making. This world is based on their perception of the world outside and inside them
and their behavior is a response to that world. This conceptualized world is called the
“situation”. Designers do not need to articulate the situation to behave in accordance
with it, just as people do not either. Situations can be extrinsic or intrinsic. Extrinsic
situations are available for observation, while intrinsic situations are emergent proper-
ties of behavior.
A well-known example of a situation is produced when people attempt to solve the
nine-dot problem. Consider three rows of three dots equally spaced in both the hori-
zontal and vertical directions. The problem is how to draw lines through all the dots
under the following three constraints: using only four straight lines with the pen not
leaving the paper. Most people cannot produce a solution. The reason is that they
appear to view the world they are in as if the following situation prevailed: no line can
pass outside the square (the convex hull) produced by the dots. No one told them this
and most people don’t even know that they have been working within this situation,
but their behavior appears as if it is controlled by it.
Situations are the basis for expectations and interpretations and hence play a domi-
nant role in designerly behavior. Situations can be explicit, i.e., known to the designer
and affect their behavior, or implicit, i.e., not known to the designer and still affect the
designer’s behavior.
In order to understand situated design computing all three of interaction, construc-
tive memory and situatedness need to be understood. Of the three constructive mem-
ory is the most contentious. The next sections draw analogies from the development
of the concepts of space in physics that aim to assist in the development of the under-
standing of constructive memory. It might be argued that the development of the
concepts of space in physics in itself is complex and difficult. It is claimed that the
base concepts in physics are sufficiently widespread and well understood that they
provide a foundation for the understanding being sought here.
In 1883 Ernst Mach published Die Mechanik in ihrer Entwicklung [13], in which he
argued against Newton’s absolute space in favour of a completely relative space,
where objects are all relative to each other. Mach claimed that objects did not need an
absolute space to sit in. They could simply be in relation to each other and that pro-
vided the reference that Newton argued was the basis of absolute space. This is
equivalent to taking Newton’s view as presented with the grid as the reference and
removing the grid, leaving only the objects.
This maps well onto the notion of linked or network memory where all things in
the memory system are accessed by associations with other things. In linked computa-
tional memory things have both absolute locations and links but the absolute locations
need not be part of the visible accessing process. Semantic networks [14], [15] are
examples of linked memory systems. Accessing a linked memory system does not
change its contents, nor does it change anything about the next time you access it.
In removing the grid Mach did not remove another of Newton’s foundational con-
cepts, namely that of interactions between objects: what he called gravitational attrac-
tion. In this view objects are linked together by the influence they exert on each other.
The introduction of a new object potentially changes the location of some or all exist-
ing objects in the system. This influence is bi-directional. Here the newly introduced
object’s influence is dependent on the individual masses of the existing objects and
their relative size in relation to it. A new object will have a small influence if its mass
is small in relation to the nearby objects, which will then have a greater influence on it
and vice versa. This applies to all the objects in the system.
Currently there is no memory system equivalent to this concept except aspects of
constructive memory which we will call interacting memory. A thing in the memory
is not only linked to other things in the memory but is influenced by them and influ-
ences them.
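A toy sketch of this idea follows, with arbitrary "positions" and "masses" standing in for whatever representation an interacting memory might actually use: introducing a new item shifts every existing item, and lighter (less entrenched) items shift further. Names and values are invented for illustration.

```python
# Illustrative sketch only: an "interacting memory" in which a newly introduced item
# shifts existing items in proportion to their relative weights, by analogy with
# mutual gravitational influence.
def introduce(memory, new_item):
    """memory: dict name -> {'pos': float, 'mass': float}; new_item also carries a name."""
    for item in memory.values():
        pull = new_item["mass"] / (item["mass"] + new_item["mass"])
        item["pos"] += pull * (new_item["pos"] - item["pos"])   # lighter items shift further
    memory[new_item["name"]] = {"pos": new_item["pos"], "mass": new_item["mass"]}

memory = {
    "beam_design": {"pos": 0.2, "mass": 5.0},
    "truss_design": {"pos": 0.8, "mass": 1.0},
}
introduce(memory, {"name": "cable_design", "pos": 0.5, "mass": 2.0})
print(memory)   # both existing items have moved; the lighter one moved further
```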
In 1927 Werner Heisenberg published Über den anschaulichen Inhalt der quanten-
theoretischen Kinematik und Mechanik [18], one of the cornerstones of quantum
mechanics, which displaced Newtonian mechanics, partly by replacing the Newtonian
concept of atoms and fields and the implied concept of certainty with an emphasis on
subatomic particles and uncertainty. Heisenberg’s uncertainty principle states that it is
not possible to simultaneously determine both the position and momentum of a parti-
cle. In quantum mechanics the location of particles is a function of their viewing. This
is an intriguing concept that previously had been associated with social and behav-
ioral science rather than physics.
The concept of situatedness is analogically related to this concept of not having a
fixed location and momentum, rather the location or momentum is a function of the
observation. A situation is like a particular worldview. A particular worldview affects
the interpretation of the memory. The memory could support all manner of world-
views, which one it supports is only apparent when a situation is used to query or
access it. Before the memory is queried with that situation it does not have any bias to
that situation. The situation of the query biases the memory to produce interpretations
that support that situation.
Let us commence with an example drawn from the behavior of a human designer.
Take the drawings in Fig. 2. Fig. 2(a) shows the drawing produced by the designer at
some point in the design.
Fig. 2. (a) The original drawing, (b) the designer interprets the drawing as having a horizontal
axis; this creates the “axis” situation; (c) the designer now moves the blocks based on the situa-
tion of the “axis”
In looking at this drawing the designer reinterpreted what he had drawn not as a se-
ries of blocks but as a horizontal axis connecting the blocks, Fig. 2(b). This is a new
situation – a new worldview – with this he changes the meanings in his memory of
the locations of the blocks and orients them with respect to the axis, Fig. 2(c). Schon
and Wiggins [20] observed this behavior on numerous occasions in their studies as
did other researchers [21]. This is an example of an extrinsic situation.
Take another example from human behavior. Suppose a designer is shown a pic-
ture of pile of rubble that is clearly that of a collapsed house, Fig. 3. The picture is
labeled: “result of devastation by Cyclone Larry, 2006”. The designer, through their
interaction with the picture, is likely to have a response related to the damage caused
by nature or similar.
However, if they were shown the same picture with the label: “example of safety
issues in hand demolition of houses” the response would be quite different because
the viewer had used a different situation on which to base their interpretation. If the
picture was unlabelled it is unclear what the viewer would think without further
knowledge about the viewer and the viewer’s experiences.
This behavior is similar to the quantum physics behavior of the location or momen-
tum being a function of the observer, not only of the observed. The effect of this con-
cept on memory systems is to change them from static memory to dynamic memory
systems.
4 Discussion
Situated design computing is a paradigmatic change in the way we view computation
in design. Typically design computing has taken the traditional computational stance
of memory being a repository and the primary memory process being that of recall.
This has served design computing well, but at the same time has restricted further
conceptualization of its development. Novel concepts from cognitive science and in
particular situated cognition have opened up new avenues for the development of
design computing.
The three foundational concepts of situated design computing are:
• knowledge through interaction
• constructive memory
• situatedness.
Knowledge through interaction moves design computing from encoding third-
person knowledge during a computational system’s initial design to including first-
person knowledge acquired during interactions continuously through the use and
application of the system. The effect of this is that such a computational system can
adapt its behavior based on its use [22].
Constructive memory provides the foundation for all the activity in situated design
computing. It turns memory into a dynamic process that reconfigures itself based on
the “experiences” it has encountered. The trajectory of the development of our current
understanding of space from physics provides a strong analogy with the development
of our understanding of constructive memory [10].
Acknowledgements
This research is part of a larger project on developing situated design computing and
is supported by a grant from the Australian Research Council, Grant No. DP0559885.
References
1. Schon, D. : The Reflective Practitioner: How Professionals Think in Action. Arena, Alder-
shot. (1995)
2. Pahl, G. and Beitz, W.: Engineering Design: A Systematic Approach (translated).
Springer, Berlin (1999)
3. Coyne, R., Radford, A., Rosenman, M., Balachandran, M. and Gero, J. S.: Knowledge-
Based Design Systems. Addison-Wesley, Reading (1990)
4. Suh, N.: Axiomatic Design. Oxford University Press, Oxford (2001)
5. Lawson, B.: How Designers Think, 4th edn. Architectural Press, London (2005)
6. Clancey, W.: Situated Cognition. Cambridge University Press (1997)
7. Lawson, B.: Acquiring design expertise. In Gero, J. S. and Maher, M. L. (eds), Computa-
tional and Cognitive Models of Creative Design VI. Key Centre of Design Computing and
Cognition, University of Sydney, Sydney (2005) 213-229
8. Constructive Memory Group meetings. Key Centre of Design Computing and Cognition,
University of Sydney, Australia (unpublished notes) (2006)
9. Wegner, P.: Interactive foundations of computing. Theoretical Computer Science (1998)
192: 315-351
10. Gero, J. S.: Constructive memory in design thinking. In G. Goldschmidt and W. Porter
(eds), Design Thinking Research Symposium: Design Representation, MIT, Cambridge,
(1999) I.29-35
11. Bartlett, F. C.: Remembering: A Study in Experimental and Social Psychology. Cambridge
University Press, Cambridge (1932 reprinted in 1977)
12. Newton, I.: Sir Isaac Newton’s Mathematical Principles of Natural Philosophy and His Sys-
tem of the World. Trans. A. Motte and F. Cajori, University of California Press, Berkeley
(1934)
13. Mach, E.: The Science of Mechanics: A Critical and Historical Exposition of its Principles. Open
Court, Chicago (1893)
14. Shapiro, S. C.: A net structure for semantic information storage, deduction and retrieval.
Proc. IJCAI-71, (1971) 512-523
15. Sowa, J. F.,:(ed) Principles of Semantic Networks: Explorations in the Representation of
Knowledge. Morgan Kaufmann Publishers, San Mateo, CA (1991)
16. Einstein, A.: The foundation of the general theory of relativity. In H. A. Lorentz, A. Ein-
stein, H Minkowski, H. Weyl, Principle of Relativity: A Collection of Original Memoirs
on the Special and General Theory of Relativity, Dover, New York, (1923, republished in
1952) 109-164
17. Dewey, J.: The reflex arc concept in psychology. Psychological Review. 3 (1896 reprinted
in 1981) 357-370
18. Heisenberg, W. : Über den anschaulichen Inhalt der quantentheoretischen Kinematik und
Mechanik. Zeitschrift für Physik. 33 (1927) 879-893
19. Suwa, M., Gero, J. S. and Purcell, T.: The roles of sketches in early conceptual design
processes. Proceedings of Twentieth Annual Meeting of the Cognitive Science Society,
Lawrence Erlbaum, Hillsdale, New Jersey (1998) 1043-1048
20. Schon, D. A. and Wiggins, G.: Kinds of seeing and their functions in designing. Design
Studies (1992) 13, 135-156.
21. Suwa, M., Gero, J. S. and Purcell, T.: Unexpected discoveries: How designers discover
hidden features in sketches. In Gero, J. S. and Tversky, B. (eds), Visual and Spatial Rea-
soning in Design. Key Centre of Design Computing and Cognition, University of Sydney,
Sydney, Australia (1999) 145-162
22. Gero, J. S.: Design tools as situated agents that adapt to their use. In W. Dokonal and U.
Hirschberg (eds), eCAADe21, eCAADe. Graz University of Technology (2003) 177-180
23. Holland, J.: Emergence: From Chaos to Order. Perseus Books, Cambridge, MA (1999)
24. Stiny G.: Emergence and continuity in shape grammars. In U Flemming and S Van Wyk
(eds). CAAD Futures'93. Elsevier Science (1993) 37-54
25. Maher, M. L. and Gero, J. S.: Agent models of 3D virtual worlds. ACADIA 2002: Thresh-
olds, California State Polytechnic University, Pomona, (2002) 127-138
26. Smith, G., Maher, M. L. and Gero, J. S.: Towards designing in adaptive worlds. Computer-
Aided Design and Applications (2004) 1(1-4): 701-708
27. Smith, G.J., Maher, M.L., and Gero, J.S.: Designing 3D virtual worlds as a society of
agents. In M-L. Chiu, J-Y. Tsou, T. Kvan, M. Morozumi and T-S. Jeng (eds), Digital De-
sign: Research and Practice, Kluwer (2003) 105-114.
28. Saunders, R. and Gero, J. S.: Situated design simulations using curious agents. AIEDAM
(2004) 18 (2): 153-161
29. Gero, J. S. and Kannengiesser, U.: Modelling expertise of temporary design teams. Journal
of Design Research (2004) 4(3): http://jdr.tudelft.nl/.
30. Peng, W. and Gero, J. S.: Concept formation in a design optimization tool. DDSS2006
(2006) (to appear)
31. Sosa, R. and Gero, J. S.: Design and change: A model of situated creativity. In Bento, C.,
Cardosa, A. and Gero J. S. (eds) Approaches to Creativity in Artificial Intelligence and
Cognitive Science, IJCAI03, Acapulco (2003) 25-34
32. Gero, J. S. and Reffat, R.: Multiple representations as a platform for situated learning sys-
tems in design. Knowledge-Based Systems (2001) 14(7): 337-351
33. Gero, J. S. and Kannengiesser, U.: The situated Function-Behaviour-Structure framework.
Design Studies (2004) 25(4): 373-391
34. Gero, J. S. and Kannengiesser, U.: A Function-Behaviour-Structure ontology of processes.
In J. S. Gero (ed), Design Computing and Cognition'06, Springer (2006) 407-422
35. Kannengiesser, U. and Gero, J. S.: Agent-based interoperability without product model
standards. Computer-Aided Civil and Infrastructure Engineering (2006) (to appear)
Welfare Economics Applied to Design Engineering
Donald E. Grierson
1 Introduction
One of the difficulties in engineering design is that there are generally several
conflicting criteria, which forces the designer to look for good compromise designs by
performing trade-off studies between them. As the conflicting criteria are often non-
commensurable and their relative importance is generally not easy to establish, this
suggests the use of non-dominated optimization to identify a set of designs that are
equal-rank optimal in the sense that no design in the set is dominated by any other
feasible design for all criteria. This approach is referred to as ‘Pareto’ optimization
and has been extensively applied in the literature concerned with multi-criteria
engineering design (e.g., Osyczka 1984, Koski 1994, Grierson & Khajehpour 2002,
Cheng 2002). The number of Pareto-optimal designs so found can still be quite large,
however, and it is yet necessary to select the best compromise design(s) from among
them.
Koski (1994) briefly reviews several methods for searching among Pareto optima
to identify one or more good compromise designs. The final selection generally
depends on the designer’s personal preferences. This study employs a Pareto trade-off
analysis technique adapted from the theory of social welfare economics to identify
one or more competitive general equilibrium states of the conflicting criteria that
represent good compromise designs; i.e., designs that represent a Pareto-optimal
compromise between designer preferences for the various criteria. The trade-off
strategy is initially developed for the two-criteria problem so that the concepts can be
given a simple geometric interpretation. A flexural plate design involving conflicting
weight and deflection criteria serves to illustrate the main ideas. Finally, a multi-
storey building design involving conflicting capital cost, operating cost and income
revenue criteria illustrates the extension of the trade-off strategy to the three-criteria
design problem.
2 Welfare Economics
In the theory of welfare economics involving multiple goods, utility functions are
used to describe consumer preferences for different bundles of the goods. For an
economy involving two goods, x1 and x2, a utility function assigns a number to every
possible consumption bundle (x1,x2) such that more-preferred bundles get assigned
larger numbers than less-preferred bundles. The utility assignment is ‘ordinal’ in that
it serves only to rank the different consumption bundles, while the size of the utility
difference between any two bundles is not meaningful.
In the space of the two goods x1 and x2, a certain consumption bundle (x1,x2) lies on
the boundary of the set of all bundles that the consumer perceives as being preferred
to it. This implies that the consumer is indifferent to all bundles that lie on the set
boundary itself, which is called an indifference curve. Utility functions are used to
define (label) indifference curves such that those which are associated with greater
preferences get assigned higher utility numbers. They can take on a variety of forms
(a specific utility function form is considered later in the paper).
For any incremental changes dx1 and dx2 of the two goods along an indifference
curve defined by utility function u(x1,x2), there is no change in the value of the utility
function. Mathematically, this may be expressed as (Varian 1992),
\[ du = \frac{\partial u(x_1,x_2)}{\partial x_1}\,dx_1 + \frac{\partial u(x_1,x_2)}{\partial x_2}\,dx_2 = 0 \qquad (1) \]
Equation (1) may be reorganized to find the slope of the indifference curve at point
(x1,x2) as,
\[ \frac{dx_2}{dx_1} = -\,\frac{\partial u(x_1,x_2)/\partial x_1}{\partial u(x_1,x_2)/\partial x_2} \qquad (2) \]
which is known as the marginal rate of substitution (MRS) that measures the rate at
which the consumer is willing to substitute good x2 for good x1. The negative sign
indicates that if the amount of good x1 increases then the amount of good x2
decreases in order to keep the same level of utility, and vice versa.
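A small symbolic check of Eqs. (1) and (2) may help; the Cobb-Douglas utility used here is chosen purely for illustration (the utility form actually used in the paper appears later as Eq. (8)).

```python
# Worked check of Eqs. (1)-(2) for one illustrative utility function.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)
u = x1**sp.Rational(2, 5) * x2**sp.Rational(3, 5)      # example utility, not the paper's

# Slope of the indifference curve, Eq. (2): dx2/dx1 = -(du/dx1)/(du/dx2)
mrs = -sp.diff(u, x1) / sp.diff(u, x2)
print(sp.simplify(mrs))            # -> -2*x2/(3*x1): more x1 requires giving up some x2

# Total differential of Eq. (1) evaluated at the bundle (1, 1): it vanishes
# exactly when dx2/dx1 equals the marginal rate of substitution (-2/3 here).
du = sp.diff(u, x1) * sp.Symbol("dx1") + sp.diff(u, x2) * sp.Symbol("dx2")
print(sp.simplify(du.subs({x1: 1, x2: 1})))
```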
Consider now a pure exchange economy in which two consumers A and B are seeking
to achieve an optimal trade-off between goods x1 and x2 (Boadway & Bruce 1984).
The total supply of good x1 is x1* , while that for good x2 is x2*. Suppose that
consumer A’s initial endowment consists of the entire supply of good x1, as indicated
by the distance between the origin 0A and point x1* along the horizontal axis in Figure
1. Her initial utility level corresponds to the indifference curve labelled by the utility
value uA0. If consumer A is offered a relative price of good x1 in terms of good x2 as
given by the (absolute) value of the slope of the terms-of-trade line (TLA) passing
through her endowment point x1*, she will choose to trade x1*-x1A units of good x1 in
exchange for x2A units of good x2 and, thereby, achieve increased utility level uA1 (note
that utility increases with distance from the origin 0A).
x2
uA0 uA1
good x2
x2A E
0A
x1A x1* x1
good x1
Fig. 1. Two-Good Exchange Economy (Boadway & Bruce 1984)
By offering consumer A different relative prices of good x1 in terms of good x2, her
offer curve (OCA) can be traced out by rotating the terms-of-trade line through her
endowment point and drawing the locus of equilibrium points E chosen. The highest
indifference curve (utility) available to consumer A at each relative price is tangent
to the terms-of-trade line at its intersection with the offer curve. Offer curves indicate
the willingness of consumers to exchange a certain amount of one good for a given
amount of another good at any relative price. They can take on a variety of forms (a
specific offer curve is considered later in the paper).
We can draw a similar diagram for consumer B by supposing that his initial
endowment consists of the entire supply of good x2. Upon doing that, the competitive
equilibrium of the two-consumer and two-good exchange economy can be
analytically examined by constructing the Edgeworth box1 diagram in Figure 2, the
horizontal and vertical dimensions for which are equal to the total supplies x1* and x2*
of goods x1 and x2, respectively. The origins for consumers A and B are 0A and 0B,
respectively. Their initial endowment points A(x1*, 0) and B(0, x2*) are both located
at the lower right-hand corner of the box (note that consumer B’s axes are inverted
since they are drawn with respect to origin 0B).
Of special interest is the contract curve2, which is the locus of all allocations of the
two goods such that the indifference curves of consumer A are tangent to those of
consumer B. That is, the marginal rate of substitution between goods x1 and x2 for
consumer A is equal to that for consumer B at each point on the contract curve, but
not at any other point off the curve. This suggests the possibility for mutually
beneficial trade.
The initial indifference curves uA0 and uB0 shown in Figure 2 form a lens-shaped
area within which lie points that are Pareto superior to the initial endowment point
and which can be reached by consumers A and B through trading goods x1 and x2
between themselves. Once they have traded to a point on the contract curve no further
Pareto improvements are possible, since then one consumer can gain utility only at
the expense of the other. That is, any point on the contract curve segment FG is a
Pareto-optimal allocation of goods x1 and x2 between consumers A and B. But some
points are better than others depending on the consumer; namely, all points from F up
to almost E are unacceptable to consumer A because they do not lie on her offer curve
and have smaller utility than desired, while the same situation applies for consumer B
for all points from G up to almost E. It is only at the intersection point E of their offer
curves OCA and OCB that consumers A and B are mutually satisfied with their highest
attainable utilities uA1 and uB1, respectively. Point E is a competitive general
equilibrium Pareto-optimal allocation of goods x1 and x2.
That point E lies on the contract curve follows from the fact that the two offer curves at
that point have the same marginal rate of substitution. In other words, as indicated in
Figure 2, there exists a common terms-of-trade line TLA =TLB that affords consumers
A and B the opportunity to trade from their initial endowment to point E. This
opportunity to directly proceed to a Pareto-optimal competitive general equilibrium
state is exploited in the following concerning multi-criteria design engineering.
1 Named in honor of English economist F. Y. Edgeworth (1845-1926), who was among the
first to use this analytical tool.
2 Each point on the contract curve is obtained by maximizing the utility of one consumer while
holding that for the other consumer fixed: e.g., point G in Figure 2 is found by maximizing
uA(x1A,x2A) subject to uB(x1B,x2B)=uB0, x1A+x1B=x1* and x2A+x2B=x2*.
Fig. 2. Welfare Economics Edgeworth Box (Boadway and Bruce 1984)
The analysis model for the plate is defined by the mesh of 36 finite elements
shown in Figure 4(a). The plate thicknesses of the six zones indicated in the design
model for the plate shown in Figure 4(b) are taken as the design variables. The (von
Mises) stress σi (i =1, 2,…, 36) for each finite element is constrained to be less than or
at most equal to 140 MPa, while the thickness tj (j =1, 2,…, 6) for each plate zone is
constrained to be in the range of 2-40 mm.
The two-criteria design optimization problem statement is:
\[
\begin{aligned}
\text{Minimize:}\quad & F(\mathbf{t}) = [\,W(\mathbf{t}),\ \Delta(\mathbf{t})\,] \\
\text{Subject to:}\quad & \sigma_i \le 140 \quad (i = 1,2,\ldots,36) \\
& 2 \le t_j \le 40 \quad (j = 1,2,\ldots,6)
\end{aligned}
\qquad (3)
\]
Fig. 4. Quarter-Plate (a) Analysis Model, (b) Design Model (Koski 1994)
Design point   t1 (mm)   t2 (mm)   t3 (mm)   t4 (mm)   t5 (mm)   t6 (mm)   W (kg)   Δ (mm)
1              20.6      19.7      18.4      16.4      13.8      8.6       39.4     2.73
2              26.1      20.8      18.4      16.4      13.8      8.6       40.0     2.50
3              30.2      26.1      20.6      16.4      13.8      8.6       42.4     2.00
4              31.0      28.9      24.7      19.4      14.1      8.6       46.8     1.50
5              37.3      34.3      26.8      22.1      16.3      9.8       53.3     1.00
6              40.0      37.1      30.2      24.0      18.3      10.8      58.8     0.75
7              40.0      40.0      36.4      27.8      21.0      12.8      67.6     0.50
8              40.0      40.0      40.0      32.6      24.6      14.4      75.6     0.375
9              40.0      40.0      40.0      40.0      33.5      20.5      90.8     0.25
10             40.0      40.0      40.0      40.0      40.0      40.0      112.3    0.1746
Fig. 5. Pareto-Optimal Flexural Plate Designs (Koski 1994): Pareto curve of deflection Δ (mm) versus weight W (kg)
The ten Pareto-optimal designs define the Pareto curve in Figure 5; in fact, any
point along this curve corresponds to a Pareto design. Shown are sketches of the three
Pareto designs corresponding to Wmin, point 5 and Δmin. It remains to select a good-
quality compromise design from among the set of Pareto designs in accordance with
the preferences of the design team.
A formal trade-off strategy based on the welfare economics analysis3 presented earlier
is applied in the following to identify a compromise plate design for which designer
preferences concerning the conflicting W and Δ criteria are Pareto optimal (Grierson
2006).
To begin, normalize the data for the W and Δ criteria in the last two columns of
Table 1 to be as given by the x1 and x2 values in the fourth and fifth columns of Table
2. Note that the largest value of each of the normalized criteria x1 and x2 is unity (i.e.,
x1*= x2*=1.0). Then consider two designers A and B who are seeking between
themselves to achieve an optimal trade-off of the two criteria x1 and x2 for the plate
design. Suppose that designer A’s initial endowment is the largest value x1*=1.0 of
criterion x1, while that for designer B is the largest value x2*=1.0 of criterion x2.
Similar to that in Figure 2, the competitive equilibrium of the two-designer and two-
criteria trade-off exercise can be analytically examined by constructing the
normalized Edgeworth box diagram in Figure 6, the horizontal and vertical
dimensions for which are both equal to unity.
In Figure 6, the origins for designers A and B are 0A and 0B, respectively, and their
initial endowment points A(1,0) and B(0, 1) are both located at the lower right-hand
corner of the box. Designer A’s offer curve OCA is a plot of the data points (x1,x2) in
3 Here: consumers ≡ designers; goods ≡ criteria; x1 ≡ W-criterion; x2 ≡ Δ-criterion.
the fourth and fifth columns of Table 2 (i.e., a normalized plot of the Pareto curve in
Figure 5), while designer B’s offer curve OCB is a plot of the data points (1-x1, 1-x2)
in the last two columns of Table 2.
Fig. 6. Normalized Edgeworth box for the two-criteria plate design: offer curves OCA and OCB intersect at equilibrium points E1 and E2; axes are criterion x1 (W) and criterion x2 (Δ)
4 Note that in Table 2 and Eqs. (4)-(8) the coordinates x1 and x2 are measured from the origin
point 0A in Figure 6; i.e., x1 = x1A and x2 = x2A, and therefore (1-x1) = x1B and (1-x2) = x2B.
Equation (6) is solved to find the two roots x1 = 0.367 and x1= 0.633 (MATLAB
2005), and then the corresponding roots x2 = 0.8267 and x2 = 0.1733 are found
through Eq.(4). That is, as shown in Figure 6, the equilibrium points are E1(0.367,
0.8267) and E2(0.633, 0.1733).
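The same equilibrium points can be located numerically straight from the Table 1 data; the sketch below interpolates the normalized Pareto points linearly to form OCA, mirrors it to form OCB, and brackets the two intersections. Because linear interpolation of ten points only approximates the fitted offer curve, the roots land near, rather than exactly at, E1 and E2.

```python
# Sketch only: locate the offer-curve intersections numerically from the Table 1 data,
# instead of solving Eq. (6) analytically as the paper does.
import numpy as np
from scipy.optimize import brentq

W = np.array([39.4, 40.0, 42.4, 46.8, 53.3, 58.8, 67.6, 75.6, 90.8, 112.3])
D = np.array([2.73, 2.50, 2.00, 1.50, 1.00, 0.75, 0.50, 0.375, 0.25, 0.1746])

x1, x2 = W / W.max(), D / D.max()        # normalized criteria (largest value = 1)
oc_a = lambda t: np.interp(t, x1, x2)    # designer A's offer curve OCA (piecewise linear)
oc_b = lambda t: 1.0 - oc_a(1.0 - t)     # designer B's offer curve OCB (mirror image)

gap = lambda t: oc_a(t) - oc_b(t)
e1 = brentq(gap, x1.min(), 0.5)          # intersection in the left half of the box
e2 = brentq(gap, 0.5, 1.0 - x1.min())    # intersection in the right half
print((round(e1, 3), round(float(oc_a(e1)), 3)),
      (round(e2, 3), round(float(oc_a(e2)), 3)))
```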
where the exponent c is a function of the point at which the utility is being measured.
Upon observing in Figure 6 that for both designers the marginal rate of substitution at
any point (x1,x2) is MRS= -x2/(1-x1), it is found from Eqs. (2) and (7) that the utility
functions for designers A and B can be expressed as,
\[ u_A(x_1,x_2) = x_1^{\,x_1}\, x_2^{\,1-x_1}; \qquad u_B(x_1,x_2) = (1-x_1)^{\,x_2}\,(1-x_2)^{\,1-x_2} \qquad (8a,b) \]
The utility levels uA and uB indicated in Figure 6 are found by evaluating Eqs.(8)
for the two sets of (x1,x2) coordinates (0.367, 0.8267) and (0.633, 0.1733)
corresponding to equilibrium points E1 and E2, respectively. As expected, utility uA is
greater at E1 than at E2 (i.e., 0.6136 > 0.3935) because the plate weight W there is less
(i.e., 41.21 kg < 71.09 kg ). Conversely, utility uB is greater at E2 than at E1 (i.e.,
0.7182 > 0.5057) because the plate deflection Δ there is less (i.e., 0.473 mm < 2.26
mm). Presuming that designer A is the advocate for the W-criterion, she will opt for
the plate design at point E1 because it provides her greatest utility uA = 0.6136.
However, as the advocate for the Δ-criterion, designer B will alternatively opt for the
plate design at point E2 because it provides his greatest utility uB = 0.7182. This poses
a dilemma, which may be overcome if the two designers agree to act together as a
team that simply opts for the design having the maximum utility level umax from
among all four utility levels associated with the two equilibrium points. That is, they
would select the plate design at point E2 having weight W = 71.09 kg, deflection Δ =
0.473 mm, and utility umax = 0.7182.
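A quick numerical check of Eqs. (8a,b) at the two equilibrium points reproduces the utility levels quoted above.

```python
# Evaluate Eqs. (8a,b) at the equilibrium points E1 and E2 reported in the text.
def u_a(x1, x2):
    return x1**x1 * x2**(1 - x1)

def u_b(x1, x2):
    return (1 - x1)**x2 * (1 - x2)**(1 - x2)

points = {"E1": (0.367, 0.8267), "E2": (0.633, 0.1733)}
for name, (x1, x2) in points.items():
    print(name, round(u_a(x1, x2), 4), round(u_b(x1, x2), 4))
# E1: 0.6136, 0.5057 ; E2: 0.3935, 0.7182 -> the team's maximum utility (0.7182) is at E2
```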
Consider the design of a multi-storey building, such as that shown in Figure 7, having
the following constraints and conditions: tax rate = 2% ; annual cost rate = 2% ;
capital interest rate = 10% ; inflation rate = 3% ; geographical latitude = 40°N ; angle
with East = 0° ; land unit cost = $1000/m2 ; min-max annual lease rate = $100-
$360/m2 ; steel, concrete, reinforcement, forming, windows, walls, finishing,
electrical, mechanical, and elevator unit costs = local values ; building colour = dark ;
maximum height = 300m ; minimum aspect ratio = 0.5 ; maximum slenderness ratio =
9 ; maximum footprint = 70m x 70m ; core/footprint area ratio = 20% ; minimum
lease floor area = 60,000m2 ; minimum core-to-perimeter distance = 7m ; floor-to-
ceiling height = 3m ; energy cost = $140/MWhr ; desired inside temperature = 22°C ;
outside min-max temperature = -20°C to 31°C ; cold-hot daily temperature range = 10°C
; desired inside relative humidity = 0.5% ; cold-hot daily outside relative humidity =
50% -80%; dead load = 1.45 kN/m2 ; gravity live load = 2.80 kN/m2 ; wind pressure =
0.48kPa ; clear sky percentage = 75% ; and rules of good design practice to ensure
architectural and structural layouts are feasible and practical.
The design of the building is governed by a large variety of primary and secondary
variables. The former include ten different structural types, two different bracing
types, four different floor types for each of concrete and steel structures, four
different window types, sixteen different window ratios, four different cladding types,
from two to five times more column bays on the perimeter of framed tube structures
than on the interior of the building, up to eight different core dimensions in each of
the length and width directions for the building, and a large number of different
regular-orthogonal floor plans having from three to ten column bays with span
distances of 4.5 to 12m in the length and width directions for the building.
It is required to design the building for the three conflicting criteria of minimum
initial capital cost (CC-criterion), minimum annual operating cost (OC-criterion) and
maximum annual income revenue (IR-criterion). The initial capital cost consists of the
cost of land, structure, cladding, windows, HVAC system, elevators, lighting, and
finishing. The annual operating cost consists of maintenance and upkeep costs, the
cost of energy consumed by the HVAC, elevator and lighting systems, and annual
property taxes. The annual income revenue accounts for the impact that flexibility of
floor space usage and occupant comfort level has on lease income.
Full details of the constraints, variables and objective criteria discussed in the
foregoing may be found in Khajehpour (2001) and Grierson & Khajehpour (2002).
The three-criteria building design optimization problem may be generally
stated as:
Minimize: { Capital Cost ; Operating Cost ; -Income Revenue } (9a)
Subject to: { Specified Constraints } (9b)
The Pareto surface in Figure 8 is defined by the 139 design points in Table 3; e.g.,
see the point corresponding to the building in Figure 7. In fact, any point on this
surface corresponds to a Pareto design. It remains to select a good-quality
compromise design from among all possible Pareto designs in accordance with the
preferences of the design team.
To begin, normalize the data in Table 3 using the maximum CC, OC and IR values
noted in the foregoing (i.e., such that the largest value of each of the three normalized
criteria is unity). Assign each of three designers A, B or C an initial endowment equal
to the largest value of the normalized CC, OC or IR criterion, respectively. As an
extension to that in Figure 6, the competitive general equilibrium of the three-designer
and three-criteria trade-off exercise can be analytically examined by constructing the
normalized Edgeworth-Grierson cube diagram in Figure 9.
In Figure 9, the origins for designers A, B and C are 0A, 0B and 0C, respectively, and
their initial endowment points A(1,0,0), B(0,1,0) and C(0,0,1) are all located at the lower
right-hand corner of the back face of the cube (see black dot). Designer A’s offer surface
OSA is plotted from origin 0A and is the same as the surface plot in Figure 8. Designer
B’s offer surface OSB is a plot of the same data points but from origin 0B, while designer
C’s offer surface OSC is also a plot of the same data points but from origin 0C.
three offer surfaces OSA, OSB and OSC, the coordinates for which are found as
described in the following. Upon applying curve-fitting/equation-discovery software
to the normalized data points derived from Table 3, the surface they form is found to
be accurately represented (r2 = 0.981) by the function (TableCurve2D&3D 2005),
\[ x_3 = a + \frac{b}{x_1} + \frac{c}{x_2} + \frac{d}{x_1^2} + \frac{e}{x_2^2} + \frac{f}{x_1 x_2} + \frac{g}{x_1^3} + \frac{h}{x_2^3} + \frac{i}{x_1 x_2^2} + \frac{j}{x_1^2 x_2} \qquad (10) \]
where the constant values are a = - 4654, b = 1505, c = 12168, d = - 2654, e = -
13086, f = 2349, g = 1634, h = 3925, i = 1073 and j = - 2261, and the parameters x1,
x2 and x3 correspond to the CC, OC and IR axes, respectively. The offer surface for
each of the three designers is defined by plotting Eq.(10) from each of the three
origins. MATLAB software (2005) is applied to plot Eq. (10) from the three origins
0A, 0B and 0C, in turn, to form a superimposed image of the three offer surfaces OSA,
OSB and OSC that is exactly as shown in Figure 9. The two competitive general
equilibrium points E1 and E2 where the three curves simultaneously intersect are
readily identified, and the data cursor command finds their coordinates to be as given
in Table 4 for all three origins.
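For readers wanting to reproduce this step, the surface of Eq. (10) can be fitted by ordinary linear least squares, since it is linear in the constants a through j; the sketch below uses a handful of hypothetical normalized (x1, x2, x3) triples because Table 3 is not reproduced here, and the paper itself used TableCurve2D&3D.

```python
# Sketch only: fit the rational basis of Eq. (10) by linear least squares.
# The points below are hypothetical stand-ins for the normalized Table 3 data.
import numpy as np

pts = np.array([
    [0.90, 0.95, 0.85], [0.95, 0.93, 0.60], [0.85, 0.97, 0.70], [0.92, 0.94, 0.75],
    [0.88, 0.96, 0.80], [0.97, 0.92, 0.55], [0.93, 0.95, 0.72], [0.87, 0.98, 0.78],
    [0.91, 0.93, 0.68], [0.96, 0.94, 0.58], [0.89, 0.94, 0.77], [0.94, 0.96, 0.65],
])
x1, x2, x3 = pts.T

def basis(x1, x2):
    """The ten terms of Eq. (10): 1, 1/x1, 1/x2, 1/x1^2, 1/x2^2, 1/(x1*x2), ..."""
    return np.column_stack([
        np.ones_like(x1), 1/x1, 1/x2, 1/x1**2, 1/x2**2,
        1/(x1*x2), 1/x1**3, 1/x2**3, 1/(x1*x2**2), 1/(x1**2*x2),
    ])

coeffs, *_ = np.linalg.lstsq(basis(x1, x2), x3, rcond=None)   # a, b, c, ..., j
print(np.round(coeffs, 1))
print(np.round(basis(x1, x2) @ coeffs - x3, 3))               # residuals of the fit
```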
where the exponents e and f are each a function of the point at which the utility is
being measured.
Point   Origin   x1       x2       x3
E1      0A       0.9173   0.9668   0.8794
        0B       0.0827   0.0332   0.8794
        0C       0.0827   0.9668   0.1206
E2      0A       0.9671   0.9450   0.5112
        0B       0.0329   0.0550   0.5112
        0C       0.0329   0.9450   0.4888
There is no change in the value of the utility function for any incremental changes
of the three criteria along an indifference surface defined by Eq.(11), i.e.,
\[ du = \frac{\partial u}{\partial x_1}\,dx_1 + \frac{\partial u}{\partial x_2}\,dx_2 + \frac{\partial u}{\partial x_3}\,dx_3 = 0 \qquad (12) \]
from which the expressions for marginal rates of substitution are readily derived as,
\[ \frac{dx_2}{dx_1} = -\frac{\partial u/\partial x_1}{\partial u/\partial x_2} - \frac{(\partial u/\partial x_3)\,dx_3}{(\partial u/\partial x_2)\,dx_1} \qquad (13a) \]
\[ \frac{dx_3}{dx_1} = -\frac{\partial u/\partial x_1}{\partial u/\partial x_3} - \frac{(\partial u/\partial x_2)\,dx_2}{(\partial u/\partial x_3)\,dx_1} \qquad (13b) \]
\[ \frac{dx_2}{dx_3} = -\frac{\partial u/\partial x_3}{\partial u/\partial x_2} - \frac{(\partial u/\partial x_1)\,dx_1}{(\partial u/\partial x_2)\,dx_3} \qquad (13c) \]
Setting one-at-a-time the incremental changes dx1, dx2 and dx3 equal to zero in the
second term of Eqs.(13), the values of the exponents in the utility function Eq.(11)
associated with each of the three origins for designers A, B and C are found to be as
given in Table 5.
The utility levels uA, uB and uC indicated in Figure 9 at equilibrium points E1 and E2
are found by evaluating Eqs.(11) for the six sets of coordinates (x1,x2,x3) in Table 4
and the exponents in Table 5. If acting alone, designer A will opt for the building
design at point E2 because it provides her greatest utility uA = 0.9469 > 0.9180.
However, designer B will alternatively opt for the building design at point E1 because
it provides his greatest utility uB = 0.2603 > 0.1265. Similarly, designer C will also
opt for the building design at point E1 because it provides her greatest utility uC =
0.2603 > 0.1265. The foregoing dilemma is overcome when the three designers agree
to act together as a team that, for example, opts for the design having the maximum
utility level from among all six utility levels associated with the two equilibrium
points. That is, select the building design at point E2. Perhaps more reasonably,
however, the design team will opt for the design for which two of the three utility
types is greatest. In which case, select the building design at point E1.
Table 5.
Origin   e                 f                 1-e-f
0A       x1/(2-x1)         (1-x1)/(2-x1)     (1-x1)/(2-x1)
0B       (x2-1)/(x2-2)     -x2/(x2-2)        (x2-1)/(x2-2)
0C       (1-x3)/(2-x3)     (1-x3)/(2-x3)     x3/(2-x3)
6 Concluding Remarks
For a given multi-criteria design problem, this study demonstrates that only a very
few of the theoretically infinite number of designs forming the Pareto front represent
a mutually agreeable trade-off between the competing criteria. This finding, which
depends primarily on the shape of the Pareto front, is under ongoing investigation. At
present, the research is in its early stages and prompts fewer conclusions than it does
questions, some of which are as follows. Does the form of the utility function have an
influence on the results? When there are multiple competitive general equilibrium
points, what is the veracity of selecting the design to be that at the particular
equilibrium point having the maximum utility level from among all utility levels
associated with all equilibrium points? What is the veracity of selecting the design at
the equilibrium point for which the number of the different utility types is maximum?
Can the methodology be applied to design problems involving four or more
conflicting criteria? These and other lines of enquiry will be pursued by the ongoing
research program.
Acknowledgments
This study is supported by the Natural Sciences and Engineering Research Council of
Canada. For insights into social welfare economics thanks are due to Kathleen
Rodenburg, Department of Economics, University of Guelph, Canada. For help in
preparing the paper text and the engineering example thanks are due to Yuxin Liu,
Joel Martinez and Kevin Xu, Faculty of Engineering, University of Waterloo, Canada.
References
Boadway, R., and Bruce, N. (1984). Welfare Economics. Basil Blackwell, Chapter 3, 61-67.
Cheng, F.Y. (2002). “Multi-Objective Optimum Design of Seismic-Resistant Structures”.
Recent Advances in Optimal Structural Design, Edited by S.A. Burns, ASCE-SEI, NY,
Chapter 9, 241-255.
Grierson, D.E. (2006). “Pareto Analysis of Pareto Design.” Proceedings of Joint International
Conference on Computing and Decision Making in Civil and Building Engineering, Edited
by H. Rivard, E. Miresco and M. Cheung, June 14-16.
Grierson, D.E., and Khajehpour, S. (2002). “Method for Conceptual Design Applied to Office
Buildings.” J. of Computing in Civil Engineering, ASCE, NY, 16 (2), 83-103.
Khajehpour, S. (2001) Optimal Conceptual Design of High-Rise Office Buildings. PhD Thesis,
Civil Engineering, University of Waterloo, Canada, pp 191.
Koski, J. (1994). “Multicriterion Structural Optimization.” Advances in Design Optimization,
Edited by H. Adeli, Chapman and Hall, NY, Chapter 6, 194-224.
Osyczka, A. (1984). Multicriterion Optimization in Engineering. Ellis Horwood, Chichester.
Varian, H.R. (1992). Microeconomic Analysis. Third Edition, W.W. Norton & Co., NY.
MATLAB, Version 7.0 (2005). Automated Equation Solver. The MathWorks, Inc.
TableCurve 2D & 3D, Version 5.01 (2005). Automated Curve-Fitting and Equation Discovery. Systat Software, Inc., CA.
A Model for Data Fusion in Civil Engineering
Carl Haas
Professor,
Canada Research Chair in Sustainable Infrastructure, and
Director of Centre for Pavement and Transportation Technology
Department of Civil Engineering
University of Waterloo
200 University Avenue West
Waterloo, Ontario, Canada N2L 3G1
Phone: 519-888-4567 x5492
[email protected]
1 Introduction
Data fusion may be defined as the process of combining data or information to esti-
mate or predict entity states [16]. In civil engineering the entity state may be the struc-
tural health of a bridge, the productivity of a construction project, or the traffic flow in
a section of a traffic network. State may be multidimensional. An element of a system
or the whole system may be the entity. Need and application typically drive further
definition.
To be useful a data fusion model should facilitate understanding, comparison, inte-
gration, modularity, re-usability, scalability, and efficient problem solving or design.
It may take the form of a process model, a formal mathematical model, a functional
model, etc. For data fusion, a functional model is desirable, because it may be both
flexible and general.
This short abstract presents a simple functional data fusion model adapted from
Steinberg [16]. Examples of the application of data fusion methods in civil engineer-
ing are interspersed in the explanation of the model. Then, common tools for data
fusion are identified. Finally, a conclusion and a recommendation are presented.
However, before expanding on the model, it is worth exploring how data fusion re-
lates to system identification and case based reasoning. System identification method-
ologies have been used to identify characteristics of structural systems using
measurement data [12,13,14]. It is a type of inverse problem solving. Case based
reasoning is similar in the sense that attribute values are used to identify the closest
past case to that observed [2]. Typically, multiple inverse solutions may exist. A clas-
sic case is the inverse kinematic solution for a multi-degree-of-freedom manipulator
where none, one, a finite number greater than one, or infinite solutions may exist
depending on the link configuration. A redundant manipulator may result in an infi-
nite solution space, for example. In data fusion, it is typical and generally necessary to
begin with one or a finite number of a-priori models of the solution space for the level
at which the fusion occurs. Where system identification is used to define a population
of candidate models, data fusion can be used to reduce the population via refined
input estimates or complementary measurement data. Where case based reasoning is
being used, data fusion may improve the case fit by better estimating attributes and by
adding attributes. In all cases, Smith's admonishments in "Sensors, Models and Videotape" should be observed [15].
Fig. 1. Model for data fusion in civil engineering adapted from Steinberg, et al [16]
In addition to sensors, data sources may include people, web bots, or data already fused at a lower level. Typical sensor sources include strain gauges, inductive loop detectors (ILDs), radio frequency identification (RFID) tags, range cameras, etc. People provide data via probe vehicles, on-site assessments, etc. Web bots may trawl the web to provide data from sources with varying degrees of reliability. And data fused at some lower level, such as at the automated incident detection (AID) algorithm functional level, may be a source to be fused with probe vehicle reports, for example.
Within the data fusion domain six functions are defined. They are not necessarily
sequential. In fact, sensor management may be interpreted as a meta-level activity.
Database management serves the other functions and the objective of archiving some
data. Four functional levels for data fusion are defined that correspond to typical hier-
archical levels associated with aggregation, semantic content, and/or decision level in
an operational environment.
Level zero data fusion is signal state estimation. A signal is hypothesized, and
data sources are fused to construct the signal. Examples include: (1) stacking in sig-
nal processing to improve the quality of the signal itself, (2) merging of range point
clouds from range scans to improve the signal’s representation of the hypothesized
entity [5], (3) better boundary detection in piles of construction aggregates or mine
tailings by fusing Canny edge detection and watershed algorithm processed signals
[6], and (4) accurate, autonomous, machine vision based crack detection by fusing
digital and range image data [4].
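As a generic illustration of the stacking idea in example (1), the following Python sketch averages repeated noisy measurements of the same signal; the signal and noise level are hypothetical and the code is not taken from the cited studies.

import random

# Minimal sketch of "stacking" at data fusion level zero: repeated noisy
# measurements of the same signal are averaged sample-by-sample, which
# improves the signal-to-noise ratio (noise variance drops as 1/N).
true_signal = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]   # hypothetical signal

def noisy_scan(signal, noise=1.0):
    return [s + random.gauss(0.0, noise) for s in signal]

scans = [noisy_scan(true_signal) for _ in range(50)]  # 50 repeated scans

# Stack: average the aligned samples across all scans.
stacked = [sum(samples) / len(samples) for samples in zip(*scans)]

print([round(v, 2) for v in stacked])  # close to the true signal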
Level one data fusion is object state estimation. Inferences from two or more ob-
servations of the object state are combined to improve the accuracy of and confidence
in the object state estimate. Examples include: (1) proximity based RFID tag locating
using constraint set techniques, fuzzy logic, and Dempster-Shafer theory [1], (2)
pavement crack mapping by logically combining manual clues with machine vision
based processes, (3) integrating range image data and CAD data to better describe
infrastructure entities, (4) integrating point-based data (from ILDs) with link-based data (from portal-to-portal reads of AVI tags on vehicles) to improve estimates of traffic flow, and (5) integrating output data from diverse AID algorithms to detect traffic incidents at a higher rate with fewer false alarms.
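One common, generic way to realize such level one fusion is inverse-variance weighting of two independent estimates; the Python sketch below is illustrative only (the sensor values are hypothetical) and does not reproduce the specific techniques cited above.

# Minimal sketch of level one fusion: two independent estimates of the same
# object state (e.g. a tag position from two readers) are combined by
# inverse-variance weighting, which both improves the estimate and tightens
# the confidence in it. Values are illustrative, not from the cited studies.
def fuse(est_a, var_a, est_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)          # smaller than either input variance
    return fused_est, fused_var

# Reader A: position 12.4 m, variance 0.8; Reader B: 11.9 m, variance 0.3
print(fuse(12.4, 0.8, 11.9, 0.3))  # fused estimate closer to the better reader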
Level two data fusion is situation state estimating (assessment) based on inferred
relations among entities. Examples in transportation, structural health monitoring, and
construction are briefly described. At the TMC (traffic management centre) level,
level one traffic flow state estimates from several related links may lead quickly to an
assessment of a major traffic congestion situation. Typically, TMC floor level human
controllers integrate data via shouts, gestures and brief conversations, in a manner
similar to the communication that occurs on the floor of a stock exchange or in a
situation room in a battle environment. Integration may be facilitated via web based
collaboration schemes and graphical visualization of related data. Whatever the extent
of automation, the situation assessment may trigger a sensor management function
such as channeling remote video camera output of the situation to TMC monitors or
redeployment of probe vehicles. A rather simplistic example of level two data fusion
for structural health monitoring is pavement condition assessment. Data on cracking,
rutting, roughness etc. of AC pavements, when fused via a multi-layer quadtree and
an expert system, yielded reasonably accurate assessments of the pavement structural
health and the likely cause [4]. For construction project state estimation, it is becom-
ing increasingly clear that fusion of data sources such as GPS readers, RFID tags [1],
digital images, range images [5], heat sensors, strain gauges, etc. creates the opportu-
nity to automatically estimate project state (situation) variables such as productivity,
object location, process maturity (such as concrete curing), machine activity (as in
automated earth moving), and test completion. In fact, the opportunities are overwhelming, and companies such as Trimble™ and Leica™ (major positioning and spatial data equipment makers) are moving quickly to fill the gap. However, rigorous
implementation of data fusion tools at this level becomes increasingly challenging.
Even more challenging is data fusion at level three, the impact assessment level.
And yet, there is active research in this area. Bridge scour impact assessment tools
have been developed based on risk analysis. They assess, for example, the risk associated with structural failure based on situation state estimates at, for instance, the bases of the piers, and on the traffic situation. Widely used construction project impact
assessment tools have been developed via weighted indices such as the PDRI (project
definition rating index) from the CII (Construction Industry Institute), located in Austin, Texas. In fact, a CII research team is currently working on the concept of project
situation “lead indicators”. This is a very applied and very relevant form of data fu-
sion for estimating impact.
At all levels of data fusion, database management is a key service function. It
maintains entity attribute information and manages data relationships (such as spatio-temporal, part/whole, organizational, causal, semantic, legal, emotional, etc.) [16]. It facilitates associative and relational operations. Hierarchical (e.g., quadtree), object-oriented, and relational data structures are used. Integrating and managing
multi-modal data is a particular challenge.
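As a generic illustration of the hierarchical structures mentioned above (not code from any of the cited systems), the following Python sketch indexes point-located sensor readings in a simple quadtree; the capacity, coordinates and payload names are arbitrary.

# Minimal sketch of a hierarchical (quadtree) index of the kind mentioned
# above, here storing point-located sensor readings; names are illustrative.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []          # (x, y, payload) tuples
        self.children = None      # four child QuadTrees after a split

    def insert(self, x, y, payload):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False                      # outside this node
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y, payload))
                return True
            self._split()
        return any(c.insert(x, y, payload) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for p in self.points:                 # push existing points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        # return payloads of all points inside the query rectangle
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []
        hits = [p for (x, y, p) in self.points
                if qx0 <= x <= qx1 and qy0 <= y <= qy1]
        for c in self.children or []:
            hits += c.query(qx0, qy0, qx1, qy1)
        return hits

tree = QuadTree(0, 0, 100, 100)
tree.insert(10, 20, "strain gauge #3")
tree.insert(55, 60, "RFID read")
print(tree.query(0, 0, 50, 50))   # -> ['strain gauge #3']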
References
1. Caron, Razavi, Song, Haas, Vanheeghe, Duflos, and Caldas, “Models For Locating RFID
Nodes,” for ICCCBEXI, Montreal, Canada, June 14-16, 2006.
2. Cheng, Y. and Melhem, H. G., “Monitoring bridge health using fuzzy case-based reason-
ing”, Advanced Engineering Informatics, Vol. 19, No 4, 2005, pages 299-315.
3. Haas, C., “Evolution of an Automated Crack Sealer: A Study in Construction Technology
Development,” Automation in Construction 4, pp. 293-305, 1996.
4. Haas, C., and Hendrickson, C., “Integration of Diverse Technologies for Pavement Sens-
ing,” the NRC's Transportation Research Record, No. 1311, pp. 92-102, 1991.
5. Kim, C., Haas, C., Liapi, K., and Caldas, C., “Human-Assisted Obstacle Avoidance Sys-
tem Using 3D Workspace Modeling for Construction Equipment Operation,” J. Comp. in
Civ. Engrg., Volume 20, Issue 3, (May/June 2006), pp. 177-186.
6. Kim, H., Haas C.T., Rauch, A., and Browne, C., “3D Image Segmentation of Aggregates
from Laser Profiling,” Computer-Aided Civil and Infrastructure Engineering, pp. 254-263,
Vol. 18, No. 4, July 2003.
7. Kim, Y.S., and Haas, C., “A Model for Automation of Infrastructure Maintenance using
Representational Forms,” Vol. 10/1, Automation in Construction, pp. 57-68, Sept. 2000.
8. Kwon, Bosche, Kim, Haas, and Liapi, “Fitting Range Data to Primitives for Rapid Local
3D Modeling Using Sparse Range Point Clouds,” Automation in Construction 13, January
2004, pp. 67-81.
9. Mahmassani, H.S., Haas, C., Logman, H., Shin, H., and Rioux, T., “Integration of Point-
Based and Link-Based Data for Incident Detection and Traffic Estimation,” Center for
Transportation Research, Bureau of Engineering Research, University of Texas at Austin,
research report no. 0-4156-1, March, 2004
10. Mahmassani, H., Haas, C., Peterman, J., and Zhou S., “Evaluation of Incident Detection
Methodologies,” Center for Transportation Research, Univ. of Texas, Report No. 1795-2,
Austin TX, Oct. 1999.
11. McLaughlin, J, Sreenivasan, S.V., Haas, C., and Liapi, K., “Rapid Human-Assisted Crea-
tion of Bounding Models for Obstacle Avoidance in Construction,” Journal of Computer-
Aided Civil and Infrastructure Engineering, vol. 19, pp. 3-15, 2004.
12. Robert-Nicoud, Y., Raphael, B. and Smith, I.F.C. "System Identification through Model
Composition and Stochastic Search" J of Computing in Civil Engineering, Vol 19, No 3,
2005, pp. 239--247
13. Robert-Nicoud, Y., Raphael, B. and Smith, I.F.C. "Configuration of measurement systems
using Shannon’s entropy function" Computers & Structures, Vol 83, No 8-9, 2005, pp
599-612.
14. Saitta, S., Raphael, B. and Smith, I.F.C. "Data mining techniques for improving the reli-
ability of system identification" Advanced Engineering Informatics, Vol 19, No 4, 2005,
pp 289-298.
15. Smith, I., “Sensors, Models and Videotape,” Proceedings, 2005 ASCE International Conference on Computing in Civil Engineering, Soibelman and Peña-Mora, Editors, July 12–15,
2005, Cancun, Mexico.
16. Steinberg, A.N., and Bowman, C.L., “Revisions to the JDL Data Fusion Model,” pp. 2-1--
2-18. in Hall, D.L., and Llinas, J., “Handbook of Multisensor Data Fusion,” CRC Press,
2001.
Coordinating Goals, Preferences, Options, and Analyses
for the Stanford Living Laboratory Feasibility Study
1 Introduction
To achieve multidisciplinary designs, Architecture, Engineering, and Construction
(AEC) professionals need to manage and communicate a great deal of information
and processes. They need to define goals, propose options, analyze these options with
respect to the goals, and make decisions [1]. This is a social process [2]; they need to
coordinate these processes and information amongst a wide range of team members
and stakeholders. AEC professionals have difficulty doing this today.
This paper describes observations of these processes on the feasibility study for the
Living Laboratory project: a new student dormitory and research facility being
planned for the Stanford University campus. It describes the ways in which goals
were defined, options were proposed and analyzed, and decisions were made. It also
describes some of the difficulty the team had communicating and coordinating these
processes and information.
The paper then describes a process called MACDADI: a Multi-Attribute Collective
Decision Analysis for the Design Initiative. The authors designed and implemented
MACDADI with the help of the design team towards the end of the feasibility stage.
The authors and project team collected, synthesized and hierarchically organized the
project goals; the stakeholders established their relative preference with respect to
these goals; the authors collected and aggregated these preferences; the design team
analyzed the design options with respect to the goals; and the design team, stake-
holders, and authors visualized and assessed these goals, options, preferences, and
analyses to assist in a transparent and formal decision-making process. The methods of Decision Analysis [3], including adaptations for group decision-making and multi-attribute decisions [4], inspired our analysis, but we did not adhere strictly to those
methods. Finally, we discuss some of the strengths and weaknesses of the imple-
mented MACDADI process, and we identify opportunities for future improvement.
Some options that were discussed were omitted and not formally documented. For example, a well, which would both close the water loop and serve as a source of heat and cooling for the dorm, was not mentioned in the draft of the report, even though several project stakeholders felt this option still had merit.
Fig. 2. The MACDADI process involved seven steps, described below. The process is dia-
grammed using the Narrative formalism [5] in which processes are described as dependencies
(arrows) between representations (barrels). The reasoning agent (in this case human) required
to construct each representation is shown above each barrel.
2. Design Team identifies Design Options: With input from the stakeholders, the
design team proposed several design options, ranging from architectural solutions
such as a green roof and clerestory windows, to mechanical solutions such as solar hot
water and photovoltaic arrays, to structural alternatives, such as optimized wood
framing and an earthquake resistant steel framing system. The design team coupled
these many options into two primary alternatives: Baseline Green and Living Labora-
tory. The options are shown in the left-hand columns of the Matrix. The MACDADI
process did not ostensibly impact the process of choosing options.
3. Design Team assesses Options’ Impacts on Goals: Using the matrix, the design
team next assessed the impact of each of these options on the project goals. Other
projects [6] have similarly used a matrix to evaluate project options with respect to
goals, although the matrix shown in Figure 3 is perhaps more comprehensive in terms
of the number of goals assessed. The assessment rated each option’s impact on each
goal with a numeric score. In this case the architects first completed the entire matrix,
then consulted with the specialty engineers to validate their scores. Some assessments,
for example the impact of the photovoltaic array on the Low/No Carbon Per kwh
goal, are reinforced by rigorous analysis in the appendices of the feasibility report.
Other assessments, such as the impact of the large roof deck on Dynamic Social Life,
are more qualitative and rely on the assessing designer’s experience and intuition.
4. Stakeholders report Preferences: Step 1 established the stakeholders’ goals. However, that effort provided no indication of the relative importance among these sometimes-competing goals. To determine their relative perceived importance, each stakeholder was asked to represent their preferences by allocating 100 points amongst the lowest level (detailed) goals. Lower level goal preferences were summed to obtain preferences for the higher level goals.
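A minimal Python sketch of this aggregation step follows; the goal names echo those used in the paper, but the grouping into higher-level goals and the point allocation shown are illustrative assumptions, not the project's actual data.

# Minimal sketch of Step 4 aggregation (grouping and numbers are hypothetical):
# each stakeholder allocates 100 points over the lowest-level goals; points are
# then summed up the goal hierarchy to give higher-level preferences.
hierarchy = {                      # higher-level goal -> lowest-level goals
    "Environmental Performance": ["Water efficiency", "Building energy"],
    "Social": ["Dynamic social life", "Good neighbor"],
}

stakeholder_points = {             # one stakeholder's 100-point allocation
    "Water efficiency": 30, "Building energy": 25,
    "Dynamic social life": 35, "Good neighbor": 10,
}
assert sum(stakeholder_points.values()) == 100

level2_preferences = {
    goal: sum(stakeholder_points[g] for g in subgoals)
    for goal, subgoals in hierarchy.items()
}
print(level2_preferences)   # {'Environmental Performance': 55, 'Social': 45}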
(The body of the Matrix in Fig. 3 is not reproduced here. Its rows list the individual options of the Baseline Green alternative, such as a shared “Information Center” and entry, passive solar orientation, radiant slab heating, optimized 24" O.C. wood framing, natural ventilation, high-performance fixtures, low-cement concrete, a first-floor building systems lab, a large roof deck, an electric car garage and an 80% daylit interior, and of the Living Laboratory alternative, such as a 100% daylit interior, a steel structure with concrete-filled metal deck, FSC-certified wood, a 5 kW fuel cell, solar hot water, greywater heat recovery, a 60 kW photovoltaic array, lighting controls, building systems monitors, rainwater and greywater/blackwater collection, stormwater features and native landscaping, sustainable finish materials, an extensive green roof, triple-paned windows, clerestory pop-ups and a ventilation atrium. Its columns are the project goals, weighted in the top row by the aggregated stakeholder preferences.)
Fig. 3. The Matrix: The top rows of the Matrix show the current consensus on project goals.
Each high-level goal is broken down into lower-level goals that, if achieved, would positively
impact the higher-level goal. The left side of the Matrix shows the options, aggregated into two
alternatives. The body of the matrix contains an evaluation of each option with respect to each
goal (3 = high positive impact, 0 = no impact, -3 = high negative impact).
(Charts of the aggregated stakeholder preferences, broken down by level-two goals, and the three panels of Fig. 5 are not reproduced here.)
Fig. 5. Impacts of options on goals. A. Impacts of the Living Laboratory options on each goal. B. Weighted Stakeholder Value of the Baseline Green and Living Laboratory alternatives, broken down to level two goals. C. Weighted impact of the Baseline Green and Living Laboratory alternatives on each level three goal.
Charts of these results can also be constructed. For example, Figure 5A shows the assessment of the impact of each option in the Living Laboratory alternative on each goal. The overall impact is broken down to show the impact on each level two goal.
7. Analyze Options’ Impacts, Weighted by Preference: Combining designers’ assessments of the design options’ impact from Step 3 with stakeholder preference data
collected in Step 4 generates a prediction of the perceived costs and benefits of each
design option – or a measure of overall Stakeholder Value. For example, Figure 5B
compares the average value of the Baseline Green and Living Laboratory Options for
all stakeholders, showing that these stakeholders find the Living Lab option to be far
more valuable. Figure 5C illustrates the relative value of the Baseline Green and Liv-
ing Laboratory Options with respect to the Level 3 goals.
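A minimal Python sketch of this weighting step follows; the preference weights and impact scores are illustrative, not the project's actual data, and the simple weighted sum is one plausible reading of the aggregation rather than the authors' exact formula.

# Minimal sketch of Step 7 (numbers and names are illustrative): each option's
# impact scores (-3..3) are weighted by the aggregated stakeholder preference
# for each goal and summed, giving a measure of overall Stakeholder Value.
preferences = {"Water efficiency": 5, "Building energy": 5, "Dynamic social life": 2}

impacts = {   # option -> impact score per goal
    "Solar hot water system": {"Water efficiency": 0, "Building energy": 3, "Dynamic social life": 0},
    "Large roof deck at second level": {"Water efficiency": 0, "Building energy": 0, "Dynamic social life": 2},
}

def stakeholder_value(option):
    return sum(preferences[g] * score for g, score in impacts[option].items())

for option in impacts:
    print(option, stakeholder_value(option))
# Solar hot water system 15, Large roof deck at second level 4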
There are also opportunities to develop more advanced tools to visualize the rich information MACDADI produces. Finally, there are opportunities to develop frameworks that represent and manage these processes at lower levels of detail [10]. Moving forward, we intend first to develop a web-based application that will enable the method to be deployed on many projects in order to popularize the method. We also intend a more detailed literature review to establish the relationships to prior work and to more clearly define the roadmap for future development outlined above.
References
1. Gero, J. S. (1990). Design Prototypes: A Knowledge Representation Schema for Design,
AI Magazine, Special issue on AI based design systems, M. L. Maher and J. S. Gero (guest
eds), 11(4), 26-36.
2. Kunz, W., & Rittel H. (1970). Issues as elements of information systems. Working Paper
No. 131, Institute of Urban and Regional Development, University of California at Berke-
ley, Berkeley, California, 1970.
3. ASTM International (1988). Standard Practice for Applying the Analytic Hierarchy Proc-
ess to Multiattribute Decision Analysis of Investments Related to Buildings and Building
Systems, ASTM Designation E 1765-98, West Conshohocken, PA, 1998.
4. Keeney R., and Raiffa, H., (1976). “Decisions with Multiple Objectives: Preferences and
Value Tradeoffs,” John Wiley and Sons, Inc.
5. Haymaker J., Fischer M., Kunz J., and Suter B. (2004). “Engineering test cases to motivate
the formalization of an AEC project model as a directed acyclic graph of views and de-
pendencies,” ITcon Vol. 9, pg. 419-41, http://www.itcon.org/2004/30
6. BNIM (2002). “Building for Sustainability Report: Six scenarios for the David and Lucile
Packard Foundation Los Altos Project”, http://www.bnim.com/newsite/pdfs/2002-
Report.pdf
7. EHDD (2006). Stanford University Green Dorm Feasibility Report, In production
8. Kiviniemi, A. (2005). "PREMISS - Requirements Management Interface To Building
Product Models" Ph.D thesis, Stanford University.
9. Kam, C. (2005). "Dynamic Decision Breakdown Structure: Ontology, Methodology, And
Framework For Information Management In Support Of Decision-Enabling Tasks In The
Building Industry." Ph.D. Dissertation, Department of Civil and Environmental Engineer-
ing, Stanford University, CA.
10. Gentil, S. and Montmain, J. (2004). “Hierarchical representation of complex systems for
supporting human decision making”, Advanced Engineering Informatics, 18,3,143-160.
Collaborative Engineering Software Development:
Ontology-Based Approach
1 Introduction
Engineering software development requires knowledge from both the engineering
problem domain and software engineering domain. In the past, because it was often
more difficult for software engineers to learn engineering domain knowledge, success-
ful engineering application development often relied on domain experts with good
knowledge of software development. However, with the rapid advancement of infor-
mation technology, it has become increasingly difficult for domain experts to keep up
with the most advanced software development technologies. Furthermore, the growing
scale and complexity of today’s engineering problems have resulted in increasingly
large and complicated engineering applications that are much more difficult to extend
and maintain. Therefore, successful development of engineering applications with
good software flexibility, extensibility, and maintainability depends more and more on good collaboration between domain experts and software engineers.
In recent years, the object-oriented technology [1] has emerged as a state-of-the-art
software development technology to address the reusability, extensibility, and main-
tainability issues in development of large-scale and complex software applications. It
also provides a solution for domain experts and software engineers to collaborate on
software development. In the object-oriented approach (see Fig. 1), the functional
requirements of the targeted software application are specified by the domain expert
and often expressed in use cases. The software engineer then works collaboratively
with the domain expert to design the object-oriented software architecture for the
targeted application so that all of the use cases can be supported. Finally, the
implementation of the object classes in the software architecture is carried out and the
targeted application is built by the software engineer.
The key challenge in this software development process lies in the design of the object-oriented software architecture, not only because the design requires good collaboration between the domain expert and the software engineer but also because the reusability, extensibility, and maintainability of the targeted application depend largely on the design. Often, to achieve a good object-oriented design, the domain expert needs to have a sound background in the object-oriented design method so that he or she can help lay out a good overall architecture for the software in the first place. Further-
more, through the object-oriented design process, the domain knowledge for problem
solving explicitly specified in the use cases is transformed and represented in a more
implicit manner by the designed set of object classes and their interactions. This also
means that the domain knowledge and the software technology are more strongly
coupled in the object-oriented design. As a result, the software architecture as well as
its components cannot be easily reused, extended, and maintained without good
knowledge on both the problem domain and the underlying object-oriented design.
This paper proposes an ontology-based approach for collaborative engineering
software development. In previous research on ontology-based technologies, ontologies have often been used for knowledge representation, knowledge interchange, or knowledge integration in collaborative design processes [2, 3, 4]. In the present
approach, ontologies are employed to represent explicitly the important concepts and
their relationships of the targeted engineering problem domain. They also serve as
knowledge interfaces between software components of the targeted engineering soft-
ware application.
In the collaborative software development process using the proposed ontology-
based approach (see Fig. 2), the domain expert first identifies and defines the domain
knowledge ontologies as well as analysis units involved in the solution workflow of
the problem domain, with the consideration of the nature of domain knowledge, pre-
viously defined domain knowledge ontologies, and available analysis units. Then, the
software engineer works collaboratively with the domain expert to design the analysis
units before he or she carries out the implementation of the analysis units. Finally,
following the solution workflow, the domain expert integrates the analysis units along
with associated ontologies to construct the application.
Although the design of the analysis units requires good collaboration between the
domain expert and the software engineer, the collaboration complexity and knowledge coupling for the design task are greatly reduced because each analysis unit
is well encapsulated by the ontologies and is responsible for solution of only a subset
of the entire targeted problem. In addition, because the major knowledge for problem
solving is expressed explicitly in the proposed approach using a set of ontologies and
analysis units, the targeted application and its components (i.e., the analysis units) can
be more easily reused, extended, and maintained.
The remainder of this paper is organized as follows. Section 2 presents the pro-
posed ontology-based approach for collaborative software development. In Section 3,
requirements on the software development environment needed for realization of the
proposed approach are discussed. Also, an example environment is demonstrated
using an ontology-based engineering application integration framework prototyped in
this work. Section 4 uses an engineering software development example to demon-
strate the application of the proposed approach. Finally, some conclusions are drawn
in Section 5.
among different information objects (e.g., classes of objects) in a formal and explicit way of expression using vocabularies with logical axioms.
A popular recent application of ontologies is in providing better organization and
navigation of web sites and improving the accuracy of searches in the World Wide
Web. In this application, ontologies are employed to provide “a shared understanding
of a domain” [7] so that semantic interoperability can be supported in the Web envi-
ronment. A markup language called Web Ontology Language (OWL) [8] has been
developed to facilitate application of ontologies on the Web. In the engineering field, physical concept ontologies of a particular problem domain are usually
employed to integrate engineering models that are formed based on domain theories
[9]. In addition, for construction of ontologies for a domain, the method proposed by
Noy and McGuinness [10] may be used.
Here, we use the reinforced concrete (RC) section shown in Table 1 as an example
to demonstrate what the ontology of the RC section may look like, using the OWL
vocabularies and notations. As shown in Fig. 3, based on the general knowledge of an
engineer, one can explicitly express the knowledge concepts about a RC section into a
set of classes (e.g., Section, Concrete, Steel, Location, etc.) and establish the relations
between the classes using a set of properties (e.g., SubClassOf, Domain, MateralIs,
hasSteel, etc.). In Fig. 3, the elliptical and rectangular boxes are used for representa-
tion of classes (or concepts) and user-defined properties (or relations), respectively.
The texts denoted on the arrows indicate the properties already defined in OWL. For
example, the Section class represents the concept of a structural section and the Rec-
tangleSection class, representing the concept of a rectangular section, is a subclass of
the Section class. The hasSteel property is established to relate the objects of the Sec-
tion class to those of the Steel class, stating that a Section object may have some Steel
objects (i.e., steel bars). Similarly, the LocateAt property relates the objects of the
Location class to those of the Steel class and describes the locations of steel bars in-
side the RC section. It should be noted that the ontology shown here (in Fig. 3) is not
meant to be a comprehensive one and, depending on the purpose of the ontology,
different sets of classes and properties as well as different levels of detail may be developed.
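For illustration, the following Python sketch emits a few OWL/Turtle statements for the RC-section concepts described above. It is a hand-written approximation, not the authors' actual ontology file, and the domain and range chosen for MateralIs and hasStirrup are assumptions; the property names are spelled as they appear in the paper.

# A hand-written approximation (not the authors' OWL file) of a few Turtle
# statements for the RC-section ontology concepts described in the text.
PREFIXES = (
    "@prefix : <http://example.org/rc-section#> .\n"
    "@prefix owl: <http://www.w3.org/2002/07/owl#> .\n"
    "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n"
)

classes = ["Section", "RectangleSection", "CircleSection",
           "Concrete", "Steel", "Stirrup", "Location"]
subclass_of = [("RectangleSection", "Section"), ("CircleSection", "Section")]
# User-defined object properties with (domain, range); the entries for
# MateralIs and hasStirrup are assumptions, spelled as in the paper's figure.
object_properties = {
    "hasSteel":   ("Section",  "Steel"),     # a Section may have Steel bars
    "hasStirrup": ("Section",  "Stirrup"),
    "MateralIs":  ("Section",  "Concrete"),
    "LocateAt":   ("Location", "Steel"),     # bar locations inside the section
}

statements = [PREFIXES]
statements += [f":{c} a owl:Class ." for c in classes]
statements += [f":{sub} rdfs:subClassOf :{sup} ." for sub, sup in subclass_of]
statements += [
    f":{p} a owl:ObjectProperty ; rdfs:domain :{d} ; rdfs:range :{r} ."
    for p, (d, r) in object_properties.items()
]
print("\n".join(statements))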
In addition, the ontology model is more general and reusable than the Entity-
Relation Model (E-R Model), commonly used in the design of relational databases, in
defining the relations between the information objects needed for solving the domain
problem. In the E-R Model, the Entity, Attribute, and Relation concepts are used to
represent the data model. An entity usually can have many attributes and the relations
between two different entities are established through the common attributes they
share. An attribute must belong to a certain entity and cannot exclusively represent
the concept of a relation. However, in the ontology model, both the classes and
properties can represent independent concepts and relations in one domain and can also be reused to represent the same concepts and relations in another domain.
For solving the targeted domain problem, engineering software usually follows a
solution workflow that can be decomposed into a logical set of analysis or informa-
tion processing tasks. There is no fixed rule for the decomposition and the sizes of the
tasks can vary in a wide range. For processing of these tasks, a set of corresponding
software components is usually designed in the engineering software. In this paper,
the term “analysis unit” is used to refer to the software component and an analysis
unit may contain an integrated set of analysis units.
Furthermore, the decomposition creates interfaces between analysis units. These
interfaces are called knowledge interfaces in this paper because they represent not
only the knowledge for associating the analysis units but also the knowledge that can
be processed by the analysis units. In the proposed approach, ontologies are employed
to define the knowledge interfaces.
Every analysis unit deals with two groups of ontologies: input ontologies and out-
put ontologies (see Fig. 4). The output ontologies result from the processing of
the input ontologies by the analysis unit. Therefore, an analysis unit is also called an
Ontologies Processing Unit (OPU) in this paper.
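A minimal Python sketch of how an analysis unit's knowledge interface might be declared through its input and output ontologies is given below; the class names and placeholder computations are hypothetical and do not reproduce the OneApp implementation.

# Minimal sketch (names are hypothetical) of an analysis unit seen as an
# Ontologies Processing Unit: its knowledge interface is fully described by
# the ontologies it consumes and the ontologies it produces.
class AnalysisUnit:
    input_ontologies: tuple = ()
    output_ontologies: tuple = ()

    def process(self, inputs: dict) -> dict:
        raise NotImplementedError

class StressStrainUnit(AnalysisUnit):
    input_ontologies = ("Section",)
    output_ontologies = ("ConcreteStressStrain", "SteelStressStrain")

    def process(self, inputs):
        section = inputs["Section"]
        # Placeholder computations standing in for the real material models.
        return {
            "ConcreteStressStrain": {"fc": section["fc"], "curve": "parabolic"},
            "SteelStressStrain": {"fy": section["fy"], "curve": "bilinear"},
        }

unit = StressStrainUnit()
print(unit.process({"Section": {"fc": 28.0, "fy": 420.0}}))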
Fig. 4. Each analysis unit processes input ontologies and delivers output ontologies
• For designing ontology interfaces of the identified analysis units, the domain
expert may seek help from an ontology engineer for better accuracy and us-
ability of the designed ontology interfaces.
Once the ontology interfaces of the analysis units are designed and expressed
by an ontology language (e.g., OWL), the processing logic of the analysis units also needs to be described and defined by the domain expert.
For domain experts with little or no basic software engineering background, partitioning the solution workflow into an appropriate set of analysis units may not be an easy task. In this case, they should ask for assistance from their software engineer partner to gain a better understanding of the implications of the software complexity associated with the knowledge interfaces and analysis
units. They can also ask their software engineer partner to review the partition and
provide suggestions in the iterative partitioning process.
2. Detailed Design and Implementation Stage. Following the design of the ontolo-
gies and analysis units from the previous stage, the software engineer performs de-
tailed design and implementation of the analysis units at this stage. Although the
software engineer needs to work collaboratively with the domain expert to have
enough domain knowledge for completing the detailed design tasks, he or she can
work on each analysis unit without knowledge of the other analysis units or of the solution workflow of the software application because the complexity of each
analysis unit is well encapsulated by its interface ontologies. This also means that
the complexity of collaboration between the software engineer and the domain ex-
pert depends only on the limited domain knowledge needed for the detailed design
of an analysis unit. In addition, as long as the designed input and output ontologies
of an analysis unit are implemented accordingly, the implementation of an analy-
sis unit may reuse existing software libraries, components, packages, applications,
or analysis units.
3. Integration Stage. Following the solution workflow, the domain expert integrates
the analysis units and their associated ontologies to build the targeted engineering
software application. If new analysis units are needed for extending the applica-
tion, the domain expert can go through the previous stages again to develop them
and then integrate them with the existing ones to build the extended application.
It should be noted that the collaborative software development process proposed
above is not a waterfall (or sequential) one but an incremental and iterative one. Dur-
ing the development process, any of the analysis units may be extended and the solu-
tion workflow may be adjusted to reflect evolutionary changes of ontologies, analysis
task partitions, and analysis logic. Also, the domain expert and the software engineer
collaborate in iterative processes for re-analysis, re-design, re-implementation, and re-
integration of the developing software.
To facilitate management, sharing, and reuse of ontologies and analysis units, a user-friendly software development environment for ontology-based applications is needed. This section discusses some basic considerations on the requirements of such a software development environment. In addition, an example software development environment for ontology-based applications prototyped in this work is discussed.
concept. Because the development of the OneApp Environment is not the focus of
this paper, only a brief discussion on the functionality and application of the Envi-
ronment is provided here.
Fig. 5 illustrates the major components of the OneApp Environment, their applica-
tions in the proposed software development process, and their interactions with domain
experts, software engineers, and end users. Among all the components in the OneApp
Environment, the OneApp runtime environment is the most essential one because it
serves as the bridge between all other OneApp components and the Windows operating
system and supports all of the basic functions in the OneApp Environment, such as
management of ontologies, analysis units, and ontology-based applications.
Besides the OneApp runtime environment, four other major tools are provided by
the OneApp Environment to facilitate the collaborative software development of
ontology-based applications:
• Protégé Ontology Editor. Protégé is a free, open-source ontology editor de-
veloped at Stanford [11]. It provides many well-developed functions for
modeling ontologies. It can also export ontologies into a variety of formats
including OWL.
• OPU Designer for Visual Studio 2005. This tool is designed for helping the
software engineer in obtaining the ontologies defined by the domain expert,
implementing the analysis units, and deploying the analysis units to the One-
App runtime environment. The tool is developed as a Visual Studio add-in to
take advantage of the powerful built-in development functions in Visual Stu-
dio and to shorten the time for learning the tool.
4 Demonstration Example
A simple engineering analysis example is used here to demonstrate how the proposed
ontology-based approach can be applied to development of engineering software
applications and how the prototype ontology-based software development environ-
ment presented in Section 3.2 can be used to assist the development tasks. The exam-
ple selected is the analysis of plastic hinge properties for a Reinforced Concrete (RC)
column using the analysis method proposed in [12]. When the column member is
under a constant axial force, the analysis method can account for both flexural and
shear failure modes of the member to compute the plastic hinge properties in terms of the moment-rotation curve. In this example, the first author of [12], Dr. Y. C. Sung, was invited to act as the domain expert, and the second author of this paper played both the role of the ontology engineer, to help define the required ontology interfaces, and the role of the software engineer.
Fig. 6. Ontology-based Application Designer for the domain expert to construct ontology-based applications
Following the software development process described in Section 2.2, the domain
expert first defines the solution workflow for the analysis problem and partitions the
workflow into six analysis units with eight ontologies (see Fig. 8). The ontologies are
expressed in the format of OWL. A brief description about each analysis unit and its
associated ontologies is provided below:
• Data Input Analysis Unit. This analysis unit obtains the necessary data for
the analysis of plastic hinge properties from the user through a graphical user
interface. Its output ontologies include Section Ontology, Column Ontology,
and Axial Force Ontology.
• Stress-Strain Relationship Analysis Unit. This analysis unit inputs Section
Ontology, computes the stress-strain relationships for both the concrete and
steel in the column, and outputs Concrete Stress-Strain Ontology and Steel
Stress-Strain Ontology.
• Mb-Rotation Analysis Unit. After inputting Section Ontology, Column On-
tology, Axial Force Ontology, Concrete Stress-Strain Ontology, and Steel
Stress-Strain Ontology, this analysis unit computes the moment-rotation ca-
pacity of the column considering only flexural failure of the column. The out-
put is Mb-Rotation Ontology.
• Mv-Rotation Analysis Unit. This analysis unit inputs Column Ontology,
Section Ontology, and Axial Force Ontology, computes the moment-rotation
capacity of the column considering only shear failure of the column, and out-
puts Mv-Rotation Ontology.
• Plastic Hinge Analysis Unit. After inputting Mb-Rotation Ontology and Mv-
Rotation Ontology, this analysis unit computes the moment-rotation curve of
the plastic hinge considering both flexural and shear failures of the column. It
outputs Plastic Hinge Ontology.
• Result Output Analysis Unit. With Plastic Hinge Ontology as input, this
analysis unit displays the computed moment-rotation curve of the plastic
hinge.
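A minimal Python sketch of the integration stage follows, wiring stand-ins for the six analysis units together through the ontologies they declare; the ontology names are abbreviated, the driver logic is an illustrative assumption, and none of this reproduces the OneApp runtime environment.

# Minimal sketch (unit behaviour is stubbed) of the integration stage: the
# domain expert wires the six analysis units together purely through the
# ontologies they declare, and a simple driver runs each unit as soon as all
# of its input ontologies are available.
units = {
    "DataInput":    ((),                                                        ("Section", "Column", "AxialForce")),
    "StressStrain": (("Section",),                                              ("ConcreteSS", "SteelSS")),
    "MbRotation":   (("Section", "Column", "AxialForce", "ConcreteSS", "SteelSS"), ("MbRotation",)),
    "MvRotation":   (("Column", "Section", "AxialForce"),                       ("MvRotation",)),
    "PlasticHinge": (("MbRotation", "MvRotation"),                              ("PlasticHinge",)),
    "ResultOutput": (("PlasticHinge",),                                         ()),
}

available, done = set(), []
while len(done) < len(units):
    for name, (inputs, outputs) in units.items():
        if name not in done and all(o in available for o in inputs):
            done.append(name)          # a real driver would call unit.process() here
            available.update(outputs)
            break
print(done)  # a valid execution order respecting the ontology dependencies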
In addition, for editing the ontologies and publishing them to the OneApp runtime
environment, the ontology engineer uses Protégé.
The software engineer then performs detailed design and implementation of the
analysis units using the OPU Designer for Visual Studio 2005. Once the analysis units
are implemented, they are deployed to the OneApp runtime environment.
At the final stage, the domain expert, with the help of the ontology engineer, uses
the Ontology-based Application Designer (see Fig. 6) to integrate the analysis units
and build the software application for analysis of plastic hinge properties of a RC
column. Fig. 9 shows the final product of the application.
(Fig. 8, not reproduced here, diagrams the solution workflow connecting the six analysis units through the eight numbered ontologies: 1 Column Ontology, 2 Section Ontology, 3 Axial Force Ontology, 4 Concrete Stress-Strain Ontology, 5 Steel Stress-Strain Ontology, 6 Mb-Rotation Ontology, 7 Mv-Rotation Ontology, 8 Plastic Hinge Ontology.)
5 Conclusions
An ontology-based approach for collaborative software development has been pro-
posed in this paper to facilitate collaboration between domain experts and software
engineers in developing engineering software applications. In this approach, ontolo-
gies are employed to represent important concepts and their relationships for problem
solving. They also serve as knowledge interfaces for encapsulation of software com-
ponents so that knowledge coupling between the software components can be
reduced. Furthermore, based on the decomposition of the solution workflow, this
approach decomposes the software development task into a set of smaller develop-
ment tasks to reduce collaboration complexity in the software design tasks performed
cooperatively by domain experts and software engineers. With the support of the
OneApp software development environment prototyped in this work, the application
of the proposed approach has been demonstrated using an engineering software de-
velopment example.
To better support the three-stage collaborative software development process of the
proposed approach, the prototyped OneApp environment needs to be further imple-
mented to provide more complete services and tools for editing of ontologies, design
and implementation of analysis units, integration of analysis units into applications,
management and sharing of ontologies, analysis units, and ontology-based applica-
tions, etc. Moreover, research is still needed for development of a more effective and
efficient software development environment for ontology-based applications. For
example, an ontology-based reasoning system discussed in [9] can be used to facili-
tate integration of the analysis units and their associated ontologies by the domain
expert for building the targeted engineering software application.
Further research is needed to validate the proposed software development process
in a real engineering software development project. The simple example demonstrated
in this paper is not sophisticated enough to serve as a good validation example for the
proposed software development process. Also, the OneApp environment is currently
prototyped with only basic and limited functionalities for carrying out the presented
demonstration example. Further implementation of the OneApp environment, as dis-
cussed earlier, is required before it can be applied to a real project. Continuous re-
search effort is underway to improve the OneApp environment and to further validate
the proposed software development process in a research project on development of
an aseismic capacity assessment program for RC buildings using nonlinear pushover
analysis.
Acknowledgements
The authors would like to thank Prof. Y. C. Sung of National Taipei University of
Technology, Taipei, Taiwan, for providing his expertise on nonlinear analysis of RC
structures and helping the authors to validate the proposed ontology-based software
development process.
References
1. Booch, G.: Object-Oriented Analysis and Design with Applications. 2nd edn. Addison
Wesley, Redwood City California (1994)
2. Gu, N., Xu, J., Wu, X., Yang, J., and Ye, W.: Ontology based semantic conflicts resolution
in collaborative editing of design documents. Advanced Engineering Informatics, Vol. 19,
No. 2 (2005) 103-111
3. Garcia, A. C. B., Kunz, J., Ekstrom, M., and Kiviniemi, A.: Building a project ontology
with extreme collaboration and virtual design and construction. Advanced Engineering In-
formatics, Vol. 18, No. 2 (2004) 71-83
4. Kim, T., Cera, C. D., Regli, W. C., Choo, H, and Han, J.: Multi-Level modeling and access
control for data sharing in collaborative design. Advanced Engineering Informatics, Vol.
20, No. 1 (2006) 47- 57
5. Studer, R., Benjamins, V. R., and Fensel, D.: Knowledge Engineering: Principles and
Methods. Data and Knowledge Engineering, Vol. 25, No. 102 (1998) 161-197
6. Gruber T. R.: A translation approach to portable ontology specifications. Knowledge Ac-
quisition, Vol. 5, No. 2 (1993) 199-220
7. Antoniou, G. and van Harmelen, F.: A Semantic Web Primer. The MIT Press, Massachu-
setts London (2004)
8. McGuinness, D. L. and van Harmelen, F..: OWL Web Ontology Language Overview.
W3C Recommendation, World Wide Web Consortium: http://www.w3.org/TR/owl-
features/ (2004)
9. Yoshioka, M., Umeda, Y., Takeda, H., Shimomura, Y., Nomaguchi, Y. and Tomiyama, T.:
Physical concept ontology for the knowledge intensive engineering framework. Advanced
Engineering Informatics, Vol. 18, No. 2 (2004) 95-113
10. Noy, N. F. and McGuinness, D. L.: Ontology Development 101: A Guide to Creating Your
First Ontology. Technical Report KSL-01-05, Knowledge Systems Laboratory (2001)
11. Noy, N., Fergerson, R., and Musen, M.: The knowledge model of Protege-2000: Combin-
ing interoperability and flexibility. The 2nd International Conference on Knowledge Engi-
neering and Knowledge Management (EKAW'2000), Juan-les-Pins France (2000)
12. Sung, Y. C., Liu, K. Y., Su, C. K., Tsai, I. C., and Chang, K. C.: A Study on Pushover
Analyses of Reinforced Concrete Columns. Structural Engineering and Mechanics, Vol.
21, No. 1 (2005) 35-52
Optimizing Construction Processes by Reorganizing
Abilities of Craftsmen
Wolfgang Huhnt
1 Introduction
It is well known that craftsmen with different professions are necessary for the execu-
tion of construction projects. Even if a construction project is executed by a general
contractor, the general contractor subdivides the set of construction activities of the
project into subsets. Subcontractors tender for the execution of selected subsets.
The process of forming packages of construction activities is influenced by tradi-
tional outlines of professions. However, there are interdependencies between con-
struction activities that are assigned to different subsets and – as a consequence -
executed by different parties. These interdependencies are called interfaces in the
context of this paper. Interdependencies between construction activities that are as-
signed to the same subset are called transitions in the context of this paper.
Costs can occur at interdependencies between construction activities. And, in gen-
eral, these costs differ depending on whether the interdependency is an interface or a
transition. In general, responsibilities change at interfaces so that the progress of work
needs to be documented clearly. Furthermore, construction activities before and after an interface are generally executed by different parties, so that traveling costs can occur. In addition, experienced project managers know where expensive interfaces occur due to rework that is required because the necessary quality is not achieved. Thus, the calculation of project costs needs to consider costs at interfaces and transitions.
3 Costs
It is state of the art to assign costs to construction activities. This is usually done when
a project is calculated. In addition, costs can be assigned to interdependencies be-
tween construction activities. Figure 2 shows such costs; the costs mentioned result from traveling. The construction activities partly stop and start at noon. In such
cases, traveling costs are involved due to the departure of project participants assigned
to the ending activity and the arrival of project participants assigned to the succeeding
activity.
(Figures 1 and 2 are not reproduced here. Figure 1 shows the example process with the activities drywall construction work, filling expansion joints with silicone, filling and sanding, priming for painting, painting, priming for tiling, and tiling, together with their interdependencies. Figure 2 annotates the activities with execution costs, such as filling expansion joints with silicone: 200 €, filling and sanding: 400 €, priming for painting: 200 €, priming for tiling: 200 €, and each interdependency with a transition cost of 0 € and an interface cost of 0 € or 50 €.)
activities where all those activities that are assigned to the same resource form a subset.
The assignment of resources to activities with minimal costs has to be determined. The
solution space consists of all possible assignments between resources, in this case
construction workers, and activities. To guarantee the optimum, each combination has
to be checked and the sum of costs has to be determined. A combination with minimal
costs is the optimum. The example shown in figure 1 has 4*4*4 possible solutions if house painters, tilers and drywall construction workers are considered as resources. NP-complete problems are treated in Computer Science. In general, they can have more than one solution, i.e., several solutions may have minimal costs.
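A minimal Python sketch of the exhaustive search described above follows; it uses a reduced set of activities and illustrative costs, omits the execution costs of the activities themselves for brevity, and is not the paper's pilot implementation.

from itertools import product

# Minimal sketch of the exhaustive search: every assignment of a craft to each
# activity is enumerated, interdependencies between activities assigned to
# different crafts are charged as interfaces, the rest as transitions, and the
# cheapest assignment is kept. Activities, crafts and costs are illustrative.
activities = ["filling and sanding", "priming for painting", "painting"]
dependencies = [("filling and sanding", "priming for painting"),
                ("priming for painting", "painting")]
crafts = ["drywall constructor", "house painter", "tiler"]
INTERFACE_COST, TRANSITION_COST = 50, 0

def total_cost(assignment):
    cost = 0
    for a, b in dependencies:
        same_craft = assignment[a] == assignment[b]
        cost += TRANSITION_COST if same_craft else INTERFACE_COST
    return cost

best = min(
    (dict(zip(activities, combo)) for combo in product(crafts, repeat=len(activities))),
    key=total_cost,
)
print(best, total_cost(best))  # one optimum: all three activities by one craft, cost 0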
In Computer Science, branch and bound techniques have been developed to reduce
the effort of computation. This technique is based on subdividing the solution space into disjoint subsets (branches) and checking whether the solutions in such a subset can be no better than an already determined solution (bound) [1]. If this check succeeds for a subset, the solutions in that subset do not need to be considered, so the computational effort can be reduced. The disadvantage of branch and bound techniques is that there is no theoretical guarantee that the computational effort is reduced in every case. An additional concept to reduce the computational effort is the use of heuristics [2]. Heuristics determine a solution in an efficient way, but in general it cannot be proven that the determined solution is the optimum.
For the problem discussed in this paper, an analysis of the complete solution space, a branch and bound technique and a heuristic have been implemented. The pilot implementation is able to find an optimum for the example shown in Figure 1 and the weights shown in Figure 2. Several optima exist with minimal costs of 1,000 €. One optimum is the assignment of drywall construction workers to all activities, so that they have to execute activities that are usually executed by house painters and tilers.
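The exhaustive check of all combinations can be sketched in a few lines of Python. The listing below only illustrates the enumeration idea, with simplified, assumed cost figures loosely based on Figures 1 and 2 (a linear activity chain and a single constant interface cost); it is not the pilot implementation itself.

from itertools import product

# simplified activity chain (cf. Figure 1)
activities = ["drywall", "fill_joints", "fill_sand", "prime_paint",
              "paint", "prime_tile", "tile"]
crafts = ["drywall_worker", "painter", "tiler"]   # resources

# assumed execution costs per activity (cf. Figure 2)
activity_cost = {"drywall": 0, "fill_joints": 200, "fill_sand": 400,
                 "prime_paint": 200, "paint": 0, "prime_tile": 200, "tile": 0}

TRANSITION_COST = 0    # same craft executes both neighbouring activities
INTERFACE_COST = 50    # responsibility changes between crafts (assumed constant)

def total_cost(assignment):
    """Sum of activity costs plus interface/transition costs along the chain."""
    cost = sum(activity_cost[a] for a in activities)
    for prev, nxt in zip(activities, activities[1:]):
        cost += TRANSITION_COST if assignment[prev] == assignment[nxt] else INTERFACE_COST
    return cost

# exhaustive enumeration of all craft-to-activity assignments
best = min((dict(zip(activities, combo))
            for combo in product(crafts, repeat=len(activities))),
           key=total_cost)
print(total_cost(best), best)

With these assumed numbers, the cheapest assignments are exactly those that avoid interfaces, for example assigning drywall construction workers to all activities, which mirrors the optimum of 1,000 € reported above.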
The computed results can be regarded as a proposal for continuing education. They show whether it is possible to optimize a process by the assignment of specific resources to activities; continuing education has to be initiated if specific co-workers are to execute activities that are usually not part of their work. However, such considerations require the analysis of several projects, because the investment in a specific continuing education cannot be recouped within a single project. Nevertheless, the concept described in this paper shows that the decision to invest in continuing education can be quantified, so that a comprehensible basis is available for such decisions.
is. Within such a given schedule, several optimization possibilities exist. The start and end dates of activities can be regarded as prescribed time frames for their execution, and a given set of resources can be assigned to the activities with the aim of achieving a balanced use of these resources [4]. However, the optimization procedure presented in this paper focuses on finding an optimal mapping of craftsmen to construction activities where the abilities of the craftsmen are not preset.
As a consequence of applying the approach presented in this paper to real projects, the abilities of craftsmen need to be rearranged. Activities always require certain abilities of the construction workers who execute them. Entrepreneurs have to think about the abilities of their co-workers, and continuing education already takes place in construction companies. The approach presented in this paper tries to quantify the benefit of specific continuing education measures, so that an entrepreneur can decide on a comprehensible basis whether a specific continuing education will help or not.
Many investigations are still necessary to analyze the full potential of the presented approach. In general, decisions that specific co-workers should learn specific additional abilities are not based on a single project: representative projects need to be analyzed, while core abilities should not be questioned. At present, the presented approach is being discussed with associations in Germany that are responsible for the content of apprenticeship teaching. These teaching contents need to be reviewed permanently so that, in the end, construction workers are available on the market who are able to execute construction projects in an efficient way.
References
1. Branch and Bound. http://en.wikipedia.org/wiki/Branch_and_bound, 20.2.2005
2. Heuristic (computer science).
http://en.wikipedia.org/wiki/Heuristic_%28computer_science%29, 20.2.2005
3. Jiang, G., Shi, J.: Exact Algorithm for Project Scheduling Problems under Multiple Re-
source Constraints. Journal of Construction Engineering and Management, ASCE,
September 2005, 986-992
4. Huhnt, W.: Disposition von Mitarbeitern im Kontext der Ausführung von Bauleistungen.
Technische Universität Berlin, Institut für Bauingenieurwesen, 2004
Ontology Based Framework Using a Semantic Web for
Addressing Semantic Reconciliation in Construction
1 Introduction
There is a need in the construction industry to develop strong interaction between
customers, contractors, and owners using emerging information technologies to facili-
tate communication and collaborative use of project information. A successful im-
plementation of these technologies will enable integration of project information
based on real time and transparent information transfers to facilitate decision-making
or to work on concurrent engineering.
The problems encountered in the exchanging, sharing, transferring, and integrating of information when actors interact with one another include lack of coordination, inconsistencies, errors, delays, and misinformation. These problems make the exchanging, sharing, transferring, or integrating of information among construction participants burdensome, with high costs and a need for human intervention. The consequence is a reduction in the productivity and efficiency of current interoperability activity. These problems were analyzed in the recently labeled domain of construction informatics [1]. Hence, in order to interoperate, construction participants are forced either to partially solve coordination errors or to totally rework the representation of information, to manage the resulting project delays, and to use additional resources.
This research assumes that the members of the community agree on a common knowledge and an explicit ontology, but in contrast to the aforementioned efforts, our strategy takes inferenceability and extensibility aspects into account. The first aspect considers computation by inference over the concepts of the common ontology. The second aspect means that the proposed ontology allows for the addition of concepts as well as the reuse of already developed concepts from other sources, e.g. concepts from Industry Foundation Classes (IFC) schemas and taxonomy standards. Our strategy proposes onto-semantic schemas, which are ontology constructs of construction domain concepts. The result of this approach is a framework that contains an analysis from primitives to more refined construction concepts in the domain through the use of Semantic Web representations, with the purpose of enhancing interoperability at both the semantic and the syntactic levels. The goal of this strategy is to re-evaluate current efforts on mapping and on information integration in order to achieve effective interoperability by reducing human intervention. This research advances the current state of the art in semantic interoperability by proposing the enrichment of an ontology to approach reconciliation. Other efforts in semantic interoperability support particular processes within engineering practice, such as functional design knowledge [4] or infrastructure management based on finding semantic agreements [5].
In summary, there is a need to create a framework in which dissimilar construction
participants are able to exchange, share, transfer, and integrate information in order to
fulfill collaboration goals. This approach aims to reach common knowledge by using explicit ontologies through the Semantic Web at the semantic and syntactic levels.
Next, a brief analysis of the problems encountered in exchanging, sharing, transfer-
ring, and integrating information will be presented. The analysis is narrowed to the
reconciliation of two sources of information. The subsequent section illustrates some
approaches to model the information in the construction industry. The last section
discusses the onto-semantic framework approach and its relation to the semantic web.
2 Reconciliation Problem
Our research recognizes multiple sources for the problem of achieving the effective exchanging, sharing, transferring, and integrating of information. Roughly, the sources of the problem are the different methods used to represent information [6], the different levels of specification of the concepts in the domain, and the various levels of systematization or sophistication of the construction participants' systems.
The effective exchanging, sharing, transferring, and integrating of information is embraced within the semantic interoperability paradigm. Semantic interoperability can be defined as the understanding of information concept representations by the domain's agents. This paradigm can be approached from two different standpoints: information systems and the problem domain. The former is addressed within the sphere of computer science, and the latter is addressed by the problem domain field. The two approaches use different inputs to find semantics. A solution from the problem domain perspective embraces “informal” methods, such as surveys to find semantic definitions, or mechanisms that prescribe models that structure manageable pieces of work and that are aimed at agents that have the same view of the models (e.g. framework models that propose a breakdown of construction products). Solutions
from the information systems perspective attempt to find axiomatizations and models of information [7]. These solutions use complex algorithms and other tools from fields such as artificial intelligence. The solutions from each perspective have been driven independently by specialists from the domain and from the scientific community. However, we envision that research on semantic interoperability should encompass inputs from both perspectives. For example, while a project participant is interested in how to make other participants understand the information furnished by each agent in any interoperability activity, and computer scientists and knowledge engineers are interested in how to model a domain, inputs from both perspectives are necessary to develop an approach within the semantic interoperability arena. An attempt to reconcile these perspectives is made through the elaboration of product models, which are conceptual models that define schemata in which concepts of the construction domain can fit, and that attempt to embrace the whole of the information arising during the life cycle of a building. However, these types of models do not define the associated nature of the construction domain, but employ a modeling technique that domain experts use to reshape the world [8]. The authors claim that a conceptual model will lack effectiveness, reliability, and reusability if the solutions are not addressed from both standpoints. Approaches that only work on finding conceptual models, without close induction or collaborative input of the concepts by domain experts, will lack reusability and effectiveness among domain agents because of their poor semantics.
New efforts on semantic interoperability by the authors are aimed at finding frameworks based on inducting a conceptual model from the intention of the construction participants in order to reduce human intervention [9, 10]. Our claims approach semantics by associating it with the perception of the agents, that is, with the states that direct an agent's assertions toward the form that characterizes a concept. The idea of intentionality is rooted in the integrationists' paradigm [11, 12].
With the purpose of finding strategies within this paradigm, part of our research is aimed at gaining insights into the problem of reconciling concept representations within the construction domain. The reconciliation problem deals with finding common semantics for sources of information from different agents. Reconciliation is a step in the exchanging, sharing, transferring, and integrating of information. To illustrate the complexity of the problem, consider the following example, which employs simple knowledge representations but uses highly demonstrative schemes. The example is narrowed to ontology-based information integration.
The results of this research will not be a panacea that solves the reconciliation problem, described by the database community as one of the obstacles on the road toward solving the information heterogeneity problem [13]. Preliminary results indicate that further effort should be spent on another step of semantic interoperability, namely the reconciliation of two or more sources of information. For this purpose, this research appeals to the ontological tradition by distinguishing the natural ontological categories that are implicit in knowledge representations, and it proposes a strategy to map two sources of information.
The following example selects ad hoc ontologies of certain construction businesses. The ontologies represent the construction business model. A graphical view of one of these ontologies is presented in Figure 1. The reader is reminded that ontologies are often associated with taxonomic hierarchies of classes and subsumption relations, but they are not limited to this structure, which could easily be confused
with the rigid inheritance structures of object-oriented representations. Ontologies capture knowledge about the world, frame it into categories, add terminology, and constrain it with axioms from traditional logic. The nodes in Figure 1 represent the Concepts.
Figure 1 shows some specializations of the root, represented by Concept 1, such as {Functional Areas, Administration, Buildings}, which in turn are represented by Concepts 2, 3, and 4 respectively. The concept Project Manager, Concept 7, has instances like {Bill O’Connell, General Manager, New Hall UF} with fixed attributes like {Name, Hierarchy, Project In Charge}. Thus, the instance Project Manager O’Connell has the values Name = Bill O’Connell, Hierarchy = General Manager, Project In Charge = New Hall UF. Additionally, there is a set of relations among concepts like {is a, part of, has}. The relations are generally denoted as PartOf(Project Manager, Administration). These relations describe defined relations within classes, inheritance relations, and instances of properties.
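Written down as data, the fragment above amounts to a small set of concepts, relations and attribute values. The following Python snippet is purely illustrative and mirrors only what the text states about Figure 1; the data structures themselves are an assumption.

# concepts of the Figure 1 fragment, keyed by their number
concepts = {1: "Root", 2: "Functional Areas", 3: "Administration",
            4: "Buildings", 7: "Project Manager"}

# relations as (subject, relation, object) triples
triples = [
    ("Functional Areas", "is_a", "Root"),
    ("Administration",   "is_a", "Root"),
    ("Buildings",        "is_a", "Root"),
    ("Project Manager",  "part_of", "Administration"),
]

# an instance of Concept 7 with its fixed attributes
project_manager_oconnell = {
    "Name": "Bill O'Connell",
    "Hierarchy": "General Manager",
    "Project In Charge": "New Hall UF",
}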
The assumption is that an actor exchanges, shares, transfers, and integrates information between two construction businesses, i.e. a construction participant queries information from the two construction businesses.
If we reduce the analysis to finding mappings that express semantic equivalence
between the two ontologies, then the problem can be expressed in terms of how an
actor can semantically map one concept from one ontology to the “most semantically
similar” concept of the other ontology. It is clear that mappings for this example are
semantic relations between two or more concepts.
Graphically, it is possible to bring up a clear picture of the complexity in finding a
reconciliation of two ad hoc ontologies. The two ontologies are shown in Figure 2.
The nodes in the ontology represent concepts that have levels of specializations from
their parent’s concept. In addition, the reader can identify relationships among concepts, for example, Isa(CPVC, Plastic Pipe Fittings) between Concepts 11 and 9, as shown in Figure 2b.
two concepts that have a subsumption relation and the ‘has’ relation corresponds to a
semantic link that constrains the subsumption relation to a directional relation of
containment [14, 15].
Suppose the construction participant queries information about the availability and
costs of specific items from different material suppliers, say pipes for internal water
distribution in buildings. Specifically, he/she will need to perform a semantic map-
ping between the most similar concepts of the two ontologies that contain the infor-
mation sought. The problem is in how a construction participant can semantically map
a concept or concepts of one data representation to another concept of the other data
representation. Moreover, as the reader can intuitively notice in Figures 2a and 2b, the
question of how a construction participant will be able to semantically map more
complex matchings when one concept is semantically similar to a concatenation of
two or more concepts needs to be addressed.
In addition, the mappings shown in Figure 2 resemble one to one and complex
matching problems that are studied within the database community [16]. For instance,
consider a one to one semantic match at one of the levels. With the aid of auxiliary information such as “expert knowledge”, Concept 7 from Figure 2a (Steel Pipe, Black
Weld, Screwed) is matched with Concept 8 from Figure 2b (Metal, Pipes & Fittings).
Note that although they semantically match, they fully syntactically mismatch.
Consider the case where the construction participant queries more detailed infor-
mation from two businesses. In other words, the participant needs to map specific
instances from one source to another source. For example, the user semantically maps
Concept 4 (Plumbing Piping) in Figure 2a to Concept 6 (Pipes and Tubes) in Figure
2b. But this mapping does not resolve the aforementioned query. The user should
follow the subsumption relations of the concepts. This approach is similar to follow-
ing down the hierarchy of a taxonomy. A taxonomy is a central component of an
ontology [17]. Assume the expert’s information asserts that a polymer pipe of between 1” and 1½” diameter is suitable for internal water distribution, e.g. PVC (polyvinyl chloride) pipe. Hence, the user matches Concept 9 (PVC 1¼”) from Figure 2a to Concept 10 (PVC) and Concept 13 (1¼” Diameter) from Figure 2b. Observe that Concept 10 is concatenated with Concept 13 to perform the match. Concept 10 is more general than Concept 13 and, vice versa, Concept 13 is more specific than Concept 10. This is a complex type of match that includes a join of two concepts and a specialization of one concept.
Figures 2a and 2b illustrate how relations between concepts of two representations can match at a more general or a more specific level, can overlap, or can mismatch. Based on these types of intuitive semantic relationships, these relations will have what is called a level of similarity [16, 18]. Thus we could say that Concept 4 (Plumbing Piping) in Figure 2a is similar to Concept 4 (Building Service Piping) in Figure 2b.
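A naive way to compute such a level of similarity is token overlap between concept labels. The short Python sketch below is only meant to illustrate the notion and is not the matching strategy proposed in this paper.

import re

def jaccard_similarity(label_a: str, label_b: str) -> float:
    """Token-overlap (Jaccard) similarity between two concept labels."""
    tokens_a = set(re.findall(r"[a-z0-9]+", label_a.lower()))
    tokens_b = set(re.findall(r"[a-z0-9]+", label_b.lower()))
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# concept labels taken from Figures 2a and 2b as quoted in the text
print(jaccard_similarity("Plumbing Piping", "Building Service Piping"))   # partial overlap
print(jaccard_similarity("Steel Pipe, Black Weld, Screwed",
                         "Metal, Pipes & Fittings"))                      # no shared tokens

For the pair (Steel Pipe, Black Weld, Screwed) and (Metal, Pipes & Fittings), this purely syntactic measure yields zero, which illustrates why semantically matching concepts cannot be reconciled by syntax alone.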
In addition, it is important to remark that the relation between concepts such as
Isa(HVAC, Mechanical) is also a possible map to other concepts of the ontology.
These mappings between different types of elements make the process more complex.
Note that the concepts from Figure 2a and Figure 2b partially match; they semantically match although they are not equal. In fact, the concepts are similar and overlap in more specific elements of the concepts. It is easy to observe that expert
Fig. 3. An example of the reconciliation of two competing standards at their upper levels
4 Onto-semantic Framework
This research is based on a framework that uses an architecture that sets stages to
reconcile construction domain concepts. The framework employs an ontology as its
knowledge representation structure in order to implement an information integration
strategy (see Figure 4). The designed architecture resembles ontology translation, but
uses a centralized global ontology as a repository. The approach is focused on finding
semantic relations between two ontologies to build Onto-semantics, which are gener-
ated ontology constructs elaborated via Web Services. Onto-semantics are ontological
concepts of the construction domain. This strategy allows the delineation of more
specific problems for ontology specifications using Semantic Web Services.
The framework is aimed at finding a strategy to solve the problem of reconciliation based on ontology. As was mentioned earlier in Section 2, this research identifies the semantic heterogeneity between information sources as a reconciliation problem. The objective is to use the ontology constructs as a knowledge base that recognizes the semantic mismatch explicitly. This is performed by structuring ontology constructs that define construction concepts for specific semantics.
The purpose of developing these constructs is to define semantics, which take the form of ontology constructs. The constructs articulate the input sources by using inference algorithms. The outputs of the articulation are the relationships among the
concepts of the sources or input. This result is expected to aid in the analysis of the
structure of poorly specified ontologies as well as in defining a framework for other
future implementations using these outputs.
There are three steps involved in implementing this approach. The first involves
the layers of the semantic web that this strategy employs, the second illustrates the
main components of the framework, and the third articulates the reasoning explaining
the concepts on which the approach is based.
In addition, other ingredients of semantic web services play a significant role, such as the Agents, which create multiple inputs of data from diverse sources and process the information, and Proofs, which ensure data transfer when data comes with semantics. New developments derived from RDF are RDFS, which provides better support for semantic definitions, and DAML+OIL, which extends RDF with richer definitional primitives that facilitate modeling semantics by using ontological constructs.
Our approach develops the ontological constructs in RDFS, which facilitates the creation of relations such as rdf:Property, rdfs:subPropertyOf, and rdfs:domain. The use of RDFS facilitates the representation of the construction domain ontology.
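A minimal sketch of such constructs, written with the rdflib Python library, is given below; the namespace, the concept names and the property names are invented for illustration, and only the RDF/RDFS vocabulary itself is standard.

from rdflib import Graph, Namespace, RDF, RDFS

CON = Namespace("http://example.org/construction#")   # hypothetical namespace
g = Graph()
g.bind("con", CON)

# a construction-domain relation declared as an RDF property
g.add((CON.partOf, RDF.type, RDF.Property))
g.add((CON.partOf, RDFS.domain, CON.ConstructionConcept))
g.add((CON.partOf, RDFS.range, CON.ConstructionConcept))

# a more specific relation defined as a sub-property
g.add((CON.placedIn, RDFS.subPropertyOf, CON.partOf))

# a concept and one of its specializations
g.add((CON.PlumbingPiping, RDF.type, RDFS.Class))
g.add((CON.PVCPipe, RDFS.subClassOf, CON.PlumbingPiping))

print(g.serialize(format="turtle"))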
[Figure content (caption not recovered): components of the onto-semantic framework – Work Flow Management System, Wrapper, Semi-automatic Onto-Semantic Mapping Engine with Mapping Module, Ontology 1 (Request), Ontology 2 (Response), Mapped Data, Knowledge Base Repository.]
Inference Algorithms
The inference capabilities of the framework are aimed at solving the reconciliation problem of concepts from the external source over the main, global ontology. The algorithms take into account the following factors: syntactic and semantic patterns, similarity indexes, and frequency of queries. This strategy undertakes the definition of typical mapping patterns to derive a comprehensive understanding of what needs to be mapped and how the mapping should be performed (see Figure 5). This process is supported by the use of search engines and the aid of a knowledge base library.
This research proposes the development of a global ontology that promotes interop-
erability and consistency. The consistency is promoted by virtue of the existence of a
central repository of construction concepts. In other words, the approach will attain
coherence in the ontology components by virtue of a high-level layer of concepts and
operations. This layer will be leveraged, as much as possible, from standards such as
IFC [21]. The assembly of the ontology into the Semantic Web Service approach is
intended to mitigate the complexity of the meaning of concepts. This reduction of
dissimilar concept interpretations is attained by the interaction between the Ontology
representation and the agents, i.e. between the Onto-Semantic Framework and the
browsers of multiple clients. In addition, a strategy is implemented for registering the
reactions of multiple agents to the interpretation of the concepts that are presented to
them through the ontology; these interpretations depend on the specific domain or
application data. This reaction is called the intention of the agents or construction
participants. In this case, the agents are construction participants that are cognitive
agents, which interact with the browsers as clients. The record of the agent’s interac-
tion will then be utilized to anticipate data redundancy and other data conflicts in
future concept reconciliations in the mapping module. The incremental clarification of
the agent’s intention over the construction concepts will reduce semantic differences
on the interpretation of the concepts. In this framework, the domain experts are repre-
sented by the cognitive agents.
It is suggested that the reader navigate through the explanations of the fundamentals in order to gain a full understanding of the onto-semantic framework. The following sections explain the construction concept analysis used in the framework, which is implemented in the inference algorithms, and the server management system.
For this purpose, a conceptual framework to analyze concepts, to aid the interpretation of cognitive agents, is employed. The objective of this strategy is to take into account the interaction between the domain expert and his/her world. This conceptual framework differs from traditional, passive models that process information, since the framework incorporates external clients, or construction participants, as cognitive agents together with their social role in the construction organization. As was previously mentioned, construction participants are multi-agents that interact with the system via the browser clients using web services. The approach employs ontological levels to represent construction concepts and wraps them in a framework with the purpose of
approximating those concepts to the cognitive agent's intentions. In this section, the fundamentals of the approach are explained and justified through examples.
Construction Concepts
Concepts are abstract, universal notions of an entity of a domain that serve to designate a category of entities, events, or relations. A concept used in the construction community comprises geometric features, components or parts, additional or assembled items, and functional characteristics. Details and conditions are expressions used to define the characteristics of any concept employed in a construction project by the construction industry community. A concept is represented in one of two forms: either as a physical construct or as an abstract expression.
Concept details are modes of describing a concept with features (e.g. geometrical) and ontological aspects (e.g. dependency relations). For example, the concept details that describe the component ‘hung’ of an entity ‘window’ are a part of the entity ‘window’ and have functional characteristics which cannot exist independently; a ‘hung’ needs a ‘window’ to perform the locking and handling functions necessary to allow an agent to open or close the ‘window’.
The approach is based upon the assumption that any concept in a region of space-time has no intrinsic meaning. Accordingly, this research aims at finding semantics to
determine how an entity, which is an abstract, universal notion, is related to other
entities. The semantics takes into account additional relationships such as situational
conditions. The conditions identify a separate piece of the ‘world’ in which the con-
struction concept is involved. For any concept, specific situations, which are bounded
in a space-time region, are considered and are labeled as situational conditions. Situ-
ational conditions include state of affairs, which embrace the entity’s location, posi-
tion, site, place, and settings; status condition, which is the stage of the concept (e.g.
completed, installed, delayed) during its life in the time-space region; and the rela-
tionships with other products, or context relations (e.g. set by, part of). In this research, context relations are strictly locative to the object the concept describes. This means
that the space or region of analysis is limited to the closest location of the objects.
Situational conditions help the analysis handle states of affairs and context rela-
tions. As an illustration, Figure 6 depicts a construction concept ‘wood frame win-
dow’. It shows the conditions of the visual symbol representation and the possible
situational condition (e.g. relative position of the wood window in the wall, and the
window settings). Figure 6 sketches the construction concept context relations and
indicates the state of affairs of this particular entity.
For example, in Figure 6, ‘placed in’ is a context relation of the ‘wood frame window’ to another physical concept; the wood window is vertically placed in the wall. The wood window and the wall represent construction concepts, and ‘placed in’ represents the relationship between these two concepts.
Representations attempt to describe an extension of a concept in the real world.
The representations themselves are simple metaphors that give meaning to some con-
cept. Concept representations are not merely elaborations of signs in the mind, but are
extended to something physical, such as the context space, in order to be realized or
instantiated [28]. This means that representations of concepts cannot fully describe
the meaning of the concepts if their relationships to the other concepts are not taken
into account. These relationships are termed contextual relations.
Contextual relations attempt to identify a possible agent's relations to other locative objects or construction concepts, which might influence the current concept interpretation, and to link such relations to other concepts. This line of characterization of the interpretation has roots in the semiotic tradition [29]. The contextual relations rest on the cognitive agent's purpose in interpreting a concept. This strategy takes contextual relations into consideration for a valid construction participant's interpretation.
In order to introduce the conceptualization notion, one has to keep in mind that the
intension of a concept in the construction domain can be stated by its details and
situational condition relations. Construction concept details comprise their geometric
features, their components or parts, additional or assembled items, and their func-
tional characteristics. The concept conditions are the situational conditions or state of
affairs, which embrace the concept location, position, site, place, and settings; the
status condition, which is the stage of the concept (e.g. completed, installed, delayed),
and its relationships with other products or context descriptions (e.g. set by, part of).
A conceptualization accounts for all intended meanings of a representation's use in order to denote relevant relations [30]. This means that a conceptualization is a set of informal rules that constrain a piece of a physical construct concept or an abstraction concept. An actor or observer uses a set of rules to isolate and organize relevant relationships. These are the rules that tell us whether a piece of such a concept remains the same independently of the states of affairs. Guarino further clarifies the conceptualization notion, which refers to a set of conceptual relations defined on a domain space that describe a set of states of affairs, by making a clear distinction between a set of states of affairs, or possible worlds, and intended models [31].
A conceptualization of any physical construct or abstract notion in the construction domain must include details that describe the construction concept independently of its states of affairs. Situational conditions will be needed to describe some extensions of the concepts in order to reflect common situations or relevant relations to the states of affairs.
Conceptualizations are described by a set of informal rules used to express the in-
tended meaning through a set of domain relations. These meanings are supposed to
remain the same even if some of the situational conditions change [30]. One particu-
lar set of rules, which describes an extension to the world, is called the intended
model. These descriptions are implemented in a language that has specific syntax.
The syntax can be a natural language (e.g. English words), a programming language
(e.g. LISP syntax), or any visual representation or topology that aids in communica-
tion (e.g. electrical symbols for drawings). An intended model uses a particular inter-
pretation of the language to elaborate representations and create the constraints. The
syntax of the languages composes what is called a vocabulary. This vocabulary is
used to define the intended models. The models fix a particular interpretation of such
a language [31]. The intended models, or the models that partially commit to a certain situation, weakly describe a state of affairs by an underlying conceptualization, but at least describe the most obvious or primitive situations. This means that we recognize the existence of other states of affairs which are not registered in the model by any conceptualization.
For a better illustration of the conceptualization notion, consider Figure 7, which schematically depicts a conceptualization in a specific domain and indicates the components that help define a conceptualization. The components are minimal ontological definitions of the entity, logical axioms that use the syntax and vocabulary of a language, and additional semantic relations, which help describe several states of affairs.
For an example in the construction domain, see Figure 8. The conceptualization of this ‘wood frame window’ involves an explicit description of the ontological definition. An additional description of the concept intension, which comprises context relations and other constraints that do not change with the states of affairs of the concept (e.g. the relationships ‘set by’ and ‘on’ of a detail do not change with the position of the product), will help to define ‘wood frame window’ for further interpretation.
From Figure 8 it is easy to notice that the details and conditions are specifications of the intension of the concept. Specifications, or ontological refinement processes, are explicit formalizations of the concept conditions and the concept details. Reifying a concept denotes an understanding of the conceptualization of the representation. Reifying a concept, in computer science and artificial intelligence, means making a data model for a previously abstract concept, i.e. considering an abstract concept to be real. Reification allows a computer to process an abstraction as if it were any other data. Explicit formalization of concepts is, by definition, an ontology specification [7, 30, 32]. As illustrated in Figure 8, conceptualizations become extractions of the domain knowledge and are specified by ontological categories, relationships, and constraints or axioms. Categories are forms of classification of the ways cognitive agents see the world. Conceptualizations, through the use of relationships and constraints or axioms, attempt to formally define cognitive agents' views, or their perception of the world, according to the nature of the concepts themselves and the categories that cognitive agents use.
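As a small, hypothetical illustration of reification in this sense (not part of the proposed framework): the abstract relation ‘placed in’ between the wood frame window and the wall can itself be turned into a data object that a program can store and query.

from dataclasses import dataclass

@dataclass
class ContextRelation:
    """Reified relation: the abstract link itself becomes processable data."""
    subject: str      # e.g. "wood frame window"
    relation: str     # e.g. "placed in"
    obj: str          # e.g. "wall"
    orientation: str  # situational condition, e.g. "vertical"

placed_in = ContextRelation("wood frame window", "placed in", "wall", "vertical")
print(placed_in.relation, "->", placed_in.obj)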
5 Summary
This paper explains one step of the semantic interoperability paradigm, particularly the reconciliation problem. It explains the complexity involved in exchanging, sharing, transferring, and integrating knowledge and the need for human intervention to aid these processes. An onto-semantic framework is proposed as an approach to semantic interoperability that allows two sources of information, each represented in an ontology, to be reconciled. Our approach inherits from the Semantic Web the ability to enhance interoperability at the syntactic and the semantic levels, and uses a central, global ontology to support the reconciliation of two ontologies. Further development of this research is aimed at finding better strategies to induct the global ontology, with the goal of finding conceptualizations based on the intention of the cognitive agent.
Acknowledgements
This work is partially supported by NSF research grant ITR-0404113.
References
1. Turk, Z. Construction informatics: Definition and ontology. Advanced Engineering Infor-
matics. (2006). 20(2): 187-199
2. Jeusfeld, M.A. and A.D. Moor. Concept Integration Precedes Enterprise Integration. In:
34th Annual Hawaii International Conference on System Sciences ( HICSS-34). Island of
Maui, Hawaii: IEEE. (2001) 10
3. Veeramani, R., J.S. Russell, C. Chan, N. Cusick, M.M. Mahle, and B.V. Roo, State of
Practice of E-Commerce Application in the Construction Industry, in E-Commerce Appli-
cations for Construction. 2002, Construction Industry Institute, The University of Texas at
Austin: Austin, TX. 138
4. Kitamura, Y., M. Kashiwase, M. Fuse, and R. Mizoguchi, Deployment of an ontological
framework of functional design knowledge. Advanced Engineering Informatics. (2004).
18(2): 115-127
5. Peachavanish, R., H.A. Karimi, B. Akinci, and F. Boukamp, An ontological engineering
approach for integrating CAD and GIS in support of infrastructure management. Advanced
Engineering Informatics. (2006). 20(1): 71-88
6. Partridge, C. The role of ontology in integrating semantically heterogeneous databases, in
LADSEB-CNR, T.R. 05/02, Editor. 2002, National Research Council, Institute of Systems
Theory and Biomedical Engineering. (LADSEB-CNR): Padova - Italy. 24
7. Zúñiga, G.L. Ontology: its transformation from philosophy to information systems. In:
Proceedings of the international conference on Formal Ontology in Information Systems.
Ogunquit, Maine, USA: ACM Press. (2001) 187 - 197
8. Turk, Z. Phenomenological foundations of conceptual product modelling in architecture,
engineering, and construction. Artificial Intelligence for Engineering. (2001). 15(2):
83 - 92
9. Mutis, I.A., R.R.A. Issa, and I. Flood. Semantic Structures of Construction Product Con-
ceptualization. In: International Conference on Computing Decision Making in Civil and
Building Engineering. Montreal, Canada: Submitted. (2006)
10. Mutis, I.A., R.R.A. Issa, and I. Flood. Conceptualization of Construction Industry Organi-
zations Via Ontological Analysis. In: International Conference on Computing Decision
Making in Civil and Building Engineering. Montreal, Canada: Submitted. (2006)
11. Gibson, J.J. The Theory of Affordances. In: R. Shaw and R.J. Brachman, (eds.): Perceiving, acting, and knowing. Toward an ecological psychology. Vol. 1. Lawrence Erlbaum Associates: Hillsdale, New Jersey. 492 (1977)
12. Lakoff, G. and M. Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. First ed. New York: Basic Books, Perseus Books Group. 624 (1999)
13. Garcia-Molina, H., J.D. Ullman, and J. Widom, Database systems: the complete book.
New Jersey: Prentice-Hall (2002)
14. Brachman, R.J. On the epistemological status of semantic networks. In: N.V. Findler,
(eds.): Associative Networks: Representation and Use of Knowledge by Computers. Aca-
demic Press: New York, NY. 3 - 50 (1979)
15. Woods, W.A. What's in a link: Foundations for semantic networks, in Representation and
Understanding: Studies in Cognitive Science, D.G.B.a.A.M. Collins, (Eds.). Academic
Press: New York, N.Y. (1975) 35-32
16. Doan, A. Learning to Map between Structured Representations of Data, in Computer Science &
Engineering. 2002, University of Washington: Seattle, Washington. 133
17. Noy, N.F. and D.L. McGuinness, Ontology Development 101: A Guide to Creating Your
First Ontology', R. KSL-01-05 and SMI-2001-0880, Editors. 2001, Stanford Knowledge
Systems Laboratory Technical.: Stanford, Ca. 25
18. Giunchiglia, F. and P. Shvaiko, Semantic Matching. 2003, University of Trento, Depart-
ment of information and Communication Technology. 16
19. Ding, L., P. Kolari, Z. Ding, S. Avancha, T. Finin, and A. Joshi, Using Ontology in the
Semantic Web. 2004: Department of Computer Science and Electrical Engineer. Univer-
sity of Maryland. Baltimore, MD. 34
20. Rahm, E. and P.A. Bernstein, A survey of approaches to automatic schema matching. The
VLDB Journal. (2001). 10: 334-350
21. International Alliance for Interoperability (IAI). Industry Foundation Classes (IFC). Ac-
cessed in April, (2006). Last Update (2005)
22. OmniClass Construction Classification System. web. Accessed in July 4. Last Update
23. Amor, R. Integrating Construction Information: An Old Challenge Made New. In: Con-
struction Information Technology 2000. Reykjavik, Iceland: International Council for
Building Research Studies and Documentation. (2000) 11-20
24. Zamanian, M.K. and J.H. Pittman, A software industry perspective on AEC information
models for distributed collaboration. Automation in Construction. (1999). 8(3): 237 - 248
25. Sivashanmugam, K., J.A. Miller, A.P. Sheth, and K. Verma, Framework for Semantic Web
Process Composition, in Large Scale Distributed Information Systems. 2003, Computer Sci-
ence Department. The University of Georgia: Athens GA. 42
26. Sowa, J.F. Knowledge Representation: Logical, Philosophical, and Computational Founda-
tions. 1st. ed, ed. K.R. Theory. Pacific Grove, CA: Brooks Cole Publishing Co. 594 (1999)
27. World Wide Web Consortium (W3C). Web Ontology Language (OWL). Accessed in
April, (2006). Last Update (2004)
28. Emmeche, C. Causal processes, semiosis, and consciousness, in Process Theories: Cross
disciplinary Studies in Dynamic Categories., J. Seibt, (eds.). Dordrecht, Kluwer. (2004)
313 - 336
29. Luger, G.F. Artificial intelligence : structures and strategies for complex problem solving.
Fourth ed. Harlow, England: Pearson Education Limited. 856 (2002)
30. Guarino, N. Understanding, building and using ontologies. International Journal Human-
Computer Studies. (1997). 46: 293 - 310
31. Guarino, N. Formal Ontology and Information Systems. In: FOIS’98. Trento, Italy: IOS
Press. (1998) 12
32. Gruber, T.R. A Translation Approach to Portable Ontology Specification. Knowledge Ac-
quisition. (1993). 5(2): 199 - 220
GPS and 3DOF Tracking for Georeferenced
Registration of Construction Graphics in Outdoor
Augmented Reality
Abstract. This paper describes research that investigated the application of the
Global Positioning System (GPS) and 3DOF angular tracking to address the
registration problem in visualization of construction graphics in outdoor Aug-
mented Reality (AR) environments. AR is the overlaying of virtual images and
computer-generated information over scenes of the real world so that the user’s
resulting view is enhanced or augmented beyond the normal experience. One of
the basic issues in AR is the registration problem. Objects in the real world and
superimposed virtual objects must be properly aligned with respect to each
other, or the illusion that the two coexist in augmented space is compromised.
In the presented research, the global position and the 3D orientation of the
user’s viewpoint (i.e. longitude, latitude, altitude, heading, pitch, and roll) are
tracked, and this information is reconciled with the known global position and
orientation of superimposed CAD objects. The result is an augmented outdoor
environment where superimposed CAD objects stay fixed to their real world lo-
cations as the user moves freely on a construction site. The algorithms are im-
plemented in a prototype platform called UM-AR-GPS-ROVER that is capable
of interactively placing 3D CAD models at any desired location in an outdoor
augmented space.
1 Introduction
Augmented Reality (AR) is the overlaying of virtual images and computer-generated
information over the scenes of the real world so that the resulting view is enhanced or
augmented beyond the normal experience. In other words, AR allows users to see and
navigate in the real world, with virtual objects superimposed on or blended with their
view. Virtual Reality (VR), on the other hand deals with only computer-generated
models in a totally synthetic environment in which the surrounding real world is not
contributing to the simulation process.
Figure 1 presents a snapshot of an AR-based simulation for an airport terminal extension project. While the existing terminals and ongoing airport operations serve as the real background, CAD models of the crane and the new steel frame, which are the only simulation objects under study, are superimposed over this view.
Fig. 1. AR View of a Virtual Crane Erecting a Virtual Steel Frame on a Real Jobsite
AR can be classified into two categories: indoor and outdoor. In indoor AR, the
user takes advantage of a prepared environment in which movements are usually
restricted to a finite space. For a domain such as construction, however, indoor AR
has limited applications because most construction activities are performed in out-
door, unprepared environments (e.g. heavy construction, roads, bridges, dams, build-
ings, etc.).
The main requirement of outdoor AR is the need to accurately track the user’s viewpoint and to respond correctly to variations in the user’s movement and in environmental characteristics (e.g. irregular terrain, lighting conditions, etc.). The AR system
must be capable of generating an accurate representation of augmented space in real
time so that the user experiences a world in which virtual objects stay fixed to their
intended location and seamlessly blend with real entities (Azuma et al. 1997).
three-dimensional orientation, can correctly register virtual CAD objects in the user’s
viewing frustum. The CAD objects are drawn inside a standard OpenGL perspective
viewing frustum. The OpenGL viewing frustum is reconciled with the truncated view-
ing pyramid of the video camera that captures the real world view. Accurate, real time
alignment (registration) of the two viewing frustums (real and virtual) leads to a real-
istic augmented view where both real and virtual objects coexist. The AR platform
transmits images of this augmented environment to the user’s display.
The most important requirement for realizing an AR scene in which virtual models
appear to coexist with objects in the real environment is real-time knowledge of the
relationship between the models, real world objects, and the video input device
(Barfield and Caudell 2001). Registration in AR means accurate overlapping of the
real and virtual object coordinate frames. Once accurate registration is achieved and
maintained, CAD models placed in the augmented space are correctly located and
oriented in the real world regardless of where in the augmented space they are viewed
from. Registration of virtual objects in the real environment requires accurate tracking
of the user’s viewpoint position and orientation.
Now, considering the fact that the CAD objects are placed in a perspective view,
transformation matrices can be effectively used to manipulate them. In order to do
that, each object is first subjected to a translation matrix by which it is translated into
the depth of view by an amount equal to the distance between the two points (R).
Then a rotation matrix is applied to the object by which it is rotated about the Y axis
(vertical axis) by an amount α. These two transformations update the virtual object’s
location based on the user’s last incremental move (Behzadan and Kamat 2005).
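These two incremental transformations can be sketched with homogeneous 4×4 matrices. The numpy fragment below follows an OpenGL-style convention (viewing direction along the negative Z axis) and is an illustrative assumption rather than the prototype's actual code.

import numpy as np

def translation_into_depth(R):
    """Translate a virtual object into the depth of view by distance R."""
    T = np.eye(4)
    T[2, 3] = -R          # negative Z is "into the screen" in the assumed convention
    return T

def rotation_about_y(alpha_deg):
    """Rotate about the vertical (Y) axis by alpha degrees."""
    a = np.radians(alpha_deg)
    Rot = np.eye(4)
    Rot[0, 0], Rot[0, 2] = np.cos(a), np.sin(a)
    Rot[2, 0], Rot[2, 2] = -np.sin(a), np.cos(a)
    return Rot

# incremental update of the object's transform for one user move (invented values)
object_transform = rotation_about_y(alpha_deg=12.0) @ translation_into_depth(R=35.0)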
In Figure 3, the basic assumption in calculating the horizontal displacement (R)
and angle (α) is that both the user and the CAD object have the same elevation (alti-
tude). Thus, in case the object is located at a different elevation, further adjustments
are needed requiring additional computation steps. In Figure 3, for example, the ob-
ject has a lower altitude than the user. In this case, the relative pitch angle between the
user and the object (β) must be calculated using properties of triangle USO.
The virtual object is then rotated about the X axis by an amount equal to the calcu-
lated angle β. Referring to Figure 3, note that by doing this the object is being rotated
along the SO’ curve. For simplicity, a good approximation is to translate the object along the chord SO, in which case the final position of the virtual model ends up being the point O instead of O’. A final adjustment may need to be made to the object’s initial side roll angle to represent the possible ground slope (equal to β in Figure 3). In the present work, a pure rotation equal to the slope is applied to the object around its local X axis so that it lies completely on the ground. In other words, the ground plane is assumed to be uniform and parallel to the line segment UO’.
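The pitch adjustment can be sketched in the same style; the numeric values below are invented for illustration and only demonstrate the geometry described in the text.

import numpy as np

def relative_pitch(horizontal_distance, altitude_difference):
    """Relative pitch angle beta between user and object (radians)."""
    return np.arctan2(altitude_difference, horizontal_distance)

def rotation_about_x(beta):
    """Rotation matrix about the X axis by angle beta (radians)."""
    Rot = np.eye(4)
    Rot[1, 1], Rot[1, 2] = np.cos(beta), -np.sin(beta)
    Rot[2, 1], Rot[2, 2] = np.sin(beta), np.cos(beta)
    return Rot

# object 3 m below the user at a horizontal distance of 35 m (assumed values)
beta = relative_pitch(horizontal_distance=35.0, altitude_difference=-3.0)
pitch_adjustment = rotation_about_x(beta)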
Using the translation along SO instead of rotation along SO’ may cause a minute
positional error that can be safely neglected for the purposes of this study without
experiencing any adverse visual artifacts. However, for wider range applications (e.g.
objects that are to be placed far away from the user), the length of OO’ can be large in
which case, instead of translating the object from S to O, a rotation equal to β must be
applied to transform the object from S to O’.
Thus, when a mobile AR user turns his/her head, the relative change in the view-
point orientation is obtained using the three pieces of data coming from the 3-DOF
orientation tracker. Applying the reverse transformations in the amount of the com-
puted angles to the virtual objects in the form of a rotation matrix leads to a final
augmented view wherein the objects’ locations are unaffected by the user’s head
movement. In addition, in cases where the user both moves and rotates the head at the
same time, all the computation steps in the described procedure are continually re-
peated (i.e. distance and relative yaw angle calculation, and pitch adjustment) so that
the final composite AR view remains unaffected as a user moves freely.
5 Validation
To validate the research results and registration algorithms, the UM-AR-GPS-ROVER platform was used to place several static and dynamic 3D CAD models at known locations in outdoor augmented space. In particular, the prototype was successfully tested in many outdoor locations on the University of Michigan north campus using several 3D construction models (e.g. buildings, structural frames, pieces of equipment, etc.). Figure 5 presents two such snapshots in outdoor AR.
Fig. 5. Structural Steel Frames Registered in Outdoor AR (Models Courtesy of Mr. Robert R. Lipman, NIST)
the hidden portions of virtual models should not be visible in the composite AR out-
put as shown in the figure. That is, however, not currently the case because UM-AR-
GPS-ROVER draws the pixels of all virtual models after painting the captured video
image as a background.
Fig. 6. Dynamic Occlusion Problem in Outdoor Augmented Reality (CAD Models Courtesy of Mr. Robert R. Lipman, NIST)
One of the solutions being explored for this issue uses a combination of rapid geometric modeling of the surrounding environment or other depth-sensing techniques (e.g. stereo cameras) and utilizes the graphics processor’s z-buffer to draw the appropriate set of pixels in each composite AR frame. In other words, if the depth of the real objects is greater than the depth of the virtual object(s) for a given view, the real objects do not occlude any virtual objects. In the opposite set of circumstances, appropriate corrections should be made to the user’s view to take into account the existence of an occluding real object.
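The per-pixel depth comparison described here can be sketched as follows; this is an illustrative numpy fragment in which the depth images are assumed to be given, not the prototype's implementation.

import numpy as np

def composite(video_rgb, virtual_rgb, real_depth, virtual_depth):
    """Per-pixel occlusion: keep the video pixel wherever the real object is closer."""
    real_closer = real_depth < virtual_depth            # boolean mask, one entry per pixel
    return np.where(real_closer[..., None], video_rgb, virtual_rgb)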
7 Conclusions
The primary advantage of graphical simulation in AR compared to that in VR is the
significant reduction in the amount of effort required for CAD model engineering. In
order for AR graphical simulations to be realistic and convincing, real objects and
augmented virtual models must be properly aligned relative to each other. Without
accurate registration, the illusion that the two coexist in AR space is compromised.
Traditional tracking systems used for AR registration are intended for use in con-
trolled indoor spaces and are unsuitable for unprepared outdoor environments such as
those found on typical construction sites. In order to address this issue in the pre-
sented research, the global outdoor position and 3D orientation of the user’s view-
point are tracked using a GPS sensor and a 3DOF orientation sensor. The tracked
information is reconciled with the known global position and orientation of CAD
objects to be overlaid on the user’s view.
Based on this computation, the relative translation and axial rotations between the
user’s eyes and the CAD objects are calculated at each frame during visualization.
The relative geometric transformations are then applied to the CAD objects to gener-
ate an augmented outdoor environment where superimposed CAD objects stay fixed
to their real world locations as the user moves about freely on a construction site.
The designed algorithms have been validated using several 3D construction models in a number of outdoor locations.
Acknowledgements
The presented work has been supported by the National Science Foundation (NSF)
through grant CMS-0448762. The authors gratefully acknowledge NSF’s support.
Any opinions, findings, conclusions, and recommendations expressed in this paper are
those of the authors and do not necessarily reflect the views of the NSF.
References
1. Azuma, R. (1997). “A Survey of Augmented Reality.” Teleoperators and Virtual Environ-
ments, 6(4), 355–385.
2. Barfield, W., and Caudell, T. [editors] (2001). Fundamentals of Wearable Computers and
Augmented Reality, Lawrence Erlbaum Associates, Mahwah, NJ.
3. Behzadan, A. H., and Kamat, V. R. (2005). "Visualization of Construction Graphics in Out-
door Augmented Reality", Proceedings of the 2005 Winter Simulation Conference, Institute
of Electrical and Electronics Engineers (IEEE), Piscataway, NJ.
4. Dodson, A. H., Roberts, G. W., and Ogundipe, O. (2002). “Construction plant control using
RTK GPS.” FIG XXII International Congress, Washington, D.C.
5. Rogers, S., Langley, P., and Wilson, C. (1999). “Mining GPS data to augment road mod-
els.” In Proceedings of the 5th International Conference on Knowledge Discovery and Data
Mining, 104-113, San Diego, CA.
6. Roberts, G. W., Evans, A., Dodson, A., Denby, B., Cooper, S., and Hollands, R. (2002).
“The use of Augmented Reality, GPS, and INS for subsurface data visualization.” FIG XXII
International Congress, Washington, D.C.
7. Vincenty, T. (1975). “Direct and inverse solutions of geodesics on the ellipsoid with appli-
cation of nested equations.” Survey Review, 176, 88-93.
Operative Models for the Introduction of Additional
Semantics into the Cooperative Planning Process
1 Introduction
In the computer-supported planning process, building information is described by instances of evaluated models. Due to the iterative and distributed nature of the planning process, several versions mi of the building instance are created and exchanged between the actors involved. According to law, these versions have to be archived over long periods of time.
2 State-of-the-Art
According to the state of the art, software applications used in the building planning process create and modify structured object sets called application-specific model instances.
The standardization of evaluated object models is one attempt to support the distributed planning process. An example of this approach is the introduction of the Industry Foundation Classes (IFC) and the physical file exchange format STEP. In practice, however, cooperation on the basis of evaluated object models is an error-prone process. Firstly, a common model that covers all the disciplines tends to be too complex. If it existed, an application would either have to implement the standardized model, and consequently would have to be newly developed, or the standardized models and the application-specific models would have to be transformed into one another. The latter is characterized by a cumulative information loss, because standardized models and application models cannot be mapped completely onto one another [1].
Information exchange, long-term compulsory archiving and version management have considerable shortcomings when applied to traditional evaluated models. The reason is that the private object attributes have to be processed by tools which are not aware of the semantics of this information. Moreover, evaluated
object models can be very complex in the case of deeply nested object relationships.
The interpretation of such models is a difficult task.
Furthermore, DMSs cannot support the merging of building instance versions since
the semantics of the document are generally not known [3].
Like DMSs, model servers are used for sharing building information in the distributed planning process. A shared model server running on the Internet manages object model instances in database systems. An example for IFC models is the IFC Model Server Development Project [4]. Using the SOAP communication service, IFC instances are exchanged completely or partially between client applications and the server. However, available servers do not yet support versions of IFC instances, and the problems already mentioned remain unsolved. Recently, an approach to manage IFC instance versions was described in [7].
For the reasons addressed above, standardized object models in conjunction with DMSs and model servers have considerable shortcomings in the iterative and distributed planning process of buildings.
3 Solution Approach
Using the Scheme language of the solid modeler ACIS [5], the operations applied for defining an unevaluated CSG model instance can be formulated in just four lines of code:

1: (define w1 (solid:block 0 0 0 10 0.5 5))   ; block w1, defined by two corner points
2: (define w2 (solid:block 8 3 0 9 0 3.5))    ; block w2
3: (define d  (solid:block 2 0 0 4 0.5 3))    ; block d
4: (solid:subtract (solid:unite w1 w2) d)     ; unite w1 and w2, then subtract d
In contrast, using the BRep modeling approach the evaluated description of the
building’s solid results in a SAT file (Standard ACIS Text) containing almost 300
lines of code:
1: 700 0 1 0
2: 22 ACIS/Scheme AIDE - 7.0 11 ACIS 7.0 NT 24 Thu May 27 09:50:57 2004
3: 1 9.9999999999999995e-007 1e-010
4: body $-1 -1 $-1 $1 $-1 $2 #
5: lump $-1 -1 $-1 $-1 $3 $0 #
6: transform $-1 -1 1 0 0 0 1 0 0 0 1 5 0.25 2.5 1 no_rotate no_reflect no_shear #
7: shell $-1 -1 $-1 $-1 $-1 $4 $-1 $1 #
8: face $5 -1 $-1 $6 $7 $3 $-1 $8 reversed single #
...
285: straight-curve $-1 -1 $-1 4 -1.75 -2.5 0 -1 0 II #
286: point $-1 -1 $-1 4 -0.25 -2.5 #
287: point $-1 -1 $-1 4 -3.25 -2.5 #
288: End-of-ACIS-data
This example demonstrates that an operative description (1) is compact and (2) can be
interpreted by users – not only by applications.
A change δij transforms an instance version mi into a new version mj and can be represented as an edge (mi, mj) in the version graph. The version graph is a
rooted tree with mx as its root node (Fig. 4). It should be noted that in this approach
the tree structure is preserved, even in the case of merges. In contrast to a version
graph (Fig. 1), this structure is denoted as a delta tree.
An instance version mj is a node in the delta tree and can be formulated as a root
path starting at the root node mx. A path is denoted either by a sequence of n edges or
a sequence of n+1 nodes.
$\mathrm{rootpath}(m_j) := \delta_{x0}, \delta_{01}, \ldots, \delta_{ij} = m_x, m_0, m_1, \ldots, m_i, m_j$ .   (4)
For example, the version m2 of the model instance in Figure 4 is described by the root path rootpath(m2) = δx0, δ01, δ12 = mx, m0, m1, m2.
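To make the delta tree and the root path formulation concrete, a minimal Python sketch (not taken from the paper; the class name Version and the string labels for changes are purely illustrative) can store, for each version, its parent and the change that produced it, and reconstruct a version as the sequence of changes from the root:

class Version:
    """One node of the delta tree."""
    def __init__(self, name, parent=None, delta=None):
        self.name = name        # e.g. "m0", "m1", ...
        self.parent = parent    # predecessor version in the delta tree
        self.delta = delta      # change d_ij that produced this version

    def root_path(self):
        """Sequence of changes from the root version m_x to this version."""
        path, node = [], self
        while node.parent is not None:
            path.append(node.delta)
            node = node.parent
        return list(reversed(path))

# Example corresponding to the path of m2 in Fig. 4: m_x -> m0 -> m1 -> m2
mx = Version("mx")
m0 = Version("m0", mx, "d_x0")
m1 = Version("m1", m0, "d_01")
m2 = Version("m2", m1, "d_12")
print(m2.root_path())   # ['d_x0', 'd_01', 'd_12']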
A prerequisite for the use of operative models in the planning process is the definition
of standard operations. Once defined, the standard operations can be used to exchange
the building information.
Additionally, existing applications need to be slightly adapted. While the evaluated application building model remains unchanged, the application itself has to be extended by journaling functionality. A journaling mechanism is responsible for recognizing changes applied to the model instance and for describing these changes in a standardized language. The operations that advance the instance version mi to mj are serialized in a journal file as a change δij.
A sequence of journal files represents a version of a building instance. Consequently, building information is exchanged by means of journal files. Contrary to the traditional data exchange of evaluated models, the proposed approach has the advantage of a non-accumulating information loss: instead of sequential information transformations, the exchanged changes δij are only interpreted and thus remain unchanged [1]. The exchanged information not only describes the result but, to some extent, also the intent of the design. Operative modeling is applicable in both a versioned and an unversioned environment. Besides the journaling mechanism, an application has to be extended by an interpreter capable of applying the standard operations of a journal file.
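A minimal sketch of this journaling idea, written in Python with entirely hypothetical operation names and a JSON serialization chosen only for illustration, records the standard operations of one editing session as a change, writes them to a journal file, and lets a receiving application replay the journal through an interpreter that maps each standard operation onto its own native model:

import json

# One change (d_ij): the standard operations recorded by the journaling mechanism.
delta = [
    {"op": "create_wall",    "id": "w1", "params": [0, 0, 0, 10, 0.5, 5]},
    {"op": "create_opening", "id": "d",  "in": "w1", "params": [2, 0, 0, 4, 0.5, 3]},
]

def write_journal(path, operations):
    """Serialize a change into a journal file."""
    with open(path, "w") as f:
        json.dump(operations, f, indent=2)

def apply_journal(path, model, handlers):
    """Interpret a journal file and apply each standard operation to the
    native model instance of the receiving application."""
    with open(path) as f:
        for op in json.load(f):
            handlers[op["op"]](model, op)
    return model

# The receiving application maps each standard operation onto its own model.
handlers = {
    "create_wall":    lambda m, op: m.setdefault("walls", []).append(op),
    "create_opening": lambda m, op: m.setdefault("openings", []).append(op),
}

write_journal("delta_x0.json", delta)
native_model = apply_journal("delta_x0.json", {}, handlers)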
Figure 5 illustrates the distributed workflow between planner A and planner B on the basis of operative building models. It is assumed that the planners use different planning applications. Planner A starts designing and creates his native model instance M0A. The operations issued are automatically journaled as the change δx0. Subsequently, planner B generates his native model instance M0B by interpreting and applying the change δx0. Then, both planner A and planner B edit the building instance m0 concurrently. These changes are automatically recorded as δ01, δ12 and δ03, respectively.
As described in [6], unevaluated operative models have advantages in the context
of comparing and merging versions. The reason is that an operative instance explicitly
stores the semantics of differences between versions. This results in advantageous diff
and merge algorithms that operate on the delta tree. For example, merging versions of
the building instance means to join the respective sub-paths into a new one on the
basis of the existing changes [6]. Specifically, merging the variants m2 and m3 by
planner A results in the version m4 that is described as the sequence of changes $\delta_{x0}, \delta_{04}^{2,3}$ in Figure 5.
4 Conclusions
Evaluated standardized models like the IFC have considerable shortcomings in the
context of the cooperative planning process. This paper has presented a solution approach based on operative models: building information is exchanged as journaled changes composed of standard operations, which avoids an accumulating information loss and supports version management by means of diff and merge on the delta tree.
Acknowledgement
The authors gratefully acknowledge the support of this project by the German Re-
search Foundation (DFG).
References
1. Firmenich, B.: A Novel Modelling Approach for the Exchange of CAD Information in Civil Engineering. In: Proceedings of the 5th ECPPM, A.A. Balkema, Leiden, London, New York (2004) 77 pp.
2. Beer, D. G., Firmenich, B., Beucke, K.: A System Architecture for Net-distributed Applications in Civil Engineering. In: Proceedings of the Joint International Conference on Computing and Decision Making in Civil and Building Engineering, Montreal, Canada (2006), accepted paper
3. Firmenich, B., Koch, C., Richter, T., Beer, D. G.: Versioning Structured Object Sets Using Text-based Version Control Systems. In: Proceedings of the 22nd CIB-W78, Institute of Construction Informatics, Dresden (2005) 105 pp.
4. IFC Model Server Development Project, http://cic.vtt.fi/projects/ifcsvr/ (accessed 2006-02-28)
5. Corney, J., Lim, T.: 3D Modeling with ACIS. Saxe-Coburg, Stirling (2001)
6. Koch, C., Firmenich, B.: A Novel Diff and Merge Approach on the Basis of Operative Models. In: Proceedings of the Joint International Conference on Computing and Decision Making in Civil and Building Engineering, Montreal, Canada (2006), accepted paper
7. Nour, M., Firmenich, B., Richter, T., Koch, C.: A Versioned IFC Database for Multi-disciplinary Synchronous Cooperation. In: Proceedings of the Joint International Conference on Computing and Decision Making in Civil and Building Engineering, Montreal, Canada (2006), accepted paper
A Decentralized Trust Model to Reduce Information
Unreliability in Complex Disaster Relief Operations
1 Introduction
Extreme Events (XEs), i.e., rare and significant occurrences in terms of their impacts, effects or outcomes, have become the main focus of many research studies, especially after the “9/11” terrorist attack [1] and the recent Asian tsunami disaster [2].
XEs include natural disasters, such as earthquakes, hurricanes and floods, as well as
intentional disasters, such as fires or terrorist attacks. Specifically, the way urban
areas respond to XEs has been reported to be a vital challenge confronting society
today [3, 4, 5, 6]. The current response system to such emergent occurrences has
proved to be inadequate and needs to be improved [3, 4, 7, 8].
During XEs, a large number of organizations that do not work together on a regular basis are forced into different kinds of interactions. These interactions lead to the formation of so-called hastily formed networks [30]. For example, a Canadian research team studying a massive fire near Nanticoke, Canada, identified 346 organizations that were on site, i.e., at the scene of the fire [9]. If the efforts of the different organizations involved in complex disaster scenarios are not well coordinated, the actions taken by one organization can generate problems for others [10]. Therefore, the members of different organizations are expected to collaborate and coordinate their response efforts as an integrated team, in which they share common goals but have distinct roles determined by their agency, rank and location in relation to the disaster.
Civil engineers should be considered key participants in this integrated response group. Extreme events in urban areas are inevitably followed by structural and functional damage to critical physical infrastructure. According to FEMA [11], complete knowledge and accurate structural and hazard information about the incident site may not be readily available in a disaster relief effort. Yet, because of civil engineers’ knowledge and experience in structural analysis, their role needs to be extended beyond infrastructure life-cycle management and sustainability to also include first response to XEs, particularly for the engineers and contractors involved in the original design and construction of the critical physical infrastructure.
The diversity of the groups forming an integrated multi-agent first response team that includes civil engineers, as well as the lack of experience from pre-disaster interactions, inhibits collaboration and limits the effectiveness of the response team. Although the means for communication exist, first responders are hesitant to communicate and interact with others outside their own organization [12]. The different groups of responders involved in relief operations (police officers, firefighters, medical personnel, civil engineers, city hall personnel, and public and private agencies, among others [13]) bear distinct roles in the disaster environment and may have different backgrounds of skills, interests, knowledge and experience, as well as different policies and protocols on how to organize and prioritize activities. Moreover, the inaccuracy of communicated information increases the hesitance to collaborate. According to Heide [12], initial actions in
disaster relief efforts are undertaken based on vague and inaccurate information.
Therefore, every interaction between first responders has an inherent risk, because of
uncertainty. Yet, first responders, including civil engineers, must be given the ability
to assess the trustworthiness of others in order to reduce uncertainty.
However, trust establishment in first response teams is not trivial. Traditional
means of trust management by centralized approaches would not work in large-scale
disaster scenarios. Quarantelli [9] argues that disasters have implications for many
different segments of social life and the community, each with their own preexisting
patterns of authority and each with the necessity for simultaneous action and autono-
mous decision-making, and, therefore, it is impossible to create a centralized authority
system. Disasters being considered in this paper may require a large number of human
resources being deployed into the disaster zone from a municipal, state, federal or
even international level, responding in an on-demand manner. A strictly centralized and hierarchical model, such as the command and control model currently applied in response efforts, would be unwieldy in such an environment, as human and artificial resources outgrow the capabilities of a possibly existing central agency. Prieto [7], for example, points out that the current command and control model has shown limited effectiveness in complex disaster contexts. In addition, Shuster [14] argues that centralized approaches are inadequate for systems comprising large numbers of individuals or elements and points out the need for a complex-systems approach as the number of resources increases. For the above
reasons, decentralized mechanisms for establishing trust and providing reliable com-
munication are needed to supplement any centralized approaches, currently supported
by the response system, in order to increase the effectiveness of the response process,
even in cases of large-scale disaster contexts.
Furthermore, history-based approaches to engineer trust in a multi-agent first re-
sponse group would also fail. As shown earlier, organizations involved in complex
relief operations do not have a shared history and they may interact for the first time
during the disaster. It is clear that first responders cannot rely on their past experience
to assess the reliability of others. Under these conditions, reputation through word-of-
mouth could be used.
In this paper, we propose a decentralized reputation-based trust model to establish
trust and provide reliability in communication between civil engineers and other first
responders involved in complex disaster relief efforts. Our model uses a decentralized
recommendation scheme to allow participants of a first response network to evaluate
the trustworthiness of other participants. This scheme piggybacks on a membership maintenance protocol that is assumed to run on the system. To efficiently support reliable information dissemination, we build a reputation-based, nature-inspired activation model on top of the recommendation scheme. To validate our method, we tested it through software simulations and by conducting a search and rescue exercise involving civil engineers and firefighters.
The remainder of this paper is organized as follows: the following section describes our system model. The Nature-Inspired Systems section presents some background on systems inspired by biological paradigms. The Related Work section reviews previous work in the trust management area. The Decentralized Trust Model section provides details and analysis of the proposed model, and the next section presents how information dissemination is provided on the basis of the trust model.
Sections Simulation Results and System Evaluation provide results from the valida-
tion of our model based on simulations and on a search and rescue exercise respec-
tively. To conclude, the last section of this paper summarizes contributions and
describes future work.
2 System Model
In the previous section, we argued that although the communication capabilities are
available, first responders from diverse groups do not interact because of the lack of
trust. In this section, we present the communication model on which we have based
our trust model.
Several infrastructure-based initiatives have been undertaken to effectively support communication between the participants involved in disaster relief operations [15, 16]. However, the disasters studied in this paper have a low probability of occurrence. Furthermore, the time and location of some of these events are difficult, if not impossible, to predict. Therefore, maintaining an infrastructure with constant availability, capable of supporting relief efforts, solely in anticipation of a disaster would be prohibitively expensive. Instead, a communication platform for relief operations needs to support interaction in a dynamic, on-demand manner between participants who may randomly enter or leave the disaster site at any time. An ad-hoc network, i.e. a network whose functionality does not rely on any existing infrastructure, is the ideal candidate to meet this challenge since it allows network devices to connect dynamically and be part of the network only for the duration of a communication session. Infrastructure networks may provide backbone connectivity between actors and remote analysts, scientists or task coordinators, if they are available.
In this paper, we assume robust communication through mobile ad-hoc networks (MANETs), i.e. peer-to-peer, infrastructure-less communication networks formed by short-range wireless-enabled mobile devices. Each first responder equipped with an IT-based mobile device, as well as any IT component that may perform independently of a physical actor in the first response system, plays the role of a node in the mobile ad-hoc network. Nodes participate in a dynamic network which lacks any underlying infrastructure. Communication takes place hop-by-hop; in other words, each node acts as a wireless router. Nodes may route packets through neighbors (i.e. nodes with which they have direct communication) to reach an intended destination.
This allows the network to accommodate high mobility and frequent topology
changes. Any appropriate communication protocol can be used to provide such wire-
less communication capabilities among first responders and civil engineers. We as-
sume IEEE 802.11b/802.11g and AODV (Ad Hoc On-Demand Distance Vector)
since they are widely used standard protocols. AODV is a routing protocol for ad-hoc
mobile networks with large numbers of mobile nodes. The protocol's algorithm cre-
ates routes between nodes only when the routes are requested by the source nodes,
giving the network the flexibility to allow nodes to enter and leave the network at
will. Routes remain active only as long as data packets are traveling along the paths
from the source to the destination. The communication protocol provides connec-
tivity, data transmission and routing among the mobile devices.
3 Nature-Inspired Systems
Several computer system design approaches have taken inspiration from the collective behavior of social animals, particularly insects. Problems solved by these insect societies appear to have counterparts in both engineering and computer science. The computational and behavioral metaphor for solving distributed problems that takes its inspiration from the biological paradigms provided by social insects (e.g., ants) is usually referred to as Swarm Intelligence (SI).
Insects’ societies are organized in a completely decentralized manner [17], similar
to the distributed systems used in the area of computer science. For example, in the
case of bees, the queen is not in control of the whole colony, but only of a small part
of the nest [18]. Moreover, insects act on local information and make single decisions
based on simple rules defined locally. This is similar to decentralized computer net-
works, where no global knowledge of the network is assumed and operation is based
on local information provided by a single node, and its neighbors. In addition, a net-
work of positive and negative feedbacks is built between insects. This organization
scheme makes such social systems very robust [17]; in the same way decentralized
communication protocols make distributed systems robust.
Complexity in social insects emerges at the level of a group. Simple behaviors of
individuals interact in a manner that produces a range of interesting complex behav-
iors. This is usually referred to as self-organization of insects’ societies. At a global
level, structure or order appears because of interaction between lower-level entities.
However, the behavioral rules or the rules for interaction among entities are imple-
mented on a local basis. Self-organization helps social insects to easily adapt to
changes in their environment [17]. Adaptability and self-organization are needed in
some cases of distributed systems as well. For example, if the load of the system
shifts rapidly from one region to another, as in the case of an adaptive grid computing
environment when the computing needs of a node suddenly increase, then the system
needs to easily adapt to changes in the computing environment, as social insects do by
self-organization. Social insects’ behavior may be used to model many problems of
distributed computing due to the similarities found in the way insects’ colonies and
distributed systems are organized.
In this paper, the behavior of ants, and specifically ants’ division of labor process,
is examined. Under threatening situations ants secrete a specific pheromone to inform
other ants. Not all the ants react in the same way to the levels of pheromone perceived
(i.e., division of labor). The heterogeneous response to alarm pheromone avoids cas-
cading effects. In general terms, it is a binary decision problem which can be
described by the following rule: an ant will react to an alarm pheromone (adopting an “alarm” behavioral pattern and also secreting pheromone) based on the amount of pheromone present in the area where the ant is moving and on its own threshold level.
Models inspired by entomology and particularly by ants, have been widely used to
address optimization and routing problems in computer networks. Moreover, ant-based
techniques have been used for designing distributed applications that work without the
use of a central authority, similar to the decentralized trust management scheme pre-
sented in this paper. Agassounon [19] presents a swarm-based system for distributed
information dissemination and retrieval. The proposed model uses mobile, autonomous
agents that interact with each other in order to provide a distributed complex process.
When agents are used as sensors for counting small objects, it is found that cooperation
decreases the standard deviation of the sensed data with the decrease proportional to
the number of collaborating agents. In other words, the greater the number of agents
that cooperate to sense data, the more narrowly data is distributed around the average.
It is found that the sensing information retrieved from a single collaborating agent has
as much precision as that retrieved from all agents individually, when no cooperative
action takes place and information is processed afterwards [19].
Furthermore, Yingying et al. [20] have proposed a multi-robot cooperation algo-
rithm that can organize different numbers of robots to cooperate on a task according
to the task difficulty. The algorithm is based on the labor division process of the ant
society. The main idea is that more difficult tasks possess a higher pheromone amount
than easier ones. Robots are attracted by pheromone and, as a result, more robots are
engaged in difficult tasks. Similar to this scheme, higher rated first responders in our
trust model possess higher stimulus, attracting more actors to adopt information pro-
vided by them.
Ants’ division of labor belongs to a class of processes that can be analyzed using
spreading activation models. In spreading activation models, members of the network
are represented as nodes in a mathematical graph and relationships between members
are represented as graph edges, connecting the related nodes. Nodes that are con-
nected with an edge in the mathematical graph are called neighbors. Information
spreading across the network is modeled as a Boolean state. A node has either
adopted the information or it has not. A node becomes activated, i.e. adopts the in-
formation, using some activation function, which is typically based on some threshold
[21]. In a sense, a node decides whether or not to adopt the information depending on
the trend followed by its neighbors. If a node’s neighbors appear to get activated, the
node itself gets also activated. Ants use a threshold-based activation function to de-
cide whether or not to adopt an alarm based on the trend followed by other ants,
which is represented by the amount of pheromone associated with the alarm. As more
ants adopt the alarm, the amount of pheromone increases.
4 Related Work
Trust is subjective. It can be viewed as the trustor’s perception of the trustee’s reli-
ability, or else as the subjective probability by which the trustor relies on the trustee.
Trust may be based on various factors, including personal experience and reputa-
tion. In the absence of personal experience, as in the case of diverse first response
groups, reputation systems can be used to form trust. In such systems, recommenda-
tions in the form of ratings are used to provide subjective feedback about the reliability
of other nodes. Reputation is considered here as a collective measure of trustworthiness
based on ratings. If a complete set of ratings is used to measure reputation, i.e., if rat-
ings from all nodes in the network are used, then reputation is objective. However, this
is not usually the case in distributed systems where memory constraints do not allow
the maintenance of a complete set of ratings at each node. Therefore, in the sense of
distributed systems reputation is rather subjective.
Hung [22] also argues for the need to find peripheral, i.e., word-of-mouth based,
mechanisms, like recommendation systems, to assess trustworthiness in the absence of
past experience. He mentions that when people first meet, the lack of personal knowl-
edge about the interacting parties hinders their ability to engage in deliberate assess-
ment, even when they have high motivation to do so. This forces people to use simple
heuristics based on the peripheral cues embedded in the interaction environment.
Recommendation (or reputation) systems have been widely used for trust estab-
lishment in distributed systems. Marti et al. [23] proposed a reputation system for ad-
hoc networks. In their system, a node monitors the transmission of a neighbor, to
make sure that the neighbor forwards others’ traffic. For example, a civil engineer that
processes information on the state of a building through a police officer checks to see
if this information is being further propagated. If the neighbor does not forward oth-
ers’ traffic, it is considered as uncooperative, and this uncooperative reputation is
propagated throughout the network. A similar approach is used in our decentralized trust model. However, we consider misbehavior not in the sense of uncooperative behavior, but rather as unreliable behavior, in the sense of providing inaccurate information.
Liu et al. [24] introduce a distributed reputation-based trust model to detect threats
and enhance the security of message routing in ad-hoc networks. The rating scheme
[24] resembles our approach in the way trust reputation is calculated by averaging on
reported ratings. However, their scheme uses discrete trust values, whereas our rating
system is based on continuous trust scores in the real interval [0,1].
CONFIDANT [25] detects malicious nodes in ad-hoc networks by means of obser-
vation and reports about non-trustworthy nodes. The ability to make direct observa-
tions is assumed and although second-hand, i.e. word-of-mouth based, observations
are used, only ratings based on first-hand, i.e. direct, observations are reported. In our system, ratings that are solely based on rumor spreading may also be taken into account, as long as their reliability has been assessed and weighted accordingly.
Many reputation systems use complex probabilistic approaches (Bayesian systems)
to compute reputation scores. Bayesian systems take binary ratings as input (i.e. posi-
tive or negative) and are based on computing reputation scores by statistical updating
of beta probability density functions [26]. As opposed to these systems, our rating
mechanism uses a simple weighted average approach to compute ratings.
Our rating mechanism bears resemblance to the approach proposed by Abdul-
Rahman and Hailes [27]. Both systems use conditional transitivity of trust, i.e. trust is
considered transitive only under certain conditions, for assessing trustworthiness by
applying two distinct trust ratings: a direct trust rating and a recommender one. How-
ever, the reputation system of Abdul-Rahman and Hailes [27] uses discrete trust values, whereas continuous rating values are applied in our system. Moreover,
Abdul-Rahman and Hailes [27] provide no insight on how ratings are stored and
communicated.
5 Decentralized Trust Model
Our decentralized trust model is characterized by the entities of the system, the representation of trust, how trust reputation is built and updated, and how the recommendations of others are integrated. We study each of these properties in the following sections.
Trust Relationships
A trust relationship exists between two nodes in the system if one of the nodes holds a belief about the other node’s trustworthiness. However, the same belief in the reverse direction need not exist at the same time; in other words, a trust relationship is unidirectional.
Two different types of relationships exist in our trust model. If node i has a belief about the reliability of node j, then there is a direct trust relationship. A direct trust relationship may rely on first-hand information based on direct observations, on second-hand information provided through recommendations, or on a combination of both. If node i has a belief about the reliability of node j to give recommendations about other nodes’ trustworthiness, then there is a recommender trust relationship. A recommender trust relationship is always based on first-hand observations.
Trust relationships can be modeled as a trust graph. Nodes are represented by ver-
tices, direct trust relationships by straight line edges and recommender trust relation-
ships by dotted line edges on the trust graph. Since trust relationships are
non-symmetrical, edges are directed. If a trust relationship of any type (direct or re-
commender) exists between two nodes in the network then the appropriate edge
(straight or dotted line respectively) appears between the two corresponding vertices
on the trust graph. An example from our disaster scenario is shown in Figure 1. The
structural engineer and the police officer are each represented by a vertex on a trust
graph. The structural engineer has a perception of trust for the police leader based on
previous interactions between these two nodes and therefore a straight line edge, di-
rected from the structural engineer to the police officer, connects their corresponding
vertices on the trust graph. Moreover, we assume that from the past interactions the
structural engineer also has knowledge on the trustworthiness of the police leader as a
recommender. For this reason, a directed dotted line edge connects the two vertices on
the trust graph that stand for the structural engineer and the police leader respectively.
Fig. 1. Example trust graph: structural engineer and police officer
Trust Transitivity
Trust Representation
Trust values in our system lie in the interval [0,1] for both direct and recommender trust relationships. Two different types of values are used, in accordance with the two types of trust relationships: the direct trust value relates to the direct trust relationship, and the recommender trust value relates to the recommender trust relationship.
The direct trust value expresses the probability θ with which node i thinks node j will be reliable. This outcome is drawn independently each time an observation is made or a recommendation about the trustworthiness of j is received from some third node. Each independent outcome is then used to update the direct trust rating of i for j. In a similar way, the recommender trust value expresses the probability θ with which node i thinks node j will provide reliable recommendations. This outcome is also drawn independently, based on direct observations, after a recommendation is received. The recommender trust value is used to update the recommender trust rating.
Trust Ratings
In our model, node i maintains two ratings about another node j. The direct trust rat-
ing represents the opinion of i about j’s reliability as an actor on the first response
network. We represent the direct trust rating that node i has about node j as a variable
Di,j. The recommender trust rating represents the opinion of i about the trustworthi-
ness of j as a recommender. We represent the recommender trust rating that node i has
about node j as a variable Ri,j.
The direct trust rating of i for j is based either on i’s observations of j’s behavior or
on recommendations received by i concerning the trustworthiness of j. A recommen-
dation is the communicated direct trust rating of some other node for j. It has the form
of a report containing a direct trust rating value for j. A combination of both direct
observations and recommendations may also be used. Initially, the direct trust rating
Di,j is set to NULL if no information exists for j, or else an initial direct trust value is
assigned in order for the trust model to be initiated.
A high direct rating for j indicates that j behaves in a reliable manner in the first re-
sponse system, while a low rating indicates misbehavior of j. Whenever i makes an
observation of j’s behavior or a recommendation is received, the direct trust rating Di,j
gets updated.
The recommender trust rating of i for j is based on the directly observed reliability
of j as a recommender in the trust model. Initially, there is no knowledge about j as a
recommender. Therefore, the recommender trust rating Ri,j is set to NULL. With re-
peated interaction, the subjective quality of a recommender’s recommendations can
be judged with increasingly greater accuracy since Ri,j is updated each time j provides
i with some recommendation on the trustworthiness of some third node.
If only one’s own experience is considered in order to form attitudes concerning some other node in the network, then the reputation system is more likely to be reliable, since it will not be vulnerable to possible false ratings provided by others. However, in that case, the potential of learning from the experience of others goes unused [26]. Moreover, as shown earlier in this paper, the initial lack of personal knowledge about the interacting parties hinders the ability to engage in deliberate assessment, even if there is high motivation to do so. Therefore, a peripheral route to attitude formation needs to be used; in other words, ratings from others also need to be taken into account. If the ratings given by others are considered, then the reputation system may be more vulnerable to false ratings, i.e. false praises or false accusations. However, since more information is available, misbehavior is detected faster. The goal is to make the trust model both robust and efficient. For this reason, direct trust ratings may be based on recommendations provided by other nodes in the system, but the recommender trust rating is used to weight each of the recommendations based on personal experience only.
A simple calculation of direct trust rating for j on node i could just take the average
trust value of node j, based on recommendations concerning j from all nodes reporting
to i as well as any possible direct observation of i for j’s behavior. In that case, a di-
rect observation is equivalent to a recommendation for j provided by i itself. How-
ever, such an approach would not make our model resistant to false recommendations,
as described earlier, since all direct trust rating reports (i.e. recommendations) would
be taken equally into account regardless of the reliability of the recommender.
Alternatively, we can infer the reliability of direct trust rating reports by using the
reporting node’s recommender trust rating as a means of determining the quality of the
report. In other words, a recommendation for j received from a reporting node k with a
high recommender trust rating Ri,k by i would be weighted more than a reported trust
rating from some other reporting node of i that has a lower recommender trust rating
when calculating direct trust rating of i for j. Direct observations in that case are given
the maximum weight of 1. In other words, we assume that for any node i, Ri,i = 1.
The direct trust rating of node i for node j will be the weighted average of all re-
membered reported direct trust ratings and/or first hand trust ratings for j that i re-
ceives or forms based on direct observations, respectively. We use the term remem-
bered because memory constraints do not allow the maintenance of a complete set of
ratings at each node. Therefore, at each point of time only the most recently received
reported ratings are remembered. If N nodes report a direct trust rating for j and are
remembered, then the direct trust rating of i for j will be
$D_{i,j} = \dfrac{\sum_{t=1}^{N} R_{i,t} \cdot D_{t,j}}{\sum_{t=1}^{N} R_{i,t}}$ .
We use 10 as the maximum value of N for validating our scheme later in this paper.
Figure 3 shows an example of how direct trust rating is calculated in the case of
our disaster scenario. At some point in time t1, the structural engineer, who has knowledge only of the reliability of the police leader as an actor in the system, wants to
know about the reliability of the firefighters’ leader in order to make a trust-based
decision on whether or not to adopt information regarding the state of a building that
is being forwarded by the firefighter. The police leader and the rescuer, who are both
neighbors of the structural engineer in the first response network and at time t1 have a
direct trust rating for the firefighter, provide recommendations to the structural engi-
neer concerning the trustworthiness of the firefighter’s leader. We assume that the
structural engineer has already received other recommendations in the past (i.e., at
some time before time t1) by both the police leader and the rescuer and, therefore, he
has, at time t1, some recommender trust rating value assigned to them.
We see that the lower direct trust rating of the police leader for the firefighters’ leader has a greater effect on the direct trust rating calculated by the structural engineer, since the police leader, in this example, is more reliable as a recommender than the rescuer.
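A minimal Python sketch of this weighted average (illustrative only; the function name and the data layout are not taken from the paper) reproduces the numbers of the Fig. 3 scenario:

def direct_trust_rating(reports):
    """Weighted average of the remembered direct trust ratings for some node j.
    reports: list of (R_i_t, D_t_j) pairs, i.e. the recommender trust rating of
    each reporting node t and the direct trust rating it reports for j."""
    weight_sum = sum(r for r, _ in reports)
    if weight_sum == 0:
        return None  # corresponds to a NULL rating: no usable information yet
    return sum(r * d for r, d in reports) / weight_sum

# Fig. 3 scenario: police leader (R = 0.9, D = 0.5) and rescuer (R = 0.4, D = 0.9)
# recommend the firefighters' leader to the structural engineer.
print(round(direct_trust_rating([(0.9, 0.5), (0.4, 0.9)]), 2))  # 0.62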
Each time node i receives a recommendation from some node k on the trustworthiness
of node j, node i evaluates the quality of this trust report and updates the recom-
mender trust rating for k. The evaluation of the reliability of the recommendation is
based on the deviation of the reported direct trust rating from the weighted average of
all other reported direct trust ratings that are remembered. In other words, if N nodes
have already reported a direct trust rating for j at the time k reports one and this is still
remembered by the system, the recommender trust rating of i for k will be:
$R_{i,k} = 1 - \left| D_{k,j} - \dfrac{\sum_{t=1}^{N} R_{i,t} \cdot D_{t,j}}{\sum_{t=1}^{N} R_{i,t}} \right|$ .
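Under the same assumptions as the previous sketch, the recommender trust update can be written as one additional Python function; the absolute deviation between the newly reported rating and the remembered weighted average is what penalizes unreliable recommenders:

def recommender_trust_rating(reported_rating, remembered_reports):
    """Rate a recommender k by how little its reported direct trust rating for
    node j deviates from the weighted average of the remembered reports for j."""
    weight_sum = sum(r for r, _ in remembered_reports)
    average = sum(r * d for r, d in remembered_reports) / weight_sum
    return 1.0 - abs(reported_rating - average)

# A report close to the remembered consensus yields a high recommender rating.
print(round(recommender_trust_rating(0.6, [(0.9, 0.5), (0.4, 0.9)]), 2))  # 0.98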
At time t1:
  D_SE,FL = NULL
  D_PL,FL = 0.5
  D_RE,FL = 0.9
  R_SE,PL = 0.9
  R_SE,RE = 0.4

Direct trust rating calculation for D_SE,FL:
  D_SE,FL = (R_SE,PL * D_PL,FL + R_SE,RE * D_RE,FL) / (R_SE,PL + R_SE,RE)
          = (0.9 * 0.5 + 0.4 * 0.9) / (0.9 + 0.4) = (0.45 + 0.36) / 1.3 = 0.62
Fig. 3. An example of calculating direct trust ratings based on recommender trust ratings and
recommendations
Direct trust report distribution takes place in order for nodes to receive information
about other nodes in the system in the form of recommendations since direct knowl-
edge is not always possible. We use a simple approach to distribute trust reports.
A Decentralized Trust Model to Reduce Information Unreliability 395
At each point of time, every node p that wants to send some information is able to
communicate with only a subset of nodes in the network, which could be either nodes
that are within p’s communication range at this point of time, and therefore can hear
any message broadcasted by p, or nodes that p is aware of and can unicast information
to, using the capabilities of the underlying ad-hoc routing protocol. We refer to all
nodes that node p can communicate with at some point of time as the logical
neighbors of p. The topology of the ad-hoc network is continuously changing over
time as nodes move in and out of other nodes’ communication range. Moreover, nodes may fail, making themselves inaccessible in the communication network. For the above reasons, the group of nodes that node p can access at each point of time is continuously changing. A group membership protocol allows us to model the availability of logical neighbors, also called group members of node p. To do so, we use a heartbeat-style group membership protocol.
In a heartbeat-style membership protocol, like the one proposed by Friedman and
Tcharny for ad-hoc networks [28], each node p periodically multicasts a heartbeat
message (incremented sequence numbers) to all other nodes in its group list, i.e. to all
of its logical neighbors. These heartbeats are used to proactively learn about new
prospective logical neighbors, as well as for failure detection. The latter is achieved
by timing out on the time since the last heartbeat was received by node p from a
neighbor q; this results in p deleting q from its membership list.
The basic heartbeat-style membership protocol has each node p periodically (a) in-
crement its own heartbeat counter; (b) select some of its logical neighbors (defined by
the membership list of p), and send to each of these a membership message containing
its entire membership list, along with heartbeats. Each node receiving the message
merges heartbeat values in the received message with its own membership list.
To support the distribution of trust recommendations, we piggyback on the membership protocol, allowing each node p to periodically send trust recommendations along with the membership information. This is done by including the direct trust ratings for the group members, if available, together with the heartbeats when sending the membership list.
A node q that receives such information from p will use it not only to update its own membership list by merging the heartbeat values, but also to update its recommender trust rating for p, as well as its direct trust ratings for all other nodes for which a recommendation is included in the reported membership list.
To update its recommender trust rating for p, q compares all its previous direct
trust ratings with those reported by p and for any common entries in the two lists it
calculates the deviation between the two trust values available for the same node.
Doing so for all nodes for which both a direct trust rating existed before and a new
one has been included by p in the reported message, it then calculates the average of
deviations for all nodes as specified earlier in this section.
After updating the recommender trust rating for p, the receiving node q will then update its list of direct trust ratings. The new recommender trust rating for p will act as the weight for its reported recommendations to q. Each node keeps in memory only the last 10 recommended ratings received for some other node, and only those ratings are taken into account when calculating the direct trust rating for this node. This means that if node p includes a rating for node r in the list sent to q, and the heartbeat for r in this list is higher than any of the heartbeats in the last 10 reports received for r, then the recommendation of p for r will be taken into account. The new direct trust rating for r will be calculated as the weighted average of the last 10 received ratings for r.
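The piggybacking step can be sketched in Python as follows; the field names, the bootstrap weight of 0.5 for unknown recommenders and the omission of both the deviation-based recommender update and the actual wireless transport are simplifying assumptions made only for illustration:

from collections import defaultdict, deque

MEMORY = 10  # number of remembered reported ratings per subject node

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.heartbeat = 0
        self.members = {}       # neighbor id -> highest heartbeat seen so far
        self.direct = {}        # node id -> direct trust rating D
        self.recommender = {}   # node id -> recommender trust rating R
        # subject node id -> last MEMORY (weight, reported rating) pairs
        self.reports = defaultdict(lambda: deque(maxlen=MEMORY))

    def make_message(self):
        """Periodic membership message with piggybacked recommendations."""
        self.heartbeat += 1
        heartbeats = dict(self.members)
        heartbeats[self.node_id] = self.heartbeat
        return {"sender": self.node_id, "heartbeats": heartbeats,
                "ratings": dict(self.direct)}

    def receive(self, msg):
        sender = msg["sender"]
        # Merge the reported heartbeats into the local membership list.
        for nid, hb in msg["heartbeats"].items():
            if nid != self.node_id:
                self.members[nid] = max(hb, self.members.get(nid, 0))
        # Weight the sender's recommendations by its recommender trust rating.
        weight = self.recommender.get(sender, 0.5)   # assumed bootstrap value
        for subject, rating in msg["ratings"].items():
            if subject == self.node_id:
                continue
            reports = self.reports[subject]
            reports.append((weight, rating))
            total = sum(w for w, _ in reports)
            if total > 0:
                self.direct[subject] = sum(w * d for w, d in reports) / total

# Example: the structural engineer (SE) receives a message from the police leader (PL).
se, pl = Node("SE"), Node("PL")
se.recommender["PL"] = 0.9
pl.direct["FL"] = 0.5
se.receive(pl.make_message())
print(se.direct["FL"])  # 0.5 (only one remembered report so far)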
Table 1. Pseudo code of the trust fraction activation function

Initially:
  activated = FALSE;            // am I activated?
  activated_fraction = 0;       // trust fraction of neighbors being activated
  trust_rating_sum = 0;         // sum of all direct trust ratings
  for each nodeID such that (find(nodeID, L) == TRUE) do   // for each node in the neighborhood list L
    trust_rating_sum += direct_trust_rating(nodeID);       // add its direct trust rating to the sum of ratings
  enddo
  initiator = NULL;             // am I the initiator of the contagion process?

Once every protocol period:
  if (activated == FALSE)
    for each nodeID such that (find(nodeID, L) == TRUE) do  // for each node in the neighborhood list L
      if (activated(nodeID) == TRUE)                        // if this neighbor is activated
        activated_fraction += direct_trust_rating(nodeID);  // add its direct trust rating to the activated fraction
      endif
    enddo
    if ((activated_fraction / trust_rating_sum) > t) activated = TRUE;
    // if the activated trust portion of the neighborhood exceeds the threshold t, get activated
  endif
However, apart from assessing the reliability of the participants in a first response network, we also want our trust model to control information dissemination based on the reliability of information. Information reliability can be associated with that of the nodes that adopt it. The decision on whether or not to adopt the information is then trust-based. In other words, a node decides whether or not to adopt information and further disseminate it based on the trustworthiness of the information, which is implied by the reputation of the nodes that have already adopted it. In that way, unreliable information is filtered, allowing only trusted information to be propagated. This substantially reduces information overload, improving the efficiency of communication. We propose an activation model, similar to the one used by ants for division of labor, for efficient and reliable information spreading on top of the decentralized reputation scheme.
As seen earlier in this paper, under threatening situations ants secrete a specific
pheromone to inform other ants. However, not all the ants react in the same way to
the levels of pheromone perceived. Information or alarm spreading is a binary deci-
sion problem. An ant will react to an alarm pheromone (i.e. an ant will adopt an
“alarm” behavioral pattern) based on the amount of pheromone present in the area
where the ant is moving and its threshold level.
Assuming that the greater the amount of pheromone perceived, the more the ants
that have adopted the information, we can translate the behavior of ants using a
spreading activation model. In such a model, a node becomes activated, i.e. adopts the
information, using some activation function, which is typically based on some thresh-
old. This activation function in the case of nodes could be a simple fraction activation
function that takes as parameters the number of neighbors of a node and their activa-
tion states. If, for some threshold fraction t, the number of active neighbors is t or
more of a node’s total number of neighbors, the node itself gets activated next. The
threshold t is different for different nodes.
In the case of information dissemination in a first response network, the alarm is
some critical information propagated over the network. Moreover, since we want the
decision on whether or not to adopt the information to be trust-based, the amount of
pheromone and the threshold level will be related to direct trust ratings. We slightly
modify the fraction activation function in order to have direct trust ratings affect the
activation decision. We call our modified activation function trust fraction activation
function. This modification causes highly trusted nodes to be more influential in caus-
ing the activation of their neighbors. If, for some threshold fraction t, the sum of the direct trust ratings that the deciding node holds for its activated neighbors is equal to or greater than t times the sum of all direct trust ratings it holds for its neighbors, then this node gets activated next. The threshold t is different for different nodes and is a function of the average of all direct trust ratings for a node’s neighbors.
Tables 1 and 2 show pseudo code, and the analogy between ants’ alarm activation
and our trust-based information dissemination model, respectively.
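Read together with Table 1, one activation decision can also be written compactly in Python; the concrete choice of the threshold as the average direct trust rating of the neighborhood is only an assumption that follows the qualitative description above:

def should_activate(neighbors, threshold=None):
    """Trust fraction activation decision.
    neighbors: dict mapping neighbor id -> (direct trust rating, activated?)."""
    total = sum(trust for trust, _ in neighbors.values())
    if total == 0:
        return False
    activated = sum(trust for trust, active in neighbors.values() if active)
    if threshold is None:
        # Assumed concrete choice: tie the threshold to the average direct
        # trust rating held for the neighbors.
        threshold = total / len(neighbors)
    return (activated / total) >= threshold

# Illustrative neighborhood: two trusted neighbors have already adopted the alarm.
neighbors = {"police": (0.9, True), "rescuer": (0.4, False), "medic": (0.7, True)}
print(should_activate(neighbors))  # True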
Table 2. Analogy between ants’ activation model and our trust activation model
7 Experimental Results
To evaluate the performance of our trust system, we develop a simulation environ-
ment in Visual C. Each actor in the simulated disaster network is represented by a
node in the network used for our experiments. The connectivity of the network is
modeled using the modified group membership protocol for mobile ad-hoc networks
described earlier in this paper. Our simulation model performs in rounds of execution.
We consider networks of 100 or 1000 first responders, including civil engineers, who move randomly within the disaster area. The mobility of actors is handled by the membership protocol, which updates the membership lists used by our trust model for communication. Our trust system runs independently of the movement of first responders in the disaster area.
We test the performance of our trust scheme as follows:
Detection of unreliability. We consider the case where a portion of the nodes in the system misbehave, i.e. behave in an unreliable manner, and we assume that another portion of the nodes has initially classified them as not trustworthy. We simulate our trust scheme and measure how many rounds of the recommendation protocol are needed for all the nodes in the system to classify the misbehaving nodes as not trustworthy. Figure 4 shows results for a network of 110 nodes when 10 nodes are misbehaving. The number of nodes that have originally detected them is 10. We notice that only a small number of rounds is needed for all nodes in the system to classify the misbehaving nodes as unreliable.
Fig. 4. Rounds of the recommendation protocol needed to have 10 misbehaving nodes detected
in the whole network, starting from 10 spontaneous detections
Robustness. We consider the case when a portion of the nodes in the system are
misbehaving and, in addition to this, another portion is lying about their trustworthi-
ness, i.e. it provides false praises. Assuming that there are a number of nodes that have
been able to detect misbehavior, we measure again how many rounds of the rating
protocol are needed in order for all the nodes in the system to be able to classify the
misbehaving nodes as not trustworthy. The performance of the trust system in that case
would be an indication of its robustness to false rating and liar strategies. Figure 5
shows results for a scenario similar to that in figure 4 and for 10 nodes reporting false
praises for the misbehaving nodes in the recommendation protocol. We see that our
protocol is robust to such liar strategies because of the use of the recommender trust
rating that gradually reduces the impact of unreliable recommenders in the system.
Trust-based information spreading to reduce information overload. To study the ef-
fect of trust in the spreading of information we consider a network of 1000 nodes,
where 250 nodes are initially activated, i.e. spontaneously insert some information in
the network. In figure 6, we measure the number of rounds needed for all the nodes to
get activated when the trustworthiness characteristics of the initially activated group of nodes change from a generally not trusted group with an average reputation of 0.25 to a generally trusted one with an average reputation of 0.75. It is clear that an activation initiated from more trusted nodes spreads faster in the whole network. Similarly, figure 7 shows the number of activated nodes after 1000 rounds for different average trust levels of an initially activated portion of nodes. This figure clearly indicates how our trust model reduces information overload in the system, since unreliable information spreads only to a limited number of nodes.

Fig. 5. Rounds of the recommendation protocol needed to have 10 misbehaving nodes detected in the whole network, starting from 10 spontaneous detections and having 10 nodes providing false ratings
8 System Evaluation
To test the usefulness of our trust model we incorporate our proposed schemes in a
prototype disaster support application and simulate a disaster scenario.
Our model is implemented as part of the Mobile Ad-Hoc Space for Collaboration
(MASC) application [29]. MASC is an application, built using Microsoft Embedded
Visual C++ for Windows and Windows CE, which runs on handheld devices enabled
with short range wireless communication. It aims to provide robust and efficient col-
laboration among first responders, using a short range wireless communication plat-
form. MASC provides all users that run the application with a shared view of the
disaster area (Figure 8). Users may enter information into the system regarding the
stability of the buildings in the disaster area. The stability information is represented
by colored flags where each color stands for a stability state (stable, caution or unsta-
ble). According to the stability state of a building, as perceived by some mobile user
running the MASC application, the building is marked with the corresponding colored
flag and this information becomes available to all users in the communication range
of the information provider, through the shared view. Apart from information regard-
ing the stability of buildings, users may share pictures taken by using a digital camera
attached to the short-range wireless enabled handheld device. To access a particular picture, a user first clicks on the representation of the user sharing this picture in the ongoing shared view of the disaster area and then requests the shared image by clicking on the appropriate button that pops up. Clicking on the representation of a particular user in the shared view also allows other users to access some profile information regarding this user.
The exercise began after digital information about the disaster area had been dis-
persed to all participants that were part of the system testing team, i.e. the rescue
group leader, the student playing the role of the structural engineer and the other two
students playing the roles of rescuers. During the two-hour exercise, the members of the MASC evaluation team who were not part of the actual rescue group being trained dynamically moved and positioned themselves according to the movement of the trained first responders, without interfering with their activities, as shown in Figures 10 and 11.
The role of the structural engineer was to indicate the simulated stability of the dis-
aster area, i.e. of the metal building where the rope rescue exercise was taking place.
Information regarding the stability of the building was entered into the system by the
structural engineer in the form of colored flags indicating the level of stability. This
information was available in the shared view provided by the MASC application.
The role of one of the rescuers, rescuer A, was to act as a mobile field user and to
take pictures of the building. Those pictures had to be shared with the structural engi-
neer and other members of the testing team. Then, the structural engineer would use
them in order to better evaluate the stability of the building. Specifically, the mobile
field user took pictures of the physical infrastructure, using a digital camera attached
to the short-range wireless enabled handheld device. As the pictures of the physical
infrastructure were stored in MASC, they were transparently available for any actor
using a handheld device running the MASC application. For instance, at any time during the exercise when the structural engineer needed to see the pictures of the building taken by the field agent, the engineer would click on the representation of the field user in the ongoing shared view of the scenario provided by the application. By selecting one of the items in the popup list, the structural engineer would see the corresponding picture on his/her short-range wireless enabled PDA.
The second rescuer, rescuer B, also acted as a mobile agent, but his/her role in the simulated scenario was to provide inaccurate, unreliable information in order to test the resistance of our trust model to such information.
Finally, the rescue group leader used the MASC application to access the structural information provided by the structural engineer so that he/she could direct his/her rescue group accordingly.
Fig. 10. The rescue group takes part in the exercise while a student, positioned according to the movement of the group, uses the MASC application to validate our trust model
Fig. 11. The rescue group leader uses the MASC application to access structural information
An Illustrative Example
The rope rescue exercise was carried out in normal conditions supported by the
MASC application and our trust system, incorporated in the application, was tested
for the following:
Detection of unreliability. To test how our system reacts when actors using the MASC application provide unreliable information, we let rescuer B provide incorrect information about the stability of the building. For this test, all four actors in the application testing team were within each other's communication range. Therefore, the information entered by rescuer B would appear in the shared view of all four actors, including the structural engineer. We also let the structural engineer already have a good reputation for providing structural information. This reputation had been gained through interactions with the rescue group leader, to whom he provides information on the stability of buildings, and through the recommendations passed by the group leader to the other actors forming the testing group. On the other hand, we assume that no information on the trustworthiness of rescuer B is available before he/she provides the information. Therefore, when the structural engineer accesses the information provided by rescuer B, which is incorrect, he/she first rates rescuer B low and sends recommendations through the trust reputation model, and second provides the correct information about the stability of the building. Since, after those actions of the trust model, the reputation of the structural engineer is high while that of rescuer B is low, the information provided by the engineer is the one adopted by the other actors, as required.
Robustness. To test the robustness of our system, we use a test setting similar to
the one used for detection of unreliability, but instead of having rescuer B report false
information on stability, we let him/her report a false accusation, i.e. a false low rating
for the structural engineer. In other words, in this test we let rescuer B be unreliable
not as an actor but as a recommender. The other actors that receive the false rating for
the structural engineer from rescuer B will use it in order to evaluate rescuer B as a
recommender. To do so, they are going to compare the reported rating with the direct
trust rating they already have for the structural engineer. However, as we mentioned
above, the reputation of the structural engineer is high and therefore the deviation
between the old and the newly reported rating will also be high, causing rescuer B to
be rated low as a recommender. Therefore, our trust system proves to be robust to
false ratings.
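The recommender-evaluation step just described can be illustrated with a minimal sketch. The rating scale, deviation threshold and function name below are hypothetical assumptions for illustration; the paper does not specify them.

```python
def evaluate_recommender(direct_trust, reported_rating, deviation_threshold=0.3):
    """Rate a recommender by comparing the rating they report for a target
    actor with the direct trust rating we already hold for that actor.
    Ratings are assumed to lie in [0, 1]; the threshold is a placeholder."""
    deviation = abs(direct_trust - reported_rating)
    # A large deviation (e.g. a false accusation against a highly trusted
    # structural engineer) yields a low recommender rating.
    return 0.0 if deviation > deviation_threshold else 1.0 - deviation

# Example: the engineer's direct trust is high (0.9); rescuer B reports 0.1.
print(evaluate_recommender(direct_trust=0.9, reported_rating=0.1))  # -> 0.0
```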
Reducing information overload. After rescuer B has been detected as untrustworthy, any information on the stability of buildings provided by him/her will be ignored by the other actors, as long as this information is not also adopted by other, preferably trustworthy, actors. It is clear that this feature of our trust system reduces information overload, since unreliable information is filtered out. This factor, among others, demonstrates the usability of our system in disaster scenarios, where information overload is one of the major concerns.
9 Conclusions
The vulnerability of urban areas to extreme events is one of the most vital problems confronting society today. The significant human and economic costs associated with
XEs emphasize the urgent need to improve the efficiency and effectiveness of first
responses. Any attempt to effectively support communication between civil engineers and other first responders in disaster scenarios should, among other things, provide trust management between actors. This paper proposes a distributed trust model that aims to establish trust among first responders. A reputation-based scheme, incorporated in a group membership protocol, is used for this purpose. A nature-inspired activation mechanism is also proposed for trust-based information dissemination on top of the trust model. Experimental results indicate fast and robust establishment of trust and high resilience to the spread of unreliable information.
Simulations have significantly contributed to the construction decision-making process by providing 'what-if' scenarios. Discrete Event Simulation (DES) has been one of the primary means of simulation, focusing on construction operational details. Given the similarity between construction operations and queuing systems, DES, which is well suited to representing queues, is an appropriate method for representing construction operations.
References
1. New York Times, 9/11/2001, “The 9/11 Report”, Web Page: http://www.nytimes.com/
indexes/2001/09/11/
2. Newsweek, 1/4/05, “Tsunami Report”, Web Page: http://www.msnbc.msn.com/id/
6777595/site/newsweek/?ng=1
3. Mileti D., "Disasters by Design: A Reassessment of Natural Hazards in the United States" Joseph Henry Press, Washington D.C., 1999.
4. Tierney K., Perry R. and Lindell M., “Facing the Unexpected: Disaster Preparedness and
Response in the United States” The National Academies Press.
5. Columbia/Wharton Roundtable, “Risk Management Strategies in an Uncertain World”
IBM Palisades Executive Conference Center, April 2002.
6. Godschalk D., “Urban Hazard Mitigation: Creating Resilient Cities.” Natural Hazards Re-
view, ASCE, August 2003, pp. 136-146.
7. Prieto R., “The 3Rs: Lessons Learned from September 11th” Royal Academy of Engineer-
ing, Chairman Emeritus of Parsons Brinckerhoff, Co-chair, New York City Partnership In-
frastructure Task Force, October 2002.
8. National Science and Technology Council: Committee on the Environment and Natural
Resources, “Reducing Disaster Vulnerability through Science and Technology” July 2003.
9. Quarantelli E.L, “Major Criteria for Judging Disaster Planning and Managing and Their
Applicability in Developing Societies” Newark, Delaware: Disaster Research Center, Uni-
versity of Delaware.
10. Comfort, L., “Coordination in Complex Systems: Increasing Efficiency in Disaster Mitiga-
tion and Response”. Annual Meeting of the American Political Science Association, San
Francisco, USA, 2001.
11. FEMA, “Federal Response Plan Basic Plan” October 22, 2004, URL:
http://www.fema.gov/rrr/frp/
12. Erik Auf der Heide, “Disaster Response – Principles of Preparation and Coordination”, St.
Louis, Mosby, 1989.
13. FEMA, “Federal Response Plan” Federal Emergency Management Agency, 9130.1-PL.
April, 1999.
14. Shuster P., “The Disaster of Central Control” Complexity, Wiley, Vol. 9, No. 4, March-
April 2004
1 Introduction
Computational approaches based on optimization are increasingly used to assist
engineers in the design process. Optimization methods [1,2] that attempt to find
the best solution for a specific mathematical model of the actual design problem
assume that all the costs and objectives in the real problem are included in the
mathematical model. However, designers are seldom able to model all the factors
in a practical design problem due to the following reasons:
• Presence of certain objectives that cannot be numerically defined, e.g., aesthetics.
• Difficulty in quantifying the relative importance of different objectives in a
multi-objective optimization problem. For example, an engineer may prefer
to have supports at certain locations on a pipe in a support optimization
problem. However, quantifying this preference relative to minimizing the
total cost of the supports may be difficult.
Moreover, optimization models typically involve simplifications in the cost model.
Complexities in the calculation of the costs of certain aspects of the problem are
often left out from the optimization formulation. For instance, the cost of a rigid
connection in a steel frame design problem may be specified as a constant value.
In practice, the cost depends on various factors such as the type of the connection,
the member type for the beam and the column, and the weld length. Due to these
reasons, the solution obtained from optimization, while possibly good, is seldom
the best solution for design.
2 Structural Optimization
Mathematical models of real-world problems often involve some degree of ap-
proximation in the costs and the objectives. Consequently, the solution generated
using traditional single-objective optimization techniques may not perform sat-
isfactorily with respect to the unmodeled parameters. For illustration, consider
the following formulation of a design optimization problem:
Maximize z = x + 2y, subject to x + y ≤ 30; y ≤ 20; x, y ≥ 0 (1)
The decision space for the problem is shown as the shaded region in Fig. 1a.
Mathematical optimization correctly produces z = 50 at point A as the solution.
Fig. 1. (a) Decision space of the example problem, bounded by x + y = 30, y = 20 and the coordinate axes, with the optimum at point A; (b) reduced decision space retaining only solutions with x + 2y ≥ 0.9(50), which contains the alternative point B
Thus, if all parameters of the real problem are present in this formulation, the
optimal design would be (x, y) = (10, 20). However, the premise is that there
may be features that are not completely captured by the model. When those
issues are considered, point A may be less desirable overall than a point, call it
B, originally deemed inferior (to the optimal solution) by the model. The issues
that are involved in finding B and establishing the computer assistance needed
in this process are discussed below.
• Generating all feasible solutions suggests that one has no confidence in the
model - one would expect that point B optimizes the objective function
nearly as well as A, so only these good solutions need to be examined. This
idea is illustrated in Fig. 1b, where only those solutions that are within 10%
of the optimal are retained.
• Available solutions should represent a cross-section of good solutions, so
that, if B is not actually among them, perhaps one of them is close enough
from which to begin “tinkering.”
• Since a decision maker can reasonably consider only a small number of de-
signs, a subset of these good solutions should be presented for inspection.
In the following section, a formal description of the technique, Modeling to Generate Alternatives (MGA), and its application to generating alternatives for a generic optimization problem are given.
pj represents the product type of member j in the frame. For a given set of ci ,
pj are evaluated using a heuristic algorithm. m is the total number of members
in the frame. wj is the weight in tonnes per unit length of the product pj . lj
is the length of member j. Cs is the cost per tonne of steel. Cr is the cost of
a single rigid connection. In order to determine Cr , detailed discussions were
held with practicing engineers, steel fabricators and erectors. The actual cost
of a connection depends upon various factors such as the type of connections,
amount of welding, web stiffening and doubler plates. However, a fixed cost of
$900 per connection is adopted. This value is representative of the average value
for fabricating a rigid connection in the state of North Carolina. Furthermore,
Cs = $600 per tonne of steel is assumed.
The constraints for the design problem are prescribed by the strength and
serviceability requirements specified in the Manual of Steel Construction, Load
and Resistance Factor Design [10]. These constraints are not described here for
brevity. Readers are referred to [9] for a complete description of the problem and
the optimization approach.
represented as ⟨c1a, c2a, ..., cRa⟩ and ⟨c1b, c2b, ..., cRb⟩, then the distance between the two solutions δab is given as follows:

δab = Σ_{i=1}^{R} |cia − cib|     (6)
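As a minimal sketch of Equation 6, assuming each ci is a binary variable indicating whether candidate connection location i is rigid (the encoding and the helper name are assumptions for illustration, not the authors' code):

```python
def solution_distance(conn_a, conn_b):
    """Distance between two frame designs (Equation 6): the sum of absolute
    differences of their connection variables c_i over all R locations."""
    assert len(conn_a) == len(conn_b)
    return sum(abs(a - b) for a, b in zip(conn_a, conn_b))

# Example with R = 6 candidate locations (1 = rigid, 0 = simple connection).
print(solution_distance([1, 0, 1, 1, 0, 0], [1, 1, 0, 1, 0, 0]))  # -> 2
```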
The Genetic Algorithm, which is used to optimize the objective function given
by Equation 5, is also used to identify alternatives by optimizing the objective
function Y given in Equation 3. The value of k is taken to be 1.1, i.e., the search
is for alternatives whose cost does not exceed the cost of the optimal solution by
more than 10%. Using this formulation, alternatives are generated for rreq = 12,
i.e., frames with exactly 12 rigid connections. These alternatives are evaluated
on the basis of the following criteria - (1) preferences for certain locations to
place rigid connections, and (2) margins against excessive lateral loads.
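Equations 3 and 5 are not reproduced above, so the following is only a schematic sketch of the MGA idea described in the text: reward distance from the known optimum while penalizing, within the GA fitness, any design whose cost exceeds k times the optimal cost. The penalty form, weights and names are assumptions, not the authors' formulation.

```python
def mga_fitness(candidate, optimal, cost_fn, k=1.1, penalty_weight=1e6):
    """Schematic MGA objective: reward distance from the known optimum while
    penalizing designs whose cost exceeds k times the optimal cost."""
    distance = sum(abs(a - b) for a, b in zip(candidate, optimal))  # Equation 6
    excess = max(0.0, cost_fn(candidate) - k * cost_fn(optimal))
    return distance - penalty_weight * excess  # to be maximized by the GA
```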
when the wind load is increased by 123% and 117%, respectively. The optimal solution, MGA1 and MGA2 have costs of $30,609, $33,613 and $30,884, respectively. Expertise and judgment can be used to evaluate the alternative that is best suited for final design.
5 Summary
References
1. Deb, K., Gulati, S.: Design of truss-structures for minimum weight using genetic
algorithms. Finite Elements in Analysis and Design 37 (2001) 447–465
2. Hasancebi, O., Erbatur, F.: Layout optimization of trusses using simulated anneal-
ing. Advances in Engineering Software 33 (2002) 681–696
3. Shea, K.: An approach to multiobjective optimisation for parametric synthesis. In:
Proceedings of International Conference on Engineering Design, Glasgow. (2001)
4. Kicinger, R., Arciszewski, T., De Jong, K.D.: Evolutionary computation and struc-
tural design: A survey of the state-of-the-art. Computers and Structures 83 (2005)
1943–1978
5. Marler, R.T., Arora, J.S.: Survey of multi-objective optimization methods for
engineering. Structural and Multidisciplinary Optimization 26 (2004) 369–395
6. Grierson, D.E., Khajehpour, S.: Method for conceptual design applied to office
buildings. Journal of Computing in Civil Engineering 16 (2002) 83–103
7. Baugh Jr., J.W., Caldwell, S.C., Brill Jr., E.D.: A mathematical programming
approach to generate alternatives in discrete structural optimization. Engineering
Optimization 28 (1997) 1–31
8. Gupta, A., Kripakaran, P., Mahinthakumar, G., Baugh Jr., J.W.: Genetic
Algorithm-based decision support for optimizing seismic response of piping sys-
tems. Journal of Structural Engineering 131 (2005) 389–398
9. Kripakaran, P.: Computational approaches for decision support in structural design
and performance evaluation. PhD thesis, North Carolina State University (2005)
10. AISC: Manual of Steel Construction - Load Resistance Factor Design. 3rd edn.
AISC (2001)
Assessing the Quality of Mappings Between Semantic
Resources in Construction
Abstract. This paper discusses how to map between Semantic Resources (SRs)
specifically created to represent knowledge in the Construction Sector and how
to measure and assess the quality of such mappings. In particular results from
the FUNSIEC project are presented, which investigated the feasibility of estab-
lishing semantic mappings among Construction-oriented SRs. The paper points
to the next lines of inquiry to extend such work. In FUNSIEC, a 'Semantic Infrastructure' was built using SRs that were semantically mapped among themselves. After the quite positive results from FUNSIEC, the obvious questions arose: how good are the mappings? Can we trust them? Can we use them? This paper presents the FUNSIEC research (approach, methodology, and results) and the main directions of investigation to support its continuation, which is based on the application of fuzzy logic to qualify the mappings produced.
The second generation of the WWW is emerging based on the addition of “mean-
ing” to data and information, provided by the development of new semantic-
oriented tools and resources. Web services are now gaining a semantic layer that
allows the development of ‘web service crawlers’ capable of understanding what
exactly a given web service does. Semantics is undoubtedly the cornerstone of the
whole evolving web.
The first generation of the web was essentially focused on the creation and publica-
tion of content with humans as the main consumers. Subsequently, the immense sea
of digital content available became attractive enough to be exploited by automatic
tools. This requires (and is essentially based on) the formal definition of meaning and
its respective association with the information published on the web. The work carried
out by the Semantic Web group has prepared the ground on this subject. We are get-
ting closer to the futuristic vision of the Web’s creator, Tim Berners-Lee [4].
I.F.C. Smith (Ed.): EG-ICE 2006, LNAI 4200, pp. 416 – 427, 2006.
© Springer-Verlag Berlin Heidelberg 2006
Experts say that the use of Semantic Resources1 (SRs) can contribute to trans-
forming that vision into reality. Formalisms are required to support the definition of
the real meaning of web resources (both information and services) allowing them to
be published and to be precisely understood by other agents (more specifically, soft-
ware agents). This is where semantic interoperability then comes into play. Semantic
interoperability enables systems to process information produced by other applica-
tions in a meaningful way (i.e. in isolation or combined with their own information). As such, it represents an important requirement for improving communication and
productivity.
The European Construction sector has been offered several results produced by in-
ternational initiatives at standardisation level related to interoperability and semantic
matters. In order to take the next steps in this direction, the FUNSIEC project aimed
at evaluating the feasibility of creating an Open Semantic Infrastructure for the Euro-
pean Construction Sector (OSIECS). Such an infrastructure was to be built by select-
ing publicly available, dedicated construction semantic resources available from re-
sults produced by international initiatives and European funded projects.
Essentially, FUNSIEC looked for an answer to the following question: is it possi-
ble to establish (semantic) mappings between SRs tailored to construction needs? The
driving quest of FUNSIEC was to know if it would be possible to use in an integrated
way (some of) the SRs already available to Construction. Additionally, FUNSIEC
aimed to enhance the semantic interoperability of those SRs.
FUNSIEC, supported by its own methodology, designed and partially imple-
mented the OSIECS Kernel, which is essentially a human-centred tool to produce the
OSIECS meta-model and the OSIECS model. Through them, it is possible to evaluate
the establishment of mappings among SRs, either in a purely research-oriented per-
spective or in an (embryonic) business-oriented way.
This paper is structured as follows. Section 2 presents FUNSIEC goals and meth-
odology. Section 3 describes the FUNSIEC results. Section 4 discusses the applica-
tion of fuzzy logics to qualify the mappings. Section 5 briefly summarises the related
work. Section 6 draws some conclusions and points out future work and expectations.
The core subject of the FUNSIEC work was semantics. Semantic Resources are available in many forms and flavours, even though they are still used (and exploited) at a very embryonic level. The European Construction sector is not an exception, and it has
been offered several results produced by international initiatives at standardisation
level (e.g. CEN eConstruction workshop, IFC model, International Framework Dic-
tionary, LexiCon, Barbi, bcXML language, e-COGNOS ontology, etc.).
Taking these results into account, the FUNSIEC project analysed the feasibility of
building an Open Semantic Infrastructure for the European Construction Sector
(OSIECS). Such an infrastructure was to be built by selecting semantic resources
1 Term coined in the SPICE project to refer to controlled vocabularies, taxonomies, ontologies, etc.
Ontology Server (e-COSer). For instance, the project manager feeds the system with knowledge about regulations, in this case the URLs of regulatory bodies. During a project, he is informed about the publication of new regulations and then uses the e-CKMI to verify whether his projects have to be changed according to the new regulations regarding accessibility for disabled people.
The OSIECS Kernel produces the OSIECS meta-model and OSIECS model. The
Kernel is composed of the Syntax Converter, the Semantic Analyser, the Converter,
the Detector of Similarities, and the Validator. Experts are required to provide the
right inputs to the OSIECS Kernel and, as such, make the best use of it.
The formalism adopted to represent both the OSIECS meta-model and model is the
OWL language, for two important reasons: i) its rich expressiveness; and ii) the
2 For more information on the FUNSIEC methodology, see [1].
The OSIECS meta-model and OSIECS model are inter-connected indirectly via the
SRs they represent. Their usage depends on the level of representation required when
dealing with SRs. The OSIECS meta-model is used by the creators of semantic re-
sources where different sources have to be combined and mappings made between
them. For instance, the work currently conducted by TNO, concerning the develop-
ment of the New Generation IFC, is a potential candidate to take advantage of the
OSIECS meta-model since it already maps the IFC (Kernel only) and the ISO 12006-
3 meta-schemas [2]. In short, both the OSIECS meta-model and the OSIECS model are sets of tables mapping, respectively, the meta-schemas and the schemas of the SRs forming OSIECS.
The OSIECS Kernel uses the ‘reasoning services’ of FUNONDIL to determine and
identify semantic correspondences, i.e., the relations between pair of entities belong-
ing to different SRs. The FUNONDIL inference engine uses two ontologies as input
(O and O') and a set of axioms (A), producing a set of inter-ontology axioms (A') that
represents the mappings.
Three types of mappings are considered, namely equivalence, subsumption and conjunction. Equivalence means that concept A is 100% equivalent to concept B, considering the semantics expressed in each SR. Subsumption is a ranked relation that defines the subconcept → superconcept relation between concept A and concept B. The conjunction mappings are the result of the mappings obtained in the previous stage.
The mapping search is performed between each pair of SRs producing semantic
correspondences considered equivalents and non-equivalents. The former refers to
absolute equivalences among the entities mapped. The latter refers to mappings in
which only a part of the concepts of the SRs is common. This is the case of subsump-
tion and conjunction.
For illustrative purposes only, bcXML was mapped to itself in order to help assess the operation of the OSIECS Kernel. As expected, mapping an SR to itself produces equivalences (and only equivalences) between the same concepts. In addition, results for subsumption and conjunction are also produced, but these carry only redundant information, because if A ⊑ B and B ⊑ A then A is equivalent to B. This exercise helped us to confirm that the mapping process was working properly.
FUNSIEC work was very human-centred in the sense that the participation of experts
was essential to guarantee the quality of the results. The reason is the very essence of
the work: semantics. The SRs currently available were not developed in the context of
the need to establish mappings with other SRs. Only the experts know the meaning of
things. This scenario has been slowly changing with the advent of the semantic web
and the related elements (standards, tools, etc.). Ontologies written in one standard
format (OWL) are likely to be more easily mapped among themselves.
As previously explained, FUNSIEC relied on semantic methods to tackle the semantic heterogeneity problem. It is worth noting that these methods, being semantically exact, only provide an absolute degree of similarity for entities considered equivalent. Therefore, the continuation of the FUNSIEC work depends on the quality of the mappings produced, which needs to be measured.
Part of the problem lies in how to define the quality of a mapping. How can we say that the "quality" of something is between 0 (bad) and 1 (perfect)? Fuzzy Logic theory [3] provides a qualitative approach to this inherently vague idea. Instead of relying exclusively on quantitative approaches, Fuzzy Logic represents these concepts using linguistic variables whose values are terms that describe the concept (e.g. bad, acceptable, good, and excellent). These terms are then mapped onto fuzzy sets, which extend classical set theory by allowing the membership function to take values between 0 and 1, thus denoting a degree of membership instead of the strict dichotomy of 'true or false'.
enrich ontological concepts. They can be used, for instance, to provide a long list of
terms that can be used to refer to a single concept (e.g. the concept Actor could be
referred to by employee, person, driver, engineer, etc.).
E3 captures the similarity between concept annotations3. An annotation contains
natural language terms and expressions, which means that (part of) the annotation
content can be labelled or tagged as an expression representing a rich semantic con-
tent. Let n be the number of terms in the expression e; n is called the order. By extension, a single term is an expression of order n = 1. Meaningless terms such as the, to, of and for (the so-called 'stop-list') are not counted.
The input variables are fuzzified with four linguistic terms: non-acceptable, acceptable, good, and strong. If the definitions of two mapped concepts, C1 and C2, do not share a property, then the similarity related to the two concepts' properties is non-acceptable. If C1 and C2 share one or two properties, then the similarity related to the two concepts' properties is acceptable. If C1 and C2 share two to five properties, then the similarity related to the two concepts' properties is good. Finally, if C1 and C2 share more than four properties, then the similarity related to the two concepts' properties is strong. A similar argument is applied to the E1 and E2 input variables. For instance, if C1 and C2 share two to six 'lexical entries', then the similarity of the two concepts regarding their 'lexical entries' is good. If the annotations associated with C1 and C2 share more than six expressions, then the similarity between the two concept annotations is strong.
For illustrative purposes only, figure 4 depicts the membership functions for E3.
For instance, if E3 = 3, i.e., C1 and C2 annotations share a term and an expression of
order 2, then the similarity between C1 and C2 annotations is acceptable to a degree of
membership of 0.66 and is good to a degree of membership of 0.33. Based on that
conclusion, the similarity between a pair of concepts is qualified, which allows one to infer how good the mappings are.
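A minimal sketch of how such a fuzzification could be computed, assuming simple triangular membership functions; the breakpoints are assumptions chosen so that E3 = 3 reproduces the degrees quoted above (0.66 acceptable, 0.33 good), while the actual functions are those of Figure 4 and Table 1.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_e3(e3):
    """Assumed membership functions for the E3 (shared annotation content) variable."""
    return {
        "acceptable": triangular(e3, 0, 2, 5),
        "good": triangular(e3, 2, 5, 8),
        "strong": min(1.0, max(0.0, (e3 - 6) / 2)),  # ramps up above 6 shared expressions
    }

print(fuzzify_e3(3))  # acceptable ~ 0.67, good ~ 0.33, strong = 0.0
```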
Table 1. Summary of the assignment of the fuzzy linguistic terms, where z is an integer
The FUNSIEC vision is shown in figure 5. It includes the OSIECS triad together with
the SRs and the respective tools used to manage them, namely the eConstruct tools
(bcXB, RS/SCS, and TS), the IFC tools (IFCViewer and IFCEngine), the e-COGNOS
tools (e-CKMI and e-COSer), and the LexiCon Explorer.
The vision is that the OSIECS Kernel, supported by both OSIECS Model and
Meta-model, acts as a bridge between the different tools providing richer possibilities
of using the SRs in a transparent way. For instance, an expert looking for knowledge
(using the e-COGNOS tools) about problems related to the fire resistance of a given
product can, at the same time, find the information about alternative products and
their suppliers, prices, etc., using the eConstruct tools in a totally open way. The
OSIECS Kernel is responsible for translating the expert's need into the respective bcXML query, sending it to the bcXML server and getting back the right answers.
Another example is for a designer developing a CAD drawing (IFC compliant) and, at
the same time, needing to know about the regulations that must be followed in his/her
project. In this case, OSIECS Kernel provides the link between the IFC tools and the
e-COGNOS tools.
5 Related Works
The techniques of mapping4 are used to facilitate the interoperability among hetero-
geneous Semantic Resources. Currently, mapping of ontologies is used in several
fields ranging from machine learning, concept lattices and formal theories to heuris-
tics, database schema and linguistics.
In the literature we can find either similar or very different approaches to the one
adopted in FUNSIEC. For a good source on related work, please see [11]. Briefly, four are referred to here:
• Su [13] approached ontology mapping using mapping methods based on extension
analysis. The mapping discovery approach is based on ontological instances, sup-
ported by text categorization and Information Retrieval techniques. It creates a
“feature vector” for instances of concepts and assigns a ‘similarity value’ for each
pair of concepts. The process is completed by experts that accept/reject the map-
pings.
• MAFRA is a framework for distributed ontologies in the Semantic Web [11],
where ontologies to be mapped are normalised to a uniform representation – in
their case RDF(S) – thus avoiding syntax differences and making semantic differ-
ences between the source and the target ontology more apparent.
• Anchor-PROMPT [12] is an ontology merging and alignment tool with a sophisticated prompt mechanism for possible matching terms. Its alignment algorithm uses two ontologies and a set of anchored pairs of related terms5. The alignment produced is then refined based on the ontology structures and user feedback.
• The work presented by Garcia et al. [14] is focused on the early phase of the design process, using extreme collaboration in an environment similar to that used by NASA. It includes the creation of a Product-Organisation-Process ontology (using an Excel spreadsheet) in a collaborative process involving the team of designers. They argue that large ontologies (in their view IFC is an example) are not that useful and, as such, that small and truly common ontologies can help to significantly reduce the time required to complete the design process.
As might be expected, FUNSIEC has similarities and differences when compared to other works. For instance, it shares the use of IR techniques and the need for approval from end users considered in [13], and it has used the same approach as MAFRA regarding the normalisation of ontologies. FUNSIEC differs from IF-MAP because we believe that the prior agreement that IF-MAP advocates is currently far from reality, since organisations use what they have to hand. Regarding ONION,
4 There is also a multitude of terms expressing similar works in this area, such as mapping, alignment, merging, articulation, fusion, integration, and morphism.
5 These are identified using string-based techniques or defined by the user.
FUNSIEC does not agree with the assertion that ontology merging is inefficient,
costly and not scalable. Indeed, the continuation of FUNSIEC tackles efficiency from
a quality perspective. Good mappings are likely to be useful whilst bad ones are to be
useless.
References
1. Barresi, S., Rezgui, Y., Lima, C., and Meziane, F. Architecture to Support Semantic Re-
sources Interoperability. In Proceedings of the ACM workshop on Interoperability of Het-
erogeneous Information Systems (IHIS05), Germany, ACM Press, November 2005, page
79-82.
2. Lima, C.; Ferreira da Silva, C.; Sousa P.; Pimentão, J. P.; Le-Duc, C.; Interoperability
among Semantic Resources in Construction: Is it Feasible?, CIB-W78 Conference, Dres-
den, Germany, July 2005.
3. Zadeh, L.A., Fuzzy Sets. Information and Control, 1965. 8: p. 338-353.
4. Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web: A new form of Web content
that is meaningful to computers will unleash a revolution of new possibilities. Scientific
American, May (2001).
5. Corcho, O., Fernández-López, M., Gómez-Pérez, A.: Methodologies, tools and languages for building ontologies. Where is their meeting point? Data and Knowledge Engineering 46 (2003) 41-64.
1 Introduction
Traffic loads are usually obtained from so-called weigh-in-motion (WIM) systems
[1]. Two types of such systems can be distinguished: pavement and bridge systems. In the case of pavement systems, weighing sensors are embedded in or mounted on the pavement. To achieve high accuracy and partly overcome the dynamic effects induced by crossing vehicles on single sensors, multiple-sensor WIM (MS-WIM) arrays were developed. The accuracy class A(5) according to the COST323 specifications [2] could be achieved using a high number of up to 16 sensors [3]. Sensor noise and inaccuracy [3] as well as sensor stability in calibration and operation [4] remain critical problems in the application of this type of system. For maintenance and installation, traffic lanes have to be blocked, which may inconvenience road users.
Bridge WIM (B-WIM) systems use an instrumented bridge as a measuring device. By means of appropriate algorithms, deformations recorded at well-chosen locations
are backward analyzed to identify single vehicles and the associated gross vehicle
weight, vehicle velocity, axle spacings and axle loadings. These systems have proven
to reach high accuracy: Accuracy class of A(5) could be obtained for bridges with
very smooth pavement [5] in combination with sophisticated algorithms to analyze
measured data. Since the systems are invisible on the road surface, drivers of heavy
goods vehicles can hardly avoid crossing the weighing scale. In consequence the total
traffic flow is recorded. Moreover, compared to pavement based systems the system’s
durability is increased whereas costs for installation and maintenance are reduced.
B-WIM systems were initially studied in the USA by the Federal Highway Ad-
ministration (FHWA) in the 1970’s. A first approach based on the evaluation of influ-
ence lines was introduced in 1979 [6]. An approach making use of techniques from
the field of soft computing can be found in [7]: Given measured data as input, a first
artificial neural network (ANN) is used to obtain the type of a vehicle passing over
the bridge. Subsequently a second, vehicle type specific ANN is used to determine the
vehicle’s velocity, axle loads and spacings. For this procedure, ANNs need to be
formulated for all kinds of possible vehicles. A major drawback of the system is its
limitation to the evaluation of single vehicle events.
In the following an evolutionary algorithm based approach for the acquisition of
knowledge about traffic (i.e. vehicles and associated attributes of interest) from bridge
monitoring data will be introduced in detail. Single vehicles are identified from data
recorded during presence of one or multiple vehicles on the bridge at a given point of
time.
The overall knowledge discovery process consists of three consecutive steps [8]:
(i) data pre-processing, (ii) data mining and (iii) data post-processing. The step of data
pre-processing addresses methods of data preparation. Within the framework of the
presented approach, measured data is pre-processed by a digital filtering to reduce
noise as well as dynamic effects and data is normalized to eliminate thermal influ-
ences. The data post-processing step covers all aspects of how the gained knowledge
is eventually treated. Gained knowledge (i.e. identified gross vehicle weights) is used
for studies on changes of traffic loads and composition. The approach presented in
this paper covers the data mining step of the knowledge discovery process.
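A minimal sketch of the pre-processing step, assuming a moving-average low-pass filter and a simple subtraction of a thermal trend; the paper states only that digital filtering and normalization are applied, so the concrete choices below are assumptions.

```python
import numpy as np

def preprocess(strain, window=25, temperature_trend=None):
    """Sketch of pre-processing: smooth the measured strain signal to reduce
    noise and dynamic effects, and remove a slowly varying (thermal) trend."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(strain, kernel, mode="same")  # simple low-pass filter
    if temperature_trend is not None:
        smoothed = smoothed - temperature_trend          # eliminate thermal influence
    return smoothed
```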
2 Bridge Monitoring
The Institute of Structural Concrete Essen performed long-term measurements for
more than twelve months at the superstructure of a post-tensioned concrete bridge
according to figure 1. The box girder bridge was built in the seventies of the last cen-
tury and consists of two independent superstructures having two lanes each. Figure 1
also shows the calculated influence line for a 5 axle articulated vehicle of a gross
weight of 40 t for an underlying linear temperature difference. Changes of strains
(Δεp), concrete and ambient air temperature (Tc, Ta) were measured at several points
of the cross section. Furthermore, displacement sensors (Δwc) were attached to the top
plate as also presented in [9] and [10].
When more than one vehicle is on the bridge, the vehicles' influence lines superpose and the combination of single vehicles is recorded. Therefore, the measured data may contain single vehicles or combinations of vehicles.
Possible recordings for two vehicles following each other in short distances are
demonstrated in figure 2. For the j th vehicle the variable t0,j denotes the time of
occurrence, vj the velocity and Gj the gross weight. The illustrated superposition of two influence lines demonstrates that the identification of single vehicles from the recording is a non-trivial task for which analytical methods are not sufficient.
The appearance of a single vehicle varies with its gross weight and its velocity, the
type of vehicle and the level of basic stress. The gross weight scales the recording
over the Δεp-axis whereas the vehicle’s velocity is reflected in the duration of re-
cording since measurement is performed over time. Axle configurations and spacings
as well as the distribution of axle loads determine the static influence line. The level
of basic stress describes physical non-linearities of the cross section and is caused by permanent and temperature actions; in the case of prestressing, the level of basic stress also depends on the magnitude of the initial strain (see also [10]).
3 Data Analysis
The analysis of measured data is performed by the evaluation of time intervals of
length t and in steps of Δt << t . The i th interval begins at tBeg(i) and ends at tEnd(i)
(see figure 3). For effectiveness, periods without any traffic are detected and excluded
from the process of optimization.
For each time interval the data is analyzed by means of an evolutionary algorithm.
Within the overall procedure, the traffic situation to be investigated is represented in
an object-oriented manner: Event sets (ES) represent combinations of vehicles and are
set up of single events (SE), which describe real vehicles. A SE is completely de-
clared by its time of occurrence t0, its velocity v, its gross weight G and the type of
vehicle S (e.g. 3 axle rigid vehicle, 5 axle articulated vehicle among others). ES are
made up of one or several SE and are used to approximate measured data within the
optimization. Figure 4 shows four SE setting up ES.
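A minimal sketch of this object model, assuming plain Python dataclasses; the attribute names follow the code elements named in the text, and everything else is illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SingleEvent:
    """A single event (SE) describing one real vehicle."""
    t0: float   # time of occurrence
    v: float    # velocity
    G: float    # gross weight
    S: str      # vehicle type, e.g. "3-axle rigid", "5-axle articulated"

@dataclass
class EventSet:
    """An event set (ES): one or several SE used to approximate the
    measured record within the optimization."""
    events: List[SingleEvent] = field(default_factory=list)
    fitness: float = float("inf")  # fitness is minimized
```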
As a very important element ES and associated SE from the former time interval
are taken into account for the initialization of the current populations (SE and ES) as
432 P. Lubasch et al.
can be seen from figure 5. Since steps of Δt << t are carried out, the environment
changes from the previous time interval i-1 to the current time interval i only slightly.
The time intervals always overlap. Thus, the situation to be investigated does not alter
significantly by performing a time step. The use of ES and their SE from the former
time interval to build up the current populations leads to a dynamic adaptation of the algorithm to the optimization task. This way, knowledge gained from the former time step is not discarded but transferred to the time interval under consideration, towards an improved optimization.
The fittest ES of the last generation of the i th time interval is referred to as ESwin(i).
A SE being part of ESwin(i) for a predefined number of time steps is assumed to de-
scribe a real vehicle and gets assigned to the database containing the discovered single
vehicles. Such a SE was optimized within several evolutionary algorithm runs and
thus holds a high accuracy of approximation.
3.1 Initialization
For initialization of the SE- as well as the ES-population the specialties of knowledge
transfer to the current time interval i have to be taken into account. If a former time
interval i-1 and a winner ESwin(i-1) exist, the two populations are partly initialized on
the basis of the ESwin(i-1) and its SEj. The index j refers to the j th SE of an ES.
Since ES are made up of SE, the ES-population is generated after initialization of
the SE-population (also see figure 5). Consequently a SE can belong to several ES.
For the initialization of the SE-population four basic types of SE can be
distinguished:
ES are set up of SE; whereas SE, which represent real vehicles, are made up of the
code elements t0, v, G and S. Following this architecture the evolutionary operators
are either applied on the level of the ES or on the level of the SE. The application of
an operator on the ES [SE] causes changes of the ES [SE] itself or its SE [code ele-
ments t0, v, G and/or S]. As usual, before applying evolutionary operations a selection
is performed: Parents ESpar1 and ESpar2 are selected according to their particular fitness
value f for generating an offspring ESoff. Subsequently, the two genetic operators
recombination and mutation are applied distinguishing between ES and SE.
Recombination: This operator is applied to an ES with an adaptive probability of
p1,rcb. If the operator is not applied, the prime parent ESpar1 is directly transferred for
mutation as ESoff. In case of recombination every SEpar1,j of ESpar1 is checked for ap-
plication of this operator (p2,rcb). The recombination of a selected SEpar1,j is either
performed in the level of SE or on the SE’s code elements (p3,rcb). The direct recombi-
nation of a SE signifies either the replacement of the chosen SEpar1,j by a compatible
SEpar2,j or adding a SEpar2,j to ESpar1 to form ESoff. For recombination of the chosen
SEpar1,j’s elements a compatible SEpar2,j is selected. Subsequently an interchange of
single elements t0, v, G and S between the chosen SEpar1,j and SEpar2,j is carried out.
Mutation: An adaptive probability p1,mut decides about mutation of every single
ESoff’s SE. Similar to the recombination, this operator is either directly applied to the
chosen offspring’s SE or the SE’s code elements (p2,mut). In case of directly mutating
the SE it is either replaced by a compatible SE from the SE-population, removed from
ESoff or as a third variant a compatible SE from the SE-population is added to ESoff.
Mutating the SE’s code elements signifies changes in either t0, v, G or S.
Adaptation: Within the first generations, recombination according to p1,rcb is performed frequently on quite large populations. Simultaneously, the probability p1,mut is kept small in order not to vary the individuals too much. After a certain number of generations, the population sizes as well as the probability of recombination are reduced drastically. At this point, mutation is intensified, making small variations to already quite good solutions.
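A schematic sketch of how the operators described above might act on an offspring event set, represented here as a list of dictionaries with the keys t0, v, G and S; the probabilities, perturbation ranges and helper name are illustrative assumptions, not the authors' implementation.

```python
import random

def vary(event_set, se_pool, p_rcb=0.6, p_mut=0.1):
    """Apply ES-level recombination and SE-level mutation to an offspring
    event set (sketch with assumed probabilities and perturbation ranges)."""
    offspring = [dict(se) for se in event_set]
    if se_pool and random.random() < p_rcb:
        donor = dict(random.choice(se_pool))
        # Direct recombination: replace a chosen SE or add the donor SE.
        if offspring and random.random() < 0.5:
            offspring[random.randrange(len(offspring))] = donor
        else:
            offspring.append(donor)
    for se in offspring:
        if random.random() < p_mut:
            # Mutation of the SE's code elements t0, v and G.
            se["t0"] += random.uniform(-0.2, 0.2)
            se["v"] *= random.uniform(0.95, 1.05)
            se["G"] *= random.uniform(0.95, 1.05)
    return offspring
```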
Since discrete time intervals are considered for data analysis the fitness evaluation
within the evolutionary algorithm is always carried out on the current time interval i
(see figure 5). The fitness is calculated for every ES and subsequently assigned to the
SE assembling the ES. Due to the definition of the fitness function, the individual’s
fitness is to be minimized for a better solution. This signifies that a SE being part of
several ES – as basically possible from the initialization – is to be assigned the maxi-
mal fitness of its ES (worst fitness possible).
Primarily the fitness is determined by rating the ES’s approximation performance
of measured data. For this purpose the normalized mean squared error NMSE accord-
ing to equation (1) is calculated for every ES.
Besides this most important fitness measure additional criteria are covered within
the fitness evaluation: A SE, which could contribute to a better approximation of ES
but features an improper time of occurrence t0, shall not be rejected by assigning a bad
fitness value. Thus, for the ES and its SE a certain neighborhood surrounding every
discrete value of real data is examined for a better approximation. In the case of mak-
ing use of such a neighborhood-value a worse fitness is assigned depending on the
distance of the used value to the real data value. Within the following evolutionary
operations the SE still comprises a more or less good fitness value, can be modified
towards a better approximation and may be selected for initialization of ES.
During the evolution process the t0 of SE are changed. This may lead to ES consist-
ing of SE (representing vehicles) with unrealistic distances among them. To overcome
this inconsistency, the fitness value of such ES is modified by means of a soft con-
straint based penalty function.
NMSE = [ (1/M) · Σ_{m=1}^{M} (Δεp,a,m − Δεp,r,m)² ] / ( max{Δεp,a,m ; Δεp,r,m}_{m=1}^{M} − min{Δεp,a,m ; Δεp,r,m}_{m=1}^{M} )²     (1)

where
M        total number of discrete data points under consideration
m        discrete data point
Δεp,a,m  strain approximation, event sets
Δεp,r,m  real values of strain, measured data
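A direct transcription of Equation 1 into a short function (a sketch; the array handling is assumed):

```python
import numpy as np

def nmse(approx, measured):
    """Normalized mean squared error (Equation 1) between the event-set
    approximation and the measured strain record."""
    approx, measured = np.asarray(approx, float), np.asarray(measured, float)
    mse = np.mean((approx - measured) ** 2)
    both = np.concatenate([approx, measured])
    return mse / (both.max() - both.min()) ** 2
```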
References
1. WAVE: Weigh-in-Motion of Axles and Vehicles for Europe. General Report of the 4th FP
Transport, RTD project, RO-96-SC, 403, ed. B. Jacob, LCPC, Paris. (2001)
2. COST 323: European Specification on Weigh-in-Motion of Road Vehicles. EUCO-
COST/323/8/99, LCPC Paris (1999)
3. Jacob B., O’Brien E.J.: Weigh-in-Motion: Recent Developments in Europe. Proceedings of
the 4th International Conference on WIM, ICWIM4, Taipei (2005) 2-12
4. Opitz R., Kühne R.: IM (Integrated Matrix) WIM Sensor and Future Trials. Proceedings of
the 4th International Conference on WIM, ICWIM4, Taipei (2005) 61-71
5. Brozovič, R., Žnidarič, A., Vodopivec, V.: Slovenian Experience of using WIM Data for
Road Planning and Maintenance. Proceedings of the 4th International Conference on WIM,
ICWIM4, Taipei (2005) 334-341
6. Moses, F.: Weigh-in-Motion System Using Instrumented Bridges. ASCE, Transportation
Engineering Journal, Vol. 105, No. 3, (1979) 233-249
7. Gagarin, N., Flood, I., Albrecht, P.: Computing Truck Attributes with Artificial Neural
Networks. ASCE, Journal of Computing in Civil Engineering, Vol. 8 (2), (1994) 179-200
8. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P.: From Data Mining to Knowledge Discovery
in Databases. AI Magazine (1996) 37-54
9. Lutzenberger S., Baumgärtner W.: Evaluation of measured Bridge Responses due to an in-
strumented Truck and free Traffic. Bridge Management 4, ed. Ryall, Parke, Hardening,
Thomas Telford, London (2000).
10. Schnellenbach-Held, M., Lubasch, P., Buschmeyer, W.: Evolutionary Algorithm based
Assessment of Traffic Density Changes. IABSE Symposium, Budapest (2006)
Practice 2006: Toolkit 2020
1 Introduction
Arup is a firm of some seven thousand designers spread across the world. The firm
has been traditionally known in the built environment for its structural consultancy
work from the Sydney Opera House onward. However, Sir Ove Arup in 1970 was
quick to point out that innovation in design occurs in a multidisciplinary practice [1].
In 2006, forty years from the date when Sir Ove Arup founded it, the firm employs specialist consultants in many design disciplines including acoustics, lighting, fire, flow of air and water, flow of people through spaces and during evacuation, flow of goods through airports, manufacturing plants and hospitals, and traffic and vehicle movement. Professor William Mitchell1 points out that research in computing applications is most successful when experiments occur in non-trivial scenarios. Advances in computing applications as they affect design have consistently occurred at the confluence between information technology and creative practices [2], [3]. Both authors
made a conscious decision to move to consultancy, accepting the challenges of put-
ting invention and innovation into practice, on the assumption that the real world
challenges are seldom trivial [4]. The focus of this paper is on work occurring in
1 William J. Mitchell, Professor of Architecture and Media Arts and Sciences at MIT. Mitchell is currently chair of The National Academies Committee on Information Technology and Creativity.
between the practice of architecture and engineering, in between academia and prac-
tice and finally between implicit design intuition and the explicit rule-based genera-
tion of design [5], [6].
Building Information Models (BIM) are becoming the integrator of the supply chain all the way along the line from the client to the fabricator. Such geometrical
2 Realtime, an interactive environment, is an innovative design tool developed by Tristan Simmonds at Arup, based on cutting-edge computer graphics technology.
3 SoundLab, with a 12-channel ambisonic 3D sound system, is an innovative design tool developed by Arup to give an auralization of sound in performance spaces and other building types remote from the spaces themselves.
models have become the interface for the complete database of project information.
Although they are already commonplace in the aerospace and automotive industries, they are just emerging in the construction industry, where they often challenge the standard contractual arrangement of the supply chain.
As an example, we will introduce three projects, each of which integrates a differ-
ent portion of the supply chain. The Sydney Opera House Opera Theatre Refurbish-
ment integrates design representation with facilities management; Westland Road Tower integrates architectural, structural, mechanical and quantity surveyor disciplines in a single model during the schematic design phase of the project; and Serpentine Pavilion 2005 integrates design and fabrication.
Sydney Opera House – Opera Theatre Refurbishment. For this project, Arup
developed a 3D model of the existing structure of the Sydney Opera House, Opera
Theatre Refurbishment [7] using Bentley software. Each entity in the model contains
the complete set of the original information, with some three thousand architectural
drawings, one thousand structural, and several hundred services and associated sub-
contractor drawings.
Each entity in the model, in addition to geometrical modeling information, holds a
description of where the drawing information for that entity originated. Tags contain-
ing the entity’s unique number from the original drawings, list all the structural and
architectural drawings used to create that entity (the list can contain as many as 10
different drawings). All existing drawings are in TIFF format and can be retrieved by
double clicking on the entity in the 3D model.
In addition, each entity in the 3D model is directly linked to the Opera House Fa-
cilities Management database. By double clicking on an entity in the 3D model, using
its unique entity number, the Facilities Management master spreadsheet opens. This
spreadsheet in turn is then hyperlinked to all other spreadsheets that are used for the
daily running of the building. Similarly, by double clicking the entity number in the
spreadsheet, it will open Bentley Structural, create a report, find the tag within the 3D
model and show the location of the entity in one view.
Fig. 1. Sydney Opera House, Opera Theatre Refurbishment linked to the facility management
spreadsheets
This BIM, directly linked with the Building Management System (BMS), is cur-
rently being used by the building owner. The long-term purpose of this 3D model is to
assist the team responsible for the proposed internal Opera Theatre refurbishment in
attempting to realize the architect Jorn Utzon’s original concepts.
Westland Road Tower. Arup participated in the creation of a BIM for Westland
Road Tower in Hong Kong. Here the BIM was used for co-ordination and clash de-
tection within a project constrained by a very short timeframe. The tower is three hundred meters tall, with seventy-nine floors, and the client allocated a short six-month design development period to produce Structural Tender Drawings. The tower
client, Swire Properties [8], decided to use Digital Project4 software.
Approximately twenty-five team members including the client’s project manager,
Quantity Surveyor, Architect, Structural, Mechanical, Electrical and Public Health
(MEP) engineer and a 3D consultant were trained by the software supplier, Gehry
Technologies [9]. A Model Coordinator was resident in the project office.
Fig. 2. BIM Westland Road Tower in Hong Kong (courtesy of Gehry Technologies)
This represents a pilot project for the client and for the industry as a whole. We are waiting for this tower to be completed, and for a few further applications, before we can measure the full scale of its success.
Serpentine Pavilion 2005. Arup used Visual Basic (VB) scripting language and
AutoCAD software for the twelve week Design Development phase of the Serpentine
Pavilion 2005 [10] in London’s Hyde Park. The driver for this method of working
was to reduce the risk of mistakes in this project with its short time frame. The 3D geometrical model was a graphical instance of the script; the design rules evolved during the design development. This process ensured that at least 80% of the model had no risk of mistakes. Doors and special edge conditions were added manually. The
4 Digital Project software is the customization of Dassault's CATIA for the construction industry by Gehry Technologies.
success of the project was building the pavilion with zero mistakes that were unacceptable to the client. To communicate with the fabricator, Arup used the geometrical model along with a text file containing the joint numbers and coordinates. Based on Arup's rules and the joint information, the German fabricator rebuilt the model and checked it against the Arup three-dimensional model. When assembly started on site, Arup produced 2D drawings upon request.
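The joint hand-over described above can be pictured with a small sketch; the column layout, separator and file name are assumptions, since the paper does not specify the exact format of the text file.

```python
def export_joints(joints, path="pavilion_joints.txt"):
    """Write joint numbers and 3D coordinates to a plain text file, one joint
    per line, as a simple geometry hand-over to the fabricator (sketch)."""
    with open(path, "w") as f:
        for joint_id, (x, y, z) in joints.items():
            f.write(f"{joint_id}\t{x:.3f}\t{y:.3f}\t{z:.3f}\n")

export_joints({1: (0.0, 0.0, 3.250), 2: (1.500, 0.0, 3.175)})
```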
Serpentine Pavilion 2005 proves the success of this methodology for a temporary
pavilion. Now, the industry has to apply these techniques to the realm of permanent
buildings.
high speed train. The design of the project was driven by the aim of bringing natural
light thirty-five metres underground to the platform level via holes in the slabs.
Designers focussed mainly on two operational conditions of the building: one un-
der standard conditions and another in the case of an extreme event. The worst case
scenario was exemplified by the coach of a train arriving at the station on fire, aligned
with the hole in the slab, with the doors open, some smoke coming out, a loud speaker
announcement that the whole station must be evacuated, and finally with the windows
of the coach exploding and more smoke invading the station.
The architect produced a geometrical model of the architectural surfaces that in-
corporated the bare-faced concrete structural model and the complex geometry of the
steel roof. This three-dimensional model, evolving at each design iteration, was then
used as a basis to create a RADIANCE5 three-dimensional computational lighting
simulation to demonstrate and refine the natural lighting levels at platform level. The
same models were also used as a basis for a Computational Fluid Dynamics analysis,
using STAR CD6 to map the smoke propagation during the extreme event described
above, and as the basis for a STEPS7 people evacuation model of the entire station to
support communications with the Italian Regulatory Authorities. Finally, the same
model was used as the basis for a three-dimensional computational acoustical simula-
tion, or auralization, of the station which demonstrated to the client and the authorities
the intelligibility of emergency announcements with alternative acoustical insulation
and public address systems.
5 The Radiance open source software is a distributed raytracing package developed by Greg Ward Larson, then at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California.
6 The computer program STAR CD by cd-adapco.
7 The computer program STEPS (Simulation of Transient Evacuation and Pedestrian movementS) has been developed by Mott MacDonald to simulate the movement of people during emergency evacuation scenarios.
M1 widening. Motorway design has changed since the M1, Britain’s north to south
motorway, was built in the mid 1960s. Designers now have to make sure that local
communities are protected from over-development and that measures to minimise any
impact on the surrounding environment are carried out. This is a process known as
‘mitigation’. Arup [13] has created a synthetic environment to introduce the public
and the project stakeholders to the key issues related to the M1 widening project:
8 3D Studio MAX by Discreet, Autodesk.
visual and acoustical performance of the noise barriers; new road surface to mitigate
noise; retaining structures to mitigate impact on existing mature vegetation and light
spillage mitigation.
The two km2 synthetic environment represented a stretch of motorway, eighty-three
kilometres in length, which had been earmarked for widening. The synthetic
environment was created from: three-dimensional design data of the proposed
widening design, Ordnance Survey maps, LiDAR9 data pre-processed for the ground
surface, building outlines and tops of trees, three types of aerial photography to create
the model's texture, accurate acoustical and lighting surveys, accurate lighting
simulation, accurate auralization and indicative traffic flow. Three-dimensional
InRoads10 design data, in turn, integrated several design disciplines including bridges,
civil and geotechnical engineering.
With a large number of disciplines coming together for the first time in one
environment, albeit a virtual one, the challenge was to integrate different stages of
design development and different entities which would otherwise have looked awkward
in a single visual representation. For example, the DTM captured by an aircraft flying
overhead necessarily contains no data under the existing motorway over-bridges;
however, in the M1 synthetic environment it would have looked awkward to
leave an empty gap in the model.
9 Light Detection and Ranging (LIDAR) is an airborne mapping technique which uses a laser to measure the distance between the aircraft and the ground. This technique results in the production of a cost-effective digital terrain model (DTM).
10 InRoads software by Bentley.
Arup uses two techniques for regenerative modelling: parametric relational model-
ling, and automated modelling.
Parametric relational modelling, also referred to as “live intelligent modelling” is a
three-dimensional model that contains information about entities as well as their
11 Timeliner by NavisWorks.
12 Royal Melbourne Institute of Technology (RMIT) SIAL.
13 CATIA, by Dassault, is a standard software of the aerospace industry.
14 Digital Project is a parametric relational modeling software by Gehry Technologies.
15 Beijing National Swimming Centre by PTW Architects of Melbourne and Arup with China State Construction International Design. Under construction, May 2006.
16 MicroStation by Bentley.
One can imagine that this design technique will be more suitable to some designers
than others; however, it will become necessary in all those instances where a manual
design process would take too long or would carry too much risk of human error.
Finally, Realtime has become a necessity to a design team when the design is auto-
mated following a set of rules. This proved to be the case with the Beijing National
Swimming Centre described above. The design team used Realtime to double check
the geometry generated by the script. The interactive synthetic environment became a
powerful tool for debugging the script and developing the design itself.
geometric models, we are now able to immerse clients and project colleagues in the
project during the design phase. SoundLab for example has been used for the design of
public announcement systems for the Florence Station project mentioned above.
Fig. 10. Four contexts of practice range from the local and protective market with a single
mode of practice (Context C) to the global and open market with a multiple mode of practice
(Context A)
I believe that by 2020 … [sample of 17; scale from “strongly disagree” (0%) to “strongly agree” (100%)]
- we will work digitally directly from the building site: 82%
- we will be designing design systems: 73%
- 2D documentation will disappear: 41%
- multidisciplinary BIM will be commonplace: 78%
- we will all work in open source: 36%
will be teaching it; designers will be doing more programming and the motivation will
be in seeing what can be done.
To assess the world around us we used the STEEP framework to ask colleagues
“what do you envision lies ahead?”. The driving forces and implications that resulted
from these questions allowed us to focus on the factors which might influence our
future, some of which are extrapolations and some of which are speculation. The
following table shows the key excerpts gained from the interviews.
Drivers / Implications

Social: It's not just a matter of learning AutoCAD…, but more a way of transforming the idea of the design office, so that it is intimately connected with crafting these tools themselves. The quality of products in aircraft engineering is so much higher than the quality of buildings. It may well be that we need to turn to alternative sources other than the traditional civil engineering department or architecture department. Transitory employment between employers will become a real challenge for employers.

Technological: Search and access of knowledge will become a bit easier. The viewer will be on the web somewhere. You are going to have access to pretty much anything, anywhere you are. Biological modeling is going to drive the next ten years.

Economic: All sectors at all levels of the industry must always necessarily remain incentivised. Reducing waste in the construction industry can easily pay for the enhanced work at the front end. If the value proposition gets redefined, then the fee structures will change. Project insurance will be like decennial insurance, so none of the designers indemnify themselves; the client will actually indemnify the project.

Environmental: Buildings need to be designed in such a way as to diminish energy consumption. Green architecture is probably as big a force for design integration as all the other stuff.

Political: A more open, co-operative agreement, rather than an adversarial type of contract. It's embedded in the American psyche that every state gets to do whatever it wants. It all depends on trust between the designer, the contractors and the clients. The openness of European countries will drive change. In New Orleans, to replace three hundred thousand housing units, the traditional design bid is not going to work. Three-dimensional objects are required by planners.
Having described what we are designing across Arup, and having interviewed some
two dozen professionals we conclude as follows.
The industry needs new specialists, and if academia doesn’t provide them, the in-
dustry will have to resort to setting up private academies. This has already occurred in
the past, for example in the mid 1960s, the Istituto Europeo di Design was set up in
Milan when the Italian Academic community couldn’t meet the demands for emerg-
ing specialists including the Industrial Designers. Similarly, a few years back, the
Interactive Institute in Ivrea was set up by Telecom Italia outside Turin, when once
again Italian Academia failed to address the need for multidisciplinary design educa-
tion based on advances in computation.
Make no mistake, the blame is not to be placed only on academia; our professional
practices will also have to develop attractive careers for these new specialists if
we are to reduce the current outflow of highly valuable professionals towards setting
up their own shops. This trend would not be negative for the industry as a whole, were
it not for the inevitable consequence that the specialists, once on their own, are forced
to manage the process as opposed to practising it, generally with the result that they
interrupt their research.
We identified four new emergent specialists in our profession: the tool maker, the
math modeller, the custodian and the embedded PhD. What follows is an attempt to
profile each of them.
Toolmakers might be individuals that create programs or scripts to generate ge-
ometry and could have a fundamental understanding of first principles of design as
well as a solid background in computer science or graphics programming. They
would be a central resource to the office or the region and would spend short and
sharp periods of time (from two weeks to a month) with each project team. Tool making
is a part-time activity that combines very well, although not necessarily, with design itself.
When not helping the project team, toolmakers would be given time to reprogram
relevant code written for specific projects in a more generic way for re-use throughout
the firm. These individuals would also be given time to connect with the program-
ming community outside the firm, and would regularly present their novel work at
technical conferences.
The BIM Custodian, also known as the BIM Master, or BIM Co-ordinator, would
be an individual with solid experience in 3D modelling, preferably in several different
software packages. He or she would have an understanding of how to set up model-
ling protocols with a broad grounding in construction techniques and a good under-
standing of multiple design disciplines.
The BIM co-ordinator is a fulltime position working on one or a few projects de-
pending on size. Similar to the current Project Manager, the BIM co-ordinator is one
of the foremost specialists dealing with the client and public relations. He/she would
be on the move and work with a powerful graphics laptop. Because of the nature of
the work the BIM co-ordinator would also have exceptional interpersonal and team
management skills.
References
Personal Interviews for Designer’s Desktop 2020 study with: Professor Mark Burry, RMIT,
Melbourne; Reed Kram, Kram Design, Stockholm; Charles Walker, Arup; Jeffrey Yim,
Swire Properties, Hong Kong; Martin Riese, Gehry Technologies, Hong Kong; Axel
Killian, MIT, Boston; Jose Pinto Duarte, Lisbon University, Lisbon; Joe Burns, Thorn
Tommasetti, Boston; Mark Sich, Ford Motor Company, Michigan; Phil Bernstein, Auto-
desk, Boston; Lars Hesselgren, KPF, London; Mikkel Kragh, Arup; Bernard Franken,
Franken Architekten, Frankfurt; Martin Fisher, Stanford University, San Francisco; Tristram
Carfree, Arup; Mike Glover, Arup; Duncan Wilkinson, Arup.
Discussions for 3D Documentation study with: Richard Houghes, Stuart Bull, Steve Downing,
Simon Mabey, Valerio Giancaspro, Dominic Carter, Neill Woodger, Tristan Simmonds,
Martin Self, Martin Simpson, Dan Brodkin, J Parrish, Nick Terry, BDP, London.
1. Sir Ove Arup: Key Speech, delivered in Winchester (1970), internal publication (2001).
2. Mitchell, William J.: et al eds., Beyond Productivity: Information Technology, Innovation,
and Creativity. Washington, D.C.: The National Academies Press. 2003.
3. Schmitt, G.: Micro Computer Aided Design, New York, John Wiley & Sons, (1988)
4. Glymph, J.: et al. A parametric strategy for free-form glass structures using quadrilateral
planar facets, Automation in Construction 13 (2004) 187– 202, Elsevier.
5. Burry, M.: Between Intuition and Process: Parametric Design and Rapid Prototyping, in
Architecture in the Digital Age (ed. Branko Kolarevic) Spon Press, London, (2003)
6. Frazer, J.H.: An Evolutionary Architecture, Architectural Association, London, (1995)
7. Bull Stuart, Arup, Sydney Opera House, Theatre Refurbishment, Unpublished paper, Jan
2005
8. Yim, J.: Swire Properties. Unpublished interview notes. March 2006
9. Ceccato, C.: Gehry Technologies, Unpublished interview notes. March 2006
10. Self, M.: et al. in Dan Brodkin and Alvise Simondetti, 3D Documentation Report, Arup in-
ternal publication, Nov 2005
11. Clark, E., Woolf, D., Graham, D., Patel R., Shaw J., Simmonds, T., Simondetti, A.: unpub-
lished project notes, February 2002
12. Mabey S.: Arup, unpublished project notes, February 2003
13. Simmonds, T., Simondetti, A.: et al. Arup, Project notes. March 2006
14. Maher, A. and Burry, M. The Parametric Bridge: Connecting Digital Design Techniques in
Architecture and Engineering, ACADIA 2003 Proceedings Indianapolis (Indiana) pp. 39-47
Intrinsically Motivated Intelligent Sensed Environments

M.L. Maher, K. Merrick, and O. Macindoe
Abstract. Intelligent rooms comprise hardware devices that support human ac-
tivities in a room and software that has some level of control over the devices.
“Intelligent” implies that the room is considered to behave in an intelligent
manner or includes some aspect of artificial intelligence in its implementation.
The focus of this paper is intelligent sensed environments, including rooms or
interactive spaces that display adaptive behaviour through learning and motiva-
tion. We present motivated agent models that incorporate machine learning in
which the motivation component eliminates the need for a benevolent teacher to
prepare problem specific reward functions or training examples. Our model of
motivation is based on concepts of “curiosity”, “novelty” and “interest”. We
explore the potential for this model through the implementation of a curious
place.
1 Introduction
The development of intelligent rooms has been dominated by the development of
sensor configurations, effectors, and software architectures that specify protocols for
interpreting and responding to sensor data. A current practical application is the home
automation package, in which computational processes monitor activities within the
home and respond by turning lights on and off, locking and opening doors, and other
actions usually performed by the inhabitants of the home. Home automation systems
are possible with sensors and effectors that are programmed to respond as expected.
Another approach to sentient rooms, or intelligent rooms, is to use an agent ap-
proach to the computational processes and allow the agent to reason about the use of
the room so that it can facilitate human activity. This research started with the intelli-
gent room project [4, 5] and has progressed in several directions, from sensor
technology and information architectures, to possible agent models for intelligent
reasoning [14]. While these systems go beyond the home automation systems to pro-
actively support human activities, they still respond as programmed. Similarly, in the
developments in sentient buildings, with a goal of building self-aware systems, com-
putational processes monitor various environmental and human activity parameters to
maintain a model of the state of the building and to maintain a comfortable environ-
ment for the inhabitants.
creation schemes based on cognitive theories of the mind [1, 20, 24, 29, 31]. These
models have been used to focus an agent's reasoning and action around a particular
subset of its perception, to generate goals or to trigger other processes that satisfy or
stimulate its motivational mechanism. A survey of the interaction the motivation
component has with the model of the environment and the agent is shown in Figure 1
and elaborated in Table 1 and Table 2.
The motivation process takes information from the sensed environment and its own
memory to trigger learning, planning, action or other agent processes. Sensors provide
the agent with information describing the state of its environment. A sensation proc-
ess transforms this data into structures more appropriate for reasoning. These
structures include observed states which focus attention on subsets of the sensed envi-
ronment and events which provide information about the dynamics of the environ-
ment by representing the difference between sequential observed states. The agent
may also sense examples of behaviours performed by other entities in their environ-
ment. In addition to these structures, the agent’s memory may include rewards, goals,
behaviours, actions and plans, depending on the higher reasoning processes employed
by the agent.
Reinforcement learning [33] uses rewards to guide agents to learn a function which
represents the value of taking a given action in a given state with respect to some task.
A reinforcement learning agent is connected to its environment via perception and
action as shown in Figure 2(a). On each step of interaction with the environment, the
agent receives an input that contains some indication of the current state of the envi-
ronment. The agent then chooses an action as output. The action changes the state of
the environment and the value of this state transition is communicated to the agent
through a scalar reinforcement signal. The agent’s behaviour should choose actions
that tend to increase the long-run sum of values of the reinforcement signal. This
behaviour is learnt over time by systematic trial and error.
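As a concrete illustration of this loop, the sketch below implements tabular Q-learning in Python. It is not the system described in this paper; the environment interface (reset, step, actions) and the parameter values are assumptions made only for the example.

import random
from collections import defaultdict

def q_learning(env, episodes=100, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)                     # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state = env.reset()                    # initial indication of the environment state
        done = False
        while not done:
            # trial and error: mostly greedy, occasionally exploratory
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)   # scalar reinforcement signal
            # move the estimate towards the long-run sum of reinforcement values
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q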
In motivated reinforcement learning agents, shown in Figure 2(b), the sensation
process S differs from that of a standard reinforcement learning agent by producing
both an observed state O(t) and a set of events EO(t). The observed state is stored in
memory M and overwritten at each time step, while the set of events is forwarded to
the motivation process. The motivation process M stores the set of current events EO(t)
using a cumulative process to incorporate it into the set of all events E(t) in memory.
The motivation process then reasons about the set of all events to produce a reward
signal R(t) which is forwarded to the learning process. The learning process L per-
forms a reinforcement learning update such as the Q-learning update [35] to incorpo-
rate the previous action and the current observed state into the behaviour B(t-1) in
memory. Finally, the activation process A uses some exploration function of the Q-
learning action selection rule to select an action A(t) to perform from the updated be-
haviour B(t). The chosen action A(t) is stored in memory and triggers a corresponding
effector F(t) which makes a change to the agent’s environment.
Motivated reinforcement learning agents aim to use systematic trial and error to
maximise the long term sum of rewards produced by their motivation process. In
motivated reinforcement learning agents, the reward received in each state may
change over time, resulting in the emergence, stabilisation and disappearance of mul-
tiple mappings from states to actions within the behaviour B. An extension to the
standard motivated reinforcement learning model might save multiple behaviours in
memory so that each mapping from states to actions need only be learned once and is
not forgotten [30]. Either way, over time, motivated reinforcement learning agents are
able to learn to solve multiple tasks.
Fig. 2. Role of motivation in learning agents: (a) an agent model for reinforcement learning, (b) an agent model for motivated reinforcement learning
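The time step just described can also be summarised in code. The sketch below is one possible Python rendering of a single step of a motivated reinforcement learning agent; the helper functions passed in (sense, derive_events, motivation_reward, q_update, select_action) are hypothetical stand-ins for the sensation, motivation, learning and activation processes, not functions defined by the authors.

def motivated_rl_step(memory, world_state, sense, derive_events,
                      motivation_reward, q_update, select_action):
    # Sensation process S: produce the observed state O(t) and the events E_O(t)
    observed = sense(world_state)
    events = derive_events(memory.get("observed"), observed)
    memory["observed"] = observed                    # overwritten at each time step

    # Motivation process M: accumulate events, then compute the reward R(t)
    memory.setdefault("events", []).extend(events)
    reward = motivation_reward(memory["events"])

    # Learning process L: reinforcement learning update (e.g. Q-learning) of B(t-1)
    if memory.get("prev_observed") is not None:
        q_update(memory["prev_observed"], memory["prev_action"], reward, observed)

    # Activation process A: an exploration function selects the next action A(t)
    action = select_action(observed)
    memory["prev_observed"], memory["prev_action"] = observed, action
    return action                                    # triggers the corresponding effector F(t)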
attention can be autonomously focused on learning specific tasks. With the develop-
ment of appropriate motivation functions, motivated supervised learning agents have
the potential to derive contextual information from observations and examples, rather
than requiring a separate perception process dedicated to this task. In a curious place,
this may be useful for distinguishing task boundaries when there is more than one
person in a room, performing multiple tasks. As is the case with reinforcement learn-
ing, it may be possible to develop hierarchical behaviours to represent these tasks.
Fig. 3. Role of motivation in learning agents: (a) an agent model for supervised learning, (b) an agent model for motivated supervised learning
Unlike supervised and reinforcement learning algorithms which learn functions map-
ping observed states to actions, unsupervised learning algorithms aim to identify pat-
terns or important features in observed data. Agent models that incorporate unsuper-
vised learning strategies with and without motivation are shown in Figure 4. In moti-
vated unsupervised learning, the sensation process S is responsible for disassembling
S(t) into the observed state O(t) and a set of events EO(t). The motivation process M acts
as a filter to decide which observations will be passed on to the learning process for
learning. The learning process L then performs an unsupervised learning update such
as a neural network update, k-means update or data mining update to incorporate the
observed state and example action into the world model O(t-1) in memory. Unlike
motivated reinforcement and supervised learning, motivated unsupervised learning
does not build behaviours which represent mappings from input to predefined output
values such as actions. Rather, the output values produced by unsupervised learning
represent important patterns or features in the input data such as clusters, principal
components or repeated patterns in temporal data. The activation process A uses a set
R of predefined behavioural rules to act on the features identified by the learning
process. The chosen action A(t) triggers a corresponding effector F(t) which makes a
change to the agent’s environment.
Fig. 4. Role of motivation in learning agents: (a) an agent model for unsupervised learning, (b) an agent model for motivated unsupervised learning
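One way to read the motivated unsupervised model above is that motivation acts as a gate in front of an ordinary unsupervised update. The sketch below illustrates this with a simple k-means-style centroid update in Python; the interest function, threshold and learning rate are assumptions made for illustration only.

import math

def motivated_unsupervised_step(observation, events, centroids, interest_fn,
                                threshold=0.5, lr=0.1):
    # Motivation process M: only sufficiently interesting observations reach the learner
    if interest_fn(events) < threshold:
        return centroids                              # filtered out, no learning this step

    # Learning process L: move the nearest centroid towards the observation
    def dist(c):
        return math.sqrt(sum((ci - oi) ** 2 for ci, oi in zip(c, observation)))
    nearest = min(range(len(centroids)), key=lambda i: dist(centroids[i]))
    centroids[nearest] = [ci + lr * (oi - ci)
                          for ci, oi in zip(centroids[nearest], observation)]
    return centroids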
In a curious place, agents for which the sensed and effectible worlds are mutually
exclusive may be used to visualise information about human inhabitants. In a “win-
dow on the mind” approach, an unsupervised learning technique could be used to
model user actions, then display its model to humans using a set of predefined behav-
ioural rules defining actions to visualise the user model. However, like motivated
supervised learning agents, the motivation process can be used as a filter to allow the
agent to focus its attention on particular parts of its environment when there are mul-
tiple activities being performed around it. For example, in the visualisation scenario,
the agent might choose to visualise a user model of just one user when, in fact, there
are multiple users in the room. A standard unsupervised learning agent would simply
generalise over all users.
In scenarios where the display of digital information is more familiar, such as web-
browsers for the display of information from the world-wide-web, novel interaction
algorithms have been developed to automatically personalise the digital space [9].
Similarly, intelligent tutoring systems use artificial intelligence algorithms to tailor
learning material to the individual needs of students [12]. Large digital information
displays in public spaces have the same capacity for the use of novel techniques to
improve the usefulness of the displays, however the public, multi-user nature of these
displays calls for new algorithms to improve the ability of such displays to impart
information. We propose Curious Information Displays as a means of creating digital
displays that can attract the interest of observers and impart information by being
curious and learning about the ways in which observers respond to information being
displayed.
<node> → <id><itemType><source><keyword><colour>
<id> → [1, 999]
<itemType> → II | AI
<source> → web | database | webcam
<keyword> → curious | design | agent | computing
<colour> → red | orange | yellow | black
The information or aesthetic item being displayed is determined using the follow-
ing rules about the properties of the leaf nodes:
if <itemType> = AI then
    display <colour>
else
    if <source> = web
        display <keyword> from web
    else if <source> = database
        display <keyword> from database
    else
        display webcam
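For illustration, these rules translate directly into code. The Python sketch below mirrors the rules above; the media-source functions are trivial stand-ins rather than the display's actual implementation.

def show_colour(colour):          return f"colour field: {colour}"
def fetch_from_web(keyword):      return f"web item about {keyword}"
def fetch_from_database(keyword): return f"database item about {keyword}"
def webcam_frame():               return "live webcam frame"

def render_leaf(node):
    # node is a dict with the leaf-node properties defined by the grammar above
    if node["itemType"] == "AI":              # aesthetic item: show a colour field
        return show_colour(node["colour"])
    if node["source"] == "web":               # information item taken from the web
        return fetch_from_web(node["keyword"])
    if node["source"] == "database":          # information item taken from the database
        return fetch_from_database(node["keyword"])
    return webcam_frame()                     # remaining case: source is the webcam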
Both non-leaf and leaf node properties can be sensed by a software agent that rea-
sons about ways in which to modify the display by modifying these properties. These
properties are sensed according to the following grammar:
DisplayState → <non-leaf nodes><leaf node><leaf nodes>
<non-leaf nodes> → <node><non-leaf nodes> | ε
<leaf nodes> → <node><leaf nodes> | ε
<leaf node> → <node>
<node> → <id> II web <keyword> | <id> II database <keyword> | <id> II webcam | AI <colour>
<id> → [1, 999]
<keyword> → curious | design | agent | computing
<colour> → red | orange | yellow | black
This grammar stipulates that every display has at least one leaf node and zero or
more non-leaf nodes. In addition, only the properties relevant to a particular item type
are sensed for any node at any time. For example, if the current source is webcam, the
colour property will not be sensed. Likewise, if the current item type is AI then the
source property will not be sensed as only colour is relevant. Leaf node properties
describe the individual item held in the leaf node. Leaf node properties change when-
ever the item in the leaf changes. Non-leaf node properties summarise the properties
of their child leaf nodes. Non-leaf node properties only change when all the child
items achieve the same value for some property. This allows an agent to recognise the
formation of clusters of like information or aesthetic items in the display.
The Curious Information Display is located in a meeting room equipped with sen-
sor and effector hardware that allows a software agent to monitor and modify the
environment. Effectors enable a software agent to turn a rear projector on and off and
change the application it displays. Floor sensors provide information about the loca-
tion of people in the room near the display and near the doorway. The layout of the
room in which the display is mounted is shown in Figure 6.
Fig. 6. Layout of the room: floor sensors and the PC controlling the projector
Traditionally, representing the state of a world using attributes uses a fixed length
vector and a sensed state S(t) is constructed by combining sensations into a vector S(t)
= <s1(t), s2(t), … , sK(t)>. Two sensed states are compared by comparing the values of
sensations that have the same index in each sensed state. While this representation is
effective in many environments, it becomes inadequate in complex, dynamic envi-
ronments where it is not known what objects may need to be represented. While rela-
tional representations can be used for variable length state spaces in reinforcement
learning [10], such a representation is not viable for use in learning algorithms such as
neural networks which require an attribute based representation. One solution to this
problem uses placeholder variables in an attribute based representation to take the
value of new objects. However, such representations place a hard limit on the number
of new objects that can be introduced and hold large amounts of redundant data when
old objects are removed. As an alternative to fixed length vectors, we propose that a
state W of an environment may be represented as a string from a context-free gram-
mar (CFG) [23].
In worlds represented by CFGs, a sensed state may be produced as follows.
Agents have a set N = {N1, N2, … N|N|} of several different sensors. We call the data
provided by a single sensor Nn a sensation. Each sensor may return sensations of
varying length. Each sensor Nn assigns a label L to each sensation sn(t) such that
two sensed states can be compared using the values of elements with the same label.
The world state at time t, S(t), is the tuple of all the agent’s sensor inputs, S(t) = <s1(t),
s2(t), … sL(t), …>.
Fig. 7. A sensor model for learning agents in curious places. The sensor Nn has just registered information about the world state W(t) at time t. The new sensation sn is combined with sensations from iconic memory for the other sensors to produce the sensed state S(t).
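The labelled, variable-length sensed state described above might be sketched as follows; the sensor read() interface is a hypothetical assumption made for the example, not part of the architecture described in the paper.

def build_sensed_state(sensors):
    # sensors: iterable of objects whose read() returns {label: value} of any length
    state = {}
    for sensor in sensors:
        state.update(sensor.read())          # labelled sensations from one sensor
    return state

def differing_labels(state_a, state_b):
    # compare only elements that share the same label; absent objects simply do not appear
    labels = set(state_a) | set(state_b)
    return {l for l in labels if state_a.get(l) != state_b.get(l)}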
While CFGs can represent any environment which can be represented by a fixed
length vector, they have a number of advantages over a fixed length representation.
Using a CFG, only objects that are present in the environment need be present in the
state string. Objects that are not present in the environment are omitted from the state
string rather than represented with a zero value as would be the case in a traditional,
fixed length vector representation. This means that the state string can include any
number of new objects as the agent encounters them, without a permanent variable
being required for that object. Fixed length vectors are undesirable in dynamic envi-
ronments where it is not known what objects may occur as it is unclear at design time
what variables may be needed. Similarly they are undesirable in complex environ-
ments where there are many objects which only appear infrequently as the state vector
would be very large but only a few of its variables would hold values in any state.
Learning algorithms such as neural networks, both supervised and unsupervised,
can be modified to accept CFG representations of states by initialising each neuron as
an empty vector and allowing neurons to grow as required. Likewise, a table-based
reinforcement learner can be modified to use a CFG representation by storing strings
from the CFG in the state-action table in place of vectors.
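As an illustration of the last point, the sketch below shows a table-based Q-learner whose state-action table is keyed by CFG state strings instead of fixed-length vectors; the class name, action names and example state strings are assumptions made for the example.

from collections import defaultdict

class StringStateQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)          # keys are (state_string, action) pairs
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def update(self, state_str, action, reward, next_state_str):
        best_next = max(self.Q[(next_state_str, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.Q[(state_str, action)] += self.alpha * (target - self.Q[(state_str, action)])

# Usage: a state string from the display grammar is used directly as a table key.
# learner = StringStateQLearner(actions=["A0", "A1", "A2", "A3", "A4"])
# learner.update("S((pic1itemType:1)(pic1source:1))", "A1", 1.0,
#                "S((pic1itemType:1)(pic1source:2))")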
4.3 Sensors and the Sensation Process for the Curious Information Display
The sensor and effector architecture of The Sentient, shown in Figure 8, consists of
five separate sensor subsystems: The C-Bus subsystem for sensing and controlling
lighting, the Teleo subsystem for sensing movement via pressure pads embedded in
the floor, the Bluetooth subsystem for sensing and controlling Bluetooth devices, the
virtual subsystem for controlling software running on machines in the Sentient and
any virtual worlds or other software-based data sources that the Sentient is coupled
with, and the camera subsystem which is an independent subsystem that provides a
video stream via network cameras. All of these subsystems have device monitors:
software daemons that record the sensor inputs arriving from the subsystems into
tables in a MySQL-based context database. These tables are polled by the agent to
enable it to sense the state of the room. The agent is also able to command the effec-
tor systems of the room by writing requests for actions into an action queue in the
context database. These actions are carried out by device monitors, software daemons
that poll the action queue and pass on the requested action to the appropriate effector
subsystem.
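A hedged sketch of this sense-act cycle is given below. The table and column names (sensor_readings, action_queue and so on) are assumptions, since the actual schema of the Sentient's context database is not given here, and sqlite3 is used only as a stand-in for a MySQL connector with the same DB-API style interface.

import time
import sqlite3   # stand-in for a MySQL connector; both follow the Python DB-API

def poll_and_act(db_path, decide_action, interval=1.0):
    conn = sqlite3.connect(db_path)
    while True:
        # sense: read the latest sensor rows written by the device monitors
        rows = conn.execute(
            "SELECT sensor_id, value FROM sensor_readings "
            "ORDER BY recorded_at DESC LIMIT 50").fetchall()
        action = decide_action(dict(rows))           # e.g. ("projector", "on") or None
        if action is not None:
            # act: write a request into the action queue for the effector daemons
            conn.execute("INSERT INTO action_queue (effector, command) VALUES (?, ?)",
                         action)
            conn.commit()
        time.sleep(interval)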
In order to sense the movement of people in The Sentient, the Curious Information
Display uses a portion of The Sentient’s Teleo sensor subsystem. The Sentient’s
Teleo sensor subsystem, shown in Figure 9, is responsible for handling data from The
Sentient’s pressure pads. The pressure pad relays are connected to a series of Teleo
modules, programmable sensor modules created by MakingThings
(http://www.makingthings.com). A series of Analog Input Modules are wired to the
relays along with a Teleo Intro Module which interfaces via USB with a PC on which
the Teleo subsystem’s device monitor is running. The device monitor makes use of
the C Teleo Module API to receive pressure pad activation messages sent by the
Teleo Intro Module whenever the weight pressing down on a pad changes and stores
the activation data in a table in a database that records context information detected
by the Sentient’s sensor systems. The agent polls the table at regular intervals to
retrieve the sensor inputs.
We are using just the sensors directly in front of the rear projection screen as
shown in Figure 6. The sensors are polled at intervals and the sensor state at that time
is returned. The Curious Information Display can also sense information about the
state of its display. This includes the type of media being displayed on each grid cell.
Thus, a sensed state displaying a single 9x9 image of type 1 will look like:
S((pic1itemType1)(pic1source:1)(pic1keyword:2))
A world state containing a combination of 1x1 and 2x2 images might look like:
S((pic141itemType:1)(pic141keyword:2)(usb/3.7.0value:1)(usb/3.6.0
value:1) (pic144itemType:1)(pic144source:3)(pic11itemType:1)
(pic11source:3)(usb/3.5.0value:1)(pic13itemType:1)
(pic13source:3)(pic141itemType:1)(pic142source:2)(usb/3.4.0value:
1)(pic120itemType:2)(pic120source:4)(pic120colour:1)(pic141source
:2)(pic142keyword:2)(pic143source:3))
The motivated agent model for our Curious Information Display incorporates two
forms of motivation as reward signals: extrinsic motivation and intrinsic motivation.
The purpose of the extrinsic motivation is to reward the agent when people come
close to the display, based on the triggering of the floor sensors near the display. We
interpret people coming close to the display as a sign that the display is interesting to
them and thus reward the agent.
While the extrinsic reward signal is present when people are attracted to the dis-
play, when people are not attracted to the display an agent using only an extrinsic
reward signal would have no learning stimulus to direct its behaviour. Rather it would
fall back on its random exploration strategy. The purpose of the intrinsic reward sig-
nal is to motivate the agent to develop interesting displays in periods where people are
not attracted to its current display. Thus the intrinsic motivation signal motivates the
agent to attempt to capture people’s attention while the extrinsic motivation signal
motivates the agent to maintain people’s attention.
Extrinsic Motivation: The extrinsic motivation function for the Curious Information
Display assigns a positive reward each time the agent encounters a world state in
which one or more of the floor sensors directly in front of the display are triggered:
Rex(t) = 1 if ∃ floorSensor | floorSensor ∉ doorFloorSensors and floorSensorValue = 1; 0 otherwise
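Read literally, this function can be coded in a few lines. The sketch below assumes that floor sensors are identified by labels of the form used in the sensed-state examples earlier (e.g. "usb/3.7.0value") and that the door sensors are held in a separate set; both are assumptions made for illustration.

def extrinsic_reward(world_state, door_floor_sensors):
    # world_state maps sensed labels to values, e.g. {"usb/3.7.0value": 1, ...}
    for label, value in world_state.items():
        if label.startswith("usb/") and label not in door_floor_sensors and value == 1:
            return 1          # a non-door floor sensor in front of the display is pressed
    return 0                  # otherwise no extrinsic reward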
Intrinsic Motivation: We use a motivation function based on the computational
model of curiosity created by Saunders and Gero [26]. Saunders and Gero imple-
mented a computational model of interest for social force agents by first detecting the
novelty of environmental stimuli then using this novelty value to calculate interest.
The novelty of an environmental stimulus is a measure of the difference between
expectations and observations of the environment where expectations are formed as a
result of an agent’s experiences in its environment. Saunders and Gero model these
expectations or experiences using an Habituated Self-Organising Map (HSOM) [22].
Interest in a situation is aroused when its novelty is at a moderate level, meaning that
the most interesting experiences are those that are similar-yet-different to previously
encountered experiences. The relationship between the intensity of a stimulus and its
pleasantness or interest is modelled using the Wundt curve [2].
An HSOM consists of a standard Self-Organising Map (SOM) [27] with an addi-
tional habituating neuron connected to every clustering neuron of the SOM. A SOM
consists of a topologically structured set U of neurons, each of which represents a
cluster of events. The SOM reduces the complexity of the environment for the agent
by clustering similar events together for reasoning. Each time a stimulus event E(t) =
(e1(t), e2(t), … eL(t) …) is presented to the SOM a winning neuron U(t) = (u1(t), u2(t), …
uL(t) …) is chosen which best matches the stimulus. This is done by selecting the
neuron with the minimum distance to the stimulus event. The winning neuron and its
eight topological neighbours are moved closer to the input stimulus by adjusting their
weights. The neighbourhood size and learning rate are kept constant so the SOM is
always learning. The activities of the winning neuron and its neighbours are propa-
gated up the synapse to the habituating layer. The synaptic efficacy, or novelty, N(t),
is then calculated using Stanley’s model of habituation [32]. Habituation has the ef-
fect of causing synaptic efficacy or novelty to decrease with subsequent presentations
of a particular stimulus or increase with subsequent non-presentations of the stimulus.
This represents forgetting by the HSOM and allows stimuli to become novel, and thus
interesting, more than once during an agent’s lifetime.
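The following sketch illustrates one HSOM step along these lines: select the winning SOM neuron, move it towards the stimulus, and habituate it using a discrete approximation of Stanley's model. The neighbourhood update of the eight topological neighbours is omitted for brevity, and all parameter values are illustrative assumptions.

def hsom_step(event, neurons, habituation, lr=0.2, tau=3.0, alpha=1.05, y0=1.0):
    # neurons: list of weight vectors; habituation: list of synaptic efficacies (novelty)
    def dist(w):
        return sum((wi - ei) ** 2 for wi, ei in zip(w, event))
    winner = min(range(len(neurons)), key=lambda i: dist(neurons[i]))

    # move the winning neuron towards the stimulus (constant learning rate)
    neurons[winner] = [wi + lr * (ei - wi) for wi, ei in zip(neurons[winner], event)]

    # habituate: efficacy decreases for the stimulated winner (s = 1) and recovers
    # towards y0 for neurons that are not stimulated (s = 0)
    for i in range(len(habituation)):
        s = 1.0 if i == winner else 0.0
        habituation[i] += (alpha * (y0 - habituation[i]) - s) / tau

    return winner, habituation[winner]        # the novelty N(t) of the current stimulus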
Once the novelty of a given stimulus has been generated, the interest I of the stimu-
lus is calculated using the Wundt curve in Equation 1. The Wundt curve provides
positive feedback F+ for the discovery of novel stimuli and negative feedback F- for
highly novel stimuli. It peaks at a maximum value for a moderate degree of stimula-
tion as shown in Figure 10, meaning that the most interesting events are those that are
similar-yet-different to previously encountered experiences.
I(2N(t)) = F+(2N(t)) − F−(2N(t)) = F+max / (1 + e^(−ρ+(2N(t) − F+min))) − F−max / (1 + e^(−ρ−(2N(t) − F−min)))     (1)

F+max is the maximum positive feedback, F−max is the maximum negative feedback, ρ+ and ρ− are the slopes of the positive and negative feedback sigmoid functions, F+min is the minimum novelty to receive positive feedback and F−min is the minimum novelty to receive negative feedback. The interest value I is used as the reward Rin(t) which is passed from the motivation process M to the learning process L as follows:
Fig. 10. The Wundt curve is the difference between positive and negative feedback functions
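Equation 1 can be evaluated directly; the sketch below computes the interest, and hence the intrinsic reward, for a given novelty value. The parameter values shown are illustrative assumptions, not those used by the authors.

import math

def wundt_interest(novelty, f_max_pos=1.0, f_max_neg=1.0,
                   rho_pos=10.0, rho_neg=10.0, f_min_pos=0.5, f_min_neg=1.5):
    x = 2 * novelty                                           # 2N(t) in Equation 1
    positive = f_max_pos / (1 + math.exp(-rho_pos * (x - f_min_pos)))
    negative = f_max_neg / (1 + math.exp(-rho_neg * (x - f_min_neg)))
    return positive - negative                                # I(2N(t)), used as Rin(t)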
The agent modifies the display area using the following actions to manipulate the leaf
nodes in the underlying data structure. Certain changes will cause an effect that can be
sensed while others will not. For example, changing the colour of a leaf node cur-
rently displaying a webcam image will have no sensed effect. However, changing the
source of a leaf node to ‘database’ while a webcam image is displayed will cause the
agent to sense an image from the database rather than the webcam image.
A0 = Change itemType of <leaf node> to <itemType>
A1 = Change source of <leaf node> to <source>
A2 = Change keyword of <leaf node> to <keyword>
A3 = Change colour of image <leaf node> to <colour>
A4 = Change resolution of <non-leaf node> to <res>
The agent cannot directly sense changes to the resolution of a node but it can sense
this indirectly as a change in the configuration of nodes and leaf nodes it can sense.
4.7 Reflexes
In a real-world application, a number of practical issues arise. Firstly, the bulb on the
projector has a limited lifetime and must be conserved where possible by switching
the projector off. We propose to use two reflex actions to achieve this. Reflex actions
use rules of the form if <condition> then <action> to decide how to respond to
changes in the environment.
Fig. 11. A motivated reinforcement learning agent incorporating both intrinsic and extrinsic rewards and reflexes
if SR1(FloorSensors:On) then
    AR1(Projector,On) and AR2(Agent,On)
if SR2(TimeFloorSensorsOff:>5mins) then
    AR3(Projector,Off)
    AR4(Agent,Off)
A second issue arises because the room in which the Curious Information Display
is to be displayed is also used as a space for teaching, seminars and meetings within a
university. To make it possible for these activities to use the rear projection screen,
we propose the following reflexes to allow people to turn the agent off:
if SR3(SoftwareCommand:On) then
    AR2(Agent,On)
if SR4(SoftwareCommand:Off) then
    AR4(Agent,Off)
These reflexes will respond to a Pause button on the application. An updated agent
model showing how reflexes are implemented is shown in Figure 11. It differs from
the standard motivated reinforcement learning framework in that the sensation proc-
ess S also retrieves a set of rules R from memory which can trigger an immediate
action A(t) by the activation process A.
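A minimal sketch of such a reflex layer is given below; the condition and action encodings are assumptions chosen to mirror the four rules above.

def apply_reflexes(sensed, rules):
    # sensed: dict of sensed values; rules: list of (condition_fn, actions) pairs
    for condition, actions in rules:
        if condition(sensed):
            return actions                    # immediate actions A(t), bypassing learning
    return None                               # no reflex fired; normal reasoning continues

# The four reflexes from the text, under assumed sensor and command names:
reflex_rules = [
    (lambda s: s.get("FloorSensors") == "On",          ["Projector:On", "Agent:On"]),
    (lambda s: s.get("TimeFloorSensorsOff", 0) > 300,  ["Projector:Off", "Agent:Off"]),
    (lambda s: s.get("SoftwareCommand") == "On",       ["Agent:On"]),
    (lambda s: s.get("SoftwareCommand") == "Off",      ["Agent:Off"]),
]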
5 Conclusion
This paper has presented three agent models for intrinsically motivated learning as a
basis for intelligent environments that adapt learned behaviours from patterns of us-
age derived from their sensor data. We develop the process of intrinsic motivation as
computational models of novelty and interest that allow the agent to be rewarded for
responding to unexpected changes in the patterns of sensor data, and apply these
models to an application called a curious place. A model of curiosity acts as a filter
for the agent to process the sensor data to determine a reward for achieving the cur-
rent state of the environment. Our observations of the curious information display
reveal that the model of intrinsic motivation causes the information display to gener-
ate different repeating patterns of changes in order to attract people to the sensor in
front of the display, and then the display repeats a specific pattern in order to continue
to receive the extrinsic reward until people move away from the sensors in front of the
display. Our plans for the curious place application include the development of a
parallel virtual place that can be used as a platform for motivated reinforcement learn-
ing and motivated supervised learning. This allows us to move in the direction of
adaptable intelligent rooms that learn how to use the room by observing how people
use it.
Our current model of intrinsic motivation is based on novelty and interest. This has
provided a good starting point for a curious place that responds to the movement of
people in the place by generating novel and interesting patterns of behaviour. We plan
to augment the “curiosity” approach to motivation with a “competency” approach.
This combination may be more suitable for applications in which the intelligent room
responds to changes in sensor technology: it is curious about new events that
meet the criteria of “similar yet different”, since it is rewarded for repeating behaviours
that generate reward for novelty, and it is also motivated to develop new
learned behaviour patterns that receive reward for developing competencies. Experi-
ments with the different models of motivated learning will help determine the triggers
for different kinds of motivation, the scenarios for different types of learning (rein-
forcement, supervised, and unsupervised), and the parameters for learning and forget-
ting, essential features for adaptive behaviour.
References
1. Aylett, R.A. et al., Agent-based continuous planning, in: Proceedings of the 19th Work-
shop of the UK Planning and Scheduling Special Interest Group (PLANSIG 2000), 2000.
2. Berlyne, D. E. Aesthetics and psychobiology. Englewood Cliffs, NJ: Prentice-Hall, (1971).
3. Brdiczka, O., Reignier, P., Crowley, J., Supervised learning of an abstract context model
for an intelligent environment, in: Proceedings of the Joint sOc-EUSAI Conference,
Grenoble, (2005).
4. Brooks, R.A., Coen, M., Dang, D., DeBonet, J., Kramer, J., Lozano-Perez, T., Mellor, J.,
Pook, P., Stauffer, C., Stein, L., Torrance, M., Wessler, M.: The Intelligent Room Project.
In: Proceedings of the Second International Cognitive Technology Conference (CT’97).
Aizu, Japan (1997) 271-279.
5. Canamero, L., Modeling motivations and emotions as a basis for intelligent behaviour, in:
Proceedings of the First International Symposium on Autonomous Agents, ACM Press,
New York, (1997).
6. Coen, M.H.: Design Principles for Intelligent Environments. In: Proceedings of the Fif-
teenth National / Tenth Conference on Artificial Intelligence / Innovative Applications of
Artificial Intelligence. Madison, Wisconsin, United States (1998) 547–554.
7. Deci, E., & Ryan, R. Intrinsic motivation and self-determination in human behaviour. New
York: Plenum Press, (1985).
8. Gemeinboeck, P., Negotiating the In-Between: Space, Body and the Condition of the Vir-
tual, in: Crossings - Electronic Journal of Art and Technology, 4(1), (2004).
9. Dieterich, H., Malinowski, U., Khme, T. and Schneider-Hufschmidt, M. State of the art in
adaptive user interfaces. In: M. Schneider-Hufschmidt, T. Khme and U. Malinowski (Edi-
tors), Adaptive User Interfaces: Principle and Practice, North Holland (1993).
10. Dzeroski, S., De Raedt, L., & Blockeel, H. Relational reinforcement learning. Paper pre-
sented at the International Conference on Inductive Logic Programming, (1998).
11. Gershenson, C., Artificial Societies of Intelligent Agents, Bachelor of Engineering Thesis,
Fundacion Arturo Rosenblueth, (2001).
12. Graesser, A., VanLehn, K., Rose, C., Jordan, P. and Harter, D. Intelligent tutoring systems
with conversational dialogue. AI Magazine, 22(4): 39-52, (2001).
13. Green, R.G., Beatty, W.W. and Arkin, R.M., Human motivation: physiological, behav-
ioural and social approaches. Allyn and Bacon, Inc, Massachusetts, (1984).
14. Hammond, T., Gajos, K., Davis, R. and Shrobe, H., An Agent-Based System for Capturing
and Indexing Software Design Meetings, in: Gero, J.S. and Brazier F.M.T., eds., Agents in
Design 2002, Key Centre of Design Computing and Cognition, University of Sydney
(2002) 203-218.
15. Horvitz, E., J. Breese, Heckerman, D., Hovel, D. and Rommelse, K., The Lumiere project:
Bayesian user modelling for inferring the goals and needs of software users, in: Proceed-
ings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, (1998), 256-
265.
16. Kandel, E.R., Schwarz, J.H. and Jessell, T.M., Essentials of neural science and behaviour.
Appleton and Lang, Norwalk, (1995).
17. Kaplan, F., & Oudeyer, P.-Y. Motivational principles for visual know-how development.
Paper presented at the Proceedings of the 3rd international workshop on Epigenetic Robot-
ics : Modeling cognitive development in robotic systems, Lund University Cognitive Stud-
ies, (2003).
18. Kohonen, T. Self-organisation and associative memory. Berlin: Springer, (1993).
19. Krueger, M., Gionfriddo, T., et al., Videoplace: an artificial reality, in: Human Factors in
Computing Systems, CHI'85, ACM Press, New York, (1985).
20. Luck, M., & d'Inverno, M. Motivated behaviour for goal adoption. Paper presented at the
Multi-Agent Systems: Theories, Languages and Applications - Proceedings of the fourth
Australian Workshop on Distributed Artificial Intelligence, (1998).
21. Maher, M.L., Merrick, K., and Macindoe, O. Can designs themselves be creative?, in:
Computational and Cognitive Models of Creative Design, Heron Island, (2005), 111-135.
22. Marsland, S., Nehmzow, U., & Shapiro, J. A real-time novelty detector for a mobile robot.
Paper presented at the EUREL European Advanced Robotics Systems Masterclass and
Conference, (2000).
23. Merceron, A. Languages and Logic: Pearson Education Australia, (2001).
24. Norman, T. J. and Long, D., Goal creation in motivated agents, in: Intelligent agents: theo-
ries, architectures and languages, Springer-Verlag, (1995).
25. Russell, S. and Norvig, P., Artificial intelligence: a modern approach, Prentice Hall Inc,
(1995).
26. Saunders, R., Gero, J.S.: Designing for Interest and Novelty: Motivating Design Agents.
In: de Vries, B., van Leeuwen, J., Achten, H. (eds.): CAADFutures 2001. Kluwer,
Dordrecht (2001) 725-738.
27. Saunders, R., & Gero, J. S. Curious agents and situated design evaluations. In J. S. Gero &
F. M. T. Brazier (Eds.), Agents In Design (pp. 133-149): Key Centre of Design Computing
and Cognition, University of Sydney, (2002).
28. Schmidhuber, J. A possibility for implementing curiosity and boredom in model-building
neural controllers. Paper presented at the The International Conference on Simulation of
Adaptive behaviour: From Animals to Animats, (1991).
29. Schmill, M. and Cohen, P., A motivational system that drives the development of activity,
AAAMAS, ACM, Bologna, (2002).
30. Singh, S., Barto, A.G., and Chentanez, N.: Intrinsically Motivated Reinforcement Learn-
ing. http://www.eecs.umich.edu/~baveja/Papers/FinalNIPSIMRL.pdf Accessed 7/4/2004
(2004)
31. Sloman, A. and M. Croucher, Why robots will have emotions, in: Proceedings of the 7th
International Joint Conference on Artificial Intelligence, Vancouver, (1981).
32. Stanley, J. C. Computer simulation of a model of habituation. Nature, (1976) 261, 146-
148.
33. Sutton, R.S. and Barto, A.G., Reinforcement learning: an introduction, MIT Press, (2000).
34. Wang, Y., Huber, M. et al., User-guided reinforcement learning of robot assistive tasks for
an intelligent environment, in: Proceedings of the IEEE/RJS International Conference on
Intelligent Robots and Systems, IEEE, Las Vegas, (2003).
35. Watkins, C., Learning from delayed rewards, PhD Thesis, Cambridge University, (1989).
36. White, R. W. Motivation reconsidered: The concept of competence. Psychological Review,
(1959), 66, 297-333.
How to Teach Computing in AEC
1 Motivation
Information Technology in Architecture, Engineering and Construction (IT in AEC -
sometimes called Construction Informatics) is a mature research field with a dynamic
and growing body of knowledge (Turk 2006). However, the impact of this research on
construction practice has been limited (Froese 2004). This impact is generally
achieved through three mechanisms: (1) developments for products (such as software
that embeds that knowledge), (2) standardisation and best practice that prescribe the
knowledge to be used in the industry and, most importantly, (3) education (Turk 2004).
Generally, curriculum development has not followed the results of research: the
range of information and communication technology in Civil Engineering curricula,
for example, is incomplete and often restricted only to skills related to the use of
technology (Table 1 “Heitmann”). Therefore, traditional teaching and learning
scenarios need to be re-shaped, interconnected and extended to meet the needs for
specialized civil engineers with deeper IT knowledge in the AEC-industry (Table 1
“ASCE”).
Surveys and experiences show (Rebolj and Tibaut 2005) that the share and content
of courses related to computer science and IT in undergraduate civil engineering
Based on the results of a skill audit, the review of existing courses at partner
institutions, as well as market research and analysis, a course structure has been
developed consisting of 12 subjects (as listed in Table 2).
Table 2. ITC-Euromaster course pool: the commonly agreed upon curriculum in IT in AEC
Teaching Style
Table 3. ICT-support for different teaching styles within the ITC-Euromaster network
The main function of the CM is to enable access to teaching and learning material,
as well as other relevant functions (e.g. forums) and information (teacher and student
list, timetables etc.) from the Internet. The course management system is based on the
Modular Object-Oriented Dynamic Learning Environment (Moodle 2005).
The VC is supported by the ClickToMeet videoconferencing system, enabling
teachers to directly communicate with their classes. A participant list, chat, audio and
video control, document sharing, application sharing and a whiteboard are the basic
parts of the VC. Both systems are interlinked and can be used as an integrated system.
There is no specific electronic tool or methodology to assess the students’ learning
progress or teachers’ performance. Lecturers organize short tests during lectures and
seminars. Students deliver the test results by e-mail as electronic documents. Essays
and theses are delivered by the students and commented on by the supervisors
electronically. Final exams are prepared by the responsible lecturer. The exam itself
is organized locally by each participating university and monitored by local lecturers
or teaching assistants. At the end of each teaching period, questionnaires are handed
out to the students in order to get their feedback with regards to teaching style,
performance of the ICT-infrastructure, etc.
5 Conclusion
At the University of Maribor and the University of Ljubljana, the program was
accredited in 2004. In January 2006, University College Cork became the tenth
member of the ITC-Euromaster network and accredited a 12-month postgraduate
master program on “Information Technology in Architecture, Engineering, and
Construction,” using most of the ITC-Euromaster modules (http://zuse.ucc.ie/master).
The other partner institutions are either in the accreditation process, or the integration
of the new program into their existing programs is in progress.
The ITC-Euromaster network and the ITC@EDU workshop series are the two
basic elements to support sustainable, further development of the ITC-Euromaster
framework. The ITC-Euromaster network has managed the programme organization
since 2004. It has organized the transition process from an EU-project into a self-
sustaining course pool after the EU-funding period ended in the middle of 2005. All
project members have signed a common “course pool agreement”.
Since 2002 the ITC@EDU workshop series has proven to be a stable platform to
maintain the discussion process amongst the members of the ITC-Euromaster
network, external advisors and international partners. Through the workshops, the
internal discussion process is stimulated and feedback is given by external experts to
the network members, contributing to the continuous refinement and improvement of
the program content, the “delivery” mode and the ICT-infrastructure of the ITC-
Euromaster program.
With ten partners and three program accreditations developed out of the EU-funded
ITC-Euromaster project, our network has developed the necessary critical mass to
promote “IT in AEC” as an interdisciplinary scientific discipline. It has also
contributed substantially to the transfer of knowledge and technology from
academic institutions into the different areas of the AEC and FM industries.
References
1. Abudayyeh, Cai, Fenves, Law, O’Neill, Rasdorf. (2004). “Assessment of the Computing
Component of Civil Engineering Education 2004.” J. Comp. in Civ. Engrg., 18(3), 187-195
2. Adelsberger, Collis, Pawlowski, (Eds.) (2002). “Handbook of information technologies for
education and training.” Springer, Berlin - Heidelberg.
3. Bento, Duarte, Heitor, Mitchell, (Eds) (2004). “Collaborative Design and Learning –
Competence Building for Innovation”. Praeger Publisher, Westport CT, USA.
4. Froese (2004). “Help wanted: project information officer.” eWork and eBusiness in
Architecture, Engineering and Construction A.A. Balkema, Rotterdam, The Netherlands.
5. R. Fruchter (1996). “Multi-Site Cross-Disciplinary A/E/C Project Based Learning" in:
Proceedings of the Third Conference on Computing in Civil Engineering S. 126 ff,
American Society of Civil Engineers, New York, 1996 (ISBN 0-7844-0182-9).
6. Heitmann, Avdelas, Arne. (2003). Innovative Curricula in Engineering Education. Firenze
University Press.
7. Henry P. (2001), “E-learning technology, content and services”, Education + Training,
MCB University Press, USA, 43(4), (251)
8. ITC EUROMASTER (2005), The programme portal. <http://euromaster.itcedu.net>.
9. Kolb (1984). “Experiential Learning: experience as the source of learning and
development.” New Jersey: Prentice-Hall.
10. Menzel, Garrett, Hartkopf, Lee. “Technology Transfer in Architecture and Civil
Engineering by Using the Internet - Illustrated Through Multi-National Teaching Effort”
Fourth Congress on Computing in Civil Engineering. ASCE. New York. 1997. (224-231).
11. Moodle (2005), The Moodle homepage. <http://moodle.org>.
12. Pahl and Damrath. “Mathematische Grundlagen der Ingenieurinformatik”; Springer,
Berlin, Heidelberg. 2000.
13. Raphael and Smith. “Fundamentals of Computer Aided Engineering”. John Wiley. 2003.
14. Rebolj and Menzel (2004). “Another step towards a virtual university in construction IT”,
Electronic Journal of Information Technology in Construction, 17(9), 257-266
<http://www.itcon.org/cgi-bin/papers/Show?2004_17> (Oct. 20, 2005).
15. Rebolj and Tibaut (2005). “Computer Science and IT in Civil Engineering Curricula”
Proceedings of the IVth International Workshop on Construction Information Technology
in Education. TU Dresden. (35-42) (ISBN 3-86005-479-1).
16. Smith and Raphael (2000). “A course on fundamentals of computer-aided engineering”
Computing in Civil and Building Engineering (VIIIth ICCCBE). American Society of Civil
Engineers, Reston, VA, USA, pp 681 ff.
17. Smith, I.F.C. (2003). “Challenges, Opportunities, and Risks of IT in Civil Engineering:
Towards a Vision for Information Technology in Civil Engineering.” American Society of
Civil Engineers, Reston, VA, USA (on CD) (1-10).
18. Steindorf, G. “Grundbegriffe des Lehrens und Lernens”. Klinkhardt. Bad Heilbrunn. 2000.
19. Turk (2006). "Construction Informatics: Definition and Ontology", accepted paper,
Advanced Engineering Informatics.
20. Turk Z (2004). “Construction Informatics Themes in the Framework 5 Programme.” 5th
European conference on product and process modelling in the building and construction
industry - ECPPM 2004, A.A. Balkema: Taylor & Francis Group. (399-405).
21. Turk and Delic (2003): “Undergraduate Construction Informatics Curriculum.”
Concurrent engineering - The vision for the future generation in research and application,
A.A. Balkema: Taylor & Francis Group. (1185-1191).
Evaluating the Use of Immersive Display Media
for Construction Planning
John I. Messner
Abstract. The aim of this research is to develop construction planning and plan
review processes within virtual environments that result in the consistent devel-
opment of more innovative and higher quality construction plans. During the
early stages of this research, we have shown that affordable immersive display
systems can be constructed and effectively used to allow construction project
teams to better visualize product and process information in a stereoscopic 3D
environment. The Immersive Construction (ICon) Lab has been built at Penn
State University as a test bed facility for 3D and 4D CAD model visualization.
This paper presents an overview of three case study projects performed in the
immersive display system with a focus on construction planning activities.
Early results illustrate that a project team can identify innovative solutions to
construction challenges when performing a plan review of 3D and 4D virtual
prototypes in an immersive environment.
1 Overview
The ability to visualize and experience facility design and construction plan informa-
tion is critical to providing valuable information and feedback into design and con-
struction process decisions. Many valuable research efforts have focused on the
development of solutions for modeling the product and process information for a
facility. An overview of many of these modeling efforts can be found in Eastman[1]
and Lee et al.[2]. Additional research has also focused specifically on the development
of information models and modeling methods related to construction planning with
4D CAD models along with the addition of space planning information to the models.
Several valuable references include Koo and Fischer[3], Akinci et al.[4], and Dawood
et al.[5] along with others.
While significant research has focused on the development of models that can
benefit the Architecture/Engineering/Construction (AEC) Industry, much less re-
search has focused on the most appropriate methods for displaying and interacting
with the models. This research focuses on the impact of the display media used to
visualize 3D and 4D models during the construction planning and plan review proc-
ess. The use of immersive display systems allows the project team to gain an in-
creased sense of immersion within the 3D or 4D model, sometimes referred to as a sense
of presence[6]. This additional immersion can provide an experience where people feel
embedded in the design, and they can gain a better sense of scale since they can visu-
ally navigate the model at full scale.
For the three case study projects, both 3D and 4D models were developed for each
project. These models were developed in various 3D CAD applications. The models
were then converted to formats that could be easily loaded into the immersive display
systems. The conversion varied based on the display media used. It is important to
note that the conversion of 4D models into the immersive display system for stereo-
scopic visualization was not always accomplished since it is a more tedious process to
address the temporal data. Some construction plan reviews included a combination of
immersive 3D models and non-immersive (or non-stereoscopic) 4D models.
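To make the conversion issue concrete, the following minimal Python sketch (not part of the original case studies; activity names, dates and component identifiers are invented) illustrates the basic linking that a 4D model relies on: schedule activities reference geometry components, so the set of components in place at a given review date can be derived.

from datetime import date

# Hypothetical 4D model: schedule activities linked to 3D component IDs.
activities = [
    # (activity name, start, finish, linked component IDs)
    ("Erect structural steel, Level 1", date(2005, 3, 1), date(2005, 3, 14), ["beam_101", "col_201"]),
    ("Place slab, Level 1", date(2005, 3, 15), date(2005, 3, 20), ["slab_301"]),
    ("Erect structural steel, Level 2", date(2005, 3, 21), date(2005, 4, 4), ["beam_102", "col_202"]),
]

def components_built_by(review_date):
    """Return IDs of components whose activities have finished by review_date."""
    built = []
    for name, start, finish, components in activities:
        if finish <= review_date:
            built.extend(components)
    return built

if __name__ == "__main__":
    print(components_built_by(date(2005, 3, 22)))  # ['beam_101', 'col_201', 'slab_301']

Preserving this activity-to-geometry link across format conversions is precisely where temporal information is most easily lost.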
This research focused on the use of immersive display systems that allowed users to
be placed within a 3D stereoscopic virtual model. Two different immersive display
systems were used in this research. The first system is a CAVE (Cave Automatic
Virtual Environment) in the SEA Lab at Penn State[10]. This display system has five
projected surfaces (four walls and a floor) and includes active stereoscopic visualiza-
tion, position tracking, and specialized audio (see Fig. 1).
The second display system is a three-screen immersive projection display system in the
Immersive Construction (ICon) Lab[11]. The three-screen display uses six projectors for
passive stereo visualization, which allows users to be immersed within a 3D or 4D
virtual model of a facility at full scale (see Fig. 2). The display system can be oper-
ated from a single computer with a Windows XP operating system or from a four
computer Linux cluster. This allows for the use of standard applications developed
for a Windows operating system along with immersive virtual reality applications.
This lab was developed to provide an affordable and relatively easy to use immersive display environment.
Fig. 2. Rendering of the three-screen display system in the Immersive Construction (ICon) Lab at
Penn State
The first case study was one part of a larger project aiming to investigate the value of
virtual mock-ups for nuclear power plant design, construction, and operation. A vir-
tual prototype of the Westinghouse AP 600 was developed from the 3D CAD design
of the facility (see Fig. 3). Two construction planning experiments were then per-
formed to assess the value of the virtual facility prototype.
Fig. 3. Construction plan review meeting for Room 12306 displayed in the SEA Lab CAVE
Prototype Components. The prototype for a room (Room 12306) within the AP 600
plant was developed which included a 3D virtual model in the SEA Lab CAVE. In
addition, a 4D interface was developed to allow for the sequential display of the con-
struction components within the modular design.
Summary of Results. Two experiments were performed in the virtual facility proto-
type. The first was an investigation of the ability of two groups of two graduate students
to develop logical construction schedules for Room 12306 in the CAVE. The
results show that relatively inexperienced participants could use the mock-up to de-
velop reasonable schedules for the room.
The second experiment focused on evaluating the value of reviewing construction
plans in 4D within the prototype. For this experiment, four experienced construction
superintendents (in two teams of two people) were provided with the paper drawings
in isometric format for Room 12306. They were then asked to develop a schedule for
the room and identify constructability issues. Following the development of their
schedule, they traveled to the SEA Lab and reviewed their schedule in the CAVE. It
was interesting to note that prior to the CAVE review, the two teams only identified a
total of 2 constructability issues. Following the review, they identified a total of 10
module boundary suggestions and a total of 9 weld location change suggestions. The
superintendents also rated the CAVE review as high on ease of use, and the planners
gained confidence in their schedules based on pre- and post-surveys. By interac-
tively developing a schedule within the CAVE, the planners were able to reduce their
previous schedule duration of 35 days to a revised schedule of 25 days, primarily
through the identification of opportunities to perform multiple activities at the same
time without physical space interferences [12].
The Stuckeman Family Building is a four-story building on the Penn State campus at
University Park, PA, that opened in August 2005. During construction, a virtual facility
prototype of the building was developed to gain feedback from future occupants, and
to aid in the construction planning and plan communication for the project.
Prototype Components. The virtual facility prototype for this building project in-
cluded a 3D Building Information Model that was developed in Graphisoft ArchiCAD
along with a 4D CAD model developed in Common Point Project 4D. The 3D model
was converted into VRML format for stereoscopic display in the ICon Lab using the
BS Contact Stereo application. The modeling team worked with the construction
project manager to develop a 4D model of the project.
Summary of Results. Participants from the project team reviewed the virtual proto-
type during a construction progress meeting. Following the meeting, the participants
completed a survey on their perspectives. Results from the survey show that 85% of
the participants felt that they had a better understanding of the building design follow-
ing the 1 hour meeting in the ICon Lab. 85% of the participants also felt that the
virtual prototype could be a valuable communication tool on the project and that it
could help avoid delays in construction[13, 14].
Fig. 4. Shirlington Village Project: a) 4D CAD model; and b) construction plan review meeting
in ICon Lab
Prototype Components. The prototype included a 3D CAD model that was con-
verted into VRML format for display within the ICon Lab. A 4D CAD Model was
also developed in Common Point Project 4D from the contractor’s baseline schedule.
Summary of Results. The prototype was reviewed by seven members of the project
team during a project team meeting that occurred within the ICon Lab. During the
meeting, a time study was performed to evaluate the types of discussion that occurred
during the meeting, using the discussion categories outlined by Liston et al. [15]. Following
the meeting, these values were compared to an average value taken from 17 more
traditional project meetings. The results show that the type of conversation that oc-
curred in the meeting in the ICon Lab included a greater percentage of evaluative and
predictive discussions with less time spent on descriptive and explanative discus-
sion[16]. In post meeting interviews, the participants stated that they felt the team
members were able to ‘play off of others’ comments and could look at different alter-
natives.’ The team was able to identify opportunities to recover 14 working days of
schedule time, primarily through changes in the scaffolding method for the masonry
work, and the team gained an improved level of confidence in the schedule.
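As an illustration of this kind of time study (the minute counts below are invented placeholders, not the case study data), meeting time coded into the discussion categories of Liston et al. [15] can be converted into percentage shares and compared against a baseline of traditional meetings, for example as follows in Python:

# Illustrative only: minutes of meeting time coded per discussion category.
icon_lab_meeting = {"descriptive": 10, "explanative": 8, "evaluative": 22, "predictive": 20}
traditional_avg = {"descriptive": 25, "explanative": 18, "evaluative": 10, "predictive": 7}

def time_shares(minutes_by_category):
    """Convert coded minutes into percentage shares of total meeting time."""
    total = sum(minutes_by_category.values())
    return {cat: round(100.0 * m / total, 1) for cat, m in minutes_by_category.items()}

for label, meeting in [("ICon Lab", icon_lab_meeting), ("Traditional average", traditional_avg)]:
    print(label, time_shares(meeting))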
5 Future Research
Research is continuing on the evaluation of the use of immersive display systems for
construction planning. If the process can be systematized to take full advantage of the
virtual prototype, the author believes even more benefit can be gained. For example, if
the participants can be provided with a review process which specifically focuses on
the information available to them in the virtual prototype, then they could gain in-
creased benefits. Another goal is to continue to isolate the various attributes of the
virtual facility prototype, along with other factors (e.g., people, physical place, and
planning process), in more controlled experiments. Additional research is also needed
to improve the use of product and process information in immersive display environ-
ments. Throughout these case studies, information was frequently lost when converting
from one format to another. These issues of application interoperability and data transfer
can significantly reduce the value of the developed prototype, since it then lacks
detailed information.
While it remains difficult to specifically quantify the additional value of displaying
3D and 4D models within immersive display systems, it is clear that the display media
can impact the communication between team members and aid in the identification of
innovative planning solutions.
Acknowledgements
The author would like to thank all the participants and researchers who aided in the
development and execution of the case study projects with specific reference to mem-
bers of the CIC Research Program and the SEA Lab. The author would also like
to thank the U.S. Department of Energy and the National Science Foundation
(Grants 0343861 and 0348457) for supporting this research. Any opinions or
recommendations expressed in this paper are those of the author and do not necessar-
ily reflect the views of the NSF or the DOE.
References
1. Eastman, C. M. (1999). Building product models: Computer environments supporting de-
sign and construction, CRC Press LLC, Boca Raton, FL.
2. Lee, A., Marshall-Ponting, A. J., Aouad, G., Wu, S., Koh, I., Fu, C., Cooper, R., Betts, M.,
Kagioglou, M., and Fischer, M. (2003). "Developing a vision of nD-enabled construction."
The Center of Excellence for Construct IT, Salford, U.K.
3. Koo, B., and Fischer, M. (2000). "Feasibility study of 4D CAD in commercial construc-
tion." Journal of Construction Engineering and Management, ASCE, 126(4), 251-260.
4. Akinci, B., Fischer, M., and Kunz, J. (2002). "Automated generation of work spaces re-
quired by construction activities." Journal of Construction Engineering and Management,
ASCE, 128(4), 306-315.
5. Dawood, N., Sriprasert, E., Mallasi, Z., and Hobbs, B. (2002). "4D visualisation develop-
ment: real life case studies." Distributing Knowledge In Building, CIB W78 International
Conference, The Aarhus School of Architecture, Denmark.
6. Slater, M., and Wilbur, S. (1997). "A Framework for Immersive Virtual Environments
(FIVE): speculations on the role of presence in virtual environments." Presence: Teleop-
erators and Virtual Environments, 6(6), 603-616.
7. Schrage, M. (2000). Serious play: how the world's best companies simulate to innovate,
Harvard Business School Press, Boston, MA.
8. Wang, G. G. (2002). "Definition and review of virtual prototyping." Journal of Computing
and Information Science in Engineering (Transactions of the ASME), 2(3), 232-236.
9. Chua, C. K., Leong, K. F., and Lim, C. S. (2003). Rapid prototyping: principles and appli-
cations, World Scientific Publishing Co. Pte. Ltd., Singapore.
10. Shaw, T. (2002). "Applied Research Lab at Penn State University, Synthetic Environment
Applications Lab (SEA Lab)." May 7, (www.arl.psu.edu/facilities/facilities/sea_lab/
sealab.html), Accessed: Dec. 20, 2002.
11. Computer Integrated Construction Research Program. (2005). "Immersive Construction
(ICon) Lab." (http://www.engr.psu.edu/ae/cic/facilities/ICon.aspx), Accessed: February
28, 2006.
12. Yerrapathruni, S., Messner, J. I., Baratta, A., and Horman, M. (2003). "Using 4D CAD and
immersive virtual environments to improve construction planning." CONVR 2003, Confer-
ence on Construction Applications of Virtual Reality, Blacksburg, VA, 179-192.
13. Gopinath, R. (2004). "Immersive virtual facility prototyping for design and construction
process visualization," M.S. Thesis, Architectural Engineering. The Pennsylvania State
University, University Park.
14. Gopinath, R., and Messner, J. I. (2004). "Applying immersive virtual facility prototyping
in the AEC industry." CONVR 2004: 4th Conference of Construction Applications of Vir-
tual Reality, Lisbon, Portugal, 79-86.
15. Liston, K., Fischer, M., and Winograd, T. (2001). "Focused sharing of information for multidisciplinary decision making by project teams." ITcon, 6, 69-82 (http://www.itcon.org/2001/6).
16. Maldovan, K., and Messner, J. I. (2006). "Determining the effects of immersive environ-
ments on decision making in the AEC Industry." Joint International Conference on Com-
puting and Decision Making in Civil and Building Engineering, Montreal, Canada, (Sub-
mitted for Review).
A Forward Look at Computational Support for
Conceptual Design
John Miles1, Lisa Hall2, Jan Noyes3, Ian Parmee4, and Chris Simons4
1 Cardiff School of Engineering, Cardiff University, UK
2 Institute of Biotechnology, University of Cambridge, UK
3 Department of Experimental Psychology, University of Bristol, UK
4 Faculty of Computing, Engineering & Mathematics, University of the West of England, UK
1 Introduction
In 2004, two of the UK’s research councils, the Arts and Humanities Research
Council and the Engineering and Physical Sciences Research Council launched an
initiative to set the future design research agenda for the UK. The initiative invited
bids for funding for “design clusters”. These were to be groups of people who would
spend a year looking at a design area of their choosing. The bids were to contain
information regarding the people within the proposed cluster, the programme of work
and the deliverables. Bidders were encouraged to be as innovative as possible in their
thinking and also multi-disciplinary. It was hoped that by bringing together people
from diverse backgrounds, generic ideas and concepts would emerge.
The call for proposals was aimed as widely as possible, including, for example,
choreographers, as well as more traditional design backgrounds. A substantial number
of bids were received from which 20 clusters were selected. One of the successful
clusters is entitled Discovery in Design: People Centred Computational Issues and the
work and findings of this cluster form the subject matter of this paper.
The cluster has a central core of people with experience in the use of software
techniques to support conceptual design and particularly in the use of evolutionary
computation. Also, the cluster has an emphasis on human factors. Cluster members
believe that the way forward is a cooperative blending of knowledge and skills
between humans and computers. To ensure that human factors are fully considered,
the cluster also contains psychologists and social scientists. The objective of the
cluster was defined as “… to identify primary research aspects concerning the
development of people-centred computational design environments that engender
concept and knowledge discovery across diverse design domains”.
The cluster’s main way of working has been through a series of four workshops.
The participants have been the members of the cluster (typically some 15 of these
attended each workshop) plus invited guests. Most of the guests have strong track
records in design. However there were one or two people from other disciplines such
as, for example, a specialist in detecting and predicting trends relating to lifestyles and
fashions. In all cases the guests gave presentations on their particular speciality.
Further information about the cluster can be found at www.ip-cc.org.uk.
3 The Workshops
The first three workshops were used to explore ideas and concepts and to highlight
problems and weaknesses in terms of conceptual design and its computational
support. These were then explored as potential areas for future research. This was an
essentially divergent process. The fourth workshop was convergent, with the work of
the previous workshops being analysed and synthesized.
4 Previous Work
There have been a number of other efforts to develop roadmaps for future research
directions. For the construction industry alone, notable examples which have
concentrated on IT are the FIATECH / NSF initiative [3] and Amor et al. [1].
Predicting what technologies will succeed in the future is always difficult and often
the real breakthroughs come from something which cannot be foreseen. This is
always a problem with roadmapping exercises. The cluster has largely avoided the
pitfall of prediction and has limited itself to identifying areas of need. Also the focus
of the cluster is different in that it is multi-disciplinary.
There was a discussion about whether or not this area should be included. This is a
well established area of research in which many research teams are working. Hence,
some people thought that it was outside the cluster’s remit and that by including
knowledge capture and enabling environments, human factors were sufficiently
considered. However the majority argued that without understanding human needs,
abilities and reactions to different developments, the proposed research would never
fulfil its aims. It should be appreciated though that the cluster’s suggestion is limited
to the specific context of conceptual design assisted by computational support rather
than a more global understanding of human behaviour.
The basic argument is that computational tools have to fit in with human
capabilities and needs. For example, humans are very good at pattern matching and
assimilating visual information, although from any image they typically take in only
30% of the information. Research is needed to better define human abilities,
especially in relation to design. The design studies undertaken to date (e.g. [6]) have
shown that designers have problems with cognitive overload and bias and tend to
stick with their initial thoughts and decisions. One obvious area for research is to
ensure that the sort of software environments that are envisaged will help designers to
avoid these problems. The other research need is that of communication between the
user and the computer, not in the terms described in the knowledge capture section
but more in the fundamental area of what sort of tools and interface strategies are best
suited to transferring information. Finally the cluster unanimously agreed that any
software should ideally be exciting and interesting to use. This is something which
design software has so far largely failed to achieve.
5.2 Representation
The term representation caused the cluster problems because it means different things
to different people. Some within the cluster argued against representation being a
research area because they considered that it is a part of search and exploration.
However once the cluster had fixed on a common definition, it was agreed that
representation should be included. The cluster’s definition of representation is that it
includes all areas within the software where the properties and characteristics of the
problem domain are described. If the specific example of genetic algorithms is
considered, the genome and the coding strategy are a part, as is the fitness function.
Also, as Zhang & Miles [7] show, for certain classes of problems, crossover and
mutation can affect the form of the final solution and so, in some cases, they can be
considered to be part of the representation.
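As a hedged illustration of this broad notion of representation, the following Python sketch of a simple parameter-based genetic algorithm (the design problem, parameter ranges and constraint are invented for illustration) shows the genome encoding, fitness function and variation operators that would all count as part of the representation in the cluster's sense:

import random

# Hypothetical parametric design problem: choose two parameters (say, depth and
# width of a member) to minimise cost while meeting a capacity constraint.
# Genome, fitness, crossover and mutation together form the "representation".

def random_genome():
    return [random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)]  # [depth, width]

def fitness(genome):
    depth, width = genome
    capacity = depth * depth * width                 # stand-in for a resistance model
    cost = depth * width                             # stand-in for material cost
    penalty = 0.0 if capacity >= 0.05 else 10.0      # constraint handling is also representational
    return -(cost + penalty)                         # higher is better

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.05) if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(20)]
print("Best design:", max(population, key=fitness))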
To date, much of the representation used, especially in genetic algorithms, has
relied on the ability of the problem to be expressed as a series of parameters. As
Zhang et al [8] show, for some classes of problems such as topological search, this
can be limiting. Also, one of the forthcoming challenges for design software is for it
to be able to tackle complex, multi-participant, multi-objective, highly constrained
problems. These will require far more complex representation strategies than are
currently used. The work of some cluster members on software design has shown that
there are many areas for which the development of the relevant software techniques is
still in its infancy. Even for the more mature domains, there are significant challenges
in terms of representation techniques. For example, for topological search, a generic
form of representation has yet to be established which can handle a multiplicity
of highly complex shapes. Without this, true topological search is not possible.
Although representation has not been a significant limiting factor to date, as work
progresses in other areas, the limitations of current strategies will start to hinder
progress and the need for further work in this area will become more apparent.
Search and exploration are basic features of any viable conceptual design tool. The
potential complexity of multi-discipline, multi-objective search spaces has already
been discussed but what has yet to be covered is the difficulty of searching such
spaces in a sensible manner. As computational support for design tackles ever more
complex and obtuse domains then the search will become more difficult. With multi-
objective search, there are techniques such as Pareto analysis for selecting areas of
high performance but only for a limited number of objectives. Parmee & Abraham [4]
present a method which avoids these limitations. However, there is still a concern
that, with substantial numbers of objectives and constraints, trade-offs in the search
process will render the results meaningless. The implications of such searches need to
be thoroughly investigated to either confirm or assuage these fears.
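For a limited number of objectives, the core of a Pareto analysis is extracting the non-dominated set. A minimal sketch, assuming all objectives are to be minimised and using invented candidate designs, is:

def dominates(a, b):
    """True if a is at least as good as b on all objectives and strictly better on one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the subset of solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Each tuple is (cost, weight, deflection) for a candidate design -- illustrative values only.
candidates = [(10, 5, 0.9), (8, 7, 1.1), (12, 4, 0.8), (9, 6, 1.0), (11, 6, 1.2)]
print(pareto_front(candidates))

With substantial numbers of objectives, the front computed this way tends to contain most of the candidates, which is one way of seeing the concern raised above.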
The cluster spent some time looking at innovation and creativity. Undoubtedly the
successful economies of the future will be those that are the most innovative and
creative in terms of commercial products. Creativity typically arises from moments of
inspiration, which are often the coalescing of random and previously unconnected
thoughts. The cluster discussed whether it would be possible to assist with this
process using computational support using, for example, a “nonsense” generator. This
could be attractive but also could be extremely wearing if one had to spend hours
considering random nonsense. The idea of “jump out” agents was considered, these
being agents which somehow leave the current search space and look elsewhere for
solutions [2]. Another idea was contradiction; going against the accepted wisdom.
Linked to the ideas discussed in Two Way Knowledge Capture is the concept that,
if the system could understand what the designer is trying to achieve, then it could
search for relevant ideas and information, very much in the way that the semantic web
is intended to anticipate needs. This could, for example, take the form used by Amazon
("people who designed one of these also looked at …") or it could be more like a Google search.
6 Sub-classes
In addition to the above 5 key areas of work, a considerable amount of time was spent
looking at how the areas could be broken down into sub-classes. At the end of
workshop three, some 350 potential subjects for sub-classes were identified. These
ranged from statements made by some of the guest speakers such as one designer
saying he has a “butterfly mind” to categories such as “team integration”. Cluster
members were asked to place each of the 350 potential sub-classes into one of the 5
key areas. Inevitably there was some divergence of opinion but the exercise made it
possible to identify groupings within the potential sub-classes. The analysis was used
at the start of workshop four to reduce the 350 down to 39 sub-classes.
An exercise was then undertaken to analyse these 39 sub-classes and determine
how they relate to the five key areas in terms of importance. This was achieved by
plotting two dimensional graphs with the graph axes being two of the key areas,
giving in total ten graphs. The purpose of the exercise was to make the cluster
members think about the relevance of the 39 categories in relation to each of the key
areas and also to provide a visual aid to stimulate discussion. An example of a graph
is shown in Fig. 1.
The exercise was useful because it stimulated discussion and it gives an indication
of the potential difficulty of the research within each of the key areas. The amount of
information obtained from this exercise is so large and complex that its analysis is
incomplete but Parmee et al [5] have extracted the sub-classes that lie in the upper
quartiles of the four graphs of each key area and identified the sub-classes that occur
most often. These are shown in table 1. Note that the sub-classes are not exclusive to a
given key area. This is an important finding and one which is still being analysed.
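A minimal sketch of the quartile tally described above (the sub-class names, scores and the two key areas below are invented placeholders, not the cluster data) might look as follows:

from collections import Counter

# Invented relevance scores (0-10) of a few sub-classes against two key areas.
# In the cluster exercise there were 39 sub-classes, 5 key areas and 10 pairwise graphs.
scores = {
    "team integration":   {"search_exploration": 8, "understanding_humans": 9},
    "butterfly mind":     {"search_exploration": 3, "understanding_humans": 7},
    "cognitive overload": {"search_exploration": 5, "understanding_humans": 9},
    "topological search": {"search_exploration": 9, "understanding_humans": 2},
}

def upper_quartile_subclasses(scores, area):
    """Return sub-classes whose score on the given area reaches a crude 75th-percentile cut."""
    values = sorted(s[area] for s in scores.values())
    threshold = values[int(0.75 * (len(values) - 1))]
    return {name for name, s in scores.items() if s[area] >= threshold}

counts = Counter()
for area in ("search_exploration", "understanding_humans"):
    counts.update(upper_quartile_subclasses(scores, area))
print(counts.most_common())  # sub-classes appearing most often in the upper quartiles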
7 Discussion
The multi-disciplinary nature of the cluster was very beneficial and the interaction
brought out some interesting concept and ideas. The body of information produced by
the cluster is large and contains useful pointers as to the way forward. The cluster has
identified that there is a huge amount of research yet to be undertaken before we can
provide comprehensive software environments to support most areas of conceptual
design. Some of the work to be done is fairly straightforward but much of it will
require a substantial amount of fundamental research. The cluster has focussed on
areas where current approaches are lacking and identified the shortcomings. The
workshops have produced a huge amount of information and this is still being
analysed, especially with regard to the sub-classes. The work has been so rewarding
and information rich that the members of the cluster have come together to form the
Institute for People Centred Computation (www.ip-cc.org.uk). This will inherit the
intellectual property of the cluster and continue its work both in looking at future
research requirements but also in delivering the research.
[Figure: two-dimensional plot of sub-class numbers against the key areas "Search and Exploration" and "Understanding humans", with increasing relevance along each axis]
Fig. 1. An example of relating the 39 sub-classes to the key areas. (Some of the 39 categories
have been omitted in the interests of clarity). [5]
8 Conclusions
A cluster consisting of people from diverse backgrounds has come together to look at
the requirements for software support for conceptual design. The starting point of the
cluster was that the work needed to be people centred and nothing that has arisen in
the workshops has caused this assumption to be questioned. The cluster has identified
five key areas in which further research is needed. Beneath these five areas are thirty
nine sub-classes which relate to one or more of the key areas. The cluster has
identified a significant body of research that needs to be undertaken to enhance the
current technology of computational support for conceptual design.
Acknowledgements. The work of the cluster was supported by the UK’s Engineering
and Physical Sciences Research Council and the Arts and Humanities Research
Council. Thanks are also due to the guest speakers for their contributions, notably Tom Karen,
Simeon Barber, Chris Jofeh and Pat Jordan.
References
1. Amor, R, Betts, M & Coetzee, G, 2002. IT for research: Recent work and future directions,
ITcon, 7, 245-258.
2. Cvetkovic D & Parmee I, 2002. Agent-based support within an interactive evolutionary
design system, AIEDAM, 16(5), 311-342.
3. Vanegas J, 2004. http://www.ce.gatech.edu/research/NSF-FIATECH_Charrette/index13.htm
4. Parmee I & Abraham J, 2004, Supporting implicit learning via the visualization of COGA
multi-objective data, Proc EVOTEC, IEEE, 395-402.
5. Parmee I, Hall A, Miles J, Noyes J & Simons C, 2006. Discovery in Design: Developing a
People Centred Computational Approach, Design2006, Dubrovnik Croatia.
6. Ullman D, Stauffer L & Diettrich T, 1987, Preliminary results of an experimental study of
the mechanical design process, Waldron M (ed), NSF workshop on the Design Process,
Ohio State Univ, 143-188.
7. Zhang, Y & Miles J, 2004. Representing the problem domain in stochastic search
algorithms, in Schnellenbach-Held, M & Hartmann, M. (eds) Next Generation Intelligent
Systems in Engineering, EG-ICE, Essen, 156-168.
8. Zhang Y, Wang K, Shaw D, Miles J, Parmee I & Kwan A, 2006, Representation and its
Impact on Topological Search in Evolutionary Computation, Hughes R (ed), ICCCBE XI,
Montreal Canada,
From SEEKing Knowledge to Making Connections:
Challenges, Approaches and Architectures for
Distributed Process Integration
1 Introduction
This paper reports on ongoing research by the authors to create new mechanisms for
distributed process integration in the construction domain. The construction industry
poses significant challenges for integration given a large number of firms of varying
sophistication and a corresponding variety of data formats including a range of pro-
prietary legacy sources. Technical integration challenges are made more difficult
given the business climate that includes short-term associations and differing business
practices (especially practices that involve firms operating on different levels of detail,
making integration and constraint propagation exceptionally challenging). These
collective integration challenges exceed the ability of extant systems to create scal-
able, rapidly deployable information systems that address coordination needs. Our
paper reports on a broad architecture for distributed process integration in the con-
struction domain, with specific emphasis on (1) rapid discovery of legacy data, (2)
mapping and management of discovered data to support process coordination.
heterogeneous schedules. Section five brings together the SEEK and schedule map-
ping approach within the Process Connectors architecture, presenting a broad and
scalable approach to process integration. Some concluding remarks are made regard-
ing implementation and future research.
mature research models in the area [11]. Schedule integration is less defined, in part
because schedule definition in practice is an evolving process and we lack good repre-
sentation and solution models for such a dynamic view of scheduling [12, 13].
Process integration requires both semantic integration and a shared description of
the process(es) to be integrated (see [14] for a related discussion). With respect to
process representation, Froese has produced a seminal summary of high level process
descriptions in the construction domain [15]. However, Froese notes that these high
level descriptions must be significantly enriched for implementation. It is safe to say
that the definition and testing of process-related concepts in data standards such as the
IFC are less mature than those for engineering products; some initial evaluations are
being conducted by several researchers [16, 17].
Probably the most extensive test of process integration standards to date was con-
ducted by Law and his students [18], who used the Process Specification Language
(PSL) developed by NIST [19] as a neutral data format for exchange of information
between engineering and construction applications. Law’s group found PSL to be
reasonably rich, but that it required extension to support specific project management
components. Further, the group spent considerable effort manually developing wrap-
pers to each legacy source as well as specifying mediators [20] to translate informa-
tion for specific service applications. The intent of Law’s research was to provide a
framework for integration of applications as services rather than provide a complete
test of process integration, so it is unclear to what extent that group addressed the
challenges of integrating multiple views of the same processes, or constraint representation
beyond that needed for the specific service definitions.
We can illustrate these challenges in a small scheduling example. Fig. 1 depicts a
mapping between the schedule of a construction manager (Centex) and a subcontrac-
tor (Miller) (further details of the case are found in [21]). Note that these mappings
exist on several levels (e.g. 1st floor to 1st floor and activity number to activity num-
ber). Further, although the mappings in Fig. 1 are easy for a human to interpret,
both the ID numbers and the activity names are semantically different at the instance level.
It is hard to automatically generate a match or mapping even if the underlying data
schemas of the scheduling applications are machine interpretable. Most important,
note that the mappings between individual activities are not 1:1 or even 1:n
in many cases (i.e., mappings 3, 6, 9 and 11 in Fig. 1). A 1:1 mapping
(such as mapping 2) makes it easy to coordinate these two schedules, as the set of
information for each activity is largely the same (i.e., start and finish dates, duration, etc.).
Similarly, a 1:n mapping could be considered a hierarchical mapping where the con-
struction manager’s or general contractor’s schedule summarizes a more detailed
breakdown by the subcontractor. Such a hierarchical level makes it easier to integrate
differing views of the schedule. For example, the LEWIS system specifies four differ-
ent levels of detail within a schedule [22]; as long as subcontractors plan at a
pre-specified level, integration is reasonably straightforward. It is more difficult to
integrate activities that have an m:n mapping as such a mapping implies potentially
very different perspectives on the part of the firms. In the case of Miller in Fig. 1,
their scheduling system is built from cost codes where every activity corresponds to a
standard cost code, directly supporting continuous refinement of the firm’s estimating
database [21]. This is a very different view from the spatial and set-related view of the
schedule evinced by the construction manager Centex. We lack a formal mechanism
to represent such a complex mapping, although elements of such mapping to support
Fig. 1. An example of mapping activities between two firms’ schedules for the same construc-
tion tasks (electrical work). The sets of activities associated with each other are in many cases
m:n sets, suggesting a disjoint view of the work plan.
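A simple way to capture 1:1, 1:n and m:n cases uniformly is to record each mapping as a pair of activity-identifier sets, one per firm. The sketch below is illustrative only; the identifiers are invented rather than taken from the Centex and Miller schedules:

# Each mapping pairs a set of coordinator (CM/GC) activity IDs with a set of
# subcontractor activity IDs. Illustrative IDs only.
mappings = [
    {"cm": {"A1010"}, "sub": {"6010"}},                           # 1:1
    {"cm": {"A1020"}, "sub": {"6020", "6025"}},                   # 1:n (hierarchical summary)
    {"cm": {"A1030", "A1040"}, "sub": {"6030", "6031", "6040"}},  # m:n (disjoint views)
]

def classify(mapping):
    """Classify a mapping by the cardinality of the two activity sets."""
    m, n = len(mapping["cm"]), len(mapping["sub"])
    if m == 1 and n == 1:
        return "1:1"
    if m == 1 or n == 1:
        return "1:n"
    return "m:n"

for mp in mappings:
    print(classify(mp), mp)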
The example of Centex and Miller can be generalized. The different parties in-
volved in a construction project view their responsibilities at a level of detail that is
most useful for their business objectives. A general contractor (GC) or construction
manager (CM) in most cases owns the master schedule and uses critical path networks
[23] often representing activities for a subcontractor at an aggregated level of detail.
The GC is usually unaware of the specific resource and capacity constraints of the
subcontractors, which can limit the ability of the GC to coordinate schedules [24]. A
subcontractor models its responsibility in the project at a finer level of detail for
proper control and management relying on critical path networks for the overall pro-
ject view. Site staff usually relies on bar charts and activity lists for detailed planning
of specific site tasks [25]. These views are typically disjoint and the current research
does not provide the necessary theory and tools to integrate these independent views.
Lacking such a representation, constraint propagation is similarly difficult. It is thus
no surprise that, as Smith notes, most scheduling optimization algorithms operate
within a static and single party view of temporal and resource constraints [12].
To summarize, we can generally state that current technology and understandings
show that, with effort, it is possible to manually integrate processes across software
for specific applications. Further, research demonstrates that data standards such as
the IFC and specific process related data standards such as PSL and OZONE (a
scheduling ontology) [26] can be used to support integration, although extensions
may need to be made for specific applications. More research is needed, however, to
allow more general and reusable integration efforts. With respect to semantic
formed database specifications often found in older legacy systems. Furthermore, the
knowledge extraction module also enables step-wise refinement of templates and
wrapper configuration to improve extraction capabilities.
Fig. 2. High-level SEEK architecture and relation to legacy sources and querying applications.
The SEEK wrapper and analysis components are configured during build-time by the knowl-
edge extraction module. During run-time, the wrapper and analysis component allow rapid,
value-added querying of firm’s legacy information for decision support applications.
The authors believe that the modular structure of the architecture provides a general-
ized approach to knowledge extraction that is applicable in many circumstances. That
said, it is useful to provide more details of the current implementation architecture with
respect to discovery. Such a more detailed schematic is shown in Fig. 3, which can be
seen as a more detailed view of the knowledge extraction module in Fig. 2. SEEK
applies Data Reverse Engineering (DRE) and Schema Matching (SM) processes to
legacy database(s), to produce a source wrapper for a legacy source (shown in Fig. 2).
The source wrapper will be used by another component wishing to communicate and
exchange information with the legacy system. We assume that the legacy source uses a
database management system for storing and managing its enterprise data.
First, SEEK generates a detailed description of the legacy source, including enti-
ties, relationships, application-specific meanings of the entities and relationships,
business rules, data formatting and reporting constraints, etc. We collectively refer to
this information as enterprise knowledge. The extracted enterprise knowledge forms a
knowledgebase that serves as input for subsequent steps outlined below. In order to
extract this enterprise knowledge, the DRE module shown on the left of Fig. 3 con-
nects to the underlying DBMS to extract schema information (most data sources
support at least some form of Call-Level Interface such as JDBC). The schema infor-
mation from the database is semantically enhanced using clues extracted by the se-
mantic analyzer from available application code, business reports, and, in the future,
perhaps other electronically available information that may encode business data such
as e-mail correspondence, corporate memos, etc.
Second, the semantically enhanced legacy source schema must be mapped into the
domain model (DM) used by the application(s) that want(s) to access the legacy
source. This is done using a schema matching process that produces the mapping
rules between the legacy source schema and the application domain model. In addi-
tion to the domain model, the schema matching module also needs access to the do-
main ontology (DO) describing the model. Third, the extracted legacy schema and the
mapping rules provide the input to the wrapper generator (not shown in Fig. 3), which
produces the source wrapper.
With these three steps, SEEK is thus able to discover semantically rich legacy data
and supply this description to a wrapper generator [30] and value-added mediator.
The wrapper generator allows automatic generation of source wrapper while the me-
diator can be configured by the source description. The generated wrapper can be
refined in a bootstrap manner by human domain experts and the entire build-time
process can be re-run to rapidly generate new wrappers should the sources change (for
example, by adding an additional or upgrading an existing application program).
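As an illustration of the kind of raw schema knowledge the DRE step gathers through a call-level interface, the following sketch uses Python's built-in sqlite3 introspection as a stand-in for JDBC; the tables are invented and the sketch is not the SEEK implementation:

import sqlite3

# Build a tiny throwaway "legacy" database so the introspection below has something to read.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE project  (proj_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE activity (act_id INTEGER PRIMARY KEY, proj_id INTEGER NOT NULL,
                           descr TEXT, start_date TEXT, finish_date TEXT,
                           FOREIGN KEY (proj_id) REFERENCES project (proj_id));
""")

# Extract table names, column definitions and foreign keys -- the raw schema knowledge
# that would later be semantically enhanced and matched against a domain model.
tables = [row[0] for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    print(table)
    for _, col_name, col_type, not_null, _, pk in conn.execute(f"PRAGMA table_info({table})"):
        print(f"  {col_name}: {col_type}{' PRIMARY KEY' if pk else ''}{' NOT NULL' if not_null else ''}")
    for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
        print(f"  FK -> {fk[2]}.{fk[4]} (from {fk[3]})")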
The current SEEK prototype is capable of extracting schema information including
relationships and constraints from relational databases and inferring semantics about
the extracted schema elements and any business rules from the accompanying (Java)
application/report generation code using the database. With some modifications, non-
relational databases or other programming languages (C++, PHP, TKL/TK) can be
supported. The SEEK prototype has been tested using sample databases and report
interfaces developed as part of a class project in an introductory database course at
UF. We are currently evaluating SEEK against the THALIA integration benchmark
consisting of 44+ University course catalogs and 12 challenge queries [31]. In addi-
tion, we validated the SEEK approach using legacy data sources from the construction
and first responder domains. For example, in the case of the Gainesville Fire Depart-
ment, the SEEK software was used to extract schema and source descriptions for a
variety of legacy sources (e.g., regional utilities, property appraiser, telephone com-
panies) and helped produce translations between the legacy sources and the informa-
tion contained in an emergency dispatching system, improving the information avail-
able to first responders. Using SEEK drastically reduced the time it took to make the
data in the legacy sources available for sharing with the fire department [32]. Con-
struction data was used mainly to develop the SEEK algorithms initially and to vali-
date their correctness. We also used sample data from a construction project for most
of the SEEK demonstration scenarios. SEEK is continuing development with applica-
tion data from construction and other domains.
Fig. 4. Different schedule networks for the same physical tasks as viewed by the participants.
The coordinator’s schedule is a subset of a larger project-wide schedule.
Fig. 5. Mapping network viewed as a result of an overlay of two networks that are the internal
process representation of the coordinating firm and project stakeholder
The coordinating firm is typically the CM or GC on the project that is responsible for the master schedule. As
such, the coordinating firm directs precedence constraints for any given schedule
mapping. In this sense, the mapping process can be considered conceptually as an
overlay of networks where the coordinating firm determines the overall precedence
constraints of the final mapping network. A set of mapping networks is developed for
a project (e.g., a project with one CM/GC or coordinator and ten subcontractors
would contain ten mapping networks). The designation of a single coordinating firm
helps to ensure consistency when building and relating multiple mapping networks.
Each mapping in the mapping network is a collection of matching activity identifi-
ers extracted from the respective schedules and the time period associated with these
activities. Fig. 6 shows two sets of start and finish dates as a representation of the time
periods associated with some activities of the coordinator and the corresponding ac-
tivities for the other participant. The coordinator sets the time period the other firm
must fall within. Note that each box in Fig. 6 may contain multiple activities; the set in each
box represents the mapping (e.g., mapping M3 in Fig. 5).
Fig. 6. Single mapping showing two pairs of start and finish dates. The outer start and finish
dates represent the limits set by the coordinating firm.
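Once a mapping carries the coordinator's window and the participant's activity dates, simple consistency checks become possible. A minimal sketch (the dates and identifiers below are invented, not case data):

from datetime import date

# One mapping: the coordinating firm's window plus the participant activities mapped to it.
mapping = {
    "coordinator_window": (date(2006, 5, 1), date(2006, 5, 19)),
    "participant_activities": [
        {"id": "6030", "start": date(2006, 5, 2), "finish": date(2006, 5, 10)},
        {"id": "6031", "start": date(2006, 5, 11), "finish": date(2006, 5, 22)},  # overruns the window
    ],
}

def window_violations(mapping):
    """Return participant activities that fall outside the coordinator's time window."""
    w_start, w_finish = mapping["coordinator_window"]
    return [a["id"] for a in mapping["participant_activities"]
            if a["start"] < w_start or a["finish"] > w_finish]

print(window_violations(mapping))  # ['6031']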
As noted above, the mapping process recursively discovers smaller mappings. This
process is illustrated in Fig. 7, progressing from a simple mapping of overall schedule
to overall schedule down to the smallest possible mappings. However, a network
representation at any point in the decomposition does not provide information about
the previous decomposition. As such, potentially useful information might be lost.
Referring back to the Centex and Miller example in Fig. 1, an intermediate decompo-
sition can be activities grouped by floor. Such an association could be useful for rea-
soning about trade space constraints [34] and we would not want to lose that informa-
tion in subsequent decompositions. As decomposition is hierarchical, we can utilize a
tree based representation [35] to retain information about each step in the decomposi-
tion. The tree representation of each mapping is shown in Fig. 7 aside the network
representation. During construction of the tree, it is possible to record additional
information about the decomposition in a descriptive fashion that supports further
reasoning and analysis.
Once the tree based representation of the mapping network is created, all the in-
termediate networks from the final tree based representation can be generated auto-
matically by combining stored information for the children of any node at any level.
Summarizing all the nodes up to the root level will give us the initial mapping. The
only way to record this information using networks alone would be to store all
Fig. 7. Tree and network based representations of the mapping network. Note that the tree
representation retains information about the previous decompositions, allowing reasoning at
multiple levels of detail in the hierarchy.
Fig. 8. A tree based representation of the schedule mappings, incorporating precedence infor-
mation at the leaf node or smallest possible decomposition level
intermediate networks separately. Using a tree based structure to store the mapping
information allows better consistency and reduces redundancy of information.
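A minimal sketch of such a tree is shown below (node labels and identifiers are invented); each leaf holds a smallest mapping, and the mapping at any intermediate level can be regenerated by summarising the leaves beneath it:

class MappingNode:
    """A node in a mapping tree; leaves carry the smallest activity mappings."""

    def __init__(self, label, cm_ids=None, sub_ids=None):
        self.label = label
        self.cm_ids = set(cm_ids or [])    # coordinator activity IDs (leaves only)
        self.sub_ids = set(sub_ids or [])  # participant activity IDs (leaves only)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def summarise(self):
        """Regenerate the mapping at this level by combining all leaves below it."""
        if not self.children:
            return self.cm_ids, self.sub_ids
        cm, sub = set(), set()
        for child in self.children:
            c_cm, c_sub = child.summarise()
            cm |= c_cm
            sub |= c_sub
        return cm, sub

root = MappingNode("whole schedule")
floor1 = root.add(MappingNode("1st floor"))        # intermediate decomposition is retained
floor1.add(MappingNode("M1", {"A1010"}, {"6010"}))
floor1.add(MappingNode("M2", {"A1020"}, {"6020", "6025"}))
root.add(MappingNode("M3", {"A1030", "A1040"}, {"6030", "6040"}))

print(floor1.summarise())  # mapping at the "1st floor" level
print(root.summarise())    # initial, schedule-to-schedule mapping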
Note that information about precedence constraints are also stored as a part of the
schedule mappings, shown as links between the leaf nodes of the tree. Fig. 8 depicts
the network and tree based representations in more detail, incorporating pictorially the
individual mapping shown in Fig. 6 in the leaf nodes of the tree. A further advantage
of the tree-based representation is that it is easy to represent in a computer and add
information at each node. Thus once a basic framework has been established, other
constraints (e.g. resource constraints) associated with the activities of the participants
can also be attached to these mappings. Ideally, once the mapping is established,
associating additional organizational process information from existing legacy sys-
tems can be accomplished semi-automatically using SEEK or related
technologies.
There are multiple screens in the user interface for our mapping application. The
screen shown in Fig. 9 allows a user to go through the step-by-step process of
decomposing a mapping and building a hierarchy of decomposition (Fig. 7 and Fig. 8).
The Mapping Tree area updates as the user performs various operations on the tree.
Fig. 10. Cross tree connectors showing precedence connections between two mapping trees
Building from schedule mappings between pairs of firms, it is possible to generate
connections between mapping networks using precedence information from the coor-
dinating firm. This is shown in Fig. 10, showing links (cross tree connectors) between
the leaf node level of two pairs of mapping networks. These cross tree connectors
supplement the predecessor/successor information contained within each individual
mapping network, allowing coordination of all firms’ schedules on the project. Rea-
soning about coordination can be done at several levels. At a simple level, once map-
pings are created and populated with constraint information, it is possible to validate
that there are no conflicts. A more complex use of the mappings is to explore alterna-
tives should there be a conflict or a change in schedule. Schedule optimization is also
possible. Once the mappings are created and linked, the resulting representation can
be used for a variety of schedule coordination activities.
Fig. 11. Process Connectors architecture, depicting bridge and stub components between firms’
existing legacy sources. Stubs represent firms’ schedule process data, and the bridge performs
mappings and supports process coordination.
Stubs are a main component of the Process Connectors architecture, as they are the
connection point between a firm’s internal data and applications and the
other components and analysis. Stubs are key to scalability as they translate firms’
process information to a data format for sharing and further processing. We envision a
stub will include elements of the SEEK architecture [9] and will support extraction of
information from existing legacy systems. Stubs must also send information back to
the firm (for example, a revised schedule for confirmation and acceptance by subcon-
tractor management). Thus beyond being a wrapper or translator, they must also con-
tain some application logic to process information as well as an interface for setup and
confirmation. With respect to the SEEK architecture (Fig. 2), a stub uses the wrapper
component but extends the functionality of the analysis module beyond mediation to
include some analysis and conditional processing. In a larger context, we view the role
of the stub as a gateway to the firm’s internal data and process information.
For each stub, there is an initial, one-time setup or build-time process. In this process,
the firm management must map the firm data and processes to the internal data format
contain many modules that can be called upon as needed. At a minimum, these mod-
ules will support the mapping process and perform simple analysis to aid schedule
coordination. More sophisticated analysis can be supported by external applications.
In this sense, the Process Connectors architecture acts as a data collection source for
the entire project, making it easier to gather information heretofore difficult to collect.
Fig. 12. Internal details of the Process Connectors architecture, depicting from left to right:
Existing legacy systems at the coordinating firm, a stub that contains analysis functionality as
well as a mapping repository to store mapping networks, a bridge component to construct and
manipulate mappings, a stub attached to another firm that depicts the incorporation of SEEK
components, and legacy systems at the stub. The Process Connectors architecture also supports
links to external analysis tools.
The discussion of mapping networks and their implementation within the Process
Connectors architecture has focused primarily on a single coordinating firm whose
schedule is mapped to a set of firms participating in the same activities. This corre-
sponds to the activities of a CM/GC and a number of subcontractors. Of course, this
does not represent all the firms on a project as each subcontractor will likely have one
or more tiers of suppliers. In this case, the authors believe it is possible to augment the
Process Connectors architecture to enable coordination across all firms. We can view
each subcontractor as a coordinating firm for its suppliers, mirroring the mapping
network setup between the CM/GC and the subcontractors. This enables local instan-
tiation of stubs and bridges for these firms. This recursive structure is shown in Fig.
13. Constraints on the part of the suppliers can be reflected in constraints of the sub-
contractor, and in turn incorporated into the subcontractor’s schedule coordination
with the CM/GC. Constraints of suppliers to suppliers can also be represented in this
way, with the Process Connectors architecture being recursively implemented down
each tier of the project supply chain. Such a recursive structure is easier to implement
than direct links between a central hub at the CM/GC and all firms in the supply chain
both in terms of the number of links and in terms of following the contractual structure
typical of projects.
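A small recursive sketch may help to fix the idea: each firm summarizes its own constraints together with those rolled up from its suppliers, and only the summary is passed to the tier above. The nested dictionary structure and the constraint strings are illustrative assumptions, not the representation used by the architecture.

```python
def summarize_constraints(firm: dict) -> list:
    """Recursively collect constraints down the supply chain and summarize
    them at each local coordinating firm, mirroring the structure of Fig. 13.
    'firm' is a hypothetical nested record:
    {"name": ..., "constraints": [...], "suppliers": [sub-firms...]}."""
    summary = list(firm.get("constraints", []))
    for supplier in firm.get("suppliers", []):
        # Each supplier acts as coordinating firm for its own tier; only the
        # rolled-up summary moves upward, not every lower-tier detail.
        summary.extend(summarize_constraints(supplier))
    return summary

supply_chain = {
    "name": "CM/GC", "constraints": [],
    "suppliers": [
        {"name": "Mechanical sub",
         "constraints": ["chiller delivery no earlier than 2006-05-01"],
         "suppliers": [
             {"name": "Chiller vendor",
              "constraints": ["8 week lead time"],
              "suppliers": []},
         ]},
    ],
}
print(summarize_constraints(supply_chain))
```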
Fig. 13. The Process Connectors architecture can be extended recursively down the supply
chain, following contractual arrangements. Constraint information from lower tiers is summa-
rized at each local coordinating firm to make it available to the higher tiers.
6 Conclusions
In the broadest sense, the Process Connectors architecture is an implementation level
view of a generalized services architecture for process coordination in the construc-
tion supply chain. Stubs provide both the translation of semantically heterogeneous
data to a sharable data format and representation of a firm’s process information. As
such, they can be viewed as a general-use, value-added gateway between a firm’s internal
data/processes and external data/process coordination needs. The example of
schedule coordination, although very important to construction applications, need not
be the sole distributed process that is supported by the Process Connectors architec-
ture. Other applications for the bridge, such as order processing, can be envisaged. An
insight of the schedule mapping representation is that firms’ internal process represen-
tations, though dissimilar, may have enough common elements such that a general-
ized representation can maintain links between them. In the same sense we describe
the schedule mapping process as an overlay of networks, it is possible to consider
process coordination between firms as an overlay of processes. This is an important
shift of thinking as distributed process coordination is often viewed as imposition of a
single view of the process on many firms. Rather, our view opens new lines of re-
search by considering process coordination as the intersection of multiple processes.
Acknowledgements
The authors would like to thank the faculty (Sherman Bai, Joseph Geunes, Raymond
Issa, and Mark Schmalz) and students (Sangeetha Shekar, Nikhil Haldavnekar,
Huanqing Lu, Oguzhan Topsakal, Bibo Yang, Jaehyun Choi, and Rodrigo Castro-
Raventós) involved on the SEEK project for their contributions to the research and
understanding behind this chapter. The authors also thank the National Science Foun-
dation who supported SEEK research under grant numbers CMS-0075407 and CMS-
0122193. Additionally, the authors wish to thank the faculty (Ron Wakefield and
Nashwan Dawood) and students (Ting-Kwei Wang, Jungmin Shin, and Tim Apsia)
involved on the Process Connectors project. The authors wish to thank the National
Science Foundation for its support of the Process Connectors research under grant
numbers CMS-0542206 and CMS-0531797.
References
1. IAI: End user guide to Industry Foundation Classes, enabling interoperability in the
AEC/FM industry. International Alliance for Interoperability (IAI). (1996)
2. Crowley, A.: The development of data exchange standards: the legacy of CIMsteel. Vol.
2001. The CIMsteel Collaborators (1999) 6 pages
3. Kam, C., Fischer, M.: Product model and 4D CAD final report. Center for Integrated Facil-
ity Engineering, Stanford University (2002) 51 pages
4. Fischer, M.A., Kunz, J.: The Circle: Architecture for Integrating Software. ASCE Journal
of Computing in Civil Engineering 9 (1995) 122-133
5. Goldberg, H.E.: AEC From the Ground Up: The Building Information Model. Cadalyst
(2004)
6. Zamanian, M.K., Pittman, J.H.: A software industry perspective on AEC information mod-
els for distributed collaboration. Automation in Construction 8 (1999) 237-248
7. Turk, Z.: Phenomenological foundations of conceptual product modeling in architecture,
engineering, and construction. Artificial Intelligence in Engineering 15 (2001) 83-92
8. Amor, R., Faraj, I.: Misconceptions about integrated project databases. ITcon 6 (2001) 57-66
9. O'Brien, W.J., Issa, R.R., Hammer, J., Schmalz, M., Geunes, J., Bai, S.: SEEK: Accom-
plishing enterprise information integration across heterogeneous sources. ITcon - Elec-
tronic Journal of Information Technology in Construction - Special Edition on Knowledge
Management 7 (2002) 101-124
10. Hammer, J., O'Brien, W.: Enabling Supply-Chain Coordination: Leveraging Legacy
Sources for Rich Decision Support. In: Geunes, J., Akçali, E., Pardalos, P.M., Romeijn,
H.E., Shen, Z.J. (eds.): Applications of Supply Chain Management and E-Commerce Re-
search in Industry. Kluwer Academic Publishers, Boston/Dordrecht/London (2005) 253-298
11. Borghoff, U.M., Schlichter, J.H.: Computer Supported Cooperative Work: Introduction to
Distributed Applications. Springer-Verlag, Heidelberg, Germany (2000)
12. Smith, S.: Is Scheduling a Solved Problem? : Proceedings of the First MultiDisciplinary Con-
ference on Scheduling: Theory and Applications (MISTA), Nottingham, UK (2003) 16 pages
13. O'Brien, W.J., Fischer, M.A., Jucker, J.V.: An economic view of project coordination.
Construction Management and Economics 13 (1995) 393-400
14. Ducq, Y., Chen, D., Vallespir, B.: Interoperability in enterprise modelling: requirements
and roadmap. Advanced Engineering Informatics 18 (2004) 193-203
15. Froese, T.: Models of construction process information. ASCE Journal of Computing in
Civil Engineering 10 (1996) 183-193
16. Danso-Amoako, M., O’Brien, W.J., Issa, R.: A case study of IFC and CIS/2 support for steel
supply chain processes. Proceedings of the 10th International Conference on Comput-
ing in Civil and Building Engineering (ICCCBE-10), Weimar, Germany (2004) 12 pages
17. Pouria, A., Froese, T.: Transaction and implementation standards in AEC/FM industry.
Proceedings of the 2001 Conference of the Canadian Society for Civil Engineers, Victoria,
British Columbia (2001) 7 pages
18. Liu, D., Cheng, J., Law, K., Wiederhold, G., Sriram, R.: Engineering information service
infrastructure for ubiquitous computing. ASCE Journal of Computing in Civil Engineering
17 (2003) 219-229
19. Schlenoff, C., Gruninger, M., Tissot, F., Valois, J., Lubell, J., Lee, J.: The Process Specifi-
cation Language (PSL): Overview and Version 1.0 Specification. NIST (2000) 83 pages
20. Wiederhold, G.: Weaving data into information. Database Programming and Design 11
(1998)
21. Castro-Raventós, R.: Comparative Case Studies of Subcontractor Information Control Sys-
tems. M.E. Rinker, Sr. School of Building Construction. University of Florida (2002)
22. Sriprasert, E., Dawood, N.: Multi-constraint information management and visualization for
collaborative planning and control in construction. ITcon, Special Issue on eWork and
eBusiness 8 (2003) 341-366
23. Antill, J.M., Woodhead, R.W.: Critical Path Methods in Construction Practice. Wiley,
New York (1990)
24. O'Brien, W.J., Fischer, M.A.: Importance of capacity constraints to construction cost and
schedule. ASCE Journal of Construction Engineering and Management 126 (2000) 366-373
25. Mawdesley, M., O'Reilly, M.P., Askew, W.: Planning and controlling construction pro-
jects: the best laid plans. Longman, Essex, England (1997)
26. Smith, S.F., Becker, M.A.: An Ontology for Constructing Scheduling Systems. Working
Notes of the 1997 AAAI Symposium on Ontological Engineering. AAAI Press (1997) 10
pages
27. Hammer, J., Schmalz, M., O'Brien, W., Shekar, S., Haldavnekar, N.: Enterprise knowledge
extraction in the SEEK project part I: data reverse engineering. Department of Computer
and Information Science and Engineering, University of Florida (2002) 30 pages
28. Chawathe, S., Garcia-Molina, H., Hammer, J., Ireland, K., Papakonstantinou, Y., Ullman,
J., Widom, J.: The TSIMMIS Project: Integration of Heterogeneous Information Sources.
Tenth Anniversary Meeting of the Information Processing Society of Japan. Information
Processing Society, Tokyo, Japan (1994) 7-18
29. Bayardo, R., Bohrer, W., Brice, R., Cichocki, A., Fowler, G., Helal, A., Kashyap, V.,
Ksiezyk, T., Martin, G., Nodine, M., Rashid, M., Rusinkiewicz, M., Shea, R., Unnikrish-
nan, C., Unruh, A., Woelk, D.: Semantic Integration of Information in Open and Dynamic
Environments. MCC (1996)
30. Hammer, J., Breunig, M., Garcia-Molina, H., Nesterov, S., Vassalos, V., Yerneni, R.:
Template-based wrappers in the TSIMMIS system. Twenty-Third ACM SIGMOD Interna-
tional Conference on Management of Data, Tucson, Arizona (1997) 532-543
31. Hammer, J., Stonebraker, M., Topsakal, O.: THALIA: Test Harness for the Assessment of
Legacy Information Integration Approaches. Proceedings of the 21st Int'l Conf. on Data
Engineering (ICDE2005), Tokyo, Japan (2005) 2 pages
32. O’Brien, W.J., Hammer, J.: A case study of information integration problems in first re-
sponder coalitions. Proceedings of the 3rd International Conference on Knowledge Sys-
tems for Coalition Operations (KSCO-2004), Pensacola, Florida (2004) 145-149
33. Siddiqui, M., O'Brien, W.J., Wang, T.: A mapping based approach to schedule integration
in heterogeneous environments. Proceedings of the Joint International Conference on
Computing in Building and Civil Engineering, Montreal, Canada (2006) 10 pages
34. Thabet, W.Y., Beliveau, Y.J.: SCaRC: Space-constrained resource-constrained scheduling
system. ASCE Journal of Computing in Civil Engineering 11 (1997) 48-59
35. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to algorithms. MIT
Press, Cambridge, Mass. (2001)
36. Wang, T.-K., O'Brien, W.J., Siddiqui, M., Hammer, J., Wakefield, R.: Process Connectors:
Mapping Distributed Processes in the Construction Supply Chain. Proceedings of the Joint
International Conference on Computing in Building and Civil Engineering, Montreal, Can-
ada (2006) 10 pages
Knowledge Based Engineering and Intelligent Personal
Assistant Context in Distributed Design
Jerzy Pokojski
Abstract. The work focuses on the problem concerning the application of the
KBE approach and its tools in distributed environments. The first period of ap-
plying industrial KBE systems did not only show their significant advantages
but also revealed their shortcomings. This problem is especially disturbing with
distributed design where the potential of communication is very limited. Every
design process is closely connected with the designer’s knowledge. In general,
this is a very individual knowledge which is stored in the designer’s personal
memory. The work represents an attempt to integrate KBE and IPA (Intelligent
Personal Assistant) [11], which is the designer’s personal knowledge repository.
1 Introduction
The work focuses on the problem concerning the application of the Knowledge Based
Engineering (KBE) approach [7] and its tools in distributed environments [16].
Today’s CAD systems enable building geometric models which allow parametric
modeling at large scale. Often, when applying the conception of parametric modeling
a vision of the product’s further development is implied – a vision of the various
versions being planned [1, 4, 7, 12]. Along with these CAD systems it became also
possible to record the calculation procedure and design rules (Knowledge Based En-
gineering) [1, 7, 12, 15, 16] (fig. 1). It is difficult to find publications containing complete
descriptions of industrial KBE applications. One reason is that firms are not willing to
publish their know-how (work [4] is an exception). Another is that KBE applications in
general are large and complicated. Additionally the CAD systems can be integrated
with external calculation processes as well as with external information and knowl-
edge sources [12]. All these modeled and integrated components (forming the com-
puterized form of procedural or declarative knowledge) retrieve the parameterized
geometric objects (modeled in the CAD system) and thus support in the same process
the generation, evaluation and modification of the objects [1, 7]. Consequently, mod-
els of a geometric construction based on the recorded knowledge can be created very
quickly (it is also possible to create non-geometric models, for instance simulation
models or FEM models [12]).
The first period of applying industrial KBE systems not only showed their significant
advantages but also revealed their shortcomings [7, 12]. The two most significant ones
have to be mentioned: 1) the lack of a universal work methodology with the KBE
systems and 2) the lack of a universal methodology for storing and managing the
engineer’s knowledge implemented in the KBE system. In both cases the shortcomings
are decisively influenced by the individual development of a branch, a firm or even a
single person.
It became obvious that the process of building KBE applications also reflects the
engineer’s knowledge that he assigned to the tools and the representation in the KBE
system. This process is shaped by many a priori conditions which are never completely
articulated in an accessible form. Because of that, the resulting applications may differ
from each other (even within the same domain) in their structures and final versions.
That means that they can only be fully understood by their creators. Any attempt by
non-insiders to comprehend the process involves the risk of misunderstandings. This
problem is especially disturbing in distributed design, where the potential for
communication is limited.
Every design process is closely connected with the designer’s knowledge [2-3, 7-11].
In general, this is a very individual knowledge which is stored in the designer’s personal
memory. As it comprises formally articulated knowledge, implemented procedural
knowledge and computer tools, its exploitation is a relatively individual matter. Only in
rare cases does the designer articulate and store his knowledge explicitly for external
users. Moreover, the knowledge is not static; it develops individually, it is constantly
enriched and continuously integrated with the currently available knowledge. Thus it
creates the work basis for the activities of a professional designer. Considering the
limitations of human memory, recording the designer’s knowledge on external storage
would make future access to one’s
own knowledge easier (both resolved and accidentally obtained) and also provide a
more efficient management of the knowledge. At the same time new knowledge
elements could be better articulated and become available for external partners
which would be especially helpful in the case of distributed design. The practical
consequence of the above observations is the building of a personal notepad (Intelli-
gent Personal Assistant) [11] which is computerized and functions to store and man-
age the designer’s personal knowledge. After having fulfilled the mentioned
requirements the notepad can become the designer’s knowledge repository [11].
The ideas presented in this paper are based on concepts which the author explicitly
laid out in his book (Pokojski, J., 2004 [11]). Among them can be found: the nature of
individual and team based design, a survey of software concepts of the IPA (Intelli-
gent Personal Assistant), a general model of the IPA concept, issues of integrating, a
survey of engineering knowledge representations, design process modeling, knowl-
edge modeling, relationships between the IPA and optimization as well as its imple-
mentation. It would be beyond the scope of this article to summarize the whole book.
Consequently, merely the most basic points are taken up in the introduction of the
paper, i.e. the role of knowledge in the design and knowledge based engineering in
the IPA-context.
In many practical tasks we see that a professional’s knowledge finds its represen-
tation in various forms and that all the knowledge elements are connected to each
other even when they are recorded in different representations [11]. This fact becomes
especially obvious when we cooperate with designers who can look back on a rela-
tively long professional career. When such an experienced engineer explains how he
proceeded with a design task, he is usually able to identify the sources of the knowledge
on which he based certain design steps. In general those sources are recorded multi-
medially as texts or drawings.
Often the designer is also able to make clear how the knowledge developed in
the past. If there was any kind of knowledge representation – procedural or declarative
recordings – then the designer can show its evolution as well. If we want to record a
single designer’s knowledge in a personal knowledge repository, the knowledge and its
structuring have to be decomposed. For that purpose it is necessary to provide data
structures which allow the dynamics of the knowledge development to be recorded.
Equally important is the proper classification of each knowledge element, as it makes
possible future reuse easier. Concerning the notebook
we require that the picture of the recorded knowledge depicts as truly as possible its
real form. Consequently, the proposed tool has to become one of the most essential
means used by a designer. At the same time it should guarantee fast access to the
required knowledge element if necessary. As another important function the note-
book offers the possibility of managing the available knowledge in design processes
of distributed design. There the notebook, i.e. the personal assistant, apart from pro-
viding single, detailed knowledge elements should also present a wider context of
the selected knowledge or information element and make clear how it is connected
with other knowledge elements from its user’s point of view. The work represents an
attempt to integrate KBE and IPA (Intelligent Personal Assistant) in distributed
design.
Fig. 2. Design process without (upper) and with (lower) KBE application
It has to be stated that in general real modeled processes are not completely articulated.
Only certain stages and typical paths are known. As each step is manually initiated we
often have to cope with surprising outcomes: correctly or incorrectly calculated steps as
well as substantively correct or incorrect ones [12]. Of course, we can try to solve most
of the tasks correctly by testing certain mathematical models, but this attempt requires a
lot of work. Moreover, for diverse tasks characterized by high complexity such an
undertaking would not be economically worthwhile. Apart from that, it is quite difficult
to build a universal formalism for that task. Because of those difficulties we mostly
decide to apply the engineer’s traditional way of proceeding in the design in the KBE
system. This means that first the calculations are done, then the constraints and the
relations between the parameters of certain objects are checked. Any time a condition in
the step is not fulfilled it is indicated to the user in an appropriate way. It has to be
pointed out that every single step has its geometric consequences in the form of partial
models. They can be visualized and their mistakes can become obvious. The designer
not only obtains information on whether the implemented constraints are formally
fulfilled; he can also virtually
observe the model which he generated and study the constraints he took into account.
After some experience with partial models the designer is mostly able to build mod-
ules which indicate that a certain relation is not fulfilled. They reflect the evaluation
stages which naturally appear with manual calculation.
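The interactive step-check-visualize cycle described above can be summarized in a short sketch. It is only an illustration of the flow, assuming the calculation, the constraint checks and the partial-model generation are supplied as callables recorded in the KBE system; none of the names refer to an actual KBE tool.

```python
def run_design_step(parameters: dict, calculate, constraints: list, build_partial_model):
    """Sketch of one manually initiated KBE design step: run the calculation,
    check the constraints and relations between parameters, report any
    condition that is not fulfilled, and regenerate the partial geometric
    model so the designer can inspect its consequences."""
    results = calculate(parameters)
    violations = [name for name, check in constraints if not check(results)]
    for name in violations:
        print(f"Condition not fulfilled in this step: {name}")
    partial_model = build_partial_model(results)   # visualized in the CAD system
    return results, partial_model, violations
```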
As presented here, the most common way of implementing KBE applications is to
automate the design steps and to perform trial modeling based on evaluation of the
achieved relations. The realization of a design step is initiated manually and directly
controlled by the engineer. The selection of the following steps is also done by the
designer himself. In principle the whole design process is equipped with tools for
interactive support of single steps, which are accompanied by the visualization of their
geometric consequences. Thus the complete application is given a flexible structure
which reflects the knowledge the designer has applied in a relatively exact but realistic
way. If wider complexity is aimed at, the presented approach can be further automated.
However, the user should be aware of the fact that any automation is very labour
intensive and not necessarily economical.
In general, the knowledge owned and used by a designer forms a quite rich and
coherent whole that has been built up over many years [10, 11, 12]. As it
would take a lot of effort and be very difficult to garner that knowledge entirely, se-
lections and simplifications have to be made. It is advantageous when the selections
and simplifications are carried out by the designer himself because he best knows
the status of certain problems. One of the most basic classifications is to assign the
knowledge elements to different activities carried out by the designer. Mostly the
engineers opt for those activities which were used by them and whose appearance,
evaluation and vanishing they know.
Each activity is accompanied and defined by its knowledge support whereby the
knowledge may have various forms and sources. It is possible to record the computerized
knowledge elements on which the designer bases his activities. When
the knowledge elements are connected to a certain activity the evolution process of the
knowledge for a given activity becomes obvious. If the activity is carried out with the
help of KBE tools, we are able to trace back the process of the knowledge evolution
which stands behind each version of the given tool – from the designer’s personal
perspective (fig. 3). We can then look back and try to comprehend the wider context
and endeavour to pursue the individual path of development. The insight acquired by
the retracing is especially helpful in the case of distance-cooperation as we may regard
it a kind of a condensed professional biography of the designer which makes us better
understand his work proceedings. For projects in distributed design which are realized
by many cooperators this is advantageous because of the possibility to integrate the
personal knowledge stores of the team members (fig. 4). Each participant can then
suitably manage the process and offer his knowledge elements for the other team
members. This brings about a better mutual understanding of the participants at a dis-
tance especially with various cultural backgrounds.
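One possible way to hold such activity-linked, versioned knowledge in a personal repository is sketched below. The data structures are an assumption made only for illustration; the actual IPA model is the one described in [11].

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeElement:
    # One recorded piece of a designer's personal knowledge (text, drawing,
    # procedure, rule, ...); 'representation' keeps track of its original form.
    title: str
    representation: str                 # e.g. "text", "drawing", "procedure"
    content: str
    linked_elements: List[str] = field(default_factory=list)

@dataclass
class ActivityHistory:
    # The evolving knowledge support of one design activity, one entry per
    # version of the associated KBE module or tool.
    activity: str
    versions: List[KnowledgeElement] = field(default_factory=list)

    def add_version(self, element: KnowledgeElement) -> None:
        self.versions.append(element)

    def evolution(self) -> List[str]:
        """Trace how the knowledge behind this activity developed over time."""
        return [e.title for e in self.versions]
```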
KBE applications are an effective means to reduce the time of realizing design projects
significantly. However, building KBE applications is a very time consuming task.
When applications of this class are to be used by engineers who were not involved in
creating them, understanding the knowledge which forms the basis of the application
becomes an essential problem. Such a situation is easily foreseeable with teams
cooperating over distance. In most cases the team members never meet and as a
consequence never work face-to-face. As a result the application may have to be used
by designers who are not its author and
are also far away from him. This means that non-authors only exploit the results of
the respective application. In both cases the users might want to or even have to
understand the knowledge behind the design process whose model is to a certain
degree the KBE application. In any case this model cannot easily be depicted or
restored. Similar situations may also occur when different firms start to cooperate. It
can happen that potential partners use different approaches in their projects and at
least at the beginning of the cooperation are not able to completely understand each
other’s procedure.
The presented applications of a designer’s personal assistant, which contain recordings
of the evolution of the knowledge that was the basis for establishing further modules
and versions of the KBE application, may turn out to be helpful in finding information
which explains its functioning. The advantage of this concept is that the personal
assistant’s content can be recorded in parallel with the process by which the KBE
application is created. At the same time it is possible to integrate the various knowledge
elements with each other. Additionally, management providing modeled and controlled
access to the information can be implemented.
4.1 Example
The example below shows how a computer application supports the design process of
a car gear box [11, 12, 13, 14]. In each of the variants the gear box has the same struc-
ture (fig. 5), that means the same system of wheels, shafts, bearings, clutches etc. The
wheels and clutches are KBE models. Because of that the complete geometric model
of the gear box can be changed according to selected and calculated parameters. The
presented KBE application is integrated with a calculation module which is equipped
with a database and a Case-Based Reasoning module. Additionally, the designer
can add to each gear box variant – manually or automatically – design rationale in-
formation concerning the respective variant. The module for the calculation of the
tooth wheels is integrated with a multi-criteria optimization system.
and the consultants A. Wąsiewski and S. Skotnicki. Figure 6 depicts the structure and
the extremely abstracted layer of the IPA content (belonging to the supervisor). The
two kinds of information – 1) description of the KBE application and 2) personal
knowledge development reflected in an IPA content – illustrate how a KBE applica-
tion actually works.
5 Conclusions
The paper lays out an approach to how the process of creating a KBE application
can be recorded in a wider scope. A well elaborated application is composed of many
modules. They may have been carried out in several versions and are meant for use in
distributed design.
Many KBE applications can be understood when we analyze their objects, attrib-
utes, activities, etc. But we should remember that even the knowledge of single auto-
motive units contains huge amounts of data and information. A car transmission
system, for example, can have thousands of attributes. Nowadays there are attempts at
building models by using formal knowledge [5, 6, 7], but they turn out to be too
restrictive. Moreover, the level of these attempts is not necessary for potential users of
KBE applications. In most cases they need a general informal description of the prob-
lem, explaining the foundations of a KBE application.
The example described in the work is intended only to illustrate the concept. It does
not originate from industrial practice. The approach itself, however, results from actual
experience gained while implementing the concept industrially in other domains [15]
(fig. 7).
References
1. CATIA – manual, (2005)
2. Clarkson, J., Eckert, C. (ed.): Design Process Improvement. A review of current practice.
Springer – Verlag, London (2005)
3. Fujita, K., Kikuchi, S.: Distributed Design Support System for Concurrent Process of Pre-
liminary Aircraft Design. Concurrent Engineering: Research and Applications 11 2 (2003)
93- 105
4. Glymph, J., Shelden, D., Ceccato, C., Mussel, J., Schober, C.: A parametric strategy for
free-form glass structures using quadrilateral planar facets. Automation in Construction 13
(2004) 187-202
5. Gu, N., Xu, J., Wu, X., Yang, J., Ye, W.: Ontology based semantic conflicts resolution in
collaborative editing of design documents. Advanced Engineering Informatics 19 (2005)
103-111
6. Kim, T., Cera, C.D., Regli, W.C., Choo, H., Han, J.: Multi-level modeling and access con-
trol for data sharing in collaborative design. Advanced Engineering Informatics 20 (2006)
47-57
7. Managing Engineering Knowledge, MOKA - project. Professional Engineering Publishing
Limited, London (2001)
8. Moran, T.P., Carroll, J.M.: Design Rationale, Concepts, Techniques, and Use. Lawrence
Erlbaum Associates, Publishers, Mahwah, New Jersey (1996)
9. Nahm, Y., Ischikawa, H.: Integrated Product and Process Modeling for Collaborative De-
sign Environment. Concurrent Engineering: Research and Applications, 12, 1(2004) 5- 23
10. Pokojski, J., (Ed.): Application of Case-Based Reasoning in Engineering Design. WNT,
Warsaw (2003) (in Polish)
11. Pokojski, J.: IPA (Intelligent Personal Assistant) – Concepts and Applications in Engineer-
ing. Springer-Verlag, London (2004)
12. Pokojski, J.: Expert Systems in Engineering Design. WNT, Warsaw (2005) (in Polish)
13. Pokojski, J., Niedziółka, K.: Transmission system design – intelligent personal assistant
and multi-criteria support. In: Next Generation Concurrent Engineering – CE 2005, ed. by
M. Sobolewski, P. Ghodous, Int. Society of Productivity Enhancement, NY (2005) 455-
460
14. Pokojski, J., Okapiec, M., Witkowski, G.: Knowledge based engineering, design history
storage, and case based reasoning on the basis of car gear box design. In: AI-Meth 2002,
Gliwice (2002) 337-340
15. Pokojski, J., Skotnicki, S.: Knowledge-based Engineering in support of industrial stairs
geometric model generation. XV Conference “Methods of CAD”, Proceedings, Institute
of Machine Design Fundamentals, Warsaw University of Technology (2005) 299-306
16. Sriram, R.D.: Distributed and Integrated Collaborative Engineering Design. Sarven Pub-
lishers (2002)
Model Free Interpretation of Monitoring Data
Daniele Posenato1, Francesca Lanata2, Daniele Inaudi1, and Ian F.C. Smith3
1 Smartec SA, Via Pobiette 11, CH-6928 Manno, Switzerland
[email protected], [email protected]
2 Department of Structural and Geotechnical Engineering, University of Genoa, Via Montallegro 1, 16145 Genoa, Italy
3 Ecole Polytechnique Fédérale de Lausanne (EPFL), GC G1 507, Station 18, CH-1015 Lausanne, Switzerland
[email protected]
1 Introduction
Structural health monitoring engineers may employ sensors to perform non-
destructive in-situ structural evaluation. These sensors produce data (either continu-
ously or periodically) that are analyzed to assess the safety and performance of struc-
tures [1]. For static monitoring, damage can be identified through comparing static
structural response with predictions of behavior models [2]. However, models can be
expensive to create and may not accurately reflect undamaged behavior. Difficulties
and uncertainties increase in the presence of complex civil structures so that well
defined and unique behavior models often cannot be clearly identified [3]. Further-
more, multiple-model system identification may not succeed in identifying the right
damage [4]. Despite important research efforts into interpretation of continuous static
monitoring data [5], no reliable strategy for identifying damage has been proposed
and verified for broad classes of civil structures [3][6]. Another approach is to evalu-
ate changes statistically [7]. This methodology is completely data driven; the evolution
of the data is estimated without information about the physical processes [8-10].
The objective of this paper is to propose methodologies that discover anomalous
behavior in data generated by sensors without using behavior models. The paper is
Fig. 1. The FE model used to test the methodology showing where sensors are placed (bold
lines) and positions of simulated damage (black squares). The beam is 10x0.5x0.3 m, and each
cell of the mesh is 10x8 cm.
measurement session, the covariance matrix and principal components are computed
only for the points inside the active window.
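Although the full description of the method is given earlier in the paper (and in [13]), the windowed computation can be illustrated with a short sketch. It assumes the measurement sessions are stored as rows of a matrix with one column per sensor; the window length and the number of tracked components are choices left to the analyst, not values prescribed here.

```python
import numpy as np

def moving_pca(data: np.ndarray, window: int):
    """Sketch of Moving Principal Component Analysis: for each new session,
    the covariance matrix and its principal components are computed only from
    the sessions inside the active window."""
    n_sessions, _ = data.shape
    eigenvalues, eigenvectors = [], []
    for t in range(window, n_sessions + 1):
        block = data[t - window:t]                 # active window
        cov = np.cov(block, rowvar=False)          # covariance between sensors
        w, v = np.linalg.eigh(cov)                 # ascending eigenvalues
        eigenvalues.append(w[::-1])                # store largest first
        eigenvectors.append(v[:, ::-1])
    return np.array(eigenvalues), np.array(eigenvectors)
```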
4 Results
A comparative study between the proposed algorithms and the continuous wavelet
transform (CWT) [11-12], the short-term Fourier transform (STFT) [12] and the
instance-based method (IBM) [15-16] has been carried out for several damage
scenarios [13]. In this paper, only comparisons between the proposed algorithms and
CWT for damage between sensors 2 and 3 (4 cells with reduction to 20% of original
stiffness) are presented; see Figure 1. Re-
sults show that the algorithms proposed in this paper identify anomalous behaviour
more effectively than CWT. Figure 2 shows plots of the eigenvectors related to the
two main MPCA eigenvalues. The moment when damage occurs and its location are
visible in both plots. Specifically, one of the eigenvectors (eigenvector 11) indicates a
new state when it becomes stable, while the other (eigenvector 12) indicates when the
damage occurred. The location of the damage is detected by the fact that within the
main eigenvectors, there are one or more rapidly changing components that are asso-
ciated with sensors close to the damage. In Figure 2 the location of the damage can be
detected by sensor 3 (Sn3), which is the closest to the damage location, since its
variation is bigger than that of the other sensors.
Figure 3 shows Moving Correlation results for sensor pairs that are closest to the
damage. The moment when damage occurred and when the behavior of the structure
can be considered to be stable are visible. Figure 4 shows CWT results. The moment
when damage occurred is not visible and there is no information regarding whether
the anomaly is due to a new temporary situation or due to permanent damage.
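For comparison, the Moving Correlation indicator used in Figure 3 can be sketched in a few lines; the window length is left as a parameter and corresponds to the one-year window used in the paper.

```python
import numpy as np

def moving_correlation(x: np.ndarray, y: np.ndarray, window: int) -> np.ndarray:
    """Sketch of Moving Correlation: the correlation coefficient between two
    sensor time series is recomputed over a sliding window; a persistent
    change in the coefficient flags anomalous behaviour between the two
    measurement locations."""
    coefficients = []
    for t in range(window, len(x) + 1):
        xw, yw = x[t - window:t], y[t - window:t]
        coefficients.append(np.corrcoef(xw, yw)[0, 1])
    return np.array(coefficients)
```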
Fig. 2. MPCA plots of eigenvectors related to the two main eigenvalues. They show the mo-
ment when damage occurs and its location. One eigenvector (eigenvector 11, only values of
sensors 2, 3 and 5 are presented) gives an indication of the new state of the structure when it
becomes stable while the second eigenvector (eigenvector 12) gives an indication of the dam-
age. The symbols, sn1, sn2, ... are the components of the eigenvector referred respectively to
sensor 1, sensor 2, etc.
Fig. 3. Diagnostic plots of Moving Correlation calculated from measurements of two sensors close to the damage. Calculations were performed using a moving window of one year.
Fig. 4. CWT calculated from the difference between results of the two sensors, normalized, closest to the damage. The Gauss wavelet with a scale of 1024 has been used.
5 Conclusions
Moving Principal Component Analysis and Moving Correlation are useful tools for
identifying and locating anomalous behavior in civil engineering structures. These
approaches can be applied over long periods to a range of structural systems to
discover anomalous states even when there are large quantities of data. A comparative
study has shown that for quasi-static monitoring of civil structures, these new meth-
odologies perform better than wavelet methods. These methodologies have good
capacities to detect and locate damage while requiring fewer computational
resources. Another important characteristic is adaptability. Once new behavior is
identified, adaptation allows detection of further anomalies. The next step of the re-
search is to apply the proposed methodology to a database of measurements taken
from full-scale structures.
References
1. A. Bisby, An Introduction to Structural Health Monitoring. ISIS Educational Module 5
(2005)
2. Y. Robert-Nicoud, B. Raphael, O. Burdet & I. F. C. Smith, Model Identification of
Bridges Using Measurement Data, Computer-Aided Civil and Infrastructure Engineering,
Volume 20 Page 118 - March 2005
3. F. Lanata Damage detection algorithms for continuous static monitoring of structures PhD
Thesis Italy University of Genoa DISEG, (2005)
4. S. Saitta, B. Raphael, I.F.C. Smith, Data mining techniques for improving the reliability of
system identification, Advanced Engineering Informatics 19 (2005) 289–298
5. A. Del Grosso, D. Inaudi and F. Lanata Strain and displacement monitoring of a quay wall
in the Port of Genoa by means of fibre optic sensors 2nd Europ. Conf. on Structural Con-
trol Paris, (2000)
6. A. Del Grosso and L. Lanata, Data analysis and interpretation for long-term monitoring of
structures Int. J. for Restoration of Buildings and Monuments, (2001) 7 285-300
7. J. Brownjohn, S. C. Tjin, G. H. Tan, B. L. Tan, S. Chakraboorty, “A Structural Health
Monitoring Paradigm for Civil Infrastructure”, 1st FIG International Symposium on
Engineering Surveys for Construction Works and Structural Engineering, Nottingham,
United Kingdom, 28 June – 1 July 2004
8. H. M. Jaenisch, J. W. Handley, J. C. Pooley, S. R. Murray, “Data Modeling for Fault
Detection”, 2003 MFPT Meeting.
9. F. Lanata and A. Del Grosso, Damage detection algorithms for continuous static monitor-
ing: review and comparison 3rd Europ. Conf. On Structural Control (Wien, Austria), 2004
10. Sohn, H., J. A.Czarneski and C. R. Farrar. 2000. “Structural Health Monitoring Using Sta-
tistical Process Control”, Journal of Structural Engineering, 126(11): 1356-1363
11. C. K. CHUI, Introduction to Wavelets, San Diego, CA: Academic Press, p.264, 1992
12. I. Daubechies, Ten Lectures on Wavelets, Philadelphia: Soc. for Indust. and Applied
Mathematics, p. 357, 1992
13. Daniele Posenato, Francesca Lanata, Daniele Inaudi and Ian F.C. Smith, “Model Free
Data Interpretation for Continuous Monitoring of Complex Structures”, submitted to Ad-
vanced Engineering Informatics, 2006
14. M. Hubert and S. Verboveny, “A robust PCR method for high-dimensional regressors”,
Journal of Chemometrics, 17, 438-452.
15. L. Kaufman and P.J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster
Analysis. Wiley, New York, 1990.
16. S. Mahamud and M. Hebert. Minimum risk distance measure for object recognition.
Proceedings 9th IEEE International Conference on Computer Vision (ICCV), pages
242–248, 2003.
Prediction of the Behaviour of Masonry Wall Panels
Using Evolutionary Computation and Cellular Automata
Abstract. This paper introduces methodologies that not only predict the failure
load and failure pattern of masonry panels subjected to lateral loadings more
accurately, but also closely match deflections at various locations over the sur-
face of the panel with the experimental results. In this research, Evolutionary
Computation is used to model variations in material and geometric properties
and also the effects of the boundary types on the behaviour of the panel within
linear and non-linear ranges. A cellular automata model is used that utilises a
zone similarity concept to map the failure behaviour of a single full scale panel
‘the base panel’, tested in the laboratory, to estimate variations in material and
geometric properties and also boundary effects for any unseen panels.
1 Introduction
Due to the highly composite and anisotropic material properties of masonry, it has
been difficult to accurately predict the behaviour of masonry panels. The research
presented in this paper proposes a numerical model updating technique that studies
the behaviour of masonry panels subjected to lateral loading within the full linear and
non-linear ranges. The method uses evolutionary computation (EC) techniques to
model variations in geometric and material properties over the entire surface of the
panel. The EC search produces factors ‘the corrector factors’, which reflect the col-
lective effects of the above mentioned variations. These factors are then used to vary
the value of flexural rigidity at various locations over the entire surface of the panel.
The modified flexural rigidities are then used in a specialised non-linear finite element
analysis (FEA) program to predict the failure load, failure pattern and load deflection
relationships over the full linear and non-linear ranges. The EC exploration also in-
cludes the effect that boundary types may have on the response of panels to lateral
loading. Results obtained from the non-linear FEA are compared with the experimen-
tal results from a full scale panel tested in the laboratory. Finally a cellular automata
(CA) is used to map information obtained from the single full scale panel ‘base
panel’1 to an ‘unseen panel’2 for which an estimate in variations of material and geo-
metric properties and boundary effects is produced. A non-linear FEA is then used to
predict the failure behaviour of these unseen panels.
1 The base panel is a panel for which displacement values are known at various load levels and locations over the surface of the panel and for which failure load and failure pattern are also known.
2 The unseen panel is a panel for which the above parameters are normally not known.
The generality of the methodologies proposed in this paper was tested on several
‘unseen panels’ with and without openings and the results were found to have a rea-
sonable match with their experimental results. A sample of this study is presented
later in this paper.
Robert-Nicoud et al. [1] define modelling error (emod) as the difference between the
predicted response of a given model and that of an ideal model representing the real
behaviour accurately. Raphael and Smith [2] categorised the modelling error into
three components, e1, e2 and e3. The component e1 is the error due to the discrepancy
between the behaviour of the mathematical model and that of the real structure. The
component e2 is introduced during the numerical computation of the solution to the
partial differential equations representing the mathematical model. The component e3
is the error due to the assumptions that are made during the simulation of the numeri-
cal model.
For masonry wall panels, assumptions regarding the choice of boundary conditions
are very difficult to justify. This is because the true nature of the panel boundaries
either for a wall tested in the laboratory or real structures does not comply with the
known boundary types (fixed, simply supported etc.). Hence the error resulting from
the use of incorrect boundary types would be relatively large.
Another factor that greatly affects the behaviour of masonry wall panels is the exis-
tence of a large error due to the component e2 [2] introduced during the numerical
computation process. This error is mainly due to the uncertainty in modelling the
material and geometric properties of highly composite anisotropic material such as
masonry. Yet, there is not an agreed material model for masonry to represent the true
anisotropic nature of this material. There are very few commercial packages for mod-
elling masonry structures. Adding to this error is the modelling complexity due to the
propagation of cracks over the surface and along the depth of the masonry panel,
when performing a non-linear FEA.
The value of e1 for steel and reinforced concrete structures may be reduced through
the use of more precise mathematical models [2]. However, due to the extremely
complex nature of masonry material, the applicability of this approach in practice
would be almost impossible. Another reason for ambiguity in this error is the lack of
sufficient laboratory test data and the high cost of these tests for masonry wall panels.
The majority of tests performed on masonry wall panels only report the failure load of
the panel, as this is the major design requirement, and the failure pattern, which is the
crack pattern observed during the laboratory tests. Recording load and deflection
information is generally limited to a single location, the location of maximum deflec-
tion, over the entire surface of the panel. This makes it extremely difficult to under-
stand the true behaviour and the boundary effects on the response of masonry wall
panels. One of the most frequently cited published data sets is the data from 18 full scale
masonry wall panels tested at the University of Plymouth (UoP) by Chong [3], which
reports load deflection data at 36 locations over the surface of the panel. These data
are used in this research.
2.1 Error Due to Incorrect Support Types (University of Plymouth Test Panels)
The panel’s vertical sides were supported on a steel angle connected to the test frame
abutment to simulate a simply supported support type, and the base of the wall was
enclosed in a steel channel packed with bed joint mortar at both sides (refer to Fig.1
for support details). It was assumed that a combined effect of the support details and
the self weight of the wall might provide sufficient restraint to the base of the panel to
simulate a fixed support type.
From Fig. 1, one can argue that the vertical edges of the panel are not truly simply
supported and there is some degree of restraint to rotation. Similarly the base of the
panel is by no means fully fixed and allows some degree of rotation. Due to the flexi-
ble nature of the edge support, some degree of movement perpendicular to the plane
of the wall was observed at the right hand support.
The load was applied to the wall by means of an air bag and it was assumed to be
uniformly distributed over the entire surface of the panel. This assumption may be
true when the air bag is fully inflated, but not at the lower load levels.
Fig. 2 shows the location of the measurement points (36 points in total) on the face
of the base panel. This panel was a solid single leaf clay brick masonry wall panel.
Linear Variable Differential Transformers (LVDTs) were placed at each gridline
intersection to measure the wall movement perpendicular to the plane of the wall. Due
to the unevenness of the surface, inherent to masonry panels, irregularities were ob-
served in the deformed shape of the panel surface, as shown in Figs. 2(b) and 3. The
reason for these irregularities could be the slippage of the LVDTs from their intended
location and/or inaccuracy in the LVDT readings.
Controlling the load levels at each load increment and maintaining a uniform load
over the entire surface of the panel by means of the airbag is another source of error
that needs to be considered.
Fig. 2. Position of recorded measurement points and 3D deformed shape of panel SBO1
Fig. 3. Measured load deflection along grid line C at various load levels
As mentioned earlier, it was very difficult to find a FEA package that accurately mod-
els the masonry material properties, crack propagation and failure characteristics.
Therefore, in this study a specialised non-linear FEA program, developed by Ma and
May [4], was used. This FEA program was purely developed for research on masonry
wall panels, but it lacks essential flexibility of the FEA packages.
In this analysis the following essential aspects were considered:
♦ The non-linearity failure criteria include both tension cracking and compression
crushing of the masonry.
♦ The wall thickness was divided into 10 equal slices to monitor the crack propaga-
tion through the depth of the panel.
♦ Due to the inflexibility of this FEA program, degrees of freedom are only allowed
to be either free or restrained. Spring stiffness was not provided to model support
flexibility.
♦ The vertical edges of the panel were modelled as simply supported and the base
edge as fully fixed. A full investigation into the effect of boundary modelling was
conducted (not reported in this paper).
Although the proposed methodologies improved the predicted failure load and fail-
ure pattern of a number of unseen panels, two important issues were not given enough
attention:
(i) the effects of panel aspect ratio on the response of the panel;
(ii) the load deflection relationships.
The research presented in this paper has concentrated more on these issues.
Fig. 4 shows that all three models give a good fit for the experimental data while
maintaining symmetry about the centreline of the panel. A more detailed investigation
proved that the Timoshenko type surface gives a better fit with the experimental data
at all measured points.
Fig. 4. Load deflection plots using various regression models (for nodal positions refer to Fig. 2a): (a) deflection plots along grid A; (b) load deflection plots at various locations
In this research, the Genetic Algorithm (GA) was used to directly derive the corrector
factors at various locations over the surface of the panel. At first the panel was di-
vided into 36 locations to cover all measurement points (see Fig. 2). For simplicity a
symmetrical half model was used. Corrector factors at each location were assigned to
a GA variable (20 different variables for the symmetrical half model). Corrector fac-
tors, identified by the GA, were used to modify the flexural rigidity at each location
on the panel. The objective function of the GA was designed to minimise the error
between the modified experimental deflection (def_3D) and the deflection obtained
by the FEA (def_FEA), over the entire surface of the panel.
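A minimal sketch of such an objective evaluation is given below. It assumes the 20 corrector factors of the symmetrical half model are mirrored onto the full 4 x 9 grid of measurement points and that the specialised non-linear FEA is available as an external callable; 'run_fea' is only a placeholder for that program, and the simple sum-of-squares error is an illustrative choice rather than the exact formulation used in the study.

```python
import numpy as np

def ga_objective(correctors_half: np.ndarray, def_3d: np.ndarray, run_fea) -> float:
    """Sketch of the GA fitness evaluation: mirror the half-model corrector
    factors, let the external FEA compute deflections with the locally
    modified flexural rigidities (corrector x global E per zone), and score
    the deviation from the smoothed experimental deflections def_3D."""
    half = correctors_half.reshape(4, 5)            # rows A-D, grid columns 1-5
    full = np.hstack([half, half[:, :4][:, ::-1]])  # mirror to columns 6-9
    def_fea = run_fea(full)                         # 4 x 9 predicted deflections
    return float(np.sum((def_fea - def_3d) ** 2))   # error the GA minimises
```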
Although the GA was able to find models that improved the predicted deflected
shape of the panel, due to compensatory effects of many variables it was difficult to
identify a suitable model. At this stage a regression analysis was used to refine correc-
tor factors, selected from a number of the GA runs, to obtain a set of corrector factors
that represent a best fit for the experimental deflected shape of the panel. Table 1
gives details of corrector factors derived by the GA and refined by regression. It
should be noted that these corrector factors are used in the FEA to modify the flexural
stiffness by multiplying the global elastic modulus (E) at each zone by these factors.
Table 1. Corrector factors derived by the GA and refined by regression (rows A–D, columns 1–9 of the measurement grid)
      1     2     3     4     5     6     7     8     9
A  0.697 0.981 1.125 1.152 1.153 1.152 1.125 0.981 0.697
B  0.704 1.016 1.174 1.204 1.205 1.204 1.174 1.016 0.704
C  0.716 1.076 1.258 1.292 1.294 1.292 1.258 1.076 0.716
D  0.749 1.237 1.484 1.531 1.533 1.531 1.484 1.237 0.749
A careful study of the corrector factors in Table 1 revealed that the flexural rigidities
were mainly modified around the panel boundaries with relatively small changes
inside the panel. It was therefore necessary to investigate the effect that boundary
types may have on the behaviour of the panels.
At this stage it was decided to conduct a parametric study by changing the bound-
ary types at the panel supports, and the GA was allowed to obtain corrector factors
that produced a best fit with the modified experimental deformed shape. In this paper
only the effect of the boundary at the base of the panel is discussed.
At first, the same boundary conditions as shown in Fig. 1 were assumed. The re-
sults from the FEA showed a kink around load level of 1.0 kN/m2 (Fig. 5).
Fig. 5. Comparison of modified experimental with the predicted deflection (panel SBO1 at grid A5) using Table 1 corrector factors – base simply supported and fixed
Careful study revealed that this kink was due to the development of tensile cracks
parallel to the bed joints, produced by the hogging moments along the panel base. As
the tensile strength of the masonry is low parallel to the bed joints compared with that
perpendicular to the bed joints, the first crack appears at a very low load level near the
fixed support, which causes a kink in the load deflection curve. As this kink was not
visible in the experimental load deflection data, it caused some concern.
The obvious choice for the next step was to change the boundary condition at the
base of the panel to a simply supported type that allows full rotation of the base sup-
port. This eliminated the kink, but the stiffness of the panel was naturally reduced.
This was reflected in the load deflection plots, (see Fig 5 for details).
A close investigation of Fig. 5 also revealed that the gradients of the predicted
curves at various load levels were different from those of the experimental curves. In
order to obtain a suitable set of corrector factors, it was decided to modify the objec-
tive function of the GA to include the gradient effect. The following errors were used
(a sketch of how they can be combined follows the list):
1. Deflection error: minimise deviation of the FEA deflection values from the tar-
get values over the entire surface of the panel.
2. Gradient error: minimise deviation of the gradients of the FEA load deflection
curve between two adjacent load levels.
3. Load error: minimise deviation of the FEA failure load from the target failure
load.
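How the three terms are combined is not fully detailed here, so the sketch below simply forms a weighted sum; the weights are illustrative placeholders, not values from the study.

```python
def combined_error(deflection_err: float, gradient_err: float, load_err: float,
                   w_def: float = 1.0, w_grad: float = 1.0, w_load: float = 1.0) -> float:
    """Sketch of a modified GA objective combining the three error terms
    listed above into a single value to be minimised."""
    return w_def * deflection_err + w_grad * gradient_err + w_load * load_err
```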
A study of the corrector factors, derived from the simply supported base, revealed
an increase in the corrector factors around the base of the panel. By changing the base
of the panel to a fixed support (Table 2 corrector values) the opposite effect was ob-
served. A close look at the results clearly strengthened the initial findings that the
base of the panel is neither simply supported nor fixed, but there is only some degree
of fixity at this edge.
Table 2. Corrector factors derived by the GA with the base of the panel modelled as fixed (rows A–D, columns 1–9 of the measurement grid)
      1     2     3     4     5     6     7     8     9
A  1.283 1.278 1.278 1.278 1.278 1.278 1.278 1.278 1.283
B  1.187 1.182 1.181 1.181 1.181 1.181 1.181 1.182 1.187
C  0.927 0.921 0.920 0.920 0.920 0.920 0.920 0.921 0.927
D  0.223 0.218 0.218 0.218 0.218 0.218 0.218 0.218 0.223
From Fig. 5 it was observed that the FEA-predicted failure load for the simply sup-
ported base was much below the measured failure load. Although the failure load for
the fixed base model was relatively increased, it was still below the measured values.
A closer look at this revealed that the decrease in failure load was due to the lower
values of tensile strengths perpendicular to bed joints.
The results of the full boundary investigations revealed that:
♦ Boundary conditions shown in Fig 2 give closer results than the other models.
♦ Corrector factors derived by the GA for this model (refer to Table 2) give a better
load deflection match at various locations over the surface of the panel.
♦ Changing the tensile strength perpendicular to the bed joints by 50% improved
the predicted failure load of the panel.
4 Case Study
The corrector factors not only modeled the boundary effects, but also modelled varia-
tion in the material and geometric properties. One of the objectives of this research
was to use these corrector factors to predict the behaviour of unseen panels with and
without openings and panels for which the boundary conditions are different from the
base panel. In this investigation it is important to note that the corrector factors de-
rived in Table 2 are used to estimate the correctors at various locations on any unseen
panel (Panel SBO2 in this study). The Cellular Automata ‘zone similarity’ technique
[14,16] was used to estimate corrector values for unseen panels. These corrector
factors are then used in a non-linear FEA to predict the load deflection, failure load
and failure pattern for the unseen panel.
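As a rough illustration of how corrector values might be transferred to an unseen panel, the sketch below assigns each zone of the unseen panel the corrector of the most similar base-panel zone. The similarity measure (Euclidean distance between hypothetical zone feature vectors, e.g. distances to supports and openings) is an assumption made only for illustration; the actual cellular automata zone similarity rules are those of refs [14, 16].

```python
import numpy as np

def map_correctors(base_correctors: np.ndarray, base_features: np.ndarray,
                   unseen_features: np.ndarray) -> np.ndarray:
    """Assign each unseen-panel zone the corrector factor of the most similar
    base-panel zone; similarity here is plain nearest-neighbour distance in a
    hypothetical feature space describing each zone."""
    mapped = np.empty(len(unseen_features))
    for i, feature in enumerate(unseen_features):
        distances = np.linalg.norm(base_features - feature, axis=1)
        mapped[i] = base_correctors[np.argmin(distances)]
    return mapped
```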
To assess the validity of the numerical model updating technique presented in this
paper, a panel of the same size as the base panel SBO1, but with an opening at the
middle of the panel (Panel SBO2), was investigated. Results of this investigation are
presented in Fig. 6.
5 Further Work
The challenging task is to extend this model updating technique to panels tested else-
where, under different laboratory conditions and using different material constituents
for construction of the panels. We were able to locate test data for only a limited number
of panels tested elsewhere, and would greatly appreciate offers of further data from
researchers around the world, particularly load-deflection and tensile strength
information for any type and size of masonry panel.
The plan would be to investigate the suitability and generality of the corrector fac-
tors derived for the base panel to predict the failure criteria for as many unseen panels
as possible.
6 Conclusions
This is perhaps the first time that an attempt has been made to use a numerical model
updating technique to study the behaviour of a highly composite, anisotropic material
such as masonry over the full linear and non-linear range. The research presented in
this paper introduces a numerical model updating technique that has the potential to be
extended to masonry, a highly composite and anisotropic material.
In this research, corrector factors from a single panel tested in the laboratory were
used for a number of unseen panels with different boundary types, sizes and configurations.
The results produced more accurate predictions of the behaviour of laterally
loaded masonry wall panels.
References
1. Robert-Nicoud, Y., Raphael, B., and Smith, I. F. C. (2005). “System Identification through
Model Composition and Stochastic Search”, ASCE Journal of Computing in Civil Engineering,
Vol. 19, No. 3, pp. 239-247.
2. Raphael, B., and Smith, I. F. C., (2003) Fundamentals of computer aided engineering,
Wiley, New York.
3. Chong, V. L. (1993). The Behaviour of Laterally Loaded Masonry Panels with Openings.
Thesis (Ph.D). University of Plymouth, UK.
4. Ma, S. Y. A. and May, I. M. (1984). Masonry Panels under Lateral Loads. Report No. 3.
Dept of Engineering, University of Warwick.
5. Saitta, S., Raphael, B. and Smith, I. F. C., “Data mining techniques for improving the reli-
ability of system identification”, Advanced Engineering Informatics, Vol. 19, No 4, 2005,
pages 289-298.
6. Friswell, M. I., and Mottershead, J. E. (1995). Finite element model updating in structural
dynamics, Kluwer, New York
7. Brownjohn, J. M. W., Moyo, P., Omenzetter, P., and Lu, Y. (2003). “Assessment of high-
way bridge upgrading by testing and finite-element model updating.” J. Bridge Eng. 8 (3),
162–172.
8. Castello, D. A., Stutz, L. T., and Rochinha, F. A., (2002). “A structural defect identifica-
tion approach based on a continuum damage model.” Comput. Struct. 80, 417–436.
9. Teughels, A., Maeck, J., and Roeck, G. (2002). “Damage assessment by FE model updat-
ing using damage functions.” Comput. Struct. 80, 1869–1879.
10. Modak, S. V., Kundra, T. K., and Nakra, B. C. (2002). “Comparative study of model up-
dating studies using simulated experimental data.” Comput. Struct. 80, 437–447.
11. Hemez, F. M., and Doebling, S. W. (2001). “Review and assessment of model updating
for non-linear, transient dynamics.” Mech. Syst. Signal Process. 15 (1), 45–74.
12. Hu, N., Wang, X., Fukunaga, H., Yao, Z. H., Zhang, H. X., and Wu, Z. S. (2001). “Dam-
age assessment of structures using modal test data.” Int. J. Solids Struct., 38, 3111–3126.
13. Cheng, Y. and Melhem, H. G., “Monitoring bridge health using fuzzy case-based reason-
ing”, Advanced Engineering Informatics, Vol. 19, No 4, 2005, pages 299-315.
14. Zhou, G. C. (2002). Application of Stiffness/Strength Corrector and Cellular Automata in
Predicting Response of Laterally Loaded Masonry Panels. School of Civil and Structural
Engineering. Plymouth, University of Plymouth. PhD Thesis.
15. Rafiq, M. Y., Zhou G. C., Easterbrook, D. J., (2003). "Analysis of brick wall panels sub-
jected to lateral loading using correctors." Masonry International 16(2): 75-82.
16. Zhou, G. C., Rafiq, M. Y., Easterbrook, D. J., and Bugmann, G. (2003). “Application of
cellular automata in modelling laterally loaded masonry panel boundary effects”, Masonry
International, Vol. 16, No. 3, pp. 104-114.
17. Timoshenko, S. P., Woinowsky-Krieger, S. (1981) Theory of Plates and Shells, 2nd Edition,
McGraw-Hill.
Derivational Analogy: Challenges and Opportunities
B. Raphael
1 Introduction
There are two approaches to case-based reasoning (CBR): transformational analogy
and derivational analogy. In transformational analogy, similar past solutions
are retrieved and adapted in order to propose new solutions [1]. In contrast, deriva-
tional analogy involves the application of the reasoning steps that were used to perform
tasks in the past [2]. Most CBR applications that have been developed during the last
two decades follow the transformational analogy approach. This is evident from recent
publications in this area. In the proceedings of the latest international conference on
CBR (ICCBR 2005), thirty-six papers discuss issues related to transformational analogy
[3]. Only one paper mentions derivational analogy. Similarly, in the proceedings
of ECCBR 2004, there are forty-one papers that discuss applications of transformational
analogy, but none related to derivational analogy [4].
The apparent lack of interest in derivational analogy might be partly due to the lack
of knowledge among researchers about this approach. However, the primary reason is
that it is more difficult to implement. This paper discusses difficulties and challenges in
the implementation of derivational analogy and proposes solutions to overcome them.
2 Complexity of Representation
The issue of representing solutions in transformational analogy has been extensively
studied [5]. Popular options include semi-structured forms such as text and images,
and structured forms such as attribute-value pairs, tables and objects. Representing
cases in these forms does not require considerable expertise, and the task of storing
cases is routinely performed by non-programmers using graphical user interfaces
(GUI). On the other hand, representing methods is inherently more complex. In
Figure 1, differences between the two representations are illustrated using an example
in steel plate girder design. In the first representation, only the solution is stored in the
form of attribute-value pairs. The reasons for choosing the values are not known. In
the second representation, the method used to arrive at the solution is described. Even
in this simple example, the complexity of representation of methods compared to
solutions is evident.
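A hedged sketch of the contrast (hypothetical Python; the attributes, values and design rules are illustrative and do not reproduce Figure 1):

```python
# Hedged illustration: the same plate girder case in the two representations.

# Representation 1: the solution alone, stored as attribute-value pairs.
girder_solution = {"web_depth_mm": 1200, "web_thickness_mm": 10,
                   "flange_width_mm": 400, "flange_thickness_mm": 25}

# Representation 2: the method that produced those values (simplified assumption).
def design_girder(span_m, udl_kN_per_m, working_stress_MPa=165):
    web_depth_mm = round(span_m * 1000 / 15)            # span/15 rule of thumb (assumed)
    moment_Nmm = udl_kN_per_m * span_m ** 2 / 8 * 1e6   # mid-span moment of a simple span
    flange_area_mm2 = moment_Nmm / (web_depth_mm * working_stress_MPa)
    return {"web_depth_mm": web_depth_mm, "flange_area_mm2": round(flange_area_mm2)}
```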
Early works on derivational analogy were in the context of autonomous problem
solving [2,6]. Cases are automatically captured by recording actions during the
course of problem solving. In such situations, representation is not a serious issue. A
case consists of essentially a sequence of operators that are selected from a pre-
defined set, along with additional information related to the choice of these operators.
Carbonnell recommends recording the following pieces of information in each step of
the solution process: Task decomposition (sub-goal structure), Alternatives consid-
ered and rejected, Rationale, Dependencies and Final solution. It is clear that all the
above information can be easily captured by autonomous problem solvers. However,
in complex domains where cases are not automatically generated, these details are
rarely available. Furthermore, complete domain knowledge is not available in the
form of a predefined set of operators. Usually, solutions are computed using complex
methods that contain a number of activities that are unique in each case. Representing
methods containing these activities requires special purpose languages. This issue is
largely ignored even in more recent research in derivational analogy [11, 12].
There are many options available for representing methods that are used to formulate
solutions in cases. A simple option is production rules. A case contains a sequence of
rules that represent actions taken in a particular context. These rules should not be
interpreted as generic knowledge that can be applied in any context such as in a rule
based system. Instead, they should be viewed as case-specific actions which may or
may not be repeated in a new situation. Knowledge related to when and where they
can be reused needs to come from an external source.
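A minimal sketch (an assumed Python encoding, not from any cited system) of case-specific rules stored as context-action pairs and replayed only when an external judgement says they apply:

```python
# Hedged sketch: rules recorded in a case are replayed, not treated as generic knowledge.
case_rules = [
    {"context": "deflection limit exceeded", "action": "increase web depth by one increment"},
    {"context": "flange slenderness limit exceeded", "action": "increase flange thickness"},
]

def replay(rules, context_applies):
    """context_applies: external knowledge deciding whether a recorded context
    is judged relevant in the new situation."""
    return [rule["action"] for rule in rules if context_applies(rule["context"])]
```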
Not all activities can easily be represented as rules. More complex schemes have
been developed in order to alleviate the drawbacks of rules. An object oriented repre-
sentation was employed in CADREM [7,8] in which case-specific methods are organ-
ised in the form of an abstraction hierarchy. A generic class called an MREM (mem-
ory reconstruction method) encapsulates common features of all types of methods and
is at the root of the hierarchy. Specialized classes are derived from generic classes in
order to represent different types of methods. More than 40 MREM classes have been
identified in [7]. Out of these, about five are required in most applications. Com-
monly used MREM classes are described below. The abstraction hierarchy is shown
in Figure 2.
[Figure 2: abstraction hierarchy of MREM classes, with MREM at the root]
carrying out each sub-task in the decomposition. These MREMs might belong to one
of the classes described above.
The above classes correspond to programming language constructs available in
standard languages. However, the object-oriented representation of these constructs
permits a higher-level understanding of the processes used in a case. For example,
when a case-method is represented as an instance of TaskDecomposition, the subtasks
and their objectives are explicitly stored. This information is not readily available if
the method is coded in a low-level programming language.
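A minimal Python sketch of this idea, under the assumption (not CADREM's actual code) that the MREM base class records a goal and that TaskDecomposition stores its sub-tasks and sub-goals explicitly:

```python
# Hedged sketch of the abstraction hierarchy rooted at MREM.
class MREM:
    """A recorded case-specific method; common features of all method types."""
    def __init__(self, goal):
        self.goal = goal

class TaskDecomposition(MREM):
    """A method stored as an explicit list of (sub-task, sub-goal, sub-method) triples."""
    def __init__(self, goal, subtasks):
        super().__init__(goal)
        self.subtasks = subtasks   # each sub-method is itself an MREM instance

    def sub_goals(self):
        # Explicit information that would be hidden in low-level code.
        return [sub_goal for _, sub_goal, _ in self.subtasks]
```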
The abstract classes described above can be used to represent both generic and
case-specific processes. However, the derivational analogy system treats all instances
of these classes that are found in cases as case-specific. The retrieval engine de-
termines the re-usability of case-specific methods in new situations.
Potentially, any method can be represented using the MREM scheme. It is enough to
derive a new class if existing classes are inadequate. However, this is not practical
because end-users of CBR systems are non-programmers. It has been observed that
many engineers are unable to input even simple expressions in syntactically correct
forms. Even though this can be overcome by training, a CBR system is unlikely to be
used effectively if end-users have to write complex procedures.
Complementary Views
A solution to reducing the complexity of representation is proposed here using a con-
cept called complementary views (CV) [7,8]. A view is a mapping between the at-
tributes of an object and the attributes of another class. Many views might be defined
for an object and these are analogous to observing the object from different perspec-
tives. The views are complementary and help to improve the understanding of the
object. For example, the attribute topFlangeArea of the class BuiltupBeam is mapped
to the attribute area of the class CompressionMember. This mapping helps to view
the top flange of a simply supported built-up beam as a compression member.
In the context of representing methods, a CV brings out the correspondence be-
tween a case-method and an existing MREM class. The case-method is not instanti-
ated from the class. Instead, a CV is used to explain qualitatively what is done by the
method. Consider an MREM called adaptDepthOfWeb that performs this activity:
“The depth of the web is incremented by 10 mm until deflections are within limits”.
Now, consider a generic MREM class ConstraintSolver that implements a rigorous
method of solving constraints. The MREM adaptDepthOfWeb is not an instantiation
of the class ConstraintSolver. Details of the two methods vary in many aspects. How-
ever, the MREM adaptDepthOfWeb may be viewed as a constraint solving procedure
using the CV by mapping the attributes of ConstraintSolver to the attributes of
adaptDepthOfWeb (Figure 3). This CV brings out the correspondence between the
two classes. This information is enough for reusing the ConstraintSolver MREM
instead of adaptDepthOfWeb in a new context.
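As a hedged Python sketch (the attribute names on both sides are assumptions, not the contents of the paper's Figure 3), a CV can be recorded as an attribute mapping and used to instantiate the equivalent generic class:

```python
# Hedged sketch: a complementary view recorded as an attribute mapping between a
# case-method and a generic MREM class; names and attributes are assumptions.
class ComplementaryView:
    def __init__(self, case_method, target_class, attribute_map):
        self.case_method = case_method        # e.g. the adaptDepthOfWeb description
        self.target_class = target_class      # e.g. ConstraintSolver
        self.attribute_map = attribute_map    # case-method attribute -> class attribute

    def instantiate_equivalent(self, case_method_values):
        """Build an instance of the generic class from the mapped case attributes,
        so a rigorous method can be reused even if the case-method is plain text."""
        kwargs = {cls_attr: case_method_values[case_attr]
                  for case_attr, cls_attr in self.attribute_map.items()}
        return self.target_class(**kwargs)

class ConstraintSolver:
    def __init__(self, design_variable, constraint, step):
        self.design_variable, self.constraint, self.step = design_variable, constraint, step

cv = ComplementaryView(
    case_method="adaptDepthOfWeb",
    target_class=ConstraintSolver,
    attribute_map={"variable": "design_variable", "limit": "constraint", "increment": "step"},
)
solver = cv.instantiate_equivalent(
    {"variable": "web depth", "limit": "deflection within limits", "increment": "10 mm"})
```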
complex. Complementary views help avoid the problem. There is no need to repre-
sent the method in a syntactically correct form. It might be described as plain text.
Even though such a form does not permit reuse of the method, the CV enables the
CBR system to apply an alternative method for performing the same task. The advantage
is that the person responsible for inputting cases need not have the expertise to
code all the methods in a formal language. When a method is not formally repre-
sented, the system ignores this method and uses the mappings provided in the CVs to
create an instance of an equivalent class.
reduces system validation problems because users do not expect accurate answers.
Even in the derivational analogy approach, users have to critically analyze proposed
solutions and adapt them to take care of requirements that have not been considered
by the case method. However, users expect a derivational analogy system to propose
the correct answer all the time because it appears that solutions have already been
adapted. This makes system validation all the more important. The system should be
thoroughly tested for situations where wrong solutions are proposed.
There are many reasons why a derivational analogy system might propose solu-
tions that are far from ideal. First of all, methods within cases usually contain only
operational knowledge, that is, what was done in the past. Deeper knowledge related
to reasons for choosing a method and its range of validity is missing. If methods are
retrieved using only dependency relationships and similarity, wrong methods might
be chosen. Better results are obtained using retrieval examples. However, this re-
quires a developer to spend a considerable amount of time examining the methods that
are retrieved in a variety of situations and providing new retrieval examples whenever
wrong methods are selected.
Another reason for getting incorrect answers is lack of case coverage. An adequate
number of cases that can handle all possible situations may not be available. Such
conditions should be recognized during system validation. If there are missing meth-
ods for accommodating special situations, fictitious cases might be added.
It is easy to overlook system validation issues. Many CBR systems have failed be-
cause developers did not anticipate the complexity of these knowledge engineering
tasks. Sufficient time should be budgeted for creating and fine-tuning the case base
and for ensuring that its performance matches expectations.
5 Unique Opportunities
Rule-based systems failed because it was difficult to find rules that are generic and to
maintain the consistency of the rule base when more rules are added. Readers might
suspect that similar problems exist with the derivational analogy approach. This is
not true because cases contain methods which are case-specific and do not depend on
methods in other cases. In fact, it is possible to find methods having varying levels of
generality for generating the same case-solution. The more generic a method, the
more reusable it is; but at the cost of increasing complexity. Even with less generic
methods, solutions that satisfy vital integrity constraints can be generated in a new
context. This is not possible with transformational analogy, unless the adaptation
method simply regenerates a new solution.
A simple example is taken to demonstrate the idea that through the reuse of a
method in a new situation, a solution that possesses essential qualities of the original
case can be generated. Consider the truss configuration that is shown in Figure 4. The
coordinates (x,y) of the nodes are generated using the method
Nodes = { {0,0}, {0,D}, {L/2,0}, {L/2,D/2}, {L,0} }
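A hedged sketch (a hypothetical Python rendering of this method, not code from the paper); re-executing it with new values of L and D regenerates the configuration:

```python
# Hedged sketch: the recorded case-method as a reusable parametric procedure.
def truss_nodes(L, D):
    """(x, y) coordinates of the truss nodes for span L and depth D."""
    return [(0, 0), (0, D), (L / 2, 0), (L / 2, D / 2), (L, 0)]

original = truss_nodes(L=6.0, D=1.0)   # the case shown in Fig. 4
reused = truss_nodes(L=9.0, D=1.5)     # hypothetical new span and depth: same configuration
```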
Reusing this method in a new situation where L and D are different results in essentially
the same truss configuration. Now, suppose that only the solution is represented
as follows:
6 Concluding Remarks
The derivational analogy approach presents several challenges as well as opportuni-
ties. If careful attention is paid to issues such as representation and retrieval, this
approach offers unique opportunities and permits going beyond what is possible with
transformational analogy.
Fig. 4. A truss (L = 6 m, D = 1 m)
Acknowledgements
The author wishes to thank Prof. Kalayanaraman and Mr. Rama Mohan for discus-
sions on the use of derivational analogy for steel plate girder design. Collaborations
with Dr. B. Domer, Mr. S. Saitta, Prof. B. Kumar and Prof. I.F.C. Smith on various
CBR applications are also gratefully acknowledged. This work would not have been
possible without the financial support of School of Design and Environment, NUS.
References
1. Carbonell J.G.: Learning by analogy: formulating and generalizing plans from past experi-
ence, Machine Learning, An artificial intelligence approach, Michalski, Carbonell and
Mitchell (Eds.), Morgan Kaufmann, Boston, (1983).
2. Carbonell, J.: Derivational analogy: A theory of reconstructive problem solving and exper-
tise acquisition, Machine Learning, An artificial intelligence approach, Vol 2, Michalski,
Carbonell and Mitchell (Eds.), Morgan Kaufmann, Boston, (1986).
3. Muñoz-Avila H., Ricci F. (ed.): Case-Based Reasoning Research and Development, 6th
International Conference on Case-Based Reasoning, ICCBR 2005, Chicago, IL, USA,
August 23-26, 2005. Proceedings, Lecture Notes in Computer Science, Vol. 3620.
Springer-Verlag, Berlin Heidelberg New York (2005).
4. Funk P., Calero P.A.G. (ed.): Advances in Case-Based Reasoning: 7th European Confer-
ence, ECCBR 2004, Madrid, Spain, August 30 - September 2, Lecture Notes in Computer
Science, Vol. 3155. Springer-Verlag, Berlin Heidelberg New York (2004).
5. Kolodner J.L., Case-based Reasoning, Morgan Kaufmann, San Mateo, CA, 1993.
6. Veloso M. and Carbonell J., Derivational analogy in PRODIGY: Automating case acquisi-
tion, storage, and utilization, Machine Learning, 10:249-278, 1993
7. Raphael B., Reconstructive Memory in Design Problem Solving, PhD thesis, University of
Strathclyde, Glasgow, UK, 1995
8. Kumar B. and Raphael B., Derivational Analogy Based Structural Design, Saxe-Coburg
Publications, UK, 2002
9. B. Raphael and S. Saitta, Knowprice: using derivational analogy to estimate project costs,
Proceedings of the eighth international conference on the application of artificial intelli-
gence to civil, structural and environmental engineering, B.H.V. Topping, (ed.), Civil-
Comp Press, 2005.
10. Y. Cheng and H. G. Melhem, Monitoring bridge health using fuzzy case-based reasoning,
Advanced Engineering Informatics, 19, 4, 2005.
11. E. Plaza, Cooperative Reuse for Compositional Cases in Multi-agent Systems, Muñoz-
Avila H., Ricci F. (ed.): Case-Based Reasoning Research and Development, 6th Interna-
tional Conference on Case-Based Reasoning, ICCBR 2005, Chicago, IL, USA, August 23-
26, 2005. Proceedings, Lecture Notes in Computer Science, Vol. 3620. Springer-Verlag,
Berlin Heidelberg New York, 2005.
12. M.T. Cox and M. Veloso, Supporting Combined Human and Machine Planning: An Inter-
face for Planning by Analogical Reasoning, In D. Leake & E. Plaza (Eds.), Case-Based
Reasoning Research and Development: Second International Conference on Case-Based
Reasoning (pp 531-540). Berlin: Springer-Verlag, 1997.
Civil Engineering Communication – Obstacles and
Solutions
Danijel Rebolj
1 Introduction
Communication plays a very important role when people have to solve a problem
together. Collaboration, and thus communication, is a normal way of working in civil
engineering. Before computers were introduced, all information in a construction
project was communicated either in printed form, as text and drawings on paper,
or by voice communication between actors. The only way of transmitting information
was by carrying paper from one person to another; therefore, a centralised hierarchical
organisation was a necessity to ensure effective decision making.
Telephones made communication possible over long distances, and fax machines did
the same for text and drawings on paper, but no significant improvement in communication
patterns was introduced. In civil engineering, communication has not
been systematically improved even by the introduction of computers or by any other
information or communication technology. It still follows the same traditional
hierarchical patterns, although it could become much more flexible and dynamic.
Various reasons delay the application of information technology and prevent a
quantum leap in ICT-based communication. In our experience, the main reasons are
the lack of ICT innovation and standardisation, the lack of R&D cooperation in the
AEC industry, and deficiencies in civil engineering education.
Some companies are trying to exploit the potential of ICT to a greater extent, for
example in Japan, where the Daito Trust Construction Company developed a large-scale
mobile computing system called the DK Network [1]. But problems occur when trying
to use advanced technology in projects with other partners who have implemented ICT
at different levels. This can lead to even more complicated communication in joint
projects, where different technologies and forms are used for information representation,
than in projects where no sophisticated ICT is used at all. Experiencing such problems
certainly discourages AEC companies from further investments in ICT-related
Fig. 1. DyCE architecture, enabling user-tailored, context-sensitive access to information as
well as communication with contacts, which assures optimal decision-making support.
Mobile and ubiquitous computing are proving to be very important technologies
for the AEC sector, since they extend information systems to construction
sites [3][4]. More sophisticated integrated communication systems with hidden complexity
are already emerging [5]; these should finally free humans from the limited
modes of using computers and assure creative collaboration and sharing of ideas.
With mobile computing, the location of the user can become vital information in an
information system, delivering a new context parameter to the system [6]. Being overloaded
with input, users need context-sensitive systems to work effectively and to
communicate efficiently [7]. Development of DyCE, the dynamic communication
system [8] (Fig. 1 presents the architecture of the system), has not only
proven the effectiveness of context sensitivity, but has also shown that new, more
efficient organisational patterns are now possible, since information for decision making
can be brought to any actor in the organisation. A network organisation (as opposed
to a hierarchical one) assures better use of knowledge and expertise, and a higher
level of innovation and co-operation.
From the technical viewpoint, joining the ITC course pool is easy. The agreement is
available at the ITC@EDU network web page (www.itcedu.net). The appendix of the
agreement contains the template of the Accession Declaration to the ITC Course
Pool, which specifies the institution wishing to join, the courses offered, and the nomination
of the steering committee member. Once the document is accepted by the steering
committee, the courses are at the disposal of all members. The acceptance procedure
shall, among other things, include a quality check of the offered course materials.
The ITC course pool will need strong support from collaborating institutions. Experience
in the current ITC Euromaster program has shown that much effort is necessary
to prepare high-quality e-learning materials, to become familiar with on-line
communication, to manage and further develop the e-learning system, and to coordinate
the whole program. But even in the short term the investment gives a high return.
Having a whole pool of courses at hand certainly gives each partner a strong background
to form a whole new program and to offer their students specialized knowledge
and skills which they could probably never offer by themselves.
4 Conclusion
Engineering communication is crucial for the efficiency of the AEC industry, not only to
support decision making in a project but also to support innovation. To improve civil
engineering communication we suggest a concerted set of actions:
• introduction of a new engineering profile, project information officer, to focus
on project communication support,
• higher education shall offer the relevant profile,
References
1. Daito Trust Construction Co.: Annual Report (2000)
2. Fruchter, R.: A/E/C Teamwork: A Collaborative Design and Learning Space. J. Comp. in
Civ. Engrg., ASCE, New York, NY, Vol. 13 (1999) 261-269
3. Rebolj, D., Magdič, A., Čuš Babič, N.: Mobile computing in construction. In: Roy, R.
(ed.), Prasad, B. (ed.). Advances in concurrent engineering. Tustin: CETEAM Interna-
tional (2001) 402-409
4. Satyanarayanan, M.: Pervasive Computing: Vision and Challenges. IEEE Personal Com-
munications, Vol. 8. (Aug 2001) 10-15
5. Sousa, J. P., Garlan, D.: Aura: an Architectural Framework for User Mobility in Ubiqui-
tous Computing Environments. Software Architecture: System Design, Development, and
Maintenance. Jan Bosch, Morven Gentleman, Christine Hofmeister, Juha Kuusela (Eds),
Kluwer Academic Publishers, August 25-31 (2002) 29-43
6. Judd, G, Steenkiste, P.: Providing Contextual Information to Pervasive Computing Appli-
cations. IEEE International Conference on Pervasive Computing (PERCOM), Dallas,
March 23-25 (2003) 133-142
7. Čuš Babič, N., Rebolj, D., Magdič, A., Radosavljević, M.: MC as a means for supporting
information flow in construction processes. Concurr. eng. res. appl. Vol. 11. (2003) 37-46
8. Magdič, A., Rebolj, D., Šuman, N.: Effective control of unanticipated on-site events: a
pragmatic, human-oriented problem solving approach. Electron. j. inf. tech. constr., Vol.
9. (2004) 409-418 (available at http://www.itcon.org/cgi-bin/papers/Show?2004_29)
9. Froese, T.: Help wanted: project information officer. eWork and eBusiness in Architec-
ture, Engineering and Construction A.A. Balkema, Rotterdam, The Netherlands. (2004).
10. Rebolj, D., Menzel, K.: Another step towards a virtual university in construction IT. Elec-
tron. j. inf. tech. constr. Vol. 9 (2004) 257-266 (available at http://www.itcon.org/cgi-bin/
papers/Show?2004_17).
11. ITC EUROMASTER: The programme portal. (2006) (available at http://euromaster.
itcedu.net).
Computer Assistance for Sustainable Building Design
Hugues Rivard
1 Introduction
The earth’s environment is undergoing alarming changes due to human activity [1],
and buildings account for a large portion of the environmental impacts: at the global
scale (ozone depletion, global warming, acid rain, and resource depletion); at the local
scale (urban sprawl, solid waste, smog, and water run-offs); and at the indoor scale
(indoor air pollution, hazardous materials, and workplace safety) [2]. A case in point,
in 2001, buildings accounted for 30 percent of the total secondary energy use and of
the CO2 equivalent greenhouse gas emissions in Canada [3]. The urgent reduction of
the environmental burdens of buildings can only be achieved by considering sustain-
able development for buildings. This concept, also known as green buildings, implies
planning, designing, constructing, operating and discarding buildings in a manner to
meet the needs of people today without compromising those of future generations.
The CIB Working Commission W82 investigated the needs for research in con-
struction with respect to sustainable development and made the following two rec-
ommendations: 1) researchers need to develop adapted tools to assist designers in
considering sustainability concepts; and 2) designers need to adopt an integrated ap-
proach to building design [4].
A new Canada Research Chair in computer-aided engineering for sustainable
building design has recently been established at ETS. The objective of this paper is to
present recent efforts toward the long-term goal of this Chair: developing the next
generation of computer assistance for designers of sustainable buildings. This
long-term objective is addressed through two research thrusts, each addressing one of
the two recommendations of W82: 1) Conceptual building design support to provide
assistance to designers in considering sustainability concepts early in the process; and
2) Design collaboration support to assist designers that do adopt an integrated build-
ing design approach. Each thrust is presented in the next two sections.
Fig. 1. Recent research efforts to assist the early stages of architectural and structural design
There are many other relevant and promising research efforts that have focused on
conceptual building design, a few of which are briefly mentioned here. M-RAM, a
case-based design system, combines heuristic rules for case classification and a genetic
algorithm for case adaptation to recommend the type of structure for the building
(e.g., braced frame, moment-resisting frame, or shear wall) [17]. CADREM,
another case-based design system, uses derivational analogy; upon retrieval the pro-
cedures stored in the case are re-executed with the new system’s parameters [18].
SHIQD- is a representative logic-based system that uses description logic and ordered
planning to design structural components [19]. Genetic algorithms (GA) have been
proposed for solving conceptual design problems. Grierson and Khajehpour used a
multi-objective genetic algorithm to search for Pareto-optimal design solutions for
office buildings [20]. Three cost-revenue objectives were considered: (1) minimize
initial capital cost, (2) minimize annual operating cost, and (3) maximize annual in-
come revenue. BGRID is another GA-based prototype for conceptual design of steel
office buildings [21]. Its inputs are plan dimensions, number of floors, site location,
dimensional constraints, and position of cores and atria. A structured genetic algo-
rithm (SGA) has been developed for conceptual design of steel and concrete office
buildings [22]. SGA is a variant of GA in which design solutions are encoded in a
hierarchy of interrelated genes in order to better model the hierarchical aspect of
building systems. Packham et al. have developed an interactive and visual means to
Fig. 4. The 3D models for the architecture on the left and the structural system on the right
Design exploration: Once the main aspects of the green building design are decided
and the design space has been circumscribed, some of the unsolved design parameters
could be refined through multi-objective optimization. There is always more than one
possible solution to a given design problem. This almost infinite number of possible
solutions defines a solution or design space. Hence, an important aspect of concep-
tual design is the generation of alternatives to explore this design space in order to
find the “best” solution. The greater the number of alternatives that are generated, the
greater the odds of finding a near-optimal solution. Optimization can be used to provide
assistance in exploring this design space in a systematic manner.
Optimization is applied here to find building shapes that perform better. Shape is
an important consideration in building design due to its significant impacts on energy
performance and construction costs. There are a number of previous studies on building
shape optimization. Several studies construct a building shape from its spatial
constituents, thus adopting a part-whole approach: one optimized whole-building
2D plans from elements such as space units, rooms and zones [25]; another optimized
3D architectural forms that maximize daylighting use and minimize operating energy
consumption based on basic blocks [26]; and finally another optimized 3D
shapes of apartment buildings obtained from the aggregation of two fundamental
shapes, a circulation unit and an apartment unit [27]. This part-whole approach is
capable of defining a wide range of shapes, some of which may be innovative solu-
tions for a design problem. In contrast, the whole-part approach defines a building
shape by its external boundaries and represents its internal spatial elements implicitly.
An advantage of the whole-part approach is that it can easily describe the building
geometry for energy simulation programs. This approach is adopted in several shape
optimization studies focusing on energy performance: two studies assumed a rectan-
gular building plan and optimized its aspect ratio [28-29]; one optimized a building
with a symmetrical octagonal plan by chamfering a rectangle [30]; and another con-
sidered both L-shape and rectangular shape [31]. These previous studies using the
whole-part approach are limited to simple shapes thus precluding some possibly more
promising shapes from the design space right from the start.
Multi-objective optimization software has been developed that searches the design
space to find an optimal solution for the building envelope and the building
shape, defined as an n-sided polygon with the whole-part approach [31, 32, 33]. The
software uses a multi-objective genetic algorithm and considers two objective func-
tions: life-cycle costs and environmental impacts. It allows the visualization of the
trade-off options between the two objectives. It lets the designer select which aspects
of the building design to optimize (e.g., the shape, the wall section, or the window
ratio) and which are fixed as well as their possible range of values. The optimization
problem is formulated in terms of variables and objective functions, as presented
below.
Variables
The variables are categorized into four groups: shape, structure, envelope configura-
tion, and overhang. The shape being optimized is the building footprint or typical
floor shape that can be defined as a simple n-sided polygon with no intersection of
non-consecutive edges. The simple polygon is adopted because most energy simula-
tion programs model walls as line segments, and a curve can be approximated with
line segments. The n edges of a polygon are defined with a length-bearing representa-
tion [33] that includes the edge length (m) and the edge bearing (degree). The optimi-
zation is carried out here with n=5 for a pentagon.
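A hedged Python sketch of one reading of the length-bearing representation (the bearing convention, the illustrative values, and the omission of the closure and non-intersection checks a real implementation needs are all assumptions):

```python
# Hedged sketch: vertices of the floor polygon from edge lengths (m) and
# bearings (degrees, assumed measured clockwise from north).
import math

def polygon_vertices(lengths, bearings, start=(0.0, 0.0)):
    x, y = start
    vertices = [start]
    for length, bearing in zip(lengths, bearings):
        rad = math.radians(bearing)
        x += length * math.sin(rad)   # east component
        y += length * math.cos(rad)   # north component
        vertices.append((x, y))
    return vertices

pentagon = polygon_vertices([20, 25, 22, 25, 20], [90, 10, 280, 200, 170])  # n = 5
```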
A variable for the structure defines different available alternatives for the building
structural system (e.g., steel frame vs. concrete frame). Its purpose is to ensure the
compatibility between walls, roofs, floors and overhangs [31].
The following variables for the building envelope are considered: window types;
window ratio for each façade; wall types (e.g., concrete block wall vs. steel stud wall),
roof types, and floor types for each considered structural system; and each layer (e.g.,
type of insulation and its thickness) of the wall types, roof types, and floor types. A
detailed description of these variables can be found in [32]. The type and layer variables
are treated as discrete variables.
An overhang is a passive solar architectural element installed to reduce the direct
solar radiation through windows in the summer. The design of the overhang for each
facade is defined by two variables: its type (e.g., aluminum overhang or no overhang);
and its depth (i.e., the distance in meters between the wall and the outer edge of the
overhang).
Objective functions
Life cycle analysis is employed to evaluate design alternatives against both economic
and environmental criteria. Thus, the two objective functions to be minimized are life
cycle cost (LCC) and life cycle environmental impact (LCEI). The life cycle cost
includes the initial construction costs of the building envelope including exterior
walls, windows, roof, and floor; and the life cycle operating cost including both de-
mand and energy consumption costs. A period of 40 years is considered in the life
cycle analysis. The energy consumption is obtained by coupling the optimization
program with a building energy simulation program.
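A hedged Python sketch of the two objective functions; the cost breakdown follows the description above, while the component costs, the billing model and the energy-simulation interface are placeholders (assumptions):

```python
# Hedged sketch (placeholders marked): the study couples the optimizer to a
# building energy simulation program, represented here by simulate_energy().
YEARS = 40  # life-cycle period considered in the study

def life_cycle_cost(design, simulate_energy, energy_price, demand_charge):
    """LCC = initial envelope construction cost + life-cycle demand and energy costs."""
    initial = sum(design["envelope_costs"])        # exterior walls, windows, roof, floor
    annual_energy = simulate_energy(design)        # e.g. kWh/year (placeholder interface)
    return initial + YEARS * (annual_energy * energy_price
                              + design["peak_demand_kW"] * demand_charge)

def life_cycle_environmental_impact(design, simulate_energy, exergy_per_kWh):
    """LCEI = expanded cumulative exergy consumption over the life cycle (MJ)."""
    embodied = design["embodied_exergy_MJ"]        # materials, construction, transport
    return embodied + YEARS * simulate_energy(design) * exergy_per_kWh
```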
The life cycle environmental impacts are evaluated in terms of expanded cumula-
tive exergy consumption. Integrating various environmental impact categories with
different units and magnitudes into a unique objective function is not a trivial task.
Some studies overcome this problem by considering energy consumption only, while
others use weights or monetary values to aggregate different impacts into a normal-
ized value [34]. The drawbacks are that there are no generally agreed weights or
reference situation. A novel approach is to use the concept of exergy to evaluate the
environmental impacts.
Exergy is “the maximum theoretical work that can be extracted from a combined
system consisting of the system under study and the environment as the system passes
from a given state to equilibrium with the environment---that is, passes to the dead
state at which the combined system possesses energy but no exergy” [35]. Unlike
energy, exergy is always destroyed because of the irreversible nature of the process.
Therefore, the evaluation of exergy depends on both the state of a system under study
and the conditions of the reference environment.
Exergy is adopted here as a unifying indicator to evaluate life cycle environmental
impacts. Cumulative exergy consumption, proposed by Szargut et al. [36], sums the
exergy of all natural resources consumed in all the steps of a production process.
Unlike considering only energy consumption, it also takes into account the chemical
exergy of the nonfuel raw materials extracted from the environment. Therefore, cu-
mulative exergy consumption is a measure of natural resource depletion. This concept
of cumulative exergy consumption is further expanded to include abatement exergy as
a measure of waste emissions. Abatement exergy evaluates the required exergy to
remove or isolate the emissions from the environment. As indicated by Cornelissen
[37], it is feasible to determine an average abatement exergy for each emission based
on current available technologies. Thus, the resulting expanded cumulative exergy
consumption represents a single objective function that considers both resource inputs
and waste emissions to the environment, across all life cycle phases. Some of its ad-
vantages are that it avoids the subjectivity of weight setting in the evaluation of environmental
impacts and that it combines fuel and nonfuel materials to characterize
the resource depletion [32].
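A hedged sketch of this bookkeeping (the abatement values and quantities below are illustrative placeholders, not data from [36] or [37]):

```python
# Hedged sketch: expanded cumulative exergy = exergy of all natural resources
# consumed plus abatement exergy of all emissions; every number is a placeholder.
def expanded_cumulative_exergy(resource_exergy_MJ, emissions_kg, abatement_MJ_per_kg):
    resources = sum(resource_exergy_MJ.values())
    abatement = sum(mass * abatement_MJ_per_kg[gas] for gas, mass in emissions_kg.items())
    return resources + abatement

lcei_MJ = expanded_cumulative_exergy(
    resource_exergy_MJ={"fuels": 1.0e6, "nonfuel_minerals": 4.0e4},
    emissions_kg={"CO2": 5.0e4, "CH4": 100.0, "N2O": 5.0, "SOx": 60.0, "NOx": 90.0},
    abatement_MJ_per_kg={"CO2": 6.0, "CH4": 10.0, "N2O": 20.0, "SOx": 60.0, "NOx": 15.0},
)
```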
In this study, the scope of the life-cycle analysis is limited to natural resource ex-
traction, building material production, on-site construction, operation, and transporta-
tion associated with the above phases. Only global and long-lasting impact categories
are considered and they include natural resources consumption, global warming, and
acidification. The emissions considered are restricted to three major greenhouse gases
(CO2, CH4, N2O) and two major acidic gases (SOx and NOx).
Results
The results of the optimization are shown in Figure 5 in terms of both objective functions:
life-cycle environmental impact (LCEI) and life-cycle cost (LCC). The Pareto front is
constructed from the non-dominated individual solutions of the 300th and last generation.
[Fig. 5: Pareto front of the optimization; vertical axis LCEI (10^7 MJ), with Pareto zones A and B]
It can be seen that the Pareto front is divided into two isolated zones. These two
zones correspond to the two structural systems considered. In this study, the steel
frame system (corresponding to zone A) is found to have lower costs but higher envi-
ronmental impacts than the concrete frame system (zone B).
The building footprint corresponds to a typical floor with a fixed floor area of 1000
m2 and a fixed height. The building footprint, represented by a five-sided polygon,
takes different shapes along the Pareto front. The building shape corresponding to the
lowest cost and highest environmental impact (the point on the upper left of the
Pareto front in Figure 5) converges toward a regular polygon with a minimum perime-
ter. The building shape corresponding to the highest cost and lowest environmental
impact (the point on the lower right of the Pareto front in Figure 5) is elongated along
the East-West axis with a longer south-facing wall to benefit from the passive solar
Fig. 6. Building footprints for minimum cost and for minimum environmental impact
aspect of collaboration is how the interaction among the participants is managed. The
system proposed by [39] offers three modes for participants in construction to
interact: chairman, agenda, and brainstorming. When conflicts or disagreements arise,
the design environment should provide a mechanism to resolve them. A system pro-
posed by [40] focuses on a mechanism for controlling when and how negotiation
proceeds in an environment where humans and software agents collaborate. Another
approach proposed by [41] helps settle conflicts that arise from unarticulated concerns
from participants. A number of sources of conflicts were identified. Other studies
have looked at the infrastructure to support collaboration. A collaborative design
space with a flexible arrangement of networked computers and tables has been used
for teaching and experimenting with collaboration along with specifically developed
Internet support for distant collaboration [42]. An interactive information workspace
has been demonstrated to be more intuitive and efficient in sharing information and
establishing common focus in multi-disciplinary project team meetings than tradi-
tional meetings [43]. A telepresence environment has been proposed to search for
available professionals and to offer a space for interactions [44]. Finally, a CAVE (a
virtual reality visualisation space) was successfully used in the design and visualisa-
tion by stakeholders of an auditorium building in Finland [6].
Project collaboration can occur either synchronously (at the same time) or asyn-
chronously (at different times) and either co-located (in the same room) or distributed
(at a distance). IT offers still untapped potential to support and facilitate all of these
types of collaboration. This thrust focuses on the more challenging synchronous
collaboration support. Still, much research is needed to understand the process of
collaboration among designers, to formalize mechanisms to support such collabora-
tion, to implement intuitive systems, and to assess the assistance provided by these
new technologies. Synchronous collaboration may either occur in person in the same
room or with people geographically distant through the Internet.
Synchronously co-located collaborations typically occur in traditional meeting
rooms. In green building design, it is becoming common practice to have meetings at
the beginning of a project, called “design charette”, where all stakeholders get to-
gether to collaboratively plan the project and brainstorm possible solutions. Later,
meetings are necessary to handle the complexity of green building design and to dis-
cuss, interact, and negotiate in order to arrive at an integrated solution. The traditional
meeting room needs to be overhauled and augmented with appropriate IT. The
aim is to better support the collaboration of the project team particularly with respect
to large quantity of disparate design information, complex 3D issues, automatic rapid
analyses, and decision-making. Three important steps are necessary to further re-
search in these areas. The first step is to study collaboration in situ or in a meeting
room setting and develop an appropriate theoretical and practical foundation. The
second step is to develop and test tools and methodologies to support and facilitate
collaboration. And the third step is to validate the new environment in actual settings
with practitioners.
Synchronously distributed collaborations can solve, between actual meetings, problems
that could not otherwise be solved in a timely fashion. Current technologies
support videoconferencing and whiteboards to allow discussions in real time as well
as discussion boards and Web portals to keep track of asynchronous discussions and
organize common documents. Beyond current support, improved collaboration could
be supported by allowing designers to interact in real time with building sketches, 3D
virtual models, analyses, and so on. This is an even more complicated problem than
co-located collaboration.
A new laboratory is under construction at ETS that will provide the basic research
infrastructure to understand, develop, and test synchronous collaborative design
environments. The laboratory is called the Computer-Aided Collaborative Conceptual
Design Laboratory. The infrastructure envisaged consists of a collaborative environment
built with low-cost and off-the-shelf hardware and software, because the environment
developed must be affordable for consulting engineering firms, manufacturers,
and architectural firms. The laboratory will have three main components.
• A collaborative design room will be constructed in order to develop a sup-
porting environment for and experiment with live collaboration among a group
of designers. The room will be an “augmented meeting room” equipped with
the latest technology to support collaboration among a group of designers for
brainstorming, presenting designs, and decision-making. A large backlit 3D
virtual reality screen and tracking devices will let participants see and interact
with shapes and models, discuss spatial issues, and visualize results of com-
plex analysis. A smart board will be used to capture handwritten notes and
sketched ideas. With a video and audio conference system tied to an
Access Grid node, the facility will allow distant people to virtually join a
group in the room. Thus, this room will be able to support both synchronously
co-located and distributed collaboration. The room will be instrumented
(video cameras and interaction capture) so that activities and interactions can
be recorded for detailed post-analysis.
• Collaboration software usability testing dual rooms will consist of two
separate and isolated cubicles equipped with networked computers and captur-
ing devices to carry out protocol analysis of users testing an environment for
synchronously distributed design collaboration.
• A virtual design desk will work like a drafting board without paper. The
sketches will be captured and analyzed by a computer to provide quick feed-
back to the designer. This section of the laboratory will provide another testing
facility to experiment with distant collaboration over the Internet, but this time
using a virtual desk to discuss over sketches. The virtual design desk could
also be used to discuss and collaborate with 3D models or analysis results.
This component of the laboratory was developed by the LuciD Research group
at the University of Liege mentioned earlier. Figure 7 illustrates the virtual
design desk.
This research facility will provide the infrastructure to experiment with and develop a
computer-aided collaborative environment that will support live collaboration of project
teams either in person or at a distance. The proposed infrastructure, shown in
Figure 8, will be located in a permanent 100 m2 space in the main ETS building in
downtown Montreal.
Fig. 7. The virtual design desk by the LUCID group at the University of Liege
4 Conclusions
The construction industry’s output represents about 10% of the national gross domes-
tic product. The building sector accounts for 60% of it. This industry has been
plagued with plummeting productivity and a lack of innovation. Improved design
practices are urgently needed. To compound the situation, the environment has become
a priority and every nation must ensure that new buildings and building renovations
are more sustainable.
The goal of a Canada Research Chair recently established at ETS, in Montreal, is
to address this situation by developing the next generation of computer assistance for
designers of sustainable buildings. This goal will be achieved by two research thrusts,
which were presented here. The first thrust focuses on providing computer support in
the early stages of design when the most important decisions are taken. The second
thrust focuses on facilitating collaboration among designers involved in an integrated
design process. The presented research projects can improve collaboration between
architects and engineers by allowing earlier feedback and by relieving design conflicts
as they appear. The projects introduce computer-supported design synthesis and
exploration much earlier in the process while the presented laboratory will provide the
basic infrastructure to research better collaboration support. The potential benefits are
a more efficient and integrated design process resulting in better and more integrated
buildings.
The construction industry is bound to benefit from collaborative support technol-
ogy because of its unparalleled fragmentation. A new generation of IT collaborative
support would facilitate collaborative meetings as well as distance collaboration. The
approach would introduce computer-aided collaboration earlier in the process to a
level that does not exist with current technologies. The end result will be a
more productive and efficient design process with a more successful outcome in terms
of energy efficiency, user comfort, building integration, and environmental impact.
This approach has the potential to improve the way building designers and stake-
holders collaborate. These innovations are geared toward sustainable buildings, but
could also benefit the design of more traditional buildings, and even other engineering
fields.
References
1. IPCC (2001) “Climate Change 2001: The Scientific Basis”, Edited by Houghton JT et al.,
the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge,
UK.
2. Tucker, SN, Ambrose, MD, Johnston, DR, Newton, PW, Seo, S, and Jones, DG (2003)
“LCADesign”, Proceedings of the CIB W78 20th Int’l Conf. on IT for Construction, R
Amor (Editor), Waiheke Island, New Zealand, CIB Report: Publication 284, pp. 403-412.
3. NRCAN (2003) “Energy Efficiency Trends in Canada, 1990 to 2001”, Natural Resources
Canada.
4. Bourdeau, L, Huovila, P, Lanting, R, and Gilham, A (1998) “Sustainable Development
and the Future of Construction”, CIB Working Commission W82, CIB Report, Publ. 225
5. Baouendi, R, Rivard, H, and Zmeureanu, R (2001) “Use of Life-Cycle Assessment Tools
During the Design of Green Buildings”, 4th Conference of the Canadian Society for Eco-
logical Economics organized by F. Müller and T. Naylor at the McGill School of Envi-
ronment, Montreal, p. 46.
6. Kam, C, Fischer, M, Hänninen, R, Karjalainen, A and Laitinen, J (2003) "The product
model and Fourth Dimension project", ITcon, Vol. 8, pp. 137-166,
http://www.itcon.org/2003/12
24. Miles, JC, Cen, M, Taylor, M, Bouchlaghem, NM, Anumba, CJ and Shang, H (2004)
“Linking sketching and constraint checking in early conceptual design”, in Beucke K. et
al. (Editors) 10th Int. Conf. on Computing in Civil & Building Eng, 11pp.
25. Rosenman MA, Gero JS. (1999) “Evolving designs by generating useful complex gene
structures.” In: Bentley PJ, editor. Evolutionary Design by Computers, San Francisco:
Morgan Kaufmann Publishers, 1999, p. 345-64.
26. Caldas, L (2002) “Evolving three-dimensional architecture form: An application to low-
energy design.” In: Gero JS, editor. Artificial Intelligence in Design’02, Dordrecht, The
Netherlands: Kluwer Publishers, p. 351-70.
27. Chouchoulas, O (2003) “Shape evolution: An algorithmic method for conceptual architec-
tural design combining shape grammars and genetic algorithms.” Ph.D. Thesis, Depart-
ment of Architecture and Civil Engineering, University of Bath, UK.
28. Bouchlaghem N (2000) “Optimizing the design of building envelopes for thermal per-
formance.” Automation in Construction Vol. 10, No.1, pp.101-112.
29. Peippo K, Lund PD, Vartiainen E. (1999) “Multivariate optimization of design trade-offs
for solar low energy buildings.” Energy and Building, Vol. 29, No.2, pp.189-205.
30. Jedrzejuk H, Marks W (2002) “Optimization of shape and functional structure of buildings
as well as heat source utilization: Partial problems solution.” Building and Environment,
Vol. 37, No. 11, pp. 1037-43.
31. Wang, W, Rivard, H, and Zmeureanu, R (2005) “An Object-Oriented Framework for
Simulation-Based Green Building Design Optimization with Genetic Algorithms” Journal
of Advanced Engineering Informatics, Vol. 19, No. 1, pp. 5-23.
32. Wang, W, Zmeureanu, R, and Rivard, H (2005) “Applying Multi-Objective Genetic Algo-
rithms in Green Building Design Optimization”, Building and Environment, Vol. 40, No.
11, pp. 1512-1525.
33. Wang, W, Rivard, R, and Zmeureanu, R (2006) “Floor Shape Optimization for Green
Building Design” In press in the Journal of Advanced Engineering Informatics.
34. Finnveden, G (1994) “Methods for describing and characterizing resource depletion in the
context of life cycle assessment.” Technical Report. Swedish Environmental Research In-
stitute, Stockholm, Sweden.
35. Moran, MJ (1982) “Availability analysis: a guide to efficient energy use.” Englewood
Cliffs, NJ: Prentice-Hall.
36. Szargut, J, Morris, DR, Steward, FR (1988) “Exergy analysis of thermal, chemical, and
metallurgical process.” New York: Hemisphere Publishing.
37. Cornelissen, RL (1997) “Thermodynamics and sustainable development---the use of ex-
ergy analysis and the reduction of irreversibility.” Ph.D. Thesis. Laboratory of Thermal
Engineering, University of Twente, Enschede, The Netherlands.
38. Cera, CD, Regli, WC, Braude, I, Shapirstein, Y, Foster, CV (2002) “A Collaborative 3D
Environment for Authoring Design Semantics”, IEEE Computer Graphics and Applica-
tions, May, pp. 43-55.
39. Pena-Mora, F, Dwivedi, GH (2002) “Multiple Device Collaborative and Real Time
Analysis System for Project Management in Civil Engineering”, ASCE, Journal of Com-
puting in Civil Engineering, Vol. 16, No. 1, pp. 23-38.
40. Cooper, S, Taleb-Bendiab, A (1998) “CONCENSUS: Multi-Party Negotiation Support for
Conflict Resolution in Concurrent Engineering Design” Journal of Intelligent Manufactur-
ing, Vol. 9, No. 2, pp. 155-159.
41. Adelson, B (1999) “Developing Strategic Alliances: A Framework for Collaborative Ne-
gotiation in Design” Research in Engineering Design, Springer Publ., Vol. 11, No. 3, pp.
133-144.
42. Fruchter R (1999) “AEC Teamwork: A Collaborative Design and Learning Space”, Jour-
nal of Computing in Civil Engineering, ASCE, Vol. 13, No. 4, pp. 261-269.
43. Liston K, Fischer M and Winograd T (2001) “Focused Sharing of Information for Multid-
isciplinary Decision Making by Project Teams”, ITcon Vol. 6, pg. 69-82,
http://www.itcon.org/2001/6
44. Anumba CJ and AK Duke (2000) “Telepresence in Concurrent Lifecycle Design and Con-
struction”, Artificial Intelligence in Engineering, Elsevier, Vol. 14, pp. 221-232.
Interoperability in Building Construction Using
Exchange Standards
Abstract. Standard product and/or process models are a key enabling technol-
ogy for the AEC industry to realize many of the benefits of more advanced
computing approaches. The steel building industry has standardized on CIS/2.
More broadly, AEC has been striving to move the IFCs into practice. STEP, the international
product data standard ISO 10303, serves as the basic format for both
CIS/2 and the IFCs. An overview of both formats accompanies example ex-
change files. An example one-way translation from CIS/2 to IFC2x3 illustrates
some of the difficulties that must be overcome if the sought for harmonization
of these standards is to be achieved.
1 Introduction
For decades, those involved in Architecture/Engineering/Construction (AEC) comput-
ing have bemoaned the lack of interoperability, classically phrased as the “islands of
automation” problem. Software products used commercially typically address a part
of the constructed facilities product or process, but there is no provision for system-
atic interaction and integration of the isolated individual implementations. This lack
of a “common language” has proved a persistent barrier to realizing in practice the
possible benefits of more advanced computing approaches.
The need for standard product and/or process models has been well known for
many years. Despite many research demonstrations of feasibility and several major
standardization efforts, progress was markedly slow during the 1980s and 1990s. In contrast, the speed of transfer from theory to practice has picked up rapidly since 2000, particularly in the North American steel building industry. Interoperability and
the exchange of product and/or process models between players and phases is a key
enabling technology necessary to move more advanced computing approaches into
practice. An understanding of the knowledge representation approaches used in exist-
ing standards is informative about what can be implemented now. Description of the
development trajectory of the standards gives insight as to what may be implemented
in the near future.
2.1 STEP
Development of current AEC EDI standards began with STEP, an open set of stan-
dards for data exchange and sharing used to support engineering coordination. International adoption of the standard began in 1994 through the International Organization for Standardization as ISO 10303. The standard is formally titled ‘Industrial Automation Systems and Integration: Product Data Representation and Exchange’ (Eastman
1999). The standard consists of a number of Parts, Resources, and Application Proto-
cols (APs). APs are a set of exchange standards governed by a product model in the
EXPRESS language. Examples of APs include: AP230 “Building Structural Frame:
Steelwork” and AP228 “Heating, Ventilation and Air Conditioning” protocol. Parts
can be considered specifications for STEP. Part 21 governs the format of the STEP
File Structure. A STEP data-exchange file is divided into two sections: Header and
Data. The Header contains exchange structure data, such as file conformance and file
name. The Data section contains the information to be transferred, including physical project data. The project data, such as member types, attributes, and locations, is represented using EXPRESS. Part 11 specifies the EXPRESS modeling language (Crowley and
Watson 1997).
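As an editorial illustration of this two-section layout (not part of either standard), the following Python sketch splits a Part 21 exchange file into its Header and Data sections and counts the entity types found in the Data section; the file name and the naive regular-expression parsing are assumptions made only for this sketch.

import re

def split_step_file(path):
    # Naive split of a STEP Part 21 file into its HEADER and DATA sections.
    text = open(path, encoding="utf-8", errors="ignore").read()
    header = re.search(r"HEADER;(.*?)ENDSEC;", text, re.S)
    data = re.search(r"DATA;(.*?)ENDSEC;", text, re.S)
    return (header.group(1) if header else "", data.group(1) if data else "")

def entity_counts(data_section):
    # Count occurrences of each entity type, e.g. CARTESIAN_POINT or SECTION_PROFILE.
    counts = {}
    for name in re.findall(r"#\d+\s*=\s*\(?\s*([A-Z0-9_]+)\s*\(", data_section):
        counts[name] = counts.get(name, 0) + 1
    return counts

# Hypothetical usage: header, data = split_step_file("frame2.stp"); print(entity_counts(data))
# A production translator would use a real Part 21 parser rather than regular expressions.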
The CIMsteel project (Crowley 1999), also known as the EUREKA Project EU 130,
began in Europe with the collaboration of nine countries and 70 organizations. The
project objectives were to help the growth of the steel industry, reduce design and
construction times, and produce more economic steel structures. A result of the pro-
ject is the CIMsteel Integration Standards (CIS), which allows for exchange of infor-
mation throughout the steel design and construction process. In 1999, the American
Institute of Steel Construction chose its second release, CIS/2, as the interoperability
interface of choice for the AISC EDI initiative. To use CIS/2 data, software companies must develop translators that map the model from the application to that of the
common Logical Product Model. These mappings and the standards mentioned
above are used to create CIS/2 files to import and export information between pro-
grams (Eastman, Sacks, and Lee 2002). The following discussion of CIS is intended
to provide an overview while including details needed for comparisons below (Crow-
ley and Watson 2003a).
CIS defines its supply chain as information contained within the design, detailing,
scheduling, tendering, ordering, purchasing, and payment of structural steel buildings.
CIS is similar to AP230 in that it relates information about the steelwork in structural
frame buildings. However, it is a less formal version of the STEP protocol. This re-
duced the time necessary to establish the AP and made CIS more practical. CIS uses
the STEP Part 21 exchange format as its file format.
the Resource layer for various structural member profiles. Profile Property Resource defines properties for structural members, such as section weight and cross-sectional
area. Profile Resource includes schemas for various section profiles, including W-
sections, L-sections, T-sections, and hollow tube sections. Thus, properties for structural
members may be specified at different levels of the IFC hierarchy.
Likewise, properties of structural members may be specified using multiple geometric
representations. The simplest representation is a Bounding Box defining a rectangular
solid enclosing the member. Another representation is a Surface Model specified by sur-
faces or faces. Bounding Box and Surface Model are general representation types of the
Building Element. In addition to these two representations, a Beam or Column may have
Swept Solid (with or without Clipping), Boundary Representation (Brep), and Mapped
Representation. Swept Solid uses an extruded solid approach. Clipping may be combined
with a Swept Solid to cut off pieces sliced by specified planes. Brep uses formal bound-
ary facets with or without voids. Mapped Representation allows an existing representa-
tion to be reused for more than one element.
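To make the relation between the Swept Solid and Bounding Box representations concrete, the sketch below (an editorial illustration, not part of the IFC specification) computes the axis-aligned bounding box of a profile extruded along its local z-axis; the sample profile corners are taken from the W10x33 outline in Appendix B.

import numpy as np

def swept_solid_bounding_box(profile_points_2d, depth):
    # Axis-aligned bounding box, in the member's local axes, of a 2D profile
    # extruded by `depth` along the local z-axis: returns corner point and x/y/z sizes.
    pts = np.asarray(profile_points_2d, dtype=float)
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    return (xmin, ymin, 0.0), (xmax - xmin, ymax - ymin, depth)

# Outer corners (bf/2, d/2) of the W10x33 profile from Appendix B, extruded 144 in.
profile = [(-3.98, -4.865), (3.98, -4.865), (3.98, 4.865), (-3.98, 4.865)]
print(swept_solid_bounding_box(profile, 144.0))  # ((-3.98, -4.865, 0.0), (7.96, 9.73, 144.0))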
The data structures of CIS/2 and IFCs are quite similar. Analyzing the main elements
that define both sets of standards gives a better understanding of these structures.
3.1.2 Scope
CIS/2 and IFC2x have different scopes. CIS/2 only defines information related to
structural steel, while the plan for the IFCs is to define a Building Information Model (BIM) that relates all facets of the AEC/FM industry. Currently, the IFCs are behind CIS in terms of implementation of structural steel elements because IFC2x2 was the first release to address this issue aggressively. In addition to differences in structural definitions, CIS/2 also provides specifications for database management, partial exchanges through Intelligent Translators, and more rigorous testing standards, the conformance classes (CCs).
Table 1 provides an overview of the differences in the overall scope of the standards.
section. It then uses these points to define other properties of the section, for example, area. When defining these properties the IFCs omit the smaller features of a section. CIS/2 also allows for creation of non-standard sections. CIS/2 has specifications for even the smallest properties of a section, such as flange edge radius. The specifications also base the geometry of a section on the geometric centroid of the section (Crowley and Watson 2003c). For these generic sections in CIS/2 a bounding box defines the shape, just as in IFC2x2. The box is a rectangle that defines the extreme dimensions of a shape.
3.1.4 Connections
The differences in connection entities are another issue. As mentioned, CIS defines even the smallest part of a joint system. The IFCs define connections in a very simple way, by defining the location along an element at which another element intersects it. This enables structural analysis information to be exchanged, but because IFC2x lacks the schemas for bolts and welds, detailing information cannot be transferred.
Table 2 diagrams the differences in connection definitions. A simple way to describe
the difference between the IFCs and CIS/2 is to describe them in terms of analysis,
design, and manufacturing models. CIS/2 incorporates all these models into its steel
specifications. The IFCs incorporate only analysis and design models, which define
elements, nodes, boundary conditions, and parts but do not define part features or
connection assemblies.
Two identical frames were constructed to compare the exchange file structures of
CIS/2 and IFC2x. The frames consisted of two 12-foot tall W10x33 columns located
20 feet apart with a W12x40 beam connected to the tops of columns. A simple frame
was used to evaluate the differences in the exchange files.
One model was produced in RAM Structural System (Appendix A) and the other in Architectural Desktop 2004 (Appendix B). RAM uses the CIS/2 specifications to exchange project information, while a plug-in can be installed to create IFC2x2 files in Architectural Desktop. Since IFC2x3 was released in February 2006, a third file was generated by translating the CIS/2 file in Appendix A to an IFC2x3 file (not included due to space) using the CIS/2 to VRML and IFC Translator, based on research at NIST and downloadable from http://ciks.cbt.nist.gov/cgi-bin/ctv/ctv_request.cgi. This demonstrated that the differences between IFC2x2 and IFC2x3 were not important for this particular file comparison. It also illustrates some of the results of translation between standards.
As discussed, both sets of standards define their exchange files using STEP Part 21.
The similarities between these files are easy to see once the files are broken down into
sections, as is done with annotations in the appendices. For example, both files begin
with the Header section and then move to the Data section. Also, the first line of each
file is ‘ISO-10303-21’ and the last line is ‘END-ISO-10303-21’, referencing the stan-
dard from which they get their structure. Table 3 compares the two Part 21 files.
Within the Header section both files contain File_Description, File_Name, and
File_Schema. File_Description describes the file’s level of conformance to the stan-
dards. In CIS/2 the file lists the CCs to which the file conforms. For example, the file
references CC003, a generic CC for Cartesian_point, and CC305, a specific CC for
Material_Isotropic. File_Name details file name, time, author, organization, preproc-
essor version, originating system, and authorization. In the case of CIS/2, the proces-
sor version is ‘ST-Developer V10’ which is used to keep the file in line with STEP.
File_Schema describes the standard from which the frame gets its information. For
example, the File_Schema for the IFC file is ‘Ifc2x_Final’.
The rest of the CIS/2 file is organized in the following way. The Header ends with
the file schema, Structural_Frame_Schema. The remainder of the file is in the Data section, beginning with global geometry representations and the definition of units. The next portion includes general geometry, defining connectivity of element nodes by referencing schemas defined later in the file. Element schemas follow, defining the
geometry, section type, and material of each element. Next, the material properties and
then the node points for each member are defined. The next set of schema contains the
references to the beam and column sections. These include Item_Reference_Assigned,
Section_Profile, Item_Reference_Standard, and Item_Ref_Source_Standard.
Item_Ref_Source_Standard references the AISC EDI Standard Nomenclature, the
standard for naming sections. Forces and specific units are defined. Finally, the spe-
cific assembly geometry for each member is called out. For example, the Cartesian
points of each node are identified to give the unit length and orientation. The angles of
each member are also included in this section.
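As a rough illustration of this layout (an editorial sketch; the regular expressions and file name are assumptions, not part of CIS/2), the following Python fragment recovers the node coordinates and standard section names from the Data section of a CIS/2 file such as the one in Appendix A.

import re

def cis2_nodes_and_sections(step_text):
    # Naive extraction of CARTESIAN_POINT coordinates and SECTION_PROFILE names.
    points = {
        m.group(1): tuple(float(v) for v in m.group(2).split(","))
        for m in re.finditer(r"#(\d+)=CARTESIAN_POINT\('[^']*',\(([^)]*)\)\);", step_text)
    }
    sections = re.findall(r"SECTION_PROFILE\(\d+,'([^']+)'", step_text)
    return points, sections

# For the Appendix A file this would recover points such as (240., 0., 144.)
# and the section names W12X40 and W10X33 (line wrapping is ignored in this sketch).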
The general layout of the IFC file is almost identical. The file begins with the
Header and specifies general file information. The Data section then begins. The first
portion of this section is the definition of units and conversion. The IFC file requires
a conversion from SI units to English units when applicable. The next portion of the
file is global axis geometry and file information, including the introduction of IfcPro-
ject. Much like the CIS/2 file, the specific member information is the next portion of
the file. However, this portion includes additional section information unique to the
IFC file. The IFCs do not include standard references to specific ‘W’ shapes. There-
fore, the geometry of a section must be defined by using Cartesian points for each
edge point of the section. An edge’s distance from the centroid of the shape defines
its point. For example, the point (3.98, 4.865), or (bf/2, d/2), corresponds to the cor-
ner of a W10x33 section, highlighted in Appendix B. The points are located in the
local axes of each element and then referenced to a global point later in the file. (In contrast, CIS/2 defines only global geometry.) IfcPolyline then uses these points to construct the
shape of the section by connecting each point with a bounding line. The cross-
sectional area can then be defined using this information. The element lengths and
directions are also defined in this portion of the file. The final part of the IFC file
The third file was a translation of the CIS/2 file to IFC2x3 (Lipman 2006). As
pointed out by the translator originator, Dr. Robert Lipman of NIST, the generated
IFC2x3 file is just one of many different ways to write out the information contained
in the CIS/2 file in IFC. This is a natural result of the provision in the IFCs of many
ways to represent information such as units, coordinate systems, and member
shapes. This means a translation of a single item from CIS/2 is inherently ambiguous
and the choice of which possible one-to-many mapping is correct is unclear.
Of interest in the third file was the shuffling of the sections of the Data portion into a significantly different order, as well as the scattering of information associated with a sin-
gle element. Another result was the addition of multiple transformations of units and
coordinates. Since the IFCs have many ways to represent unit and coordinate informa-
tion, any translation can introduce nested transformations that collapse to match the
input. If a file is translated multiple times, these arbitrary transformations will grow.
This is like trying to deal with many manipulations of mathematical equations where
no canonical form is defined. Simply determining equivalence becomes a daunting
task. Another example of the possible results of the multiplicity of the IFCs represen-
tation options is the representation of the standard structural shapes. In this case, the
translated file maps the CIS/2 wide flange information to a Swept Solid. This ex-
truded representation is most natural; another possible translation is a Brep. These
types of semantically invalid mappings are common in translation of natural lan-
guage. Development of the standards must include agreement on translation choice
preferences, as is done for mathematical operation precedences, and standard repre-
sentation mappings, as is done for useful clichés in programming.
IFCs would be used at the design level. The data would then be passed on to a CIS/2
file for detailing because CIS/2 provides better detailing guidelines than the IFCs.
Finally, the files would be translated back to IFC format for checking. The develop-
ment of this translator is very time consuming due to the differences between the
standards and the complexity of the language. There remain substantial issues to resolve, including bi-directional translation and round-tripping of exchange files.
A simpler approach is the mapping approach taken in the NIST CIS/2 to VRML
and IFC Translator. This permits a CIS/2 exchange file to be translated to IFC2x3 as a
one-to-many mapping while avoiding the much more problematic many-to-one map-
ping entailed in an IFC2x3 file translation to CIS/2 (Lipman 2006).
Some believe the Extensible Markup Language (XML) will replace current data
exchange file formats. In fact, the CIMsteel Integration Standards Release 2: Second
Edition – Overview states that XML is an accepted alternative to the STEP Part 21 file format (IAI 2006).
Many researchers have their own opinions on the future of interoperability, but
what remains is the need for a full scale study and actual implementation. AISC and
IAI recently began working together to further develop exchange standards for the
AEC/FM industry. The newly formed team will be mainly concerned with ‘harmo-
nizing’ CIS/2 and the IFCs, allowing structural steel to be incorporated into a building
information model (BIM) of the IFCs (“International” 2004). Integrating all portions of the industry into a BIM is the goal of the next generation of interoperability standards.
Acknowledgement
Dr. Robert Lipman of NIST provided both the translation of the CIS/2 file in Appen-
dix A to IFC2x3 using the CIS/2 to VRML and IFC Translator based on research at
NIST as well as a preprint of his 2006 paper cited in the references.
References
1. Gibson Jr., G. E. and Bell, L. C. “Electronic Data Interchange in Construction.” Journal
of Construction Engineering and Management 116. 4 (December 1990): 727-737.
2. Eastman, C. M. Building Product Models: Computer Environments Supporting Design
and Construction. Boca Raton, FL: CRC Press, 1999.
3. Crowley, A. J. and Watson, A. S. “Representing Engineering Information for Construc-
tional Steelwork.” Microcomputers in Civil Engineering 12. 1 (January 1997): 69-81.
4. Crowley, A. J. “The Evolution of Data Exchange Standards: The Legacy of CIMsteel.”
(1999). 5 Nov. 2004. http://www.cis2.org/faq/crowley1999.
5. Eastman, C. M., Sacks, R., and Lee, G. (September 2002). “Strategies for Realizing the
Benefits of 3D Integrated Modeling of Buildings for the AEC Industry.” 19th International
Association for Automation and Robotics in Construction. Washington D.C. Sept. 2002.
6. Crowley, A. J., and Watson, A. S. “CIMsteel Integration Standards Release 2: Second Edi-
tion, P265: CIS/2.1:Volume 1 – Overview”, The Steel Construction Institute, 2003a.
7. Crowley, A. J., and Watson, A. S. “CIMsteel Integration Standards Release 2: Second Edi-
tion, P268: CIS/2.1:Volume 3 – LPM/6”, The Steel Construction Institute, 2003c.
8. Crowley, A. J., and Watson, A. S. “CIMsteel Integration Standards Release 2: Second Edi-
tion, P269: CIS/2.1:Volume 4 – Conformance Requirements”, The Steel Construction In-
stitute, 2003d.
9. Crowley, A. J., and Watson, A. S. “CIMsteel Integration Standards Release 2: Second Edi-
tion, P266: CIS/2.1:Volume 2 – Implementation Guide”, The Steel Construction Institute,
2003b.
10. Liebich, T. (ed.), IFC 2x Edition 2 Model Implementation Guide Version 1.7. IAI.
http://www.iai-international.org/Model/files/20040318_Ifc2x_ModelImplGuide_V1-7.pdf
(March 2004)
11. Bazjanac, V. “Industry Foundation Classes: Bringing Software Interoperability to the
Building Industry.” The Construction Specifier 15. 6 (June 1998): 47-54.
12. IAI, IFC 2x Edition 3. http://www.iai-international.org/Model/R2x3_final/index.htm
(February 2006)
13. Liebich, T., and Wix, J. (eds.), IFC Technical Guide, Release 2x. IAI.
http://www.iai-international.org/Model/documentation/IFC_2x_Technical_Guide.pdf (October 2000)
14. Lipman, R. R., “Mapping Between the CIMsteel Integration Standards and Industry Foundation Classes Product Models for Structural Steel”, ICCCBE-XI, 2006, preprint.
15. Eastman, C.M. “Harmonization for CIS/2 and IFC”, http://www.coa.gatech.edu/~aisc/cisifc (2006)
16. “International Model for EDI.” Structure Magazine, CASE, NCSEA, and ASCE. (Sep-
tember 2004): 33.
Appendix A
CIS/2 File
ISO-10303-21;
HEADER;
/* Generated by software containing ST-Developer
* from STEP Tools, Inc. (www.steptools.com) */
/* Conformance */
FILE_DESCRIPTION(
/* description */ ('','CC003, CC005, CC014, CC019, CC024, CC026,
CC029, CC030, CC031, CC032, CC034, CC035, CC110, (CC166, +CC167),
CC170, CC305, CC306, (CC177, +CC307), CC310, CC325, CC327, CC331'),
/* implementation_level */ '2;1');
FILE_NAME(/* name */ 'frame2',
/* time_stamp */ '2004-11-12T11:02:32-06:00',
#27=(ELEMENT('Flr1Col1',$,#66,1)ELEMENT_CURVE($)
ELEMENT_CURVE_SIMPLE(#45,#78)ELEMENT_WITH_MATERIAL(#29));
#28=(ELEMENT('Flr1Col3',$,#66,1)ELEMENT_CURVE($)
ELEMENT_CURVE_SIMPLE(#45,#78)ELEMENT_WITH_MATERIAL(#29));
#29=MATERIAL_ISOTROPIC(0,'steel',$,#30);
/* Materials */
#30=MATERIAL_REPRESENTATION('Fy 50.00',(#32),#53);
#31=MATERIAL_REPRESENTATION('material representation for all',
(#32),#53);
#32=MATERIAL_STRENGTH('yield strength',50.);
#33=NODE('np0',#72,$,#66);
#34=NODE('np1',#73,$,#66);
/* Assembly */
#35=NODE('np2',#74,$,#66);
#36=NODE('np3',#75,$,#66);
#37=NODE('np4',#76,$,#66);
#38=NODE('np5',#77,$,#66);
#39=ASSEMBLY_DESIGN_STRUCTURAL_MEMBER_LINEAR(0,'Flr1Bm3',$,$,$,$,
.T.,.F.,(),(),$,.COMBINED_MEMBER.,.UNDEFINED_CLASS.,.BEAM.);
#40=ASSEMBLY_DESIGN_STRUCTURAL_MEMBER_LINEAR(1,'Flr1Col1',$,$,$,$,
.T.,.F.,(),(),$,.COMBINED_MEMBER.,.UNDEFINED_CLASS.,.COLUMN.);
#41=ASSEMBLY_DESIGN_STRUCTURAL_MEMBER_LINEAR(2,'Flr1Col3',$,$,$,$,
.T.,.F.,(),(),$,.COMBINED_MEMBER.,.UNDEFINED_CLASS.,.COLUMN.);
#42=ITEM_REFERENCE_ASSIGNED(#46,#44);
#43=ITEM_REFERENCE_ASSIGNED(#47,#45);
/* Section Ref. */
#44=SECTION_PROFILE(0,'W12X40',$,$,8,.F.);
#45=SECTION_PROFILE(1,'W10X33',$,$,5,.F.);
#46=ITEM_REFERENCE_STANDARD('W12X40',#48); /* Section Reference */
#47=ITEM_REFERENCE_STANDARD('W10X33',#48);
#48=ITEM_REF_SOURCE_STANDARD('AISC','AISC EDI Standard Nomencla-
ture',2001,'1');
#49=(CONTEXT_DEPENDENT_UNIT('KIP')FORCE_UNIT()NAMED_UNIT(#89));
#50=FORCE_MEASURE_WITH_UNIT(FORCE_MEASURE(0.),#49);
#70=DIRECTION('unit y vector',(0.,1.,0.));
#71=CARTESIAN_POINT('cp1',(0.,0.,0.));
#72=CARTESIAN_POINT('cp1',(0.,0.,144.));
#73=CARTESIAN_POINT('cp2',(240.,0.,144.));
#74=CARTESIAN_POINT('cp3',(0.,0.,144.));
#75=CARTESIAN_POINT('cp4',(0.,0.,0.));
#76=CARTESIAN_POINT('cp5',(240.,0.,144.));
#77=CARTESIAN_POINT('cp6',(240.,0.,0.));
#78=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(0.),#85);
#79=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(90.),#85);
#80=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(
1.5707963267949),#85);
#81=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(180.),#85);
#82=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(
3.14159265358979),#85);
#83=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(270.),#85);
#84=PLANE_ANGLE_MEASURE_WITH_UNIT(PLANE_ANGLE_MEASURE(
2.0943951023932),#85);
/* Assembly Geometry */
#85=(CONTEXT_DEPENDENT_UNIT('DEGREE')NAMED_UNIT(#88)
PLANE_ANGLE_UNIT());
#86=LENGTH_UNIT(#87);
#87=DIMENSIONAL_EXPONENTS(1.,0.,0.,0.,0.,0.,0.);
#88=DIMENSIONAL_EXPONENTS(0.,0.,0.,0.,0.,0.,0.);
#89=DIMENSIONAL_EXPONENTS(1.,1.,-2.,0.,0.,0.,0.);
#90=DIMENSIONAL_EXPONENTS(-1.,1.,-2.,0.,0.,0.,0.);
#91=DIMENSIONAL_EXPONENTS(0.,1.,0.,0.,0.,0.,0.);
ENDSEC;
END-ISO-10303-21;
Appendix B
IFC2x2 File
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('IFC 2x'),'2;1'); /* File Conformance */
/* File Information */
FILE_NAME('C:\\Documents and Settings\\student\\Desktop\\paul''s
frame\\frame5.dwg','2004-11-12T15:15:38',(''),
('University of Kansas'),'IFC-Utility 2x for ADT V. 2, 0, 2, 16
(www.inopso.com)
- IFC Toolbox Version 2.x (00/11/07)','Autodesk Architectural
Desktop','');
FILE_SCHEMA(('IFC2X_FINAL'));
ENDSEC;
DATA;
#1=IFCSIUNIT(*,.TIMEUNIT.,$,.SECOND.);
#2=IFCSIUNIT(*,.MASSUNIT.,$,.GRAM.);
#3=IFCDIMENSIONALEXPONENTS(1,0,0,0,0,0,0);
#4=IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#5=IFCMEASUREWITHUNIT(IFCRATIOMEASURE(0.0254),#4); /* Units */
#6=IFCCONVERSIONBASEDUNIT(#3,.LENGTHUNIT.,'Inch',#5);
#7=IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#8=IFCSIUNIT(*,.VOLUMEUNIT.,$,.CUBIC_METRE.);
#9=IFCUNITASSIGNMENT((#6,#7,#8,#1,#2));
#10=IFCCARTESIANPOINT((0.,0.,0.));
/* Global Geometry and Info. */
#11=IFCDIRECTION((0.,0.,1.));
#12=IFCDIRECTION((1.,0.,0.)); /* Global Axis Definition */
#13=IFCAXIS2PLACEMENT3D(#10,#11,#12);
#14=IFCGEOMETRICREPRESENTATIONCONTEXT('TestGeometricContext',
'TestGeometry',3,0.,#13,$);
#15=IFCPERSON('','','',$,$,$,$,$);
#16=IFCORGANIZATION('','University of Kansas','',$,$);
#17=IFCPERSONANDORGANIZATION(#15,#16,$);
#18=IFCAPPLICATION(#16,'IFC-Utility 2x for ADT V. 2, 0, 2, 16
(www.inopso.com)','Autodesk Architectural Desktop','');
#19=IFCOWNERHISTORY(#17,#18,$,.ADDED.,0,$,$,1100294138);
/* 1st Column Geometry */
#20=IFCPROJECT('3KSvRQcWT9p9vDPtVRdlEm',#19,'frame5','','',$,$,
(#14),#9);
#32=IFCCARTESIANPOINT((-3.98,-4.865)); /* (bf/2, d/2) */
#33=IFCCARTESIANPOINT((3.98,-4.865));
#34=IFCCARTESIANPOINT((3.98,-4.430000000000001)); /* (bf/2, d/2-tf) */
#35=IFCCARTESIANPOINT((0.145,-4.430000000000001));
#36=IFCCARTESIANPOINT((0.145,4.430000000000001)); /* (tw/2, d/2-tf) */
#37=IFCCARTESIANPOINT((3.98,4.430000000000001));
#38=IFCCARTESIANPOINT((3.98,4.865));
#39=IFCCARTESIANPOINT((-3.98,4.865));
#40=IFCCARTESIANPOINT((-3.98,4.430000000000001));
#41=IFCCARTESIANPOINT((-0.145,4.430000000000001));
#42=IFCCARTESIANPOINT((-0.145,-4.430000000000001));
#43=IFCCARTESIANPOINT((-3.98,-4.430000000000001));
#44=IFCCARTESIANPOINT((-3.98,-4.865));
#45=IFCPOLYLINE((#32,#33,#34,#35,#36,#37,#38,#39,#40,#41,#42,#43,
#44));
#46=IFCARBITRARYCLOSEDPROFILEDEF(.AREA.,$,#45);
#47=IFCCARTESIANPOINT((0.,0.,0.));
#48=IFCDIRECTION((1.,0.,0.));
#49=IFCDIRECTION((0.,1.,0.));
#50=IFCAXIS2PLACEMENT3D(#47,#48,#49); /* Local Axis Definition */
#51=IFCDIRECTION((0.,0.,1.));
#52=IFCEXTRUDEDAREASOLID(#46,#50,#51,144.);
#54=IFCSHAPEREPRESENTATION(#14,'Body','SweptSolid',(#52));
#31=IFCLOCALPLACEMENT(#25,#30);
#30=IFCAXIS2PLACEMENT3D(#27,#28,#29);
#27=IFCCARTESIANPOINT((0.,0.,0.));
#82=IFCCARTESIANPOINT((-0.145,-4.430000000000001));
#83=IFCCARTESIANPOINT((-3.98,-4.430000000000001));
#84=IFCCARTESIANPOINT((-3.98,-4.865));
#85=IFCPOLYLINE((#72,#73,#74,#75,#76,#77,#78,#79,#80,#81,#82,#83,
#84));
#86=IFCARBITRARYCLOSEDPROFILEDEF(.AREA.,$,#85);
#87=IFCCARTESIANPOINT((0.,0.,0.));
#88=IFCDIRECTION((1.,0.,0.));
#89=IFCDIRECTION((0.,1.,0.));
#90=IFCAXIS2PLACEMENT3D(#87,#88,#89);
#91=IFCDIRECTION((0.,0.,1.));
#92=IFCEXTRUDEDAREASOLID(#86,#90,#91,144.);
#94=IFCSHAPEREPRESENTATION(#14,'Body','SweptSolid',(#92));
/* Beam Geometry */
#124=IFCCARTESIANPOINT((-4.0025,-5.97));
#125=IFCPOLYLINE((#112,#113,#114,#115,#116,#117,#118,#119,#120,
#121,#122,#123,#124));
#126=IFCARBITRARYCLOSEDPROFILEDEF(.AREA.,$,#125);
#127=IFCCARTESIANPOINT((0.,0.,-5.97));
#128=IFCDIRECTION((1.,0.,0.));
#129=IFCDIRECTION((0.,1.,0.));
#130=IFCAXIS2PLACEMENT3D(#127,#128,#129);
#131=IFCDIRECTION((1.729958125484675E-033,0.,1.));
#132=IFCEXTRUDEDAREASOLID(#126,#130,#131,228.);
#134=IFCSHAPEREPRESENTATION(#14,'Body','SweptSolid',(#132));
#111=IFCLOCALPLACEMENT(#25,#110);
#110=IFCAXIS2PLACEMENT3D(#107,#108,#109);
#107=IFCCARTESIANPOINT((6.,-1.776356839400251E-015,144.));
#108=IFCDIRECTION((0.,0.,1.));
#109=IFCDIRECTION((1.,1.558207753859869E-017,0.));
#25=IFCLOCALPLACEMENT($,#24);
#24=IFCAXIS2PLACEMENT3D(#21,#22,#23);
#21=IFCCARTESIANPOINT((0.,0.,0.));
#22=IFCDIRECTION((0.,0.,1.));
#23=IFCDIRECTION((1.,0.,0.));
#137=IFCCARTESIANPOINT((0.,-4.0025,-11.94));
#138=IFCBOUNDINGBOX(#137,228.,8.005000000000001,11.94);
#139=IFCSHAPEREPRESENTATION(#14,'','BoundingBox',(#138));
#135=IFCPRODUCTDEFINITIONSHAPE($,$,(#134,#139));
#136=IFCBEAM('3YzDSDki9E4eSGD1VTj8Ny',#19,'','','',#111,#135,$);
#140=IFCPROPERTYSINGLEVALUE('Layername',$,IFCLABEL('S-Beam'),$);
#141=IFCPROPERTYSINGLEVALUE('Red',$,IFCINTEGER(204),$);
#142=IFCPROPERTYSINGLEVALUE('Green',$,IFCINTEGER(0),$);
#143=IFCPROPERTYSINGLEVALUE('Blue',$,IFCINTEGER(0),$);
#144=IFCCOMPLEXPROPERTY('Color',$,'Color',(#141,#142,#143));
#145=IFCPROPERTYSET('0qQ8E9gjv8dh94v2MLCO$K',#19,
'PSet_Draughting',$,(#140,#144));
#146=IFCRELDEFINESBYPROPERTIES('3kVlLf6mn1Lv2_95wWeM4I',#19,$,$,
(#136),#145);
#147=IFCRELCONTAINEDINSPATIALSTRUCTURE('37ElBQvVLCqBmnelJ1Huak',
#19,$,$,(#56,#96,#136),#26);
#148=IFCPROPERTYSINGLEVALUE('Layername',$,IFCLABEL('IfcBuilding'),
$);
#149=IFCPROPERTYSINGLEVALUE('Red',$,IFCINTEGER(255),$);
#150=IFCPROPERTYSINGLEVALUE('Green',$,IFCINTEGER(255),$);
#151=IFCPROPERTYSINGLEVALUE('Blue',$,IFCINTEGER(255),$);
A Conceptual Model of Web Service-Based Construction Information System
1 Introduction
The Architecture, Engineering, and Construction (AEC) industry is fragmented geo-
graphically and functionally [1], [2]. Furthermore, given the characteristics of the construction industry, such as the uniqueness of projects, the difficulty of data collection on site and varying processes [3], a construction information system that facilitates communication among participants is considered one of the key factors for the success of construction projects. Managing knowledge is also important to the construction industry because of the unique characteristics of its projects: temporary teams and a heavy reliance on experience [4]. To realize these potential benefits, construction enterprises have been driven to integrate their information systems with the aim of making greater use of construction information and knowledge. For example, Enterprise Resource Planning (ERP) and Project Management Information System (PMIS) solutions have been introduced to provide various applications on an integrated platform. Such information systems are developed to manage system integration and to share different kinds of functions.
However, most applications rarely consider interoperability with other applica-
tions. Construction companies have their own legacy systems and point-to-point
integrations keep increasing the volume of IT investment [5]. Accordingly, compared to other industries, construction information systems bring a lower Return on Investment (ROI) and a higher total cost of ownership. Moreover, users are not familiar with information systems as a result of the complexity, inefficiency, and inconvenience of the systems. In particular, financially less capable construction companies do not have their own information systems, although they have the same needs for IT systems.
To address these problems of current construction information systems, a Web Services technology is suggested in this paper. To deal with diverse project management functions and to facilitate the use of construction information, three main functional features of the proposed Web Service-based construction information system are discussed. A conceptual model of the construction information system is introduced to enhance interoperability and internetworking for AEC industry participants. Using Web Services standards, the proposed model is designed to intelligently search for construction information and to transfer the acquired information in a timely manner according to workflow and users. The research results provide a platform for a Web Service-based construction information system by exploring a new direction for the integration, transfer, and search of construction information.
2 IT Paradox
Construction enterprises firmly believe that their investments in construction information systems give them substantial benefits [6]. However, most enterprises are dissatisfied with the performance of their systems. Their dissatisfaction is attributed both to ineffective operation and to problems with the systems themselves. Ineffective operation is mostly caused by a lack of employee education and by slow adaptation arising from employees' psychological resistance to new work systems. The other problem is due to the limitations of IT technology and the difficulty of describing the construction business process, which is the focus of this research.
In an effort to meet the need for improved productivity, IT investment is directed mainly at hardware and software upgrades, collaboration information systems, on-line Decision-making Support Systems (DSS) and integration efforts. However, each investment has rarely reached its goal. While investment in hardware and software upgrades increases data processing capacity and productivity, such upgrading also increases communication incompatibility with other work divisions or companies. In consequence, incompatibility reduces productivity by requiring additional effort to convert data and to integrate heterogeneous systems. As business size grows, collaboration information systems are required because they can improve productivity through automated data transfer. However, the problems inherited from data transfer and integration cause low usage of the collaboration systems. Moreover, current on-line DSS are not widely used due to insufficient knowledge, information and links. Long search times are another reason for the low level of usage. As a result, IT investment in construction companies has not been associated with the expected productivity improvement. The main reason for the IT paradox is unreasonable system integration with no consideration of information usefulness. One way to overcome the IT paradox is to build an information system with more interoperable components.
3 Web Services
The Web Services approach can be an alternative way to deal with the IT paradox. Web Services provide a much more efficient way to manage the various current construction information systems and support more flexible collaboration, both among a company's own units and with partners. Therefore, construction information systems that are linked through Web Services standards would enable construction participants to use a higher level of integrated information.
3.1 Definitions
Abrams [7] defines Web Services as software components that perform distributed
computing using standard internet protocols. The W3C [8] defines a Web Service as a software system designed to support interoperable machine-to-machine interaction over a network. To summarize, Web Services are a collection of web and object-oriented technologies for linking Web-based applications running on different hardware, software, database, or network platforms.
Fig. 1. Web Services architecture: Processes (Discovery, Aggregation, Choreography); Descriptions (Web Services Descriptions, WSDL); Messages (SOAP and SOAP extensions for Reliability, Correlation, Transactions); Base Technologies (XML, DTD, Schema); Communications (HTTP, SMTP, FTP, JMS, etc.); with Security and Management as cross-cutting layers.
A Web Services model uses the Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI), and the Simple Object Access Protocol (SOAP) with the eXtensible Markup Language (XML), which allow information to be exchanged easily among different applications [9]. These tools provide the common languages for Web Services, enabling applications to connect freely to other applications and to read electronic messages from them. As shown in Fig. 1, the Web Services architecture involves many layered and interrelated technologies. A WSDL description is retrieved from the UDDI directory. WSDL descriptions allow one application to make direct use of the services of another. The services are invoked over the World Wide Web using the SOAP/XML protocol. Where two companies know about each other's Web Services, they can link their SOAP/XML protocol interfaces.
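A minimal sketch of this message flow is given below; the endpoint URL, SOAPAction and operation name are invented for illustration and do not come from the paper. It uses the third-party Python requests package to post a SOAP/XML envelope over HTTP.

import requests

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetProjectStatus xmlns="http://example.com/construction">
      <ProjectId>P-001</ProjectId>
    </GetProjectStatus>
  </soap:Body>
</soap:Envelope>"""

# Hypothetical endpoint and action; in practice both would be taken from a WSDL description.
response = requests.post(
    "http://example.com/services/ProjectInfo",
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/construction/GetProjectStatus"},
    timeout=10,
)
print(response.status_code, response.text[:200])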
3.2 Benefits
Web Services can be used to enable existing systems to interoperate without replacement and to evolve alongside other applications [10]. The potential benefits include [11]:
• Interoperability in a heterogeneous environment: the key benefit of the Web Services model is that it permits different systems to operate on a variety of software platforms. Construction enterprises add systems that require different platforms and frequently do not communicate with each other. Later, due to a consolidation or the addition of another application, it becomes necessary to tie them together.
• Integration with existing systems: building on interoperability with other systems, Web Services let enterprise application developers reuse and commoditize the enormous amount of data stored in existing information systems.
• Support for more client types: since a main objective of Web Services is improving interoperability, exposing existing applications as Web Services increases their reach to different client types.
The development of information systems has been complicated by the requirement that a particular application support a specific type of project or business process [12]. Web Services, because they promote interoperability, are a solution for the development of construction information systems.
Construction information is scattered, unformalized, and transient. Thus, it is difficult to search for information without effort spent adjusting its formats. Advanced search for construction information through Web Services increases its usefulness. Web Service-based search makes it easy to create a unified search index across multiple data sources, programming platforms and software applications. Because it allows local customization of search results and search options no matter where content is stored, it is valuable for departmental or user-specific adjustments of search results.
Additionally, advanced search helps support the knowledge management system. Construction knowledge, which consists of explicit and tacit knowledge, can be easily obtained and transferred from a current project management system. Facilitating the sharing and searching of knowledge makes managing construction information more useful and effective. The search results are finally stored in knowledge management systems for reuse and categorization. As information and knowledge accumulate and are categorized in the knowledge system, advanced search promotes knowledge sharing by providing open communities and dashboards for participants.
The value of reliable information is increased by instantly detecting the main events and removing delays in information. Real-time information transfer is a fundamental element, considering that well-timed construction information is key to the success of construction projects. Connected with the project process and a knowledge map, a real-time information transfer system automatically brings fitting references and historical data into an integrated system where events are responded to as they happen. The main construction information, which is essential for decision making and project execution, is sent to managers through any type of web device before the planned work is performed. Therefore, Web Services provide a high level of automation for the information delivery process.
[Figure labels: Query Adapter; Project Activity Monitoring; Records/Expertise/Feedback; Knowledge Management.]
The basic function of real-time information transfer is managing the project process with work references and knowledge that are automatically sent to users. The status of the project process is mapped as a stream of construction information in order to monitor and analyze project site activities. Real-time construction information is converted into diverse forms of information and stored in the database and the knowledge management storage. The knowledge management system grows naturally by storing and retrieving the transferred information. Specific work-supporting systems synchronize with the knowledge management system to serve real-time data exchange. Construction information search allows the information broker, which maintains a UDDI repository, to gather and structure construction information. Categorized construction information is also linked with the shared knowledge management system.
Fig. 3 illustrates the flow of information in Web Services adopting the real-time information transfer and construction information search systems. On the web browser, users can access the Web Service-based construction information system through Single Sign-On (SSO) and Extranet Access Management (EAM). ① The real-time information transfer system detects the work processes from project management systems. ② To serve information to the user instantly before the work is performed, this system requests the existing systems and the knowledge management system to transfer data, manuals and know-how. ③ Gathered information is analyzed and matched with project processes. ④ Automatically, work information and process monitoring data, e.g. schedule, specifications, site performance and web cam, are transferred to the user interface. ⑤ After a work item is finished, the results and data are returned to the knowledge management system for reuse. As an example, consider the submission of material purchasing requisitions: a past application allows managers to fill in purchase items using a browser, but it does not automate the ordering process through interoperability. Using Web Services, the rules for the purchase ordering process are described in a WSDL document. A partner of the construction enterprise would request the WSDL document over the internet as an
XML/SOAP message. The partner would use this WSDL document to create a bid and RFQ application. This application would be linked with the real-time information transfer system to automate the ordering process. The real-time information transfer system gives a manager the purchasing progress to check and the information on the purchased items.
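A hedged sketch of how a partner application might consume such a WSDL description is shown below, using the third-party Python zeep SOAP client; the WSDL URL, operation names, arguments and response fields are invented for illustration and are not defined in the paper.

from zeep import Client   # third-party SOAP client (pip install zeep)

# Hypothetical WSDL published by the construction enterprise for purchase ordering.
client = Client("http://enterprise.example.com/services/PurchaseOrder?wsdl")

# Operation and argument names are assumptions; the real ones come from the WSDL document.
quotation = client.service.RequestQuotation(itemCode="W10X33", quantity=24)
order = client.service.SubmitPurchaseOrder(quotationId=quotation.id, approvedBy="manager-01")
print(order.status)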
The other system, construction information search, gives users more accurate information and knowledge at once. Using any single system, it is difficult to search construction information directly; users usually wander among related systems without any rules. In contrast to a common search, the information broker supports the reliable search of information that is held in heterogeneous forms. ⑴ When users input a keyword, ⑵ the information broker requests related information from the existing systems and the knowledge management system. ⑶ Each system responds via common data formats and protocols, such as SOAP. ⑷ Before the output is sent, the information broker categorizes and melds it on demand. ⑸ The served information is stored in the knowledge management system to reduce future search time. To continue the example above, a project manager additionally wants a quantity surveyor to request original quotations from five databases stored with separate partners and to access additional items. Using Web Services, a single interface can run a variety of disparate systems, and the information broker instantly pulls information from many databases and the knowledge management system. With categorized results, the quantity surveyor can compare and evaluate item cost information to support the project manager's decision-making.
6 Conclusions
IT investment in the construction industry is focused on integrating and upgrading systems. Nevertheless, current construction information systems have difficulty dealing with the complexity and distribution of information, and the investment has not been associated with the expected productivity improvement. As a solution to these problems, Web Services are suggested to provide a standard hub for interoperability.
This research presented a conceptual model of a Web Service-based construction information system with three main functional features: interoperability for integration, advanced construction information search and real-time information transfer. The proposed system has components to intelligently search for construction information and to transfer the acquired information in a timely manner. The real-time information transfer and construction information search systems share a knowledge management system, which enables companies to make more advanced use of their information.
The Web Service-based construction information system provides a new platform for interoperability with partners and systems. Moreover, the transfer and search of construction information would facilitate the development of knowledge management systems. Because of these possibilities for improvement, construction information systems can still benefit from moving to a Web Services model for certain functions. Therefore, the suggested model would facilitate efficient interoperable environments to integrate diverse projects and systems and promote interaction among many stakeholders with different cognitive experiences and knowledge. In future work, the conceptual model needs to be developed further for adaptation to a case project process and to meet the specific requirements of construction enterprises and partners.
Acknowledgement
The authors would like to acknowledge the support for this research from the Korean
Ministry of Construction and Transportation, Research Project 05 CIT D05-01.
References
[1] Howard, H.C., Levitt, R.E., Paulson, B.C., Tatum, C.B.: Computer-Integration: Reducing
Fragmentation in AEC Industry, Journal of Computing in Civil Engineering, Vol. 3, No.
1, January (1989) 18-32
[2] Nitithamyong, P., Skibniewski, M. J.: Web-based construction project management sys-
tems: how to make them successful?, Automation in Construction, Vol. 13, No. 4, July
(2004)
[3] Koskela, L.: Application of the new production philosophy to construction, CIFE Technical Report #72, Stanford University (1992)
[4] Khalfan, M. M. A., Bouchlaghem, N. M., Anumba, C. J., Carrillo, P. M.: Knowledge
Management for Sustainable Construction: The C-CanD Project, Journal of Management
in Engineering, Vol. 22, No. 1, January (2006), pp. 2-10
[5] Zhu, Y.: Web-based construction document processing through a malleable Frame, PhD
thesis, University of Florida (1999)
[6] Deng, Z. M., Li, H., Tam, C. M., Shen, Q. P., Love, P. E. D.: An application of the Inter-
net-based project management system, Automation in Construction, Vol. 10, No. 2
(2004), pp. 239-246
[7] Abrams, C.: Web Services Scenario: Setting and Resetting Expectations, Gartner Sympo-
sium ITXPO, Cannes, France, 4-7 November (2002)
[8] W3C: Web Services Architecture, W3C Working Group Note, 11 February (2004)
[9] Kreger, H.: Web Services Conceptual Architecture, IBM Software Group (2001)
[10] Hagel, J., Brown, J.S.: Your Next IT Strategy, Harvard Business Review, Vol. 79, No. 9
(2001) 105-115
[11] Singh, I., Strearns, B., Brydon, S., Murray, G., Ramachandran, V.: Designing Web
Services with the J2EE 1.4 Platform (The Java Series): JAX-RPC, SOAP, and XML
Technologies, Addison-Wesley (2004)
[12] Wilkes, L.: Web Services-Right Here, Right Now, CBDI Forum, 16 December (2001)
Combining Two Data Mining Methods for
System Identification
1 Introduction
The goal of system identification [6] is to determine the properties of a system
including values of system parameters through comparisons of predicted behav-
ior with measurements. By definition, system identification is an abductive task
and therefore, computational complexity may hinder the success of full-scale
engineering applications. Abduction can be supported by multi-model reason-
ing since many causes (model results) usually lead to the same consequences
(sensor data). In addition, several kinds of assumptions and measurement errors
influence the reliability of system identification tasks. The effect of incorrect
modeling assumptions may be compensated by measurement error and this may
lead to inaccurate system identification when single-model reasoning is carried
out. Therefore, instead of optimizing one model, a set of candidate models is
identified in our approach. These candidate models lie below a threshold which
is defined by an estimate of the upper bound of errors due to modeling assump-
tions as well as measurements. When data mining techniques [4] [17] [18] are
applied to model parameters, engineers may obtain useful knowledge for system
identification tasks. In general, the objective is to inform engineers of the accu-
racy of their diagnoses given the information that is available. For example, if it
and their parameter values are considered as input points for two data mining
techniques.
The paper is structured as follows. In Section 2, a clustering procedure us-
ing PCA and k-means is presented. Section 3 describes the evaluation of the
methodology and the interpretation of the results obtained from an engineering
perspective. Section 4 contains the results of an application of the methodology
as well as a discussion of limitations. The final section contains conclusions and
a description of future work.
PCA is a linear method for dimensionality reduction [5]. The goal of PCA is
to generate a new set of variables - called Principal Components (PC) - that
are linear combinations of original variables. Ultimately, PCA finds a set of
principal components that are sorted so that the first components explain most
of the variability of the data. A property of principal components is that the components are mutually uncorrelated.
PCA begins with determining S, the covariance matrix of the normalized
data (for more details see [5]). The term parameter refers to the parameter in its
normalized form. To obtain the principal components, the covariance matrix S
is decomposed such that S = V LV T where L is a diagonal matrix containing the
eigenvalues of S and the columns of V contains the eigenvectors of S. Finally,
the transformation is carried out as follows:
\[ x_{\mathrm{new}} = \sum_{i=1}^{PC} \alpha_i x \tag{1} \]
where x is a point in the normal space, αi is the ith eigenvector, PC is the number
of Principal Components and $x_{\mathrm{new}}$ is the point in the feature space. A point used by the data mining algorithm represents a set of parameter values (a model) from the system identification perspective. Principal components (PCs), which are linear combinations of the original variables, are represented as an orthogonal basis for
a new representational space of the data. PCs are sorted in decreasing order
of their ability to represent the variability of the data. Finally, each sample is
transformed into a point in the feature space using Equation 1.
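A compact numpy sketch of this transformation (centering, covariance, eigendecomposition and projection onto the first PC components) is given below; the variable names are editorial, not the authors'.

import numpy as np

def pca_transform(X, n_components):
    # X: rows are samples (models), columns are normalized parameters.
    Xc = X - X.mean(axis=0)                  # center the normalized data
    S = np.cov(Xc, rowvar=False)             # covariance matrix S
    eigvals, eigvecs = np.linalg.eigh(S)     # S = V L V^T (S is symmetric)
    order = np.argsort(eigvals)[::-1]        # sort PCs by decreasing explained variance
    V = eigvecs[:, order[:n_components]]
    return Xc @ V                            # points in the feature space (Equation 1)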
Table 1. Clustering procedure
1. Transform the data using principal component analysis.
2. Choose the number K of clusters (see Section 3).
3. Randomly select K initial centroids.
4. Do
5. Assign each point to the closest cluster.
6. Recompute centroid of the K clusters.
7. While centroid position change
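Assuming the transformed points are stored as rows of a numpy array, steps 3-7 of this procedure can be sketched as a plain k-means loop (an editorial sketch; empty clusters and other robustness issues are not handled).

import numpy as np

def kmeans(points, k, seed=0):
    # Basic k-means on the PCA-transformed points: returns labels and centroids.
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    while True:
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                      # assign each point to closest cluster
        new_centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):          # stop when centroids no longer move
            return labels, new_centroids
        centroids = new_centroids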
the evaluation process is different since the goal is not prediction. Results are
evaluated in two ways. Firstly, a criterion is used to evaluate the performance
of the clustering procedure. Secondly, from a decision-support point-of-view, the
performance is evaluated by users.
The main theme of this section is to develop a metric in order to evaluate
results obtained with the methodology described in Section 2. Without a met-
ric, the way clusters are seen and evaluated is subjective. Furthermore, it is not
possible to know the real number of clusters in the data since the task is unsupervised learning and this means that the answer - the number of clusters -
is unknown. In this paper, the results obtained by the clustering technique are
evaluated using a score function (SF ). The score function combines two aspects:
the compactness of clusters and the distance between clusters. The first notion
is referred to as within class distance (WCD) whereas the second is the between
class distance (BCD). Since objectives are to minimize the first aspect and to
maximize the second aspect, combining the two is possible through maximizing
SF = BCD/W CD. This idea is related to the Fisher criterion [4].
In this research the WCD and the BCD are defined in Equation 2. It is
important that an engineering meaning in terms of model-based diagnosis can be
given to these two distances. They are both directly related to the space of models
for the task of system identification using multiple models. The WCD represents
the spread of model predictions within one cluster. Since it gives information on
the size of the cluster, a high WCD means that models inside the class are widely
spread and that the cluster may not reflect physical similarity. The BCD is an
estimate of the mean distance between the centers of all clusters and therefore,
it provides information related to the spread of clusters. For example, a high
BCD value means that classes are far from each other and that the system
identification is not currently reliable. The detailed score function is given by
Equation 2.
\[ SF = \frac{\sum_{i=1}^{K} \operatorname{dist}(c_i, c_{tot})^2 \cdot \operatorname{size}(C_i)}{\sum_{i=1}^{K} \dfrac{\sum_{x \in C_i} \operatorname{dist}(c_i, x)^2}{\operatorname{size}(C_i)}} \tag{2} \]
where K is the number of clusters, Ci the cluster i, ci its centroid and ctot the
centroid of all the points. The function dist and size define respectively the Eu-
clidean distance between two points (each point is a model which is represented
by parameter values) and the number of points in a cluster. From a system iden-
tification point of view, BCD values indicate how different the K situations are.
Values of WCD give overviews of sizes of groups of models. As explained in Sec-
tion 2, the number of clusters - in our application, this is the number of classes
of models - for a data set is unknown. The idea for determining the most reliable number of clusters is to run the procedure for P different predefined numbers of clusters. The criterion used to see whether the number of clusters is appropriate is the same as the score function described above. The higher the value of the criterion,
the more suitable the number of clusters.
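A direct transcription of Equation 2 into Python (an editorial sketch, names ours) is:

import numpy as np

def score_function(points, labels, centroids):
    # SF = BCD / WCD as defined in Equation 2.
    c_tot = points.mean(axis=0)
    k = len(centroids)
    sizes = np.array([(labels == j).sum() for j in range(k)])
    bcd = sum(np.linalg.norm(centroids[j] - c_tot) ** 2 * sizes[j] for j in range(k))
    wcd = sum((np.linalg.norm(points[labels == j] - centroids[j], axis=1) ** 2).sum() / sizes[j]
              for j in range(k))
    return bcd / wcd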
The second weakness of the procedure is the random choice of the K first
centroids. One solution is to run the algorithm N times and to average the value of the score function.
Table 2. Procedure to limit the effect of the random choice of the starting centroids
Controlling Randomness
1. Loop i from 1 to N
2. Loop j from 2 to P
3. Run clustering procedure described in Table 1 with j centroids.
4. Calculate score function.
5. End
6. End
7. Average score function.
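Combining the sketches above, the procedure of Table 2 might be written as follows (N, P and the input data are placeholders; kmeans and score_function refer to the earlier sketches).

import numpy as np

def average_score(points, n_runs=10, max_k=8):
    # Average the score function over n_runs random initializations for k = 2..max_k.
    averaged = {}
    for k in range(2, max_k + 1):
        scores = []
        for i in range(n_runs):
            labels, centroids = kmeans(points, k, seed=i)
            scores.append(score_function(points, labels, centroids))
        averaged[k] = float(np.mean(scores))
    return averaged   # the k with the highest averaged score function is retained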
The case study described in this section is a beam structure that was presented in
[15]. It is used to illustrate the methodology described in Section 2.2. Although
this study focuses on bridge structures, it can be applied to other structures
and in other domains. The procedure for generating models from modeling as-
sumptions is given in [15]. In this particular example, six parameters consisting
of position and magnitude of three loads have been chosen. After running the
procedure described in Section 3, the number of clusters is chosen to be three in
this case. The results are shown in Table 3.
Table 3. Comparison of values for between class distance (BCD), within class distance
(WCD) and score function (SF) for various numbers of clusters
It can be seen that the maximum value for the score function is reached with
three clusters and a local maximum can be observed with six clusters. This effect
can also be seen on the right part of Figure 1 where the three clusters could be
divided into two to obtain six clusters. Furthermore, BCD and WCD are always
increasing. This is due to the fact that at maximum there could be one cluster
for each point.
Once the number of clusters is fixed, the procedure outlined in Table 1 is
followed. To judge the improvement of the methodology with respect to the
standard k-means algorithm, the two techniques are compared. Figure 1 shows the improvement from a visualization point of view. The left part of Figure 1
corresponds to standard k-means whereas the right part is the result of the
methodology described in this paper. It is easy to see that our methodology is
better able to present results visually.
Fig. 1. Visual comparison of standard k-means (left) with respect to the proposed
methodology (right). Every point represents a model and belongs to one of the three
possible clusters (+, O or ).
5 Conclusions
A methodology that combines PCA and k-means for studying the solution space
of models obtained during system identification is presented in this paper. The
conclusions are as follows:
• Combining the data mining techniques, PCA and k-means, helps improve
visualization of data
• Evaluation of results obtained through clustering is difficult. The metric that has been developed in this work helps in the evaluation
• Application of data mining to complex tasks such as system identification
requires an expert user
Future work involves the use of more complex data mining methods to
obtain other ways for separating and clustering models. Furthermore, better
visualization of solution spaces needs to be addressed in order to improve en-
gineer/computer interaction. Finally, strategies for models containing a varying
number of parameters are under development.
Acknowledgments
This research is funded by the Swiss National Science Foundation through grant
no. 200020-109257. The authors thank Dr. Fleuret for several fruitful discussions
on data mining techniques. The two anonymous reviewers are acknowledged for
their suggestions, which have shaped the direction of the paper.
References
1. Alonso C., Rodriguez J.J. and Pulido B. Enhancing Consistency based Diagnosis
with Machine Learning Techniques. LNCS, Vol. 3040, 2004, pp. 312-321.
2. Chan Z.S.H., Collins L. and Kasabov N. An efficient greedy k-means algorithm for
global gene trajectory clustering. Exp. Sys. with Appl., 30 (1), 2006, pp. 137-141.
3. Ding C. and He X. K-means clustering via principal component analysis. Proceed-
ings of the 21st International Conference on Machine Learning, 2004.
4. Hand D., Mannila H. and Smyth P. Principles of Data Mining. MIT Press, 2001,
546p.
5. Jolliffe I.T. Principal Component Analysis. Statistics Series, Springer-Verlag, 1986,
271p.
6. Ljung L. System Identification - Theory For the User. Prentice Hall, 1999, 609p.
7. Melhem H.G. and Cheng Y. Prediction of Remaining Service Life of Bridge Decks
Using Machine Learning. J. Comp. in Civ. Eng., 17 (1), 2003, pp. 1-9.
8. Nguyen H.H. and Chan C.W. Applications of data analysis techniques for oil pro-
duction prediction. Art. Int. in Eng., Vol 13, 1999, pp. 257-272.
From Data to Model Consistency in Shared Engineering Environments
R.J. Scherer and P. Katranuschkov
1 Introduction
Efficient collaborative and concurrent engineering are vital prerequisites for com-
petitive advantage in today’s global AEC business. They require achievement of
adequate asynchronous teamwork in the highly heterogeneous project environments
of one-off virtual organizations. This requirement can be met if interoperability and
consistency of data and tools based on the use of common shared models are suc-
cessfully realized.
Indeed, the main criterion for the efficiency of collaboration is the degree of com-
mon understanding of the communicated information. Connecting computers by fast
communication channels is only a preliminary first step. Interoperability and consistency of the data require much more than that [1].
According to the ICH Glossary¹, interoperability is defined as “the ability of information systems to operate in conjunction with each other encompassing hardware, communication protocols, applications and data compatibility layers”. At the software level this includes syntactic, structural and semantic aspects that must be taken into account.
Achieving interoperability has been a major research and development issue in the
past decades, giving rise to various new methods of model-based IT-supported work.
It has led to the idea of a common standard Building Information Model (BIM) that
can serve as reference for business process aware information delivery (Fig. 1). With
the advance of the IFCs, this idea is now approaching fruition [2, 3].
¹ © ICH Architecture Resource Center, http://www.ichnet.org.glossary.htm
Standardized data models are based on a normative meta model and are typically struc-
tured into several inter-connected sub-schemas to allow shared development and main-
tenance of the data model. The main objectives of such models are (1) to serve a broad
range of applications, (2) to be lean, (3) to be flexible and (4) to be extensible. The larg-
est and most widely accepted model in AEC/FM is currently IFC [2], based on the ISO
STEP methodology [6]. Standardized global data models should be neither too expressive, nor too sophisticated, nor too narrowly focused on a specific use. However, several
models that do not fulfill all these criteria have also been standardized to answer particu-
lar industry needs. Examples are CIS/2 (in the domain of steel construction) and
OKSTRA (for road works). In the context of general standardized data models like IFC,
such models can be considered as special kinds of domain models that are not harmo-
nized with the global schema and hence require mapping, if their data are to be shared.
Proprietary data models are developed and used for a specific software tool or for
several tools of one software provider. Such data models are tightly focused and
strongly optimized to the functionality of the respective tool(s). Typically, they are
formalized only to an extent needed for in-house software development and often do
not expose a clear meta model. They are also very detailed and granular but not nec-
essarily very explicit. Consequently, non-trivial mappings between proprietary and
standardized data models are necessary, leading to sophisticated and often ambiguous,
potentially error-prone procedures [7, 8].
As a remedy to this problem, both proprietary models and standardized global models provide constructs that are intended to facilitate mapping and harmonization. Such constructs include object containers (proxies) that can hold instances of arbitrary classes, or – as in IFC – generalized property objects (property sets) that enable extended, non-standardized agreements. However, whilst this helps to simplify the mapping process and to agree upon data exchange requirements not foreseen in advance, recognition of changed data and consistency management become even more difficult, as in many cases it cannot be clearly determined what exactly has been changed and by whom. Therefore, besides the efforts within the models themselves, adequate server-side mapping and matching methods are required to ensure consistency of the model data.
A domain model is usually understood as the data model established for a certain spe-
cific domain of interest (architecture, structural engineering, building services engi-
neering …) and explicitly linked to other domain models. Generally, in the develop-
ment of such models the core modeling approach is used where a lean core model is
the overarching bridge for various domain modeling extensions. Like global core mod-
els, domain models should also be as lean as possible, extensible and broadly app-
licable data models. Therefore concepts including specific engineering knowledge
should generally be avoided. For example, in a structural domain model the definition
of a frame, which depends on the domain theory and the specifically used approxima-
tion method, should be avoided, whereas linear and planar elements are to be included
as basic entities of the targeted engineering models of the domain [9].
The modern modeling approach is to clearly separate data and knowledge models.
Knowledge models use data models but not vice versa. Data models should be time-
less, whereas knowledge models are evolutionary, described by ontologies, and there
may exist different ontology models for one and the same domain, depending upon
the particular objectives and focus. Therefore, conceptually, domain models merely
provide for further structuring of a global schema, attempting a larger degree of har-
monization and enabling easier definition of useful model views. They extend and
partition the agreements in a shared environment but they do not resolve the consis-
tency problems as such.
A model view is a specific projection of a global model and/or one or more domain
models. To help understand the meaning and use of model views better, below we
take an IFC-based environment as an example.
The IFC schema is a standardized global model that can capture project data about
AEC projects over the complete life cycle. Potentially, it can support a broad spec-
trum of business requirements, but it is not specialized for any of them. This is illustrated on the left side of Fig. 2, where the ‘periscope’ allows one to see everything in general but nothing in particular. However, from the perspective of an end user, the
real need is to support specific business requirements in specific processes over one or
more project stages. This produces a different periscope view in which support for
particular business requirements can be seen (Fig. 2, right).
Fig. 2. Support of business requirements on the example of the IFC model (left: IFC supports
all business requirements in all life cycle phases without particular differentiation; right: Specific
business requirements at a specific project stage can be supported via dedicated model views).
Whilst there are many different use cases where the suggested methods can be applied, they can all be derived from the generalized collaboration scenario presented in Fig. 3 below. It starts at time point ti with a consistent shared model version Mi, based on the product data model M defining the data that have to be shared, and proceeds until the next coordination point ti+c is reached. The data processing sequence for a single actor comprises the following seven steps (a short code sketch of this cycle follows the list):
1. Model view extraction: Msi = extractView (Mi , viewDef (Mi))
2. Mapping of the model view Msi to the (proprietary) discipline-specific model Si ,
an instantiation of the data model S: Si = map (Msi , mappingDef (M, S))
3. Modification by the user of Si to Si+1 via some legacy application, which can be
expressed abstractly as: Si+1 = userModify (Si , useApplication (A , Si))
4. Backward mapping of Si+1 to Msi+1, i.e.: Msi+1 = map (Si+1, mappingDef (S , M))
5. Matching of Msi and Msi+1 to find the differences: ΔMs i+1,i = match (Msi , Msi+1)
6. Reintegration of ΔMs i+1,i into the model: Mi+1 = reintegrate (Mi , ΔMs i+1,i)
7. Merging of the final consistent model Mi+1 with the data of other users, that may
concurrently have changed the model, to obtain a new stable model state, i.e.:
Mi+c = merge (Mi+1 , Mi+2 , … , Mi+k), with k = the number of concurrently
changed checked out models.
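The following minimal Python sketch strings these seven steps together for a single actor; all operations are injected as callables, and every name is a placeholder for the abstract functions listed above, not an existing API:

```python
from typing import Callable, Iterable

def collaboration_cycle(M_i,
                        extract_view: Callable, map_model: Callable,
                        user_modify: Callable, match: Callable,
                        reintegrate: Callable, merge: Callable,
                        view_def, map_M_to_S, map_S_to_M, application,
                        other_models: Iterable):
    """One actor's pass from coordination point t_i to t_(i+c), steps 1-7.
    The injected callables stand for the abstract operations named in the list above."""
    Ms_i  = extract_view(M_i, view_def)        # 1. model view extraction
    S_i   = map_model(Ms_i, map_M_to_S)        # 2. forward mapping to the discipline model
    S_i1  = user_modify(S_i, application)      # 3. modification in the legacy application
    Ms_i1 = map_model(S_i1, map_S_to_M)        # 4. backward mapping
    delta = match(Ms_i, Ms_i1)                 # 5. matching: find the differences
    M_i1  = reintegrate(M_i, delta)            # 6. reintegration into the shared model
    return merge([M_i1, *other_models])        # 7. merge with concurrently changed models
```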
Fig. 3. Generalized collaboration scenario between two coordination points: view extraction (createView) of Msi from Mi, mapping to and from the discipline-specific models Si/Si+1, user modification, matching of Msi and Msi+1, reintegration of the differences ΔMsi+1,i into Mi+1, and merging with the concurrently changed models Mi+2 … Mi+k to obtain Mi+c
Model View Extraction is the first step in the presented generalized collaboration
scenario. To be usefully applied, a model view should (1) be easily definable with as few formal statements as possible, (2) allow for adequate (run-time) flexibility on
instance and attribute level and (3) provide adequate constructs for subsequent reinte-
gration of the data into the originating model. To meet these requirements, a General-
ised Model Subset Definition schema (GMSD) has been developed [12]. It is a neutral
definition format for EXPRESS-based models comprised of two subparts which are
almost independent of each other with regard to the data but are strongly inter-related
in the overall process. These two parts are: (1) object selection, and (2) view defini-
tion. The first is purely focused on the selection of object instances, using set theory as a baseline. The second is intended for post-processing (filtering, projection, folding …)
of the selected data in accordance with the specific partial model view. Fig. 4 shows
the top level entities of the GMSD schema and illustrates the envisaged method of its
use in run-time model server environments. More details on GMSD are provided in
[11] and [12], along with references to other related efforts such as the PMQL lan-
guage developed by Adachi.
[Figure content: the top-level entity PartialModelQuery aggregates SelectedObjectSets S [1:?] of type QuerySetSelect – recursive selection of data objects from the ‘all-embracing’ product model – and DefinesFeatureSubset S [1:?] of type ViewDefinitionSelect – adjustment of the data content of the selected objects to the necessary and sufficient data.]
Fig. 4. Top level structure of the GMSD schema and the related instance level operations
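To make the two-part structure concrete, the sketch below shows how an object selection followed by a view definition might be expressed. The top-level names are taken from the figure; all attributes and example values are invented for illustration and do not reproduce the actual EXPRESS definitions of GMSD:

```python
from dataclasses import dataclass, field

@dataclass
class QuerySetSelect:
    """Part 1: set-theoretic selection of object instances (illustrative attributes)."""
    entity_type: str                              # e.g. "IfcWall"
    attribute_filters: dict = field(default_factory=dict)

@dataclass
class ViewDefinitionSelect:
    """Part 2: post-processing of the selected instances (projection/filtering)."""
    keep_attributes: list = field(default_factory=list)
    follow_references: bool = False

@dataclass
class PartialModelQuery:
    selected_object_sets: list                    # QuerySetSelect instances
    view_definitions: list                        # ViewDefinitionSelect instances

# Example: extract all walls of a given storey, keeping only geometry and associations
query = PartialModelQuery(
    selected_object_sets=[QuerySetSelect("IfcWall", {"storey": "EG"})],
    view_definitions=[ViewDefinitionSelect(["Representation", "HasAssociations"])],
)
```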
Model Mapping is needed for the transformation of data from one model schema to
another. Typically, this would happen in the transition from/to an agreed shared model
or model view to/from the proprietary model of the application the user works with (see
Fig. 3). The overall mapping process consists of four steps: (1) Detection of schema
overlaps, (2) Detection of inter-schema conflicts, (3) Definition of the inter-schema cor-
respondences with the help of formal mapping specifications, and (4) Use of appropriate
mapping methods for the actual transformations on entity instance level at run-time.
Mapping patterns make it possible to better understand the mapping task and to formalize what has to be mapped, and how, in each particular case. By examining the theoretical back-
ground of object-oriented modeling the following types of mapping patterns can be
identified: (1) Unconditional class level mapping patterns, depicting the most general
high-level mappings, (2) Conditional instance level mapping patterns, including logical
conditions to select the set of instances to map from the full set of instances in the
source model, and (3) Attribute level mapping patterns, specifying how an attribute with
a given data type should be mapped. For each of these categories, several sub-cases
have been identified in [8]. Examples on attribute level include simple equivalence, set
equivalence, functional equivalence, homomorphic mapping, transitive mapping, in-
verse transitive mapping, functional generative mapping and so on.
All mapping patterns can be defined by means of the developed formal mapping lan-
guage CSML. Fig. 5 below shows the formalism to present these patterns graphically,
along with respective typical examples. More details and examples are provided in [8].
Fig. 5. Symbolism for graphical presentation of mapping patterns with illustrative examples
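As a simple illustration of attribute-level patterns, the following sketch combines a conditional instance-level guard with simple and functional equivalence mappings; the entities, attributes and values are invented, and CSML itself is not reproduced here:

```python
def map_beam_to_analysis_member(source: dict) -> dict:
    """Attribute-level mapping patterns, illustrated on invented entities:
    - conditional instance-level pattern: only instances of type "Beam" are mapped
    - simple equivalence: the name is copied unchanged
    - functional equivalence: the span is derived from two end coordinates"""
    if source.get("type") != "Beam":                           # conditional guard
        return {}
    return {
        "label": source["name"],                               # simple equivalence
        "span": abs(source["end_x"] - source["start_x"]),      # functional equivalence
        "material_ref": source["material_id"],                 # reference carried over
    }

# usage on a single invented source instance
member = map_beam_to_analysis_member(
    {"type": "Beam", "name": "B-17", "start_x": 0.0, "end_x": 6.4, "material_id": "S235"})
```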
Model matching has to deal with the identification of the changes made by one or
more applications to the used model data (step 5 of the generalized collaboration
scenario in Fig. 3). This may be done by a dedicated client application, but a more
natural implementation is a server-side procedure within a product model server.
In our approach, matching exploits the object structure without considering its
semantic meaning. Hence, it can be applied to different data models and different
engineering tasks. It neither requires nor involves specific engineering knowledge.
Comparison of the model data of the old and new model versions begins with the
identification of pairs of potentially matching objects, established by using their unique
identifiers or some other key value. However, such identifiers are not always available
for all objects. Moreover, unidentifiable objects may also be shared via references from
different identifiable objects. The general complexity of this problem is shown in [14]
where a fully generic tree-matching algorithm is shown to be NP-complete. Therefore,
we have developed a pragmatic algorithm that provides a simple scalable way for find-
ing corresponding data objects. Its essence is in the iterative generation of object pairs
by evaluating equivalent references of already validated object pairs. The first set of
valid object pairs is built by unambiguously definable object pairs. Any newly found
object pair is then validated in a following iteration cycle, depending on its weighting
factor derived from the type of the reference responsible for its creation. Attribute values
are only used if ambiguities of aggregated references do not allow the generation of new
object pairs. To avoid costly evaluation of attribute values, a hashcode indicating identical references is used. In this way, the pair-wise comparison of objects can be significantly reduced compared with other known approaches.
Fig. 6 illustrates the outlined procedure. Before starting any comparison of objects
the set VP of validated object pairs and the set UO of unidentifiable objects are initially
created using available unique identifiers. After that the object pairs of VP are compared
as shown on the right side of the figure for {A1, A2}. Using their equivalent references
“has_material” a new object pair can be assumed for the objects E1 and G2 of UO. A
weighting factor indicating the validity of this assumption is then derived from the ref-
erence type2 of “has_material” and added to the newly created object pair, which is
then placed in the set AP containing all such derived matching pairs. After comparison
of all object pairs of VP, the highest weighted object pairs of AP are moved from AP
(and UO) to VP and a new iteration cycle is started. However, now the weighting fac-
tors of newly created object pairs are combined with the weighting factors of already
validated object pairs. More details on the developed algorithm along with an overview
of related efforts are provided in [11] and [15].
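A highly simplified sketch of this iterative pair generation is given below; it assumes each object exposes an optional unique identifier and a dictionary of references, and the weighting values and threshold are illustrative only:

```python
def match_versions(old, new, ref_weights, threshold=0.5):
    """Iterative structural matching of two model versions (sketch).
    old/new map object names to {"id": unique id or None, "refs": {ref_name: object name}}.
    ref_weights gives an assumed confidence per reference type, e.g. {"has_material": 0.75}."""
    ids_new = {d["id"]: name for name, d in new.items() if d["id"] is not None}
    VP = {name: ids_new[d["id"]] for name, d in old.items()
          if d["id"] is not None and d["id"] in ids_new}        # validated pairs
    UO = {name for name in old if name not in VP}               # unidentifiable (old side)
    changed = True
    while changed and UO:
        changed = False
        AP = {}                                                 # assumed pairs with weights
        for o_old, o_new in VP.items():
            for ref, t_old in old[o_old]["refs"].items():
                t_new = new[o_new]["refs"].get(ref)
                if t_old in UO and t_new is not None:
                    w = ref_weights.get(ref, 0.5)
                    AP[(t_old, t_new)] = max(AP.get((t_old, t_new), 0.0), w)
        # promote the highest-weighted assumed pairs to validated pairs
        for (t_old, t_new), w in sorted(AP.items(), key=lambda kv: -kv[1]):
            if w >= threshold and t_old in UO:
                VP[t_old] = t_new
                UO.discard(t_old)
                changed = True
    return VP

# tiny usage example: {A1, A2} match by identifier, {E1, G2} is assumed via "has_material"
old = {"A1": {"id": "g1", "refs": {"has_material": "E1"}}, "E1": {"id": None, "refs": {}}}
new = {"A2": {"id": "g1", "refs": {"has_material": "G2"}}, "G2": {"id": None, "refs": {}}}
print(match_versions(old, new, {"has_material": 0.75}))   # {'A1': 'A2', 'E1': 'G2'}
```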
Fig. 6. Iterative matching of object pairs: VP = validated object pairs, AP = assumed object pairs (with weighting factors), UO = unidentifiable objects. Comparing the validated pair {A1, A2} via their equivalent reference “has_material” yields the assumed pair {E1, G2} with weighting factor 0.75
Whilst this procedure is straightforward and largely the same across different scenarios, there are several detailed problems that have to be dealt with. These can be
subdivided into (1) structural problems (1:m version relationships for reference attrib-
utes, n:1 version relationships, change of the object type in a version relationship), and
(2) semantic problems. The latter cannot be resolved solely by generic server-side pro-
cedures but require domain knowledge and respective user interaction. Such problems
typically occur when a change to a model view requires propagation of changes to an-
other part of the overall model in order to restore consistency. Therefore data consis-
tency must be evaluated by all involved actors during a final merging process [11, 15].
Model Merging has to deal with the consistency of concurrently changed data that
exist in two or more divergent versions. It should be performed at a (pre)defined co-
ordination point in cooperation of all involved users. The aim is to provide a proce-
dure by which modifications can be reconciled and appropriately adjusted to a consis-
tent new model state, marking the beginning of a new collaboration cycle.
Fig. 8 schematically shows the suggested approach of using available prior knowl-
edge to enable efficient management of the iterative agreement process. However, this
is a highly complex process where world-wide research is still at an early phase.
Fig. 8. Principal schema for the merging of concurrently changed design data
4 Conclusions
We described a set of inter-related methods to support consistency of the distributed
project data in a shared collaborative work environment. These data are represented in
various data models and are continuously transformed and moved in the process of their
evolution from initial estimates to detailed specifications of the designed/built facility.
The developed generic server-side methods, which rely mainly on the model structures, can help considerably to achieve correct data management, but they are not by themselves sufficient to guarantee consistency. Whilst at many places theoretical complexity can be overcome
by taking into account the actual features of BIM, realistic use cases and common sense,
it is nevertheless not possible to solve all consistency problems without deeper engineer-
ing knowledge. The transitions between global and local models, man-machine interac-
tion in the decision-making process and the combination of structure and semantics are
issues that require further research. Enabling smooth and correct data handling from
global shared models down to engineering ontologies where knowledge encoded in
rules can be applied will be a huge step towards achievement of model consistency.
References
1. Ducq, Y., Chen, D., Vallespir, B.: Interoperability in Enterprise Modeling: Requirements
and Roadmap. J. Advanced Engineering Informatics, Vol. 18, No. 4, pp 193-204 (2004).
2. IAI: IFC2x Edition 3, Online documentation. © International Alliance for Interoperability
(1996-2006). Available at: http://www.iai-international.org/Model/R2x3_final/index.htm
3. Wix J. /ed./: Information Delivery Manual: Using IFC to Build Smart (2005). Available at:
http://www.iai.no/idm/learningpackage/idm_home.htm
4. Gehre, A., Katranuschkov, P., Wix, J., Beetz, J.: InteliGrid Deliverable D31: Ontology
Specification. The InteliGrid Consortium, c/o University of Ljubljana, Slovenia (2006).
5. Eastman, C. M.: Building Product Models: Computer Environments Supporting Design
and Construction. CRC Press, Boca Raton, Florida, USA (1999).
6. Fowler, J.: STEP for Data Management, Exchange and Sharing. Technology Appraisals Ltd.,
Twickenham, UK (1995).
7. Amor, R., Faraj, I.: Misconceptions about Integrated Project Databases. ITcon, Vol. 6 (2001).
8. Katranuschkov, P.: A Mapping Language for Concurrent Engineering Processes. Diss.
Report, Institute of Construction Informatics, TU Dresden, Germany (2001).
9. Weise, M., Katranuschkov, P., Liebich, T., Scherer, R. J.: Structural Analysis Extension of
the IFC Modelling Framework. ITcon, Vol. 8 (2003). Available at: http://www.itcon.org
10. Zeller, A. Configuration Management with Version Sets. Ph.D. Thesis, TU Braunschweig,
Germany (1997).
11. Weise, M. An Approach for the Representation of Design Changes and its Use in Building
Design (in German). PhD Thesis (submitted 05/2006), TU Dresden, Germany (2006).
12. Weise, M., Katranuschkov, P., Scherer R. J.: Generalised Model Subset Definition
Schema. Proc. CIB-W78 Conference 2003, Auckland, NZ (2003).
13. Eisfeld, M.: Assistance in Conceptual Design of Concrete Structures by a Description Logic
Planner. PhD Thesis, Institute of Construction Informatics, TU Dresden, Germany (2005).
14. Spinner, A.: A Learning System for the Creation of Complex Commands in Programming
Environments (in German). PhD Thesis, TH Darmstadt, Germany (1989).
15. Weise, M., Katranuschkov, P., Scherer, R. J.: Generic Services for the Support of Evolv-
ing Building Model Data. Proc. Xth ICCCBE, Weimar, Germany (2004).
Multicriteria Optimization of Paneled Building Envelopes
Using Ant Colony Optimization
K. Shea, A. Sedgwick, and G. Antonuntto
1 Introduction
Building envelopes are molded by a large number of design influences including
structural, aesthetic, lighting, energy and acoustic considerations. While there have
been significant amounts of research in structural optimization of buildings, there has
been significantly less research on aspects of building physics. Further, compared to
structures, less expert knowledge exists about optimizing building physics, especially
for complex projects, thus providing an opportunity to use computational optimiza-
tion. A need exists to increase design understanding of the tradeoffs involved to create
optimized building envelope designs considering multiple viewpoints. This paper
presents work carried out at Arup, London towards developing a computational de-
sign and optimization tool that facilitates the design of optimized building envelopes
for lighting, energy, cost and architectural criteria. The initial focus presented in this
paper is the optimization of paneled building envelopes mainly considering lighting
performance and cost.
Previous approaches to building envelope optimization include both evolutionary
and numerical optimization approaches. Caldas and Norford [1] describe a method
that combines genetic algorithms with detailed building energy simulation, via
DOE-2, to optimize the placement and sizing of windows in an office building. The
2 Method
The task of optimizing panel configurations on building walls and roofs can be classi-
fied as a discrete, combinatorial optimization task. A multicriteria ant colony optimi-
zation (MACO) method using Pareto filtering is applied. The software Radiance is
used to calculate lighting performance.
Ant colony optimization (ACO) algorithms are discrete optimization methods inspired
by the behavior of natural ant colonies [6,7]. They solve tasks by multi-agent co-
operation using indirect communication through modifications in the environment,
just like other social insects. Natural, or real, ants release a certain amount of phero-
mone while walking, e.g. from their nest to food. Subsequent ants preferentially
follow directions and paths with higher pheromone concentration. The main idea is to
use repeated and often recurrent simulations of artificial ants to dynamically generate
new solutions.
An overview of the method, which is implemented in Matlab, is shown in Fig. 1.
As this paper focuses on its application, only basic information will be given. Artifi-
cial ants modify some aspects of their environment in the same way as real ants do.
By analogy, this numeric information is called an artificial pheromone trail. A search
space can be formulated by a finite number of states where the connectivity of each
state to all others is fully defined to completely model the movement of ants in the
search space. In this application, a state is a unique design defined by its configuration
of panels. The search space is then the combinatorial expansion of all possible panel
configurations. A “nest” in the search space defines the position in the search space
from which a group of ants start their exploration. Given a set of initial states, or
“nests”, ants are “let out” from these positions to search for improved states. Moving
from one state to another is controlled by a set of “moves”, or design modifications,
and in this application a single panel material in a configuration is changed to a dif-
ferent panel material.
The pheromone table at each state contains probabilities representing the strength
of pheromone, which are updated as soon as an ant reaches a new state. Updating the
probabilities thus mimics pheromone laying. An ant-decision table is obtained by the
composition of the local pheromone trail values with local heuristic values. The prob-
ability with which an ant chooses to go from one state to another is then calculated
from this ant-decision table and a move is selected using a weighted roulette wheel
technique. After each ant makes its move, pheromones are exponentially decreased to
mimic evaporation in the natural system.
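A compact sketch of this state-transition step is given below; the pheromone and heuristic tables are assumed to be indexed by candidate moves, and the alpha/beta weighting and parameter values are illustrative rather than the exact formulation used in the Matlab implementation:

```python
import random

def choose_move(pheromone, heuristic, alpha=1.0, beta=1.0):
    """Ant-decision table: combine pheromone strength with local heuristic values,
    then pick a move with a weighted roulette wheel."""
    scores = {m: (pheromone[m] ** alpha) * (heuristic[m] ** beta) for m in pheromone}
    total = sum(scores.values())
    r, acc = random.uniform(0.0, total), 0.0
    for move, s in scores.items():
        acc += s
        if r <= acc:
            return move
    return move  # fallback for floating-point rounding

def update_pheromone(pheromone, chosen_move, deposit=1.0, evaporation=0.1):
    """Deposit pheromone on the chosen move and exponentially evaporate all entries."""
    for m in pheromone:
        pheromone[m] *= (1.0 - evaporation)
    pheromone[chosen_move] += deposit
```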
To create a multicriteria ACO method, Pareto tests and a Pareto archive are used.
Here, the design criteria, or objectives, are treated independently in the optimization
and are not weighted, although they are scaled. The goal of the multicriteria optimi-
zation is then to generate a well spread archive of Pareto optimal designs that repre-
sents the tradeoffs among the design performance criteria.
Given all feasible solutions to a multicriteria optimization problem, a Pareto optimal, or non-dominated, solution is defined as one for which no other feasible solution exists that is at least as good in all criteria and strictly better in at least one criterion. The complete set of Pareto optimal solutions is called
the Pareto front. In theory there is no limit to the number of criteria that can be
included. However, with the addition of each criterion the computational time for
testing non-dominated status increases and the archives can become extremely large.
A method for filtering evolving Pareto archives is implemented based on the smart
Pareto filtering method described in [8] and extended to consider 11 independent
criteria. The method works by constructing regions of insignificant tradeoff and
maintaining only one Pareto solution in that region while discarding the rest. The
regions are constructed using two control parameters set for each criterion that define
the size of the regions of insignificant tradeoff. Filtering can be used both during
optimization and after to reduce the size of the archives for easier visualization and
interaction. Further details can be found in [8].
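A minimal sketch of the dominance test and of a filtering pass that keeps only one solution per region of insignificant tradeoff is shown below; all objectives are assumed to be minimized, and the per-criterion tolerance delta is a simplified stand-in for the control parameters mentioned above (this is not the exact filter of [8]):

```python
def dominates(a, b):
    """True if solution a is at least as good as b in all criteria (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def smart_filter(archive, delta):
    """Keep non-dominated points and drop near-duplicates whose tradeoff against
    an already kept point is insignificant (within delta in every criterion)."""
    pareto = [p for p in archive if not any(dominates(q, p) for q in archive if q != p)]
    kept = []
    for p in sorted(pareto):
        if not kept or all(any(abs(pi - ki) > di for pi, ki, di in zip(p, k, delta))
                           for k in kept):
            kept.append(p)
    return kept

# usage with two illustrative criteria, e.g. (daylight-factor shortfall, sun hours)
archive = [(0.2, 90), (0.21, 89), (1.5, 20), (2.0, 10), (2.0, 95)]
print(smart_filter(archive, delta=(0.1, 5)))   # [(0.2, 90), (1.5, 20), (2.0, 10)]
```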
Daylight factor is one of the key quantitative metrics for analyzing daylighting in
building spaces. It is a measure of the proportion of available exterior illumination
that reaches an internal surface under overcast conditions. Therefore, if a point has a
daylight factor of 1%, then, when the external available horizontal illuminance is
10,000 lux, that point is illuminated by daylight to 100 lux. Sun hours is defined as
the number of hours that a point in a space receives direct sunlight. This is expressed
on a per annum basis or for a given time period, e.g. summer months. As a reference,
at the latitude of London the probable annual sun hours are 1486.
Radiance is a software program for the analysis and visualization of lighting and is
used by architects and engineers to predict illumination, visual quality and appearance
of design spaces. Lighting performance is calculated using precompiled influence
matrices constructed from lighting simulation output from Radiance. Radiance has
been used to calculate the single contribution of each building envelope panel, for
each different configuration, and for each reference point. This has been done by
means of a script. In particular the script used allows parametric subdivision of any
surface type, since it is based on mixtures of patterns and not on exact geometry.
Results for both daylight factor and sun hours are collected into matrices and used as
input to the optimization algorithm. Daylight factors and sun hours metrics are calcu-
lated within the optimization for each response point by summing the contribution of
daylight factor and sun hours from each panel in the configuration. Since lighting
analysis can be computationally expensive, the advantage of calculating lighting per-
formance “off-line” is that it only needs to be carried out once for a given envelope
definition and can be used for multiple optimization studies as long as design parame-
ters that influence the calculations, e.g. interior wall height, do not change.
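Assuming the precompiled influence matrices store, per response point and panel, the contribution of each candidate material, the "off-line" evaluation reduces to look-ups and sums; a sketch with invented array names:

```python
import numpy as np

def evaluate_lighting(config, df_influence, sun_influence):
    """config[j] = material index chosen for panel j.
    df_influence[p, j, m] / sun_influence[p, j, m] = assumed precompiled Radiance
    contribution of panel j with material m to the daylight factor / sun hours at
    response point p; the total per point is the sum over all panels."""
    panels = np.arange(len(config))
    daylight_factor = df_influence[:, panels, config].sum(axis=1)   # one value per point
    sun_hours = sun_influence[:, panels, config].sum(axis=1)
    return daylight_factor, sun_hours

# usage: 5 response points, 496 panels, 4 materials, one random configuration
rng = np.random.default_rng(0)
df_inf = rng.random((5, 496, 4)) * 0.05
sun_inf = rng.random((5, 496, 4)) * 0.5
config = rng.integers(0, 4, size=496)
df, sh = evaluate_lighting(config, df_inf, sun_inf)
```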
3 Results
The MACO method is now applied to two examples: one benchmark and one project-
motivated scenario. In both examples the internal walls separating the internal spaces
do not reach the ceiling, allowing light to interact by passing between the spaces.
3.1 Benchmark
The benchmark explores the known tradeoff between maximizing daylight factor and
minimizing the number of direct sun hours for a small, 48 panel roof. The scenario is
The best archive of designs generated for the benchmark task is shown in Fig. 3.
The extreme points (top left and lower right) of the archive indicate the best design
found for each single objective. While it is often difficult for multicriteria
optimization to find the exact extreme solutions, in this case the extremes, which are
well known to designers, occur when all panels are either opaque or clear glass. Of
more interest to designers are the solutions in between the extremes that tradeoff the
two criteria. A designer can select designs based on the compromise in design criteria
that they are willing to make.
Fig. 3. Best archive of designs generated for the benchmark task; the extreme designs have all panels of a single material. Panel materials: 00 opaque, 01 clear glass, 02 diffusive, 03 low transmission glass
The method is now applied to a larger, project-driven scenario for a media centre in
Paris. The scenario is shown in Fig. 4 and consists of a 12m x 20m x 8m parallelepi-
ped shaped space, which is located at the corner of a larger building. The space de-
fined is split into five internal spaces (gallery wall 1, gallery wall 2, meeting area,
reception, office) all with different lighting performance requirements, e.g. gallery
walls must have no direct sun and low daylight factors. The roof and two walls are
divided into 1m x 1m panels, yielding a total of 496 panels. The roof and wall panels
can be made of four different materials (Table 2), hence 4.2×10^298 possible designs
exist. The other two walls are internal and are not considered. The building is located
in Paris at longitude 2°21’E and latitude 48°51’N.
The optimization model is defined as:
Maximize {daylight factor (P1, P2, P3, P4, P5), view (P4, P5)} (2)
Fig. 4. Paris media centre space and response point (P1-P5) specification
Fig. 5. Resulting performance trade-offs for Media Centre. (3D view given in Fig. 6)
View is calculated as a single criterion, using predefined matrices, that scores the
view from response points four (P4) and five (P5) through the smallest wall, Sc, to a
defined location. This indicates where clear glass panels are required on wall Sc to
view a desired object in the distance. To illustrate the possibility of including a ther-
mal performance criterion within the optimization studies, the average insulation
value, or U-value (Table 2), over all panels is minimized as a design indicator for
minimizing conductive heat loss. Since view is a single criterion, a total of 11 inde-
pendent design criteria are defined.
Initial results are shown in Fig. 5. To aid understanding of the overall tradeoffs in-
volved, the average daylight factor across all response points and the total sun hours
at all response points are plotted versus cost.
4 Discussion
Given design archives from MACO, the most significant challenge for using them in
design is providing an intuitive, simple GUI for exploring the trade-offs and selecting
designs. To achieve this, a design perspective, rather than a data perspective, must be
taken. A prototype GUI developed for this project and written in Matlab is shown in
Fig. 6. The GUI allows designers to select designs by activating individual response
point criteria (1-5) as well as adjusting sliders to the desired value for each design
criterion within the range provided in the archive. A design is then selected by find-
ing the nearest solution to the set values, calculated as the average of normalized
distances, and displayed as a 3D model with all criteria values listed. If multiple equivalent designs exist, the user is notified. The corresponding point in the Pareto
archive is highlighted in the performance view (upper right) using a yellow marker
(Fig. 6). Ranges of individual performance criteria and their corresponding panel
configurations can be studied by deselecting all other criteria. Future improvements to
visualization and interaction include enabling designers to iteratively select perform-
ance criteria and ranges to reduce the solution space and focus in on a desired per-
formance region. Further, means to create customized graphs of the performances,
e.g. daylight factor at response point 2 vs. sun hours at response point 3 vs. cost,
would help to visualize design tradeoffs.
In the results presented, the spread of solutions in design archives is better for day-
light factor as the range is smaller (0-15) than for sun hours (0-100): see Fig. 5 and
Fig. 6. Since the space of potential solutions in the second scenario described is so
vast, improvements to the MACO are necessary to create a robust design tool that can
be used on live building projects. One approach is to improve the Pareto archive fil-
tering technique and how Pareto solutions are selected as new nests from which the
ants start their search (Fig. 1). Convergence tests and MACO parameter studies also
need to be carried out. Further extensions to the calculation of views as well as add-
ing aesthetic models will extend the applicability of the method to architectural design
criteria. Accurate consideration of energy criteria would require full building energy
performance simulation, which can be very computationally intensive and pose diffi-
culties for integration within design processes.
Fig. 6. Prototype building envelope optimization GUI for visualization of Pareto design ar-
chives and design selection
5 Conclusions
A multiobjective ant colony optimization (MACO) method was presented and applied
to optimize paneled building envelopes for individual lighting performance and cost
criteria. Examples of a thermal performance criterion, average U-value, and a view
criterion were also explored. Initial results confirmed a known tradeoff for a bench-
mark example and produced promising results for a project-motivated scenario, which
included 11 independent design criteria defined over five individual spaces and re-
sponse points. A prototype GUI developed to visualize Pareto archives and select
designs enabled designers to explore the issues involved in using such an approach for
live building projects. Further work is required to enable the MACO method to better
handle the large design spaces and large number of independent design criteria in-
volved in the application explored. However, visualization of and interaction with the
high dimensional performance spaces produced by MACO are the most important
factor for creating a beneficial design tool.
Acknowledgements
The authors would like to thank Arfon Davies and Jeff Shaw in the Arup Lighting
Group and Gianni Botsford, Gianni Botsford Architects for their collaboration and
contributions to this work. The MACO method was originally developed by Dr. Onur
Cetin in the Cambridge Engineering Design Centre (EDC) under an IMRC grant
funded by the UK EPSRC.
References
1. Caldas, L. G., L. K. Norford (2002), “A design optimization tool based on a genetic algo-
rithm”, Automation in Construction, 11: 173–184.
2. Caldas, L. G., L. K. Norford (2003), “Genetic Algorithms for Optimization of Building En-
velopes and the Design and Control of HVAC Systems”, ASME Journal of Solar Energy
Engineering, 125: 343-351.
3. Wang, W., H. Rivard, R. Zmeureanu (2005), “An Object-Oriented Framework for Simula-
tion-Based Green Building Design Optimization with Genetic Algorithms”, Advanced En-
gineering Informatics, 19: 5-23.
4. Bouchlaghem, N., (2000), “Optimising the design of building envelopes for thermal per-
formance”, Automation in Construction, 10(1): 101-112.
5. Choudharya, R., A. Malkawib, P.Y. Papalambros (2005), “Analytic target cascading in
simulation-based building design”, Automation in Construction, 14: 551– 568.
6. Dorigo M., G. Di Caro, G., L.M. Gambardella, (1999), “Ant algorithms for discrete optimi-
zation”, Artificial Life, 5(2):137-172.
7. Bonabeau E., M. Dorigo & T. Theraulaz (1999), From Natural to Artificial Swarm Intelli-
gence, New York.
8. Mattson, C. A., Mullur, A. A., and Messac, A., (2004), “Smart Pareto Filter: Obtaining a
Minimal Representation of Multiobjective Design Space,” Engineering Optimization,
36(6): 721–740.
Data Analysis on Complicated Construction Data
Sources: Vision, Research, and Recent Developments
Lucio Soibelman1, Jianfeng Wu1, Carlos Caldas2, Ioannis Brilakis3, and Ken-Yu Lin4
1 Department of Civil and Environmental Engineering, Carnegie Mellon University,
Pittsburgh, PA 15213, USA
{lucio, jianfen1}@andrew.cmu.edu
2 Department of Civil, Architecture and Environmental Engineering,
Abstract. Compared with construction data sources that are usually stored and
analyzed in spreadsheets and single data tables, data sources with more compli-
cated structures, such as text documents, site images, web pages, and project
schedules have been less intensively studied due to additional challenges in data
preparation, representation, and analysis. In this paper, our definition and vision
for advanced data analysis addressing such challenges are presented, together
with related research results from previous work, as well as our recent devel-
opments of data analysis on text-based, image-based, web-based, and network-
based construction sources. It is shown in this paper that particular data prepara-
tion, representation, and analysis operations should be identified, and integrated
with careful problem investigations and scientific validation measures in order
to provide general frameworks in support of information search and knowledge
discovery from such information-abundant data sources.
1 Introduction
With the ever-increasing application of new information technologies in the construction industry, we have seen an explosive increase of data in both volume and complexity. Some research efforts have been focused on applying data mining tools on
construction data to discover novel and useful knowledge, in support of decision mak-
ing by project managers. However, a majority of such efforts were focused on analyz-
ing electronic databases, such as productivity records and cost estimates available in
spreadsheets or single data tables. Other construction data sources, such as text docu-
ments, digital images, web pages, and project schedules were less intensively studied
due to their complicated data structures, even though they also carry important and
abundant information from current and previous projects.
Such a situation may thwart further efforts to improve information search or the learning of lessons in support of construction decision making. For example, a productivity
database could help project planners obtain an approximate range of productivity val-
ues for a specific activity, say, building a concrete wall based on historical records.
However, other contextual information influencing the individual values of previous
activities may not be available in the database for various reasons: specifications for
different walls might be available only in text formats; correlations between wall-
building activities and other preceding/succeeding/parallel jobs are stored in sched-
ules; and pictures verifying such correlations, e.g., space conflicts, could be buried in hundreds, or even thousands, of digital images taken in previous projects. This demon-
strates the need for the development of advanced tools for effective and efficient in-
formation search and knowledge discovery from these texts, schedules, and images, so
that construction practitioners could make better informed decisions on more compre-
hensive data sources.
In this paper, our vision for data analysis on complicated construction data sources
is first introduced, including its definition, importance, and related issues. Previous
research efforts relevant to data analysis on various construction data sources are re-
viewed. Our recent work in graphical analysis on network-based project schedules is
also shown, and initial results from this ongoing research are demonstrated for further
discussions.
due to difficulties in extracting relevant files for various purposes in a timely manner, while much information-intensive historical data may be left untouched after projects are finished, without being further analyzed to improve practices in future projects.
Three major challenges could be attributed to such a relative scarcity of related research. First, in addition to data preparation operations for general KDD processes
[3], special attention should be taken to filtering out incorrect or irrelevant data from
available construction data sources, while ensuring integrity and completeness of
original information and further integrating it with other contextual information from
other data sources if necessary. Second, extra efforts are needed to reorganize project
data, originally recorded in application-oriented formats, into analysis-friendly repre-
sentations that support efficient and effective KDD implementation. Third, data
analysis techniques, which were initially developed to identify patterns and trends
from single tables or spreadsheets, should be adapted in order to learn useful knowl-
edge from text-based, image-based, web-based, network-based, and other construc-
tion data sources.
In our vision, a complete framework should be designed, developed, and validated
for data analysis on particular types of complicated construction data sources. Three
essential parts should be included in such a framework. First, careful problem investi-
gations should be done in order to understand data analysis issues and related work in
previous research. Second, detailed guidelines for special data preparation, represen-
tation, and analysis operations are also needed to ensure generality and applicability
of the developed framework. Finally, scientific criteria and techniques should
be employed to evaluate validity of data analysis results and feasibility of extending
such a framework to same or similar types of construction data sources.
where multiple words share the same meaning, where words have multiple meanings, and where relevant documents do not contain the user-defined search terms [4].
Recent research addressed some issues related to text data management. Informa-
tion systems and algorithms were designed to improve document management [5,6,7];
controlled vocabularies (thesauri) were used to integrate heterogeneous data represen-
tations including text documents [8]; various data analysis tools were also applied on
text data to create thesauri, extract hierarchical concepts, and group similar files for
reusing past design information and construction knowledge [9,10,11]. In one of the
previous studies conducted by our research group, a framework was devised to ex-
plore the linguistic features of text documents in order to automatically classify, rank,
and associate them with objects in project models [4,12]. This framework involved
several essential steps as discussed in our vision for data analysis on complicated con-
struction data sources:
− Special Data Preparation operations for text documents were identified, such
as transferring text-based information into flat text files from their original for-
mats, including word processors, spreadsheets, emails, and PDF files; removing
irrelevant tags and punctuations in original documents; removing stop words
that occur too frequently to carry useful information for text analysis, such as articles, conjunctions, pronouns, and prepositions; and finally, performing word
stemming to get rid of prefixes and/or suffixes and group words that have the
same conceptual meanings.
− The preprocessed text data were transformed into a specific Data Representation using a weighted frequency matrix A = {aij}, where aij was defined as the weight of a word i in document j. Various weighting functions were investigated based on two empirical observations regarding text documents: 1) the more frequent a word is in a document, the more relevant it is to the topic of the document; and 2) the more frequent a word is throughout all documents in the collection, the more poorly it differentiates between documents. By selecting and applying appropriate weighting functions, project documents were represented as vectors in a multi-dimensional space. Query vectors could then be constructed to identify similar documents based on similarity measures such as Euclidean distance and the cosine between vectors (a small code sketch of this weighting and similarity computation follows the list).
− Data Analysis tools were applied to build document classifiers, as well as docu-
ment retrieval, ranking, and association mechanisms. The proposed methods
require the definition of classes in the project model. These classes are usually
represented as items of a construction information classification system (CICS)
or components of a work breakdown structure (WBS). Classification models are
created for each of these classes and project documents are automatically classi-
fied. Project document classification creates a higher semantic level that facili-
tates the identification of documents that are associated to selected objects. This
is one of the major characteristics of the proposed methodology and represents a
major distinction from existing project document management approaches. In
order to retrieve documents that are related to selected objects, data that charac-
terize the object is extracted from the model and used by the retrieval and rank-
ing mechanisms. In the retrieval and ranking phase, all project documents are
analyzed, but a higher weight is given to documents belonging to the object’s
class. In the Association step, documents are linked to the model objects.
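A minimal sketch of this representation, using a standard tf-idf-style weighting and cosine similarity (the exact weighting functions investigated in the original work are not reproduced here):

```python
import math
from collections import Counter

def build_vectors(documents):
    """documents: list of token lists (already preprocessed and stemmed).
    Returns tf-idf weighted sparse vectors, one dict per document."""
    n = len(documents)
    df = Counter(word for doc in documents for word in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# usage: rank documents against a query vector built over the same vocabulary
docs = [["concrete", "wall", "pour"], ["steel", "beam", "erection"], ["wall", "formwork"]]
vecs = build_vectors(docs)
query = {"wall": 1.0, "concrete": 1.0}
print(sorted(range(len(vecs)), key=lambda i: -cosine(query, vecs[i])))
```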
The proposed methodology uses both the object’s description terms and the ob-
ject’s class as input for the data analysis process. In methods based just on text search,
only documents that contain the search terms can be located. Documents that use
synonyms would not be found. If only the object’s class is considered, some docu-
ments that belong to the object’s class but are not related to the object would be mis-
takenly retrieved. By analyzing all model classes, but giving a higher weight to the
object’s class, related documents in other classes can also be identified.
A prototype system, Unstructured Data Integration System (UDIS), was developed
to classify, retrieve, rank, and associate documents according to their relevance to
project model objects [12]. Figure 1 illustrates UDIS.
The validation of the proposed methodology was based on the analysis of more
than 25 construction databases and 30,000 electronic documents. The following two measures were used for comparison: precision, the percentage of the retrieved documents that were related to the selected model object, and recall, the percentage of the related documents that were actually retrieved. Figure 2 shows how recall and precision are defined in information retrieval problems. The UDIS prototype achieved better performance in both re-
call and precision than typical project contract management systems, project websites,
and commercial information retrieval systems [12].
Some of the major advantages of this framework include: 1) it did not involve
manual assignments of metadata to any text documents; 2) it did not require a con-
trolled vocabulary that would only be effective if it is adopted by all users of an in-
formation system; and 3) it could provide automated mechanisms to map text docu-
ments to project objects using their internal characteristics instead of user-defined
search terms.
3.2 Text Mining for Ontology-Based Online A/E/C Product Information Search
structure. The employment of machine learning methods on training data sets to generate a classification system represented by hierarchical clusters of grouped project documents [4] is one notable example of domain knowledge utilization.
Other applications of domain knowledge represented in the form of taxonomy have
also been reported [14,15,16].
Our research in [17] used a thesaurus as a more elaborate form of ontology to help organize domain concepts related to a given A/E/C product. This application demonstrated how domain knowledge in the form of a thesaurus was utilized to support data preparation, representation, and analysis to identify highly relevant online product documents for a given product domain. The main steps, as defined by our vision for analyzing complicated construction data sources, are described below [17]; a small sketch of the thesaurus-based query expansion follows the list.
− Data Preparation operations were conducted automatically at two levels. Level
one leveraged the represented thesaurus to identify a high quality pool of possi-
bly relevant online documents. This was achieved by first expanding a single
query into multiple queries (i.e., the growing strategy) and then pooling (i.e., the
trimming strategy) the top-k documents returned by a general search engine
after the expanded queries were entered as inputs. Level two performed word
operations using the linguistic techniques such as removing stop words and
stemming, similar to that introduced in Section 3.1.
− The prepared textual data were converted into document vectors for Data Representation purposes. The document vectors were assumed to exist in a multi-dimensional space whose dimensions were defined by the concepts derived from the product thesaurus. The values of the vectors were defined by a weighted term-frequency matrix A = {aij}, where aij was the weight of a thesaurus term i in document j.
− The Data Analysis was achieved in two stages. In the first stage, each document vector was evaluated based on each expanded query, considering the thesaurus terms as well as the Boolean connectors residing within the query. In the second stage, a synthesis evaluation was calculated for a given document vector, considering all the possibly expanded queries with the use of the domain thesaurus.
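The growing and trimming strategies can be sketched as follows; the thesaurus structure is invented, and search(query, k) stands for any general search engine returning a ranked document list:

```python
def expand_queries(query_terms, thesaurus):
    """Growing strategy: replace each term by each of its related thesaurus concepts,
    producing one expanded query per combination of substitutions (kept simple here)."""
    expanded = [[]]
    for term in query_terms:
        variants = [term] + thesaurus.get(term, [])
        expanded = [q + [v] for q in expanded for v in variants]
    return [" ".join(q) for q in expanded]

def pooled_candidates(query_terms, thesaurus, search, k=20):
    """Trimming strategy: pool the top-k results of every expanded query, removing
    duplicates, to obtain a manageable candidate document set."""
    pool, seen = [], set()
    for q in expand_queries(query_terms, thesaurus):
        for doc in search(q, k):
            if doc not in seen:
                seen.add(doc)
                pool.append(doc)
    return pool

# usage with a toy thesaurus; "lift" and "elevator" are treated as the same concept
thesaurus = {"elevator": ["lift"], "fire-rated": ["fire resistant"]}
fake_search = lambda q, k: [f"doc-for({q})"]          # placeholder for a real engine
print(pooled_candidates(["elevator", "door"], thesaurus, fake_search))
```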
A semi-automated thesaurus generation method was developed in this research in
order to supply domain knowledge for retrieving relevant A/E/C online product in-
formation. With the aid of domain knowledge in the generated thesaurus, the enor-
mous Web space was successfully reduced into a manageable set of candidate online
documents during data preparation. The use of thesauri in data representation also
helped solve the language ambiguity problems so that synonyms like “lift” and “ele-
vator” were treated indifferently. In this research for the purpose of validation, five
data sets originating from different A/E/C product domains were processed by a de-
veloped prototype to return a list of search results that contained highly relevant
online product information. To assess the true relevance of the returned search results,
each web document was examined and labeled manually as ‘relevant’ or ‘non-
relevant’. Then retrieval performance was evaluated accordingly. The performance
evaluation focused particularly on the number of distinct product manufacturers iden-
tified because the goal of this research is to help A/E/C industry practitioners obtain
more product information from the available marketplace. Hence for each data set, the
number of distinct product manufacturers derived from 1) the search results generated
from the established prototype; 2) the searching results returned by Google; and 3) an
existing catalog compiled by SWEETS were compared as displayed in Table 1. It was
observed that in most cases, the developed research prototype consistently retrieved more distinct product manufacturers than the other two approaches.
Table 1. Prototype performance comparisons for each of the five data sets [17]
Compared with approaches that enforce standardized data representation, the de-
veloped framework in [17] has more flexibility in dealing with text-based and hetero-
geneous A/E/C product specifications on the Internet. The automatic query expansion process in the framework also relieved searchers of the burden of structuring complex queries using highly related terms. Therefore, an industry practitioner can take advantage of
the developed A/E/C domain-specific search engine to survey the virtual product
market for making more informed decisions.
Pictures are valuable sources of accurate and compact project information [18]. It is
becoming a common practice for jobsite images to be gathered periodically, stored in
central databases, and utilized in project management tasks. However, the volume of
images stored in an average project is increasing rapidly, making it difficult to manu-
ally browse such images for their utilization. Moreover, the labeling systems used by
digital equipment are tweaked to provide distinct labels for all images acquired to
avoid overlaps, but they do not record any information regarding the visual content.
Consequently, images stored in central databases cannot be queried using conventional data and text search methods, and thus retrieval of jobsite images has so far depended on manual labeling and indexing by engineers.
Research efforts have been developed to address the image retrieval concerns de-
scribed above. A prototype system, for example, was developed by Abudayyeh [19]
based on a MS Access database that allowed engineers to manually link construction
multimedia data including images with other items in an integrated database. The use
of thesauri was also proposed by Kosovak et al. [8] to assist image indexing by ena-
bling users to label images with specific standards. However, both solutions are diffi-
cult to use in practice due to the large number of manual operations that is needed to
classify images in meaningful ways. Also, besides manual operations that are still
involved in both solutions, the issue of how to index images in different ways was not
addressed. This issue is important since it is not unusual that an image could be used
for multiple purposes, even if it had been taken for just one use. For example, an en-
gineer may have taken a picture of protective measures to prove their existence, but
the same picture might contain other useful content needed in the future, such as ma-
terials or structural components included in its background [18].
A novel construction site image retrieval methodology based on material recogni-
tion was developed to address these issues [18,20]. A similar framework was devel-
oped based on our vision for data analysis on complicated construction data sources:
− Data Preparation: images from construction site were first preprocessed by ex-
tracting basic graphical features, such as intensity, color histograms, and texture
presentations using various filtering methods. Features unrelated to image con-
tent such as intensity of lights and shades were then removed automatically.
− Data Representation: preprocessed images were first divided into separate re-
gions representing different construction objects using appropriate clustering
methods; graphical features of each region were further compressed into quanti-
fiable, compact, and accurate descriptors, or signatures. As a result, a binary im-
age was transformed into several clusters, each with a list of values for a given
set of feature signatures.
− Data Analysis: the feature signatures of each image region were compared with
signatures in an image collection of material samples. Using proper threshold
and/or distance functions, the actual material of each region was recognized. An
example is given in Figure 3 to show how a site image is de-
composed into clusters, represented with signatures, and identified with actual
materials in each separated region; a simplified sketch of this matching step follows the figure.
Fig. 3. An example for image data preparation, representation, and analysis [20]
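A minimal sketch of the matching step in the data analysis phase, assuming a Euclidean distance between region signatures and a small library of material sample signatures with a fixed acceptance threshold; the feature values and the threshold are illustrative only.

```python
import math

# Hypothetical material sample signatures: compact feature vectors
# (e.g., mean color components and a texture statistic) per known material.
MATERIAL_SIGNATURES = {
    "concrete": [0.55, 0.55, 0.55, 0.20],
    "brick":    [0.60, 0.30, 0.25, 0.45],
    "steel":    [0.40, 0.42, 0.45, 0.10],
}
THRESHOLD = 0.25  # maximum distance for a region to be assigned a material

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_region(signature):
    """Return the closest material within the threshold, or None if no match."""
    best, best_d = None, float("inf")
    for material, ref in MATERIAL_SIGNATURES.items():
        d = euclidean(signature, ref)
        if d < best_d:
            best, best_d = material, d
    return best if best_d <= THRESHOLD else None

# One clustered site image, represented as a list of region signatures.
regions = [[0.58, 0.32, 0.27, 0.43], [0.41, 0.43, 0.44, 0.12], [0.9, 0.1, 0.1, 0.9]]
print([recognize_region(r) for r in regions])  # ['brick', 'steel', None]
```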
The material information was then combined with other spatial, temporal, and de-
sign data to enable automated, fast, and accurate retrieval of relevant site images for
various management tasks in a developed system as illustrated below (Figure 4).
As shown in Figure 4, in addition to image attributes (e.g., materials) that can be
automatically extracted using the above method, time data can be obtained from digi-
tal cameras when images are taken on construction sites; 2D/3D locations can be ob-
tained from positioning technologies; and as-planned and as-built information for
construction products can be retrieved from a model-based system or a construction
database. By combining all such information together, images in the database can be
compared with the query’s criteria in order to filter out irrelevant images and rank the
selected images according to their relevance. The results are then displayed on the
screen, and the user could easily browse through a very limited number of sorted im-
ages to identify and choose the desired ones for specific tasks. Figure 4 provides a
detailed example of retrieving images for a brick wall using the developed prototype.
This method was tested on a collection of more than a thousand images from several
projects and evaluated with criteria similar to the recall and precision measures in Section 3.1.
The results showed that images can be successfully classified according to the con-
struction materials visible within the image content [20].
Fig. 4. Image retrieval based on image contents and other attributes [20]
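A simplified sketch of this filter-then-rank retrieval, using hypothetical image records that combine the automatically extracted material labels with capture time and location; the distance cut-off and scoring are illustrative.

```python
from datetime import datetime

images = [
    {"id": 1, "materials": {"brick", "mortar"}, "time": datetime(2006, 3, 14), "location": (12.0, 3.5)},
    {"id": 2, "materials": {"steel"},           "time": datetime(2006, 3, 20), "location": (40.0, 8.0)},
    {"id": 3, "materials": {"brick"},           "time": datetime(2006, 4, 2),  "location": (11.8, 4.0)},
]

def retrieve(images, material, period, near, max_dist=10.0):
    """Filter out irrelevant images, then rank the rest by closeness to the query location."""
    start, end = period
    candidates = [
        img for img in images
        if material in img["materials"] and start <= img["time"] <= end
    ]
    def dist(img):
        dx = img["location"][0] - near[0]
        dy = img["location"][1] - near[1]
        return (dx * dx + dy * dy) ** 0.5
    return sorted((img for img in candidates if dist(img) <= max_dist), key=dist)

hits = retrieve(images, "brick", (datetime(2006, 3, 1), datetime(2006, 4, 30)), near=(12.0, 4.0))
print([img["id"] for img in hits])  # [3, 1]: both brick-wall images, nearest first
```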
The original contribution of this research work on construction site image indexing
and retrieval could be extended to solve other related problems. For example, our
research group is currently investigating a new application of image processing, pat-
tern recognition, and computer vision technologies for fully automated detection and
classification of defects found in acquired images and video clips of underground
pipelines.
A major motivation of our new research work resides in the rapid aging of pipeline
infrastructure systems. It is desirable to develop and use more effective methods for
assessing the condition of pipelines, evaluating the level of deterioration, and deter-
mining the type and probability of defects to facilitate decision making for mainte-
nance/repair/rehabilitation strategies that will be least disruptive, most cost-effective
and safest [21]. Currently, trained human operators are required to examine acquired
visual data and identify the detailed defect class/subclass according to the standard
pipeline condition grading system, such as Pipeline Assessment and Certification
Program (PACP) [22], which makes the process error-prone and labor-intensive.
Building on our previous research efforts in image analysis [18] and former research
efforts in defect detection and classification of pipe defects [23,24,25], we are study-
ing more advanced data preparation, representation, and analysis solutions to boost the
performance of automated defect detection and classification.
Since computer software for Critical Path Method, or CPM-based scheduling started
to be used in the construction industry decades ago, planning and schedule control
history of previous projects has been increasingly available in computerized systems
in many companies. However, there is a current lack of appropriate data analysis tools
for scheduling networks with thousands of activities and complicated dependencies
among them. As a result, schedule historical data are left intact without being further
analyzed to improve future practices. At the same time, companies are still relying on
human planners to make critical decisions in various scheduling tasks manually and in
a case-by-case manner. Such practices, however, are human-dependent and error-
prone in many cases due to subjective, implicit, and incomplete scheduling knowl-
edge that planners have to use for decision making within a limited timeframe.
Many research efforts have been focused on improving current scheduling prac-
tices. Delay analysis tools were applied to identify fluctuations in project implementa-
tion, responsible parties for the delays, and predict possible consequences [26,27];
machine learning tools like genetic algorithms (GA) were applied to explore optimal
or near-optimal solutions for resource leveling and/or time-cost tradeoff problems, by
effectively searching only a small part of the large sample space of construction
alternatives [28,29,30]; simulation and visualization methods were developed on do-
main-specific process models to identify and prevent potential problems during im-
plementation [31,32]; and knowledge-based systems were also introduced to collect,
process, and represent knowledge from experts and other sources for automated gen-
eration and reviewing of project schedules [33,34]. Such research efforts helped im-
prove efficiency and quality of scheduling work in many aspects, but none of them
addressed the above “data rich but knowledge scarce” issue, i.e., no research de-
scribed here has worked on learning explicit and objective knowledge from abundant
planning and schedule control history of previous projects.
Our research is intended to address this issue related to scheduling knowledge dis-
covery from historical network-based schedules, so that more explicit and objective
patterns could be identified from previous schedules in support of project planners’
decision making. A case study was implemented on a project control database to iden-
tify special problems in preparing, representing, and analyzing project schedules for
this purpose.
The project control database for this case study was collected from a large capital
facility project. Three data tables were used in the scheduling analysis: one data table
with detailed information about individual activities; one with a complete list of
predecessors and successors of dependencies among the activities; and a third table
with historical records about activities that were not completed as planned during im-
plementation, and reasons for their non-completions.
In this step, we first separated nearly 21,000 activities into about 2,600 networks
based on their connectivity. Another special data preparation task identified was the
removal of redundant dependencies in networks [35]. As shown in Figure 5, a de-
pendency from activity X to Y is redundant if there is another activity Z such that Z
succeeds X and Y succeeds Z, directly or indirectly. Obviously, the
dependency from X to Y is unnecessary because Y cannot be started immediately
after X anyway.
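The redundancy test can be sketched as a simple reachability check on the dependency graph, in the spirit of a transitive reduction; this is an illustrative filter, not necessarily the exact procedure applied to the case-study database.

```python
def reachable(graph, start, target):
    """Depth-first search: is target reachable from start via one or more dependencies?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

def remove_redundant(graph):
    """Drop edge X->Y when some other successor Z of X already reaches Y."""
    reduced = {}
    for x, succs in graph.items():
        kept = []
        for y in succs:
            redundant = any(reachable(graph, z, y) for z in succs if z != y)
            if not redundant:
                kept.append(y)
        reduced[x] = kept
    return reduced

# X precedes both Z and Y, and Z precedes Y, so the dependency X->Y is redundant.
schedule = {"X": ["Z", "Y"], "Z": ["Y"], "Y": []}
print(remove_redundant(schedule))  # {'X': ['Z'], 'Z': ['Y'], 'Y': []}
```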
Type descriptions for all 2,600 networks in this study were generated using a com-
puter program developed on a Matlab® 7.0 platform. By comparing type descriptions
of networks with their probabilities of non-conformances, i.e., having non-completion
tasks during implementation, some interesting patterns were found. For example, in
this project, the non-conformance rate of networks that started with a single task (type
description: “1→n→…”) was significantly lower than that of networks that started with
multiple tasks (type description: “n→1→…”). This pattern is simple in its statement,
but it could be hard to discover if the complicated network structures had not been
abstracted into precise and concise descriptions. It also makes sense considering that a
schedule that starts with multiple parallel tasks could be more risky
due to a lack of sufficient preparation by the managerial team at an early stage. Also,
comparisons between non-completion rates of activities at different positions turned
out some meaningful results. The non-completion rate of tasks connecting sequential
or parallel sub-networks, i.e., those with more than one preceding and/or succeeding
tasks, was significantly larger than that of tasks within ‘atomic’ sequences, i.e., those
with at most one preceding and one succeeding activity. This also verified our previous hy-
pothesis, made during data representation, that how a network is split into branches or
converged from branches could be a good indicator of reliability during implementation
for specific networks and activities.
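The grouping behind this comparison can be sketched as follows, using a deliberately crude stand-in for the type descriptions (simply whether a network has one or several starting activities); the toy networks and the non-conformance flags are illustrative.

```python
def starts_with_single_task(network):
    """A network 'starts with a single task' if exactly one activity has no predecessor."""
    activities = set(network) | {succ for succs in network.values() for succ in succs}
    has_pred = {succ for succs in network.values() for succ in succs}
    return len(activities - has_pred) == 1

def nonconformance_rate(networks, nonconforming_ids, predicate):
    """Fraction of networks satisfying predicate that contained non-completed tasks."""
    group = [nid for nid, net in networks.items() if predicate(net)]
    if not group:
        return 0.0
    return sum(1 for nid in group if nid in nonconforming_ids) / len(group)

# Toy data: networks as adjacency dicts, plus the set of networks with non-completions.
networks = {
    "N1": {"a": ["b"], "b": ["c"], "c": []},                   # single starting task
    "N2": {"a": ["c"], "b": ["c"], "c": []},                   # two starting tasks
    "N3": {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []},  # single starting task
}
nonconforming = {"N2"}

single = nonconformance_rate(networks, nonconforming, starts_with_single_task)
multi = nonconformance_rate(networks, nonconforming, lambda n: not starts_with_single_task(n))
print(f"single-start rate = {single:.2f}, multi-start rate = {multi:.2f}")
```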
These initial results also demonstrated the potential of extending our current work
in many directions: 1) the recursive division method makes it possible to decompose
large and complicated scheduling networks into multiple levels of components for
further representation of project planning and schedule control history; 2) the abstract
descriptions could be viewed as extracted graphical features of scheduling networks,
which could be integrated with other features of activities and dependencies for iden-
tifying other general and useful patterns in scheduling data; and 3) the complete proc-
ess of scheduling data preparation, representation, and analysis provided primary
insights and experiences for the following development of this research on a larger
scale.
with improved access to historical data and/or knowledge; such research efforts may
also be combined with current data modeling and information management research
to integrate construction data sources with various formats and broad contexts for data
retrieval, queries, and analysis; and it is expected that new challenges and pattern rec-
ognition tasks could be identified and inspire innovative research methods in data
representation and data mining communities as well.
Acknowledgements
The authors would like to thank the US National Science Foundation for its support under
Grants No. 0201299 (CAREER) and No. 0093841, and all industrial collaborators
who provided the data sources for our previous and current studies.
References
1. Buchheit, R.B., Garret J.H. Jr., Lee, S.R. and Brahme, R.: A Knowledge Discovery Case
Study for the Intelligent Workplace. Proc. of the 8th Int. Conf. on Comp. in Civil and
Building Eng., Stanford, CA (2000), 914-921
2. Soibelman, L. and Kim, H.: Data Preparation Process for Construction Knowledge Gen-
eration through Knowledge Discovery in Databases. J. of Comp. in Civil Eng., ASCE,
1(2002), 39-48.
3. Soibelman, L, Kim, H., and Wu, J.: Knowledge Discovery for Project Delay Analysis.
Bauingenieur, Springer VDI Verlag, February 2005
4. Caldas, H.C., Soibelman, L., and Han, J.: Automated Classification of Construction Pro-
ject Documents. J. of Comp. in Civil Eng., ASCE, 4(2002), 234-243
5. Hajjar, D. and AbouRizk, M.S.: Integrating Document Management with Project and
Company Data. J. of Comp. in Civil Eng., ASCE, 1(2000), 70-77.
6. Zhu, Y., Issa, R.R.A., and Cox, R.F.: Web-Based Construction Document Processing via
Malleable Frame. J. of Comp. in Civil Eng., ASCE, 3(2001), 157-169.
7. Fruchter, R., and Reiner, K.: Model-Centered World Wide Web Coach. Proc. of 3rd Con-
gress on Comp. in Civil Engineering, ASCE, Anaheim, CA (1996), 1-7.
8. Kosovac, B., Froese, T.M. and Vanier, D.J.: Integrating Heterogeneous Data Representa-
tions in Model-Based AEC/FM Systems. Proc. of CIT 2000, Reykjavik, Iceland (2000),
556-567
9. Yang, M.C., Wood, W.H., and Cutkosky, M.R.: Data Mining for Thesaurus Generation in
Informal Design Information Retrieval. Proc. of Int. Congress on Comp. in Civil Eng.,
ASCE, Boston, MA (1998), 189-200
10. Scherer, R.J. and Reul, S.: Retrieval of Project Knowledge from Heterogeneous AEC
Documents. Proc. of the 8th Int. Conf. on Comp. in Civil and Building Eng., Stanford, CA
(2000), 812-819
11. Wood, W.H.: The Development of Modes in Textual Design Data. Proc. of the 8th Int.
Conf. on Comp. in Civil and Building Eng., Stanford, CA (2000), 882-889
12. Caldas, H.C., Soibelman, L., and Gasser, L.: Methodology for the Integration of Project
Documents in Model-Based Information Systems. J. of Comp. in Civil Eng., ASCE,
1(2005), 25-33
13. Dewan, R., Freimer, M., and Seidmann, A.: Portal Kombat: The Battle between Web Pages
to become the Point of Entry to the World Wide Web. Proc. of the 32nd Hawaii Intl. Conf.
System Science, IEEE, Los Alamitos, CA, USA (1999)
14. El-Diraby, T.A., Lima, C., and Feis, S.: Domain Taxonomy for Construction Concepts:
Toward a Formal Ontology for Construction Knowledge. J. of Comp. in Civil Eng.,
ASCE, 4 (2005), 394–406.
15. Lima, C., Stephens, J., and Böhms, M.: The bcXML: Supporting Ecommerce and Knowl-
edge Management in the Construction Industry. ITCON, (2003), 293–308.
16. Staub-French, S., Fischer, M., Kunz, J., Paulson, B., and Ishii, K.: An Ontology for Relat-
ing Features of Building Product Models with Construction Activities to Support Cost Es-
timating. CIFE Working Paper #70, July 2002, Stanford University.
17. Lin, K. and Soibelman, L.: Promoting Transactions for A/E/C Product Information. Auto-
mation in Construction, Elsevier Science, in print
18. Brilakis, I. and Soibelman, L.: Material-Based Construction Site Image Retrieval. J. of
Comp. in Civil Eng., ASCE, 4(2005), 341-355
19. Abudayyeh, O.: Audio/Visual Information in Construction Project Control. J. of Adv. Eng.
Software, Elsevier Science, 2(1997), 97-101
20. Brilakis, I. and Soibelman, L.: Multi-Modal Image Retrieval from Construction Databases
and Model-Based Systems, J. of Constr. Eng. and Mgmt., ASCE, July 2006, in print
21. Water Environment Federation (WEF): Existing Sewer Evaluation and Rehabilitation.
ASCE Manuals and References on Eng. Practices, Ref. No. 62, Alexandria, VA, 1994.
22. Pipeline Assessment and Certification Program (PACP) Manual, Second Edition Refer-
ence Manual, NASSCO, 2001
23. Chae, M.J. and Abraham, D.M.: Neuro-Fuzzy Approaches for Sanitary Sewer Pipeline
Condition Assessment. J. Comp. in Civil. Eng, ASCE, 1(2001), 4–14
24. Sinha, S.K. and Fieguth, P.W.: Automated Detection of Cracks in Buried Concrete Pipe
Images. Automation in Construction, Elsevier Science, 1(2005), 58-72
25. Sinha, S.K. & Fieguth, P.W.: Neuro-Fuzzy Network for the Classification of Buried Pipe
Defects. Automation in Construction, Elsevier Science, 1(2005), 73-83
26. Hegazy, T. and Zhang, K.: Daily Windows Delay Analysis. J. of Constr. Eng. and Mgmt.,
ASCE, 5(2005), 505-512
27. Shi, J.J., Cheung, S.O. and Arditi, D.: Construction Delay Computation Method. J. of
Comp. in Civil Eng., ASCE, 1(2001), 60-65
28. Hegazy, T. and Kassab, M.: Resource Optimization Using Combined Simulation and Ge-
netic Algorithms. J. of Constr. Eng. and Mgmt., ASCE, 6(2003), 698-705
29. Feng, C., Liu, L. and Burns, S.A.: Using Genetic Algorithms to Solve Construction Time-
Cost Trade-Off Problems. J. of Comp. in Civil Eng., ASCE, 3(1997), 184-189
30. El-Rayes, K. and Kandil, A.: Time-Cost-Quality Trade-off Analysis for Highway Con-
struction. J. of Constr. Eng. and Mgmt., ASCE, 4(2005), 477-486
31. Akinci, B., Fischer, M. and Kunz, J.: Automated Generation of Work Spaces Required by
Construction Activities. J. of Constr. Eng. and Mgmt., ASCE, 4(2002), 306-315
32. Kamat, V.R. and Martinez, J.C.: Large-Scale Dynamic Terrain in Three-Dimensional
Construction Process Visualizations. J. of Comp. in Civil Eng., ASCE, 2(2005), 160-171
33. Dzeng, R. and Tommelein, I.D.: Boiler Erection Scheduling Using Product Models and
Case-Based Reasoning. J. of Constr. Eng. and Mgmt., ASCE, 3(1997), 338-347
34. Dzeng, R. and Lee, H.: Critiquing Contractors’ Scheduling by Integrating Rule-Based and
Case-Based Reasoning. Automation in Construction, Elsevier Science, 5(2004), 665-678
35. Kolisch R., Sprecher A. and Drexl A.: Characterization and Generation of a General Class
of Resource-Constrained Project Scheduling Problems. Mgmt. Science, (1995), 1693-1703
Constructing Design Representations
Abstract. Supporting the early phases of design requires, among others, support
for the specification and use of multiple and evolving representations, and for
the exchange of information between these representations. We consider a
complex adaptive system as a model for the development of design representa-
tions, and present a semi-constructive algebraic formalism for design represen-
tations, termed sorts, as a candidate for supporting this approach. We analyze
sorts with respect to the requirements of a complex adaptive system and com-
pare it to other representational formalisms that consider a constructive ap-
proach to representations.
1 Introduction
Design is a multi-disciplinary process, involving participants, knowledge and infor-
mation from various domains. As such, design problems require a multiplicity of
viewpoints each distinguished by particular interests and emphases, and each of these
views, in turn, requires a different representation of the design entity. Even within the
same task and for the same person, various representations may serve different pur-
poses defined within the problem context and the selected approach. Especially in the
early phases of design, the exploratory and dynamic nature of the design process
invites a variety of approaches and representations, and any particular representation
may be as much an outcome of as a means to the design process. Therefore, support-
ing the designer in these early phases requires, among others, support for the specifi-
cation and use of multiple and evolving representations, and for the exchange of in-
formation between these representations.
Various modeling schemes for defining product models and ontologies exist (e.g.,
[1], [8]). These allow for the development of representations in support of different
disciplines or methodologies, and enable information exchange between representa-
tions and collaboration across disciplines. However, they still require an a priori effort
at establishing an agreement on concepts and relationships, which offer a complete
and uniform description of the project data, mainly independent of any project specif-
ics. We are particularly interested in providing the user access to the specification of a
design representation, and the means to create and adapt design representations ac-
cording to the designer’s intentions in the task at hand, in support of creativity. Crea-
tivity, as an activity in the design process, relies on a restructuring of information that
is not yet captured in a current information structure — that is, emergent information
— for example, when the design provides new insights that lead to a new interpreta-
tion of constituent design entities.
¹ Note that this syntax differs slightly from the syntax adopted by Baader et al. [3], which, for
example, differentiates the intersection constructor on concepts from the operation of inter-
section on interpretations. Interpretations do not play a role in this example.
Behaviors also apply to composite sorts, that is, a part relationship can be defined
for its component data elements belonging to a composite sort defined under a con-
junction (attribute operator) or disjunction. The composite sort inherits its behavior
from its components in a manner that depends on the compositional relationship.
The disjunctive operator distinguishes all operand sorts such that each data element
belongs explicitly to one of these sorts. Consequently, a data element is part of a dis-
junctive data collection if it is a part of the partial data collection of elements from the
same component sort. In other words, data collections from different component
sorts, under the disjunctive operator, never interact; the resulting data collection is the
set of collections from all component sorts. When the operation of addition, subtrac-
tion or product is applied to two data collections of the same disjunctive sort, the
operation instead applies to the respective component collections.
Under the attribute operator a data element is part of a data collection if it is a part
of the data elements of the first component sort, and if it has an attribute collection
that is a part of the respective attribute collection(s) of the data element(s) of the first
component sort it is a part of. When data collections of the same composite sort (un-
der the attribute operator) are pairwise summed (differenced or intersected), identical
data elements merge, and their attribute collections combine, under this operation.
Elements with empty attributes are removed.
When reorganizing the composition of sorts under the attribute operator, the corre-
sponding behavior may be altered in such a way as to trigger data loss. Consider a
behavior for weights [12] (e.g., line thickness or surface tones) as becomes apparent
from drawings on paper — a single line drawn multiple times, each time with a dif-
ferent thickness, appears as if it were drawn once with the largest thickness, even
though the same line was also drawn with other thicknesses. When using numeric values to
represent weights, the part relation on weights corresponds to the less-than-or-equal
relation on numeric values. Thus, weights can combine into a single weight, which
has as its value the least upper bound of all the respective weight values, i.e., their
maximum value. Similarly, the common value (intersection) of a collection of weights
is the greatest lower bound of all the individual weights, i.e., their minimum value.
The result of subtracting one weight from another is either a weight that equals the
numeric difference of their values or zero (i.e., no weight), and this depends on their
relative values.
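A minimal sketch of this behavioral specification for numeric weights, with the part relation and the operations of sum, product and difference as described above.

```python
class Weight:
    """Numeric weight whose part relation is less-than-or-equal on values."""
    def __init__(self, value):
        self.value = float(value)

    def part_of(self, other):            # part relation: w1 <= w2
        return self.value <= other.value

    def sum(self, other):                # least upper bound: the maximum value
        return Weight(max(self.value, other.value))

    def product(self, other):            # greatest lower bound: the minimum value
        return Weight(min(self.value, other.value))

    def difference(self, other):         # numeric difference, or zero (no weight)
        return Weight(max(0.0, self.value - other.value))

    def __repr__(self):
        return f"Weight({self.value})"

thick, thin = Weight(2.0), Weight(0.5)
print(thin.part_of(thick))        # True: the thin line is part of the thick one
print(thick.sum(thin))            # Weight(2.0)
print(thick.product(thin))        # Weight(0.5)
print(thick.difference(thin))     # Weight(1.5)
```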
Now consider a sort of weighted entities, say points, i.e., a sort of points with at-
tribute weights, and a sort of pointed weights, i.e., a sort of weights with attribute
points. A collection of weighted points defines a set of non-identical points, each
having a single weight assigned (possibly the maximum value of various weights
assigned to the same point). These weights may be different for different points. On
the other hand, a collection of pointed weights is defined as a single weight (which is
the maximum of all weights considered) with an attribute collection of points. In both
cases, points are associated with weights. However, in the first case, different points
may be associated with different weights, whereas, in the second case, all points are
associated with the same weight. In a conversion from the first to the second sort, data
loss is inevitable.
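The data loss in converting from weighted points to pointed weights can be illustrated with a simple round trip; the dictionary encodings below are deliberate simplifications of sortal data collections.

```python
# Weighted points: each point keeps its own (maximal) weight.
weighted_points = {(0, 0): 1.0, (1, 0): 3.0, (2, 1): 2.0}

def to_pointed_weights(wp):
    """Collapse to a single weight (the maximum) with all points as its attribute."""
    return {"weight": max(wp.values()), "points": set(wp)}

def back_to_weighted_points(pw):
    """Convert back: every point now carries the same, single weight."""
    return {p: pw["weight"] for p in pw["points"]}

pointed = to_pointed_weights(weighted_points)
round_trip = back_to_weighted_points(pointed)

print(pointed)      # single weight 3.0 with three attribute points
print(round_trip)   # all points now weigh 3.0: the original per-point weights are lost
```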
An understanding of when and where exact translation of data between different
sorts, or representations, is possible, becomes important for assessing data integrity
and controlling data flow [16]. Data loss can easily be assessed under the subsumption
relationship. If one sort subsumes another, exact translation is trivial from the part to
the whole. If two sorts subsume a third, exact translation only applies to the data that
can be said to belong to the third sort. When the subsumption relationship doesn’t
apply, such as under the attribute operator — as is the case in the examples above —
sorts can still be compared, roughly, as equivalent, similar, convertible and incongru-
ent [15]. Two sorts are said to be equivalent if these are semantically derived from the
same sort — through renaming. Equivalent sorts are syntactically identical; this guar-
antees exact translation of data, except for a loss of semantic identity. Two sorts are
denoted similar if these are similarly constructed from equivalent sorts. The similarity
of sorts relies on the existence of a semi-canonical form of a composite sort as a dis-
junctive composition over sorts, each of which is either a primitive sort or composed
of primitive sorts under the attribute operator [15]. Associative and distributive rules
with respect to both compositional operators allow for a syntactical reduction of sorts
to this semi-canonical form. If two sorts reduce to the same semi-canonical form, then
these sorts are considered similar, and exact translation, except for a loss of semantic
identity, applies. Otherwise, two sorts are either convertible or incongruent. If two
sorts are convertible, data loss depends also on their behavioral specification, as in the
examples above.
5 Functional Descriptions
The part relationship that underlies the behavioral specification for a sort enables data
recognition to be implemented for this sort; since composite sorts inherit their behavior
and part relationship from their component sorts, any technical difficulties in imple-
menting data recognition apply just once, for each primitive sort. Data recognition
plays an important role in the specification of design queries. So does counting. Stouffs
and Krishnamurti [13] indicate how a query language for querying graphical design
information can be built from basic operations and geometric relations that are defined
as part of a maximal element representation for weighted geometries, augmented with
operations that are derived from techniques of counting and data recognition. For
example, by augmenting networks of utility pipes, represented as volumes (or plane
segments) with appropriate behavioral specification, with labels as attributes, and by
combining these augmented geometries under the operation of sum, colliding pipes
specifically result in geometries that have more than one label as attribute. These colli-
sions can easily be counted, while the labels on each geometry identify the colliding
pipes, and each geometry itself specifies the location of the collision [13].
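The collision query can be sketched with a drastically simplified geometry, in which each pipe is reduced to the set of grid cells its volume occupies and the sum of labelled geometries merges label attributes on identical cells; the actual formalism operates on maximal volume or plane-segment representations [13].

```python
# Each pipe: a label plus the set of (simplified) grid cells its volume occupies.
pipes = {
    "supply-A": {(0, 0, 0), (0, 0, 1), (0, 0, 2)},
    "drain-B":  {(0, 0, 2), (0, 0, 3)},
    "vent-C":   {(5, 5, 0), (5, 5, 1)},
}

def sum_labelled_geometries(pipes):
    """Combine under 'sum': identical cells merge and their label attributes combine."""
    combined = {}
    for label, cells in pipes.items():
        for cell in cells:
            combined.setdefault(cell, set()).add(label)
    return combined

combined = sum_labelled_geometries(pipes)
collisions = {cell: labels for cell, labels in combined.items() if len(labels) > 1}

print(len(collisions))   # 1 collision, easily counted
print(collisions)        # {(0, 0, 2): {'supply-A', 'drain-B'}}: location and colliding pipes
```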
In order to consider counting and other functional behavior as part of the represen-
tational approach, sorts consider data functions as a data kind, offering functional
behavior integrated into data constructs. Data functions are assigned to apply to one
or more selected sorts — specifically, they apply over tuples of data entities, one from
each selected sort, where these data entities relate to the function under a sequence of
one or more compositional relationships. Then, the result value of the data function is
computed (iteratively) from the values of these tuples of data entities. The value of a
data entity used in the computation is the actual value of the entity, such as its
numeric value, or the position vector for a point, but may also be a derived value,
such as the length of a line segment, or its direction vector. The data function’s result
value is automatically recomputed each time the data structure is traversed, e.g., when
visualizing the structure. As a data kind, data functions specify a functional de-
scription, a result value, and one or more sorts and their respective value methods.
Data functions can introduce specific behaviors and functionalities into representa-
tional structures, for the purpose of counting or other numerical or geometric opera-
tions. Consider, for example, a sort of linear building elements, represented as line
segments, with an attribute sort specifying the cost of each element per unit length.
Then, by augmenting the corresponding data construct with a sum-over-product func-
tion applied to the numeric value of the cost entities and the length value of the linear
elements, the value of this function is automatically computed as the total cost of all
the building elements. As another example, consider a composite sort specifying both
a reference point and a number of emergency exits represented as line segments.
Then, a minimum-value function in combination with a function that computes the
distance between a position vector and a line segment, specified by two end vectors,
will yield the minimum distance from the reference point to any emergency exit.
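Both examples can be rendered in plain procedural form, as a sketch of what the corresponding data functions compute (a sum-over-product for total cost and a minimum point-to-segment distance); this is not the sortal implementation itself, and the dimensions and costs are illustrative.

```python
import math

# Linear building elements as line segments with a cost-per-unit-length attribute.
elements = [
    {"p1": (0.0, 0.0), "p2": (4.0, 0.0), "cost_per_m": 25.0},
    {"p1": (4.0, 0.0), "p2": (4.0, 3.0), "cost_per_m": 40.0},
]

def length(seg):
    return math.dist(seg["p1"], seg["p2"])

# Sum-over-product data function: total cost of all elements.
total_cost = sum(length(e) * e["cost_per_m"] for e in elements)

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (projection clamped to the segment)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

# Minimum-value data function: nearest emergency exit to a reference point.
reference = (1.0, 5.0)
exits = [((0.0, 8.0), (2.0, 8.0)), ((9.0, 0.0), (9.0, 2.0))]
min_exit_distance = min(point_segment_distance(reference, a, b) for a, b in exits)

print(total_cost)          # 4*25 + 3*40 = 220.0
print(min_exit_distance)   # 3.0 (vertical distance to the first exit)
```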
Moving data functions in the data construct, by altering the compositional structure
of the representation, alters the scope of the function — that is, the sorts’ data entities
that relate to the function under a sequence of one or more compositional relation-
ships — and thereby its result. In this way, data functions can be used as a technique
for querying design information, where moving the data function alters the query.
6 Discussion
Sorts present an algebraic formalism for constructing design representations. A sortal
representation is a composition of sorts that can easily be modified by adding and
removing sorts or by altering the constructive relationships, and that can be given a
name. A subsumption relationship over sorts, in combination with a behavioral specifi-
cation of sorts, allows sortal representations to be compared and related with respect to
scope and coverage, and data loss to be assessed when converting data from one repre-
sentation to another. Data functions can be integrated into data constructs in order to
query design information. In this way, the design representation is an integral part of
the design outcome, and the construction of a design representation can be the result of
correspondence that forms part of the design process. This naturally raises the question of
how such correspondence can be facilitated through an application interface.
In developing such an application interface, we consider three aspects in particular;
these are the ability to conceptualize representational structures, the need for effective
visualizations of these structures and the embedding of the application in a practical
context. First, we’re considering the definition of sorts as the specification of a con-
cept hierarchy that, subsequently, can be detailed into a representational structure
consisting of primitive sorts and constructive relationships. By separating the specifi-
cation of the representational semantics (the names of the structures and their hierar-
chical relationships) from the specification of the nuts and bolts (the data types and
their behaviors, and the distinction between disjunctively compositional and attribute
relationships) we aim to ease a conceptualization of the intended representational
structures that facilitates their development. Secondly, we’re exploring effective
(graphical) visualizations of (parts of) the representational and data structures that can
offer the user insight into these structures. In particular, we’re implementing a dy-
namic visualization of these structures with variable focus and level of detail.
Acknowledgments
The first author wishes to thank Ramesh Krishnamurti for his collaboration on this
research.
References
1. AIA Model Support Group: IFC2x Edition 3. International Alliance for Interoperability
(2006). http://www.iai-international.org/Model/R2x3_final/index.htm (1 May 2006)
2. Aït-Kaci, H.: A lattice theoretic approach to computation based on a calculus of partially
ordered type structures (property inheritance, semantic nets, graph unification). Ph.D.
Diss. University of Pennsylvania, Philadelphia, PA (1984)
3. Baader, F., Calvanese, D., McGuinness, D., Nardi, D., Patel-Schneider, P.: The Descrip-
tion Logic Handbook: Theory, Implementation and Applications. Cambridge University
Press, Cambridge (2003)
4. Datta, S.: Modeling dialogue with mixed initiative in design space exploration. Artificial
Intelligence for Engineering Design, Analysis and Manufacturing 20 (2006) 129-142
5. Dooley, K.J.: A complex adaptive systems model of organization change. Nonlinear Dy-
namics, Psychology, and Life Sciences 1 (1997) 69-97
6. Kooistra, J.: Flowing. Systems Research and Behavioral Science 19 (2002) 123-127
7. Krishnamurti, R., Stouffs, R.: The boundary of a shape and its classification. The Journal
of Design Research 4(1) (2004)
8. Manola, F., Miller, E. (eds.): RDF Primer. W3C World Wide Web Consortium (2004).
http://www.w3.org/TR/rdf-primer/ (1 May 2006)
9. Park, K., Krishnamurti, R.: Flexible design representation for construction. In: Lee, H.S.,
Choi, J.W. (eds.): CAADRIA 2004. Yonsei University Press, Seoul, South Korea (2004)
671-680
10. Park, K., Krishnamurti, R.: The digital diary of a building. In: Bhatt, A. (ed.):
CAADRIA'05, Vol 2. TVB School of Habitat Studies, New Delhi (2005) 15-25
11. Prigogine, I., Stengers, I.: Order out of Chaos. Bantam Books, New York (1984)
12. Stiny, G.: Weights. Environment and Planning B: Planning and Design 19 (1992) 413-430
13. Stouffs, R., Krishnamurti, R.: On a query language for weighted geometries. In: Moselhi,
O., Bedard, C., Alkass, S. (eds.): Third Canadian Conference on Computing in Civil and
Building Engineering. Canadian Society for Civil Engineering, Montreal (1996) 783-793
14. Stouffs, R., Krishnamurti, R.: The extensibility and applicability of geometric representa-
tions. In: Architecture proceedings of 3rd Design and Decision Support Systems in Archi-
tecture and Urban Planning Conference. Eindhoven University of Technology, Eindhoven,
The Netherlands (1996) 436-452
15. Stouffs, R., Krishnamurti, R., Cumming, M.: Mapping design information by manipulating
representational structures. In: Akın, Ö., Krishnamurti, R., Lam, K.P. (eds.): Generative
CAD Systems. School of Architecture, Carnegie Mellon University, Pittsburgh, PA (2004)
387-400
16. Stouffs, R., Krishnamurti, R., Eastman, C.M.: A Formal Structure for Nonequivalent Solid
Representations. In: Finger, S., Mäntylä, M., Tomiyama, T. (eds.): Proc. IFIP WG 5.2
Workshop on Knowledge Intensive CAD II. IFIP WG 5.2, Pittsburgh, PA (1996) 269-289
17. van Leeuwen, J.P., Hendrickx A., Fridqvist, S.: Towards dynamic information modelling
in architectural design. Proc. CIB-W78 International Conference IT in Construction in Af-
rica. CSIR, Pretoria (2001) 19.1-19.14
18. Woodbury, R., Burrow, A., Datta, S., Chang, T.: Typed feature structures and design space
exploration. Artificial Intelligence in Design, Engineering and Manufacturing 13 (1999)
287-302
Methodologies for Construction Informatics Research
Žiga Turk
1 Introduction
Construction informatics¹ research is pursuing a direction much along the demarca-
tion line between scientific research and engineering problem solving. The goal of the
first is to help us understand the fundamental phenomena in nature, man and society
while the goal of engineering is practical and related to useful products. The research
work in construction informatics is winding across this vaguely understood line with-
out realizing the differences in the methodological apparatus applicable for the two
different missions. As the topic of construction informatics is maturing, a critical
overview of the appropriate methodologies is presented. This paper - a position paper by
nature - should contribute to the improvement of research methods that are used in the
community.
After making a distinction between science and engineering, we further classify
kinds of scientific research.
A good source of definitions and classifications is the OECD's Frascati Manual [1]. It
classifies science and technology into natural sciences, engineering and technology,
medical sciences, agricultural sciences, social sciences and humanities. It identifies
peculiarities of R&D in the software and informatics from research policy manage-
ment perspective: "Software development has since become a major intangible inno-
vation activity with a high R&D content. In addition, an increasing share of relevant
activities draws on the social sciences and humanities, and, together with advances in
computing, leads to intangible innovations in service activities and products, with a
growing contribution from service industries in the business enterprise sector. The
tools developed for identifying R&D in traditional fields and industries are not al-
ways easy to apply to these new areas."
¹ Also referred to as computing in civil engineering, construction information technology or
information and communication technology in construction.
By method, the main distinction is between theoretical and empirical research.
By result type, research can be analytic or synthetic (constructive). By what it studies,
we can distinguish between the study of the objective real world and the study of the subjec-
tive, interpreted world. Other taxonomies [2] refer to philosophical grounding (positiv-
ism, negativism, historicism, critical rationalism, hermeneutics and interpretivism)
and research strategies (induction, deduction, retroduction and abduction).
This paper argues that the main methodological problems in construction informat-
ics research can be traced back to the wrong categorization of research according to
these three criteria, in particular to the wrong assumption that it studies the objective
real world.
Schutz [3] distinguishes between first and second order constructs - the first corre-
sponding to what some call "real world" and the second by what some would call
“subjective, interpreted, constructed” world. Heidegger opened the debate whether or not
everything is in fact constructed, but to argue appropriate research methods in con-
struction informatics we do not need to share this concern. We can safely assume that
the natural sciences (the study of nature including man, including physics, chemistry,
biology, medicine) concern themselves with first order constructs. They exist a priori to the
observer, independently of him. They are not changed or influenced by observation or
intellectual manipulation. For example, in a limited context of a structural mechanics
theory, a structural beam is a first order construct. No matter what kind of theory we
set up about its behavior, in a lab, under weight, it would bend regardless of the favorite
theory of the structural mechanics researcher.
The second order constructs are "constructs of the constructs made by actors" and
are therefore influenced and changed by the observer(s). They are constructed within
a personal, organizational, social or historical context. The concepts studied by con-
struction IT (organizations, processes, works, information, representations, engineers,
technologies) are such second order constructs. A model of how an architect and
engineer collaborate, a new collaboration method or tool would change what was
initially observed. A data structure describing a generic "beam" would influence the
way this concept is understood by the engineers.
Some fields of construction informatics are concerned with first order constructs. For
example, simulation predicts some real world behavior and sensing measures
the real world. Information technology is in this case supporting a natural science, for
example mechanics, building physics etc. The "scientific method", in its best positiv-
ist tradition, can and should be used.
The study of first order constructs fits well with the traditional scientific method, also re-
ferred to as "positivist science". For illustration purposes we will use Galileo's famous
foundations of structural mechanics and materials science from 1638 [4]. The
steps in the scientific method [5] are as follows:
1. Wonder. Pose a question. How is the load that a cantilever can sustain (Figure 1,
middle) related to the tensile strength of that same beam (Figure 1, left)?
2. Hypothesis. Suggest a plausible answer (a theory) from which some empirically
testable hypothetical propositions can be deduced. Galileo's proposed answer was
E = Sbh² / 2l (1)
where S is the tensile strength, so that
E = Sbh (2)
3. Testing. Construct and perform an experiment which makes it possible to ob-
serve whether the consequences specified in one or more of those hypothetical propo-
sitions actually follow when the conditions specified in the same proposition(s) per-
tain. If the experiment fails, return to step 2, otherwise go to step 4. Galileo confirmed
the formula by the experiment shown in Figure 1 (right), which gave the appropriate
relative strengths of both cantilevers.
4. Accept the hypothesis as provisionally true. Return to step 3 if there are other pre-
dictable consequences of the theory which have not been experimentally confirmed.
In 1729 the strength was reduced to Sbh² / 3l and only around 1800 the correct formula
Sbh² / 6l came into use.
5. Act accordingly. Until the end of the 18th century, structures were built using formu-
lae that were two or three fold on the dangerous side. Many of those structures still
stand, including the pipelines for the water of Versailles that were built according to
the 1729 equation.
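A small numerical comparison of the three formulas, with arbitrary beam dimensions and tensile strength, makes the factor-of-three overestimate of the 1638 result explicit.

```python
# Cantilever of breadth b, height h, length l, made of a material with tensile strength S.
S = 10.0e6                 # tensile strength [Pa]  (arbitrary illustrative value)
b, h, l = 0.10, 0.20, 2.0  # beam dimensions [m]

E_1638 = S * b * h**2 / (2 * l)   # Galileo's 1638 formula
E_1729 = S * b * h**2 / (3 * l)   # the 1729 revision
E_1800 = S * b * h**2 / (6 * l)   # the formula in use from around 1800

print(f"Galileo (1638): {E_1638:,.0f} N")
print(f"1729 revision:  {E_1729:,.0f} N")
print(f"ca. 1800:       {E_1800:,.0f} N")
print(f"overestimate factor of the 1638 formula: {E_1638 / E_1800:.0f}x")
```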
problem of the approach is that in many cases the researchers do not genuinely enter
the problem situation.
4 Conclusions
In one of the key action research papers Susman and Evered [12] wrote: ”…we sug-
gest that the researcher ought to be skeptical of positivist science when (a) the unit of
analysis is, like the researcher, a self-reflecting subject, (b) when relationships be-
tween subjects are influenced by definitions of the situation, or (c) when the reason
for undertaking the research is to solve a problem which the actors have helped to
define”.(a,b,c added by author for clarity). Many problems in construction informat-
ics research fit this definition perfectly, particularly they fit into (b) and (c).
Most researchers in construction informatics have backgrounds in architecture, en-
gineering and computer science and have been brought up in the positivistic traditions
of physics, mechanics and mathematics. We are trying hard to map the methods of
those traditions into research areas that are of different nature. Most of the philoso-
phical discussion on the scientific methods (e.g. Popper, Feyerabend, Kuhn) has been
dealing with sciences that observe nature and whose hypotheses and theories are con-
firmed or refuted by observing some real world phenomena.
The experiences with the evolution of construction informatics research paradigms, par-
ticularly the one that takes information modeling as its underlying methodology,
invite a conclusion concurring with Feyerabend. He claimed [13] that a theory of
science in general is too crude for application to specific problems; specific in this
case means the different paradigms that govern in astronomy, biology, mathematics,
physics or, of course, construction informatics. It needs its own theory and methodol-
ogy and should borrow not only from natural sciences but from humanities and social
sciences as well. It is not only applied computer science; it has elements of applied
humanities as well. Such understanding would most likely bring some stability into the
paradigms and methods being used.
References
1. OECD (2002) Frascati Manual, OECD Publication Services.
2. Blaikie, N. (1993). Approaches to Social Enquiry, Polity Press, Cambridge
3. Schutz, A. (1962). Common-Sense and Scientific Interpretation of Human Action, Col-
lected Papers, Vol. 1: The Problem of Social Reality, Martinus Nijhoff, The Hague,
Netherlands.
4. Petroski H. (1994). Design Paradigms, Cambridge University Press, UK.
5. Dye, J. (1996). Socratic Method and Scientific Method, http://www.soci.niu.edu/
~phildept/Dye/method.html
6. Gielingh W (1988) General AEC reference model (GARM), Christiansson P, Karlsson H
(ed.); Conceptual modelling of buildings. CIB W74+W78 seminar, October, 1988. Lund
university and the Swedish building centre. CIB proceedings 126 http://itc.scix.net/cgi-
bin/works/Show?w78-1988-165
7. de Kleer, J. and J. S. Brown (1983), “Assumptions and Ambiguities in Mechanistic Mental
Models,” Mental Models, D. Gentner and A. L. Stevens (Eds.), Lawrence Erlbaum Asso-
ciates, New Jersey, pp. 155-190.
8. Henson, B., N. Juster and A. de Pennington (1994), “Towards an Integrated Representa-
tion of Function, Behavior and Form,” Computer Aided Conceptual Design, Proceedings
of the 1994 Lancaster International Workshop on Engineering Design, Sharpe J. and V.
Oh (eds.), Lancaster University EDC, pp. 95-111.
9. Qian L. and J. S. Gero (1996), “Function-Behavior-Structure Paths and Their Role in
Analogy-Based Design,” Artificial Intelligence for Engineering Design, Analysis and
Manufacturing, Vol. 10, No. 4, pp. 289-312.
10. Avison, D., Lau, F., Myers, M. and Nielsen, P. A. Action research, Communications of the
ACM, vol. 42, no. 1, 1999.
11. Popper, K. (1935). The logic of scientific discovery, Routledge Classics, 2002 (first pub-
lished in German in 1935).
12. Susman, G.I. and Evered, R.D. (1978). An Assessment of the Scientific Merits of Action
Research, Administrative Science Quarterly, Vol. 23, No. 4 (Dec., 1978) , pp. 582-603.
13. Feyerabend, P. (1978). Against Method Verso, London, 1978.
Wireless Sensing, Actuation and Control – With
Applications to Civil Structures
1 Introduction
Ensuring the safety of civil structures, including buildings, bridges, dams, tunnels,
and others, is of utmost importance to society. Developments in many engineering
fields, notably electrical engineering, mechanical engineering, material science, and
information technology are now being explored and incorporated in today’s structural
engineering research and practice. For example, in the last couple of decades, struc-
tural sensors, such as micro-electro-mechanical system (MEMS) accelerometers,
metal foil strain gages, fiber optic strain sensors, linear variable displacement trans-
ducers (LVDT), etc., have been employed to collect important information that could
be used to infer the safety conditions or monitor the health of structures [1-3].
To limit the response of structures subjected to strong dynamic loads, such as earth-
quake or wind, structural control systems can be used. There are three basic types of
structural control systems: passive, active and semi-active [4-6]. Passive control sys-
tems, e.g. base isolators, entail the use of passive energy dissipation devices to control
the response of a structure without the use of sensors and controllers. Active control
systems use a small number of large mass dampers or hydraulic actuators for the direct
application of control forces. In a semi-active control system, semi-active control de-
vices are used for indirect application of control forces. Examples of semi-active struc-
tural actuators include active variable stiffness (AVS) systems, semi-active hydraulic
dampers (SHD), electrorheological (ER) and magnetorheological (MR) dampers. Semi-
active control is currently preferred by many researchers, because of its reliability, low
power consumption, and adequate performance during large seismic events. In active or
semi-active control systems, sensing devices are installed to record real-time structural
response data for the calculation of control decisions.
In order to transfer real-time data in a structural monitoring or control system, coaxial
cables are normally deployed as the primary communication link. However, cable in-
stallation is time consuming and can cost as much as US $5,000 per communica-
tion channel [7]. Large-scale structures, such as long-span cable-stayed bridges, could
easily require thousands of sensors and miles of cables [8]. To eradicate the high
cost incurred in the use of cables, wireless systems could serve as a viable alternative
[9]. Wireless communication standards, such as Bluetooth (IEEE 802.15.1), Zigbee
(IEEE 802.15.4), Wi-Fi (IEEE 802.11b), etc. [10], are now mature and reliable tech-
nologies widely adopted in many industrial applications. Potential applications of wire-
less technologies in structural health monitoring have been explored by a number of
researchers [11-19]. A comprehensive review of wireless sensors and their adoption in
structural health monitoring can be found in reference [20].
As opposed to structural monitoring, where sensors are used in a passive manner to
measure structural responses, researchers have now begun to incorporate actuation
interfaces in wireless sensors for damage detection applications [21-23]. For example,
actuation interfaces can be used to induce stress waves in structural elements by wire-
less “active” sensors. Corresponding strain responses to propagating stress waves can
be used to infer the health of the component. An integrated actuation interface can
also be used to potentially operate actuators for structural control [24-26].
Compared to traditional cable-based systems, wireless structural sensing and con-
trol systems have a unique set of advantages and technical challenges. Portable energy
sources, such as batteries, are a convenient, albeit limited, supply of power for wire-
less sensing units. Nevertheless, the need for reliable and low-cost energy sources
remains a key challenge for wireless sensors [27-29]. Furthermore, data transmission
in a wireless network is inherently less reliable than that in cable-based systems, par-
ticularly when node-to-node communication ranges lengthen. The limited wireless
bandwidth can also impede real-time data transmission as required by feedback struc-
tural control systems. Last but not least, the time delay issues due to transmission and
sensor blockage need to be considered [25,26,30]. These issues need to be resolved
with a system approach involving the selection of hardware technologies and the
design of software/algorithmic strategies.
A “smart” sensor combines both hardware and software technologies to provide
the capabilities to acquire environmental data, process the measured data, and
make “intelligent” decisions [18]. The development of autonomous, self-sensing and
actuating devices for structural monitoring and control applications poses an intrigu-
ing, interdisciplinary research challenge in structural and electrical engineering. The
purpose of this paper is to describe the design and implementation of a modular sys-
tem consisting of autonomous wireless sensor units for civil structures applications.
Designed for structural monitoring applications, the wireless sensor consists of a
sensing interface to which analog sensors can be attached, an embedded microcontrol-
ler for data processing, and a spread spectrum wireless radio for communication.
Optionally, for field applications where signals subject to environmental effects and
ambient vibrations are relatively noisy, a signal conditioning board is designed to
interface with the wireless sensing unit for signal amplification and filtering. To sup-
port active sensing and control applications, a signal generation module is designed to
interface with the wireless sensing unit. This wireless actuation unit combining sens-
ing, data processing, and signal generation can be used to issue desired actuation
commands for real-time feedback structural control. Laboratory and field validation
tests are presented to assess the performance of the wireless sensing and actuation unit
for structural monitoring and structural control applications.
[Figure 1 diagram, in brief: a computational core (8-bit ATmega128 microcontroller with 128kB external SRAM CY62128B); sensor signal digitization through a 4-channel 16-bit analog-to-digital converter ADS8341; a wireless transceiver on the UART port (2.4GHz 24XStream at 20kbps or 900MHz 9XCite at 40kbps); an off-board sensor signal conditioning module (amplification, filtering, and voltage-offsetting); and an actuation signal generation module with a 16-bit digital-to-analog converter AD5542 driving structural actuators.]
Fig. 1. Functional diagram detailing the hardware design of the wireless sensing unit. Addi-
tional off-board modules can be interfaced to the wireless sensing unit to condition sensor
signals and issue actuation commands.
[Figure: wireless sensors and network server, with annotated views of the wireless sensing unit hardware indicating the ATmega128 microcontroller, octal D-type latch AHC573, SRAM CY62128B, A/D converter ADS8341, sensor connector, and connector to the wireless transceiver.]
(a) PCB of the wireless sensing unit (9.7 × 5.8 cm²). (b) Packaged unit (10.2 × 6.5 × 4.0 cm³).
Computational Core
The computational core of a wireless unit is responsible for executing embedded
software instructions as required by the application end-user. A low-cost 8-bit micro-
controller, Atmel ATmega128, is selected for this purpose. The key objective for this
selection is to balance the power consumption and hardware cost versus the computa-
tion power needed by software applications. Running at 8MHz, the ATmega128 con-
sumes about 15mA when it is active. Considering the energy capacity of normal bat-
teries in the market, which is usually a few thousand milliamp-hours (mAh), normal
AA batteries can easily support the ATmega128 active for hundreds of hours. Run-
ning in a duty cycle manner, with active and sleep modes interleaved, the ATmega128
microcontroller may last even longer before battery replacement is needed.
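A back-of-the-envelope sketch of this battery-life argument; the capacity, sleep current, and duty-cycle model below are assumed round numbers and ignore the radio and other peripherals.

```python
battery_capacity_mAh = 2500.0   # nominal AA alkaline capacity (assumed)
active_current_mA = 15.0        # ATmega128 at 8 MHz, active
sleep_current_mA = 0.05         # assumed sleep-mode current

def lifetime_hours(duty_cycle):
    """Estimated hours of operation for a given fraction of time spent active."""
    avg_current = duty_cycle * active_current_mA + (1 - duty_cycle) * sleep_current_mA
    return battery_capacity_mAh / avg_current

print(f"always on: {lifetime_hours(1.0):7.0f} h")   # roughly 170 h, i.e. about a week
print(f"10% duty : {lifetime_hours(0.1):7.0f} h")
print(f" 1% duty : {lifetime_hours(0.01):7.0f} h")
```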
The ATmega128 microcontroller contains 128kB of reprogrammable flash mem-
ory for the storage of embedded software, which, based on our laboratory and field
experiments, is sufficient to incorporate a wide variety of structural monitoring and
control algorithms. One serial peripheral interface (SPI) and two universal asynchro-
nous receiver and transmitter (UART) interfaces are provided by the ATmega128 to
facilitate communication with other hardware components. The timer and interrupt
modules of the ATmega128 are employed for executing routines that need to be pre-
cisely timed, e.g. sampling sensor data or applying actuation signal at specified fre-
quencies.
The microcontroller also contains 4kB static random access memory (SRAM) for
storing stack and heap variables, which as it turns out, is often insufficient for the
execution of embedded data interrogation algorithms. To address this issue, an exter-
nal 128kB memory chip, Cypress CY62128B, is incorporated within the wireless
sensing unit design. Furthermore, hardware and software procedures are imple-
mented to bypass the 64kB memory address space limitation of the ATmega128, to
ensure that the full 128kB address space of the CY62128B can be utilized.
The data transfer rate of the 9XCite is double that of the 24XStream, while the
24XStream provides a longer communication range but consumes much more battery
power. Both transceivers support peer-to-peer and broadcast communication modes,
rendering information flow in the wireless sensor network more flexible.
signal conditioning. When the vibration amplitude is higher, i.e. when the SNR is high,
the difference between the data collected with and without signal conditioning is al-
most negligible with respect to the signal amplitude, as shown in Fig. 5(b).
(a) Functional diagram of the circuits. (b) PCB board (5.0 × 6.5 cm²).
[Fig. 5 plots: acceleration (g) versus time (s) over 0–0.5 s, recorded with and without signal conditioning (S.C.); the left panel shows a low-amplitude record (on a ×10⁻³ g scale) and the right panel a higher-amplitude record.]
The functionality of the wireless sensing unit can be extended to support structural
actuation and control applications. The key component of the actuation signal genera-
tion module is the Analog Devices AD5542 digital-to-analog (D/A) converter, which
converts unsigned 16-bit integer numbers issued by the microcontroller into a zero-
order hold analog output spanning from -5 to 5V. It should be noted that the wireless
sensor is based upon 5V electronics; this requires an auxiliary -5V power supply to be
included in the actuation signal generation module. The switching regulator, Texas
Instruments PT5022, is employed to convert the 5V voltage source of the wireless
sensing unit into a regulated -5V supply. Another component included in the actuation
signal generation module is an operational amplifier (National Semiconductor
LMC6484), used to shift the output signal to a mean of 0V. The actuation signal
generation module is capable of outputting -5 to 5V analog signals within a few mi-
croseconds after the module receives the digital command from the microcontroller.
The actuation signal generation module is connected with the wireless sensing unit
through two multi-line cables: an analog signal cable and a digital signal cable. The
digital signal cable connects the D/A converter of the signal generation module to the
microcontroller of the wireless sensing unit via the SPI interface. The analog cable is
used to transfer an accurate +5V voltage reference from the wireless sensing unit to
the actuation board. The generated actuation signal is transmitted to the structural
actuator through a third output cable in the module. Fig. 6 shows the actuation signal
generation module, which is designed as a separate board readily interfaced with the
wireless sensing unit for actuation and control applications.
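To illustrate how a 16-bit actuation command could travel from the microcontroller to such a serial D/A converter, the sketch below clocks one sample out of the ATmega128 SPI port; the chip-select pin assignment and the SPI clock setting are assumptions rather than details of the authors' board.

```c
#include <avr/io.h>
#include <stdint.h>

/* Illustrative sketch: send a 16-bit sample to an AD5542-class serial DAC
 * over the ATmega128 SPI port. Pin assignments (PB0 = chip select,
 * PB1 = SCK, PB2 = MOSI) and the SPI clock divider are assumptions. */

#define DAC_CS_PORT  PORTB
#define DAC_CS_PIN   PB0

void spi_master_init(void) {
    DDRB |= (1 << PB1) | (1 << PB2) | (1 << DAC_CS_PIN); /* SCK, MOSI, CS out */
    DAC_CS_PORT |= (1 << DAC_CS_PIN);                    /* CS idles high     */
    SPCR = (1 << SPE) | (1 << MSTR) | (1 << SPR0);       /* master, fosc/16   */
}

static uint8_t spi_transfer(uint8_t byte) {
    SPDR = byte;
    while (!(SPSR & (1 << SPIF)))   /* wait for the byte to shift out */
        ;
    return SPDR;
}

/* Write one unsigned 16-bit code; the DAC output updates when CS returns high. */
void dac_write(uint16_t code) {
    DAC_CS_PORT &= ~(1 << DAC_CS_PIN);   /* select the DAC   */
    spi_transfer((uint8_t)(code >> 8));  /* MSB first        */
    spi_transfer((uint8_t)(code & 0xFF));
    DAC_CS_PORT |= (1 << DAC_CS_PIN);    /* latch / deselect */
}
```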
As shown in Fig. 7(a) and (b), the three-story single-bay steel frame structure has a
3 × 2m² floor area and a 3m inter-story height. H150×150×7×10 I-sections are used
for all columns and beams, with each beam-column joint designed as a bolted con-
nection. Each floor is loaded with concrete blocks and has a total mass of 6,000kg.
The test structure is mounted on a 5 × 5m² shake table capable of applying base
motion in 6 independent degrees of freedom.
[Fig. 7(a) layout: accelerometer locations A1–A12, strain gage locations S41–S44, and wireless sensing units WSU1–WSU6 distributed over the three floors and the base of the structure, with the X and Y axes indicated.]
As presented in Fig. 7(a), the test structure is instrumented with a wireless monitor-
ing system consisting of 6 wireless sensing units. Because of local frequency band
requirements, the MaxStream 24XStream wireless transceiver operating in the 2.4GHz
spectrum is employed for the wireless sensing units. The instrumentation strategy of
the wireless monitoring system is governed by an interest in both the acceleration
response of the structure and the strain behavior at the base column. As shown in
Fig. 7(a), one wireless sensing unit is responsible for the three accelerometers in-
stalled on each floor. For example, wireless sensing unit WSU6 is used to record the
acceleration of the structure at locations A1, A2 and A3. This configuration of accel-
erometers is intended to capture both the longitudinal and lateral response of each
floor, as well as any torsional behavior.
The accelerometers employed with the wireless sensing units are the Crossbow
CXL01 and CXL02 (MEMS) accelerometers, which have acceleration ranges of ±1g
and ±2g, respectively. The CXL01 accelerometer has a noise floor of 0.5mg and a
sensitivity of 2V/g, while the CXL02 accelerometer has a noise floor of 1mg and a
sensitivity of 1V/g. Additionally, 4 metal foil strain gages, with nominal resistances of
120Ω and a gage factor of 2, are mounted on the base column to measure the column
flexural response during base excitation. To record the strain response, a Wheatstone
bridge amplification circuit is used to convert the changes in gage resistance into
voltage signals. Two wireless sensing units (WSU2 and WSU3) are dedicated to re-
cording the strain response, with each unit connected to two gages. For comparison,
Setra 141-A accelerometers (with an acceleration range of ±4g and a noise floor of
0.4mg) and 120Ω metal foil strain gages connected to a traditional cable-based data
acquisition system are installed side-by-side.
Various ambient white noise and seismic excitations, including the El Centro (1940),
Kobe (1995), and Chi-Chi (1999) earthquake records, were applied to excite the test
structure [33]. The results shown in Fig. 8 are based on a 90 sec bi-directional white
noise excitation with standard deviation velocities of 1m/s and 0.5m/s in the X and Y
directions, respectively. The time history responses for both the acceleration and strain
measurements recorded (at locations A1, A2, and S44) by the wireless monitoring
system are identical to those measured independently by the cable-based monitoring
system.
[Fig. 8 plots: (a) acceleration (g) at location A2 and strain (m/m) at gage S44 over 10–20 sec, comparing the wireless and wired records ("A2 Wireless" vs. "A2 Wired", "S44 Wireless" vs. "S44 Wired"); (b) acceleration (g) over 0–2 sec for "Floor 1 − X-Direction (20 AR Coefficients)" and "Table − Y-Direction (20 AR Coefficients)", comparing the recorded wireless data (e.g. "A9 Wireless") with the AR-model reconstruction ("AR Predict").]
calculated, the model coefficients are then transmitted to the central server. As shown
in Fig. 8(b), the acceleration time history responses reconstructed using 20 AR model
coefficients are compared with the directly recorded raw time history data at sensor
locations A6 and A9 (located on the first floor and the table, respectively). The time
histories reconstructed from the AR model coefficients accurately predict the re-
sponse of the structure. That is, with the microcontroller, useful computations can be
performed on the wireless sensing unit, and the amount of data (in this case, the AR
coefficients) that needs to be transmitted in real time can be significantly reduced.
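To make the data-reduction idea concrete, the sketch below fits AR coefficients to a zero-mean record via the Yule-Walker equations solved with Levinson-Durbin recursion and then forms the one-step-ahead prediction; this is a generic textbook formulation and not necessarily the interrogation algorithm embedded by the authors.

```c
#include <stddef.h>

/* Fit an order-p autoregressive model to a zero-mean record x[0..n-1] by
 * solving the Yule-Walker equations with Levinson-Durbin recursion, then
 * form the one-step-ahead prediction xhat[k] = sum_j a[j]*x[k-j].
 * A generic textbook formulation, not the authors' embedded algorithm. */

static void autocorr(const double *x, size_t n, size_t p, double *r) {
    for (size_t lag = 0; lag <= p; lag++) {
        double s = 0.0;
        for (size_t k = lag; k < n; k++)
            s += x[k] * x[k - lag];
        r[lag] = s / (double)n;
    }
}

/* a[1..p] receive the AR coefficients; returns the prediction-error variance.
 * Assumes p < 64 (local scratch arrays). */
double ar_fit(const double *x, size_t n, size_t p, double *a) {
    double r[64], tmp[64];
    autocorr(x, n, p, r);

    double err = r[0];
    for (size_t j = 1; j <= p; j++) a[j] = 0.0;

    for (size_t m = 1; m <= p; m++) {
        double k = r[m];
        for (size_t j = 1; j < m; j++)
            k -= a[j] * r[m - j];
        k /= err;                                   /* reflection coefficient */

        a[m] = k;
        for (size_t j = 1; j < m; j++) tmp[j] = a[j] - k * a[m - j];
        for (size_t j = 1; j < m; j++) a[j] = tmp[j];

        err *= (1.0 - k * k);                       /* update error variance  */
    }
    return err;
}

/* One-step-ahead prediction from the fitted coefficients. */
void ar_predict(const double *x, size_t n, size_t p, const double *a, double *xhat) {
    for (size_t k = 0; k < n; k++) {
        double s = 0.0;
        for (size_t j = 1; j <= p && j <= k; j++)
            s += a[j] * x[k - j];
        xhat[k] = s;
    }
}
```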
[Fig. 9. (a) Section view of the girder (SECTION A-A): the box girder is 12.6m wide and 2.6m deep, rests on elastomeric pads, and the accelerometer locations are marked. (b) Side view picture of the bridge. Instrumentation layout from the north abutment to the south abutment: 18 sensor locations (1–9 and 10–18) spaced along spans of 9.5m, 9.5m, 9.5m, 9.5m, 11.5m, 11.5m, 11.5m, 11.5m, 9.5m, 9.5m, 9.5m, 9.5m across the piers; the wireless and tethered data servers are positioned along the bridge, and wireless and tethered sensors are installed side-by-side.]
Table 2. Parameters for accelerometers used in the cabled and wireless sensing systems

                                  PCB393 (Cabled System)   PCB3801 (Wireless System)
  Sensor Type                     Piezoelectric            Capacitive
  Maximum Range                   ±0.5 g                   ±3 g
  Sensitivity                     10 V/g                   0.7 V/g
  Bandwidth                       2000 Hz                  80 Hz
  RMS Resolution (Noise Floor)    50 μg                    500 μg
  Minimal Excitation Voltage      18 VDC                   5 VDC
connected with a MaxStream 9XCite transceiver, located near the middle of the
bridge, is employed to collect sensor data from all 14 wireless sensing units.
Vibration tests are conducted by driving a 40-ton truck across the bridge at set
speeds to induce structural vibrations. For all the tests conducted, no data losses have
been observed, and the wireless sensing system proves to be highly reliable using the
designed communication protocol for synchronized and continuous data acquisition.
Fig. 10(a) shows the acceleration data recorded at a sampling rate of 200Hz at
sensor location #17 while the truck was crossing the bridge at 60km/h. Despite the
differences in the accelerometer and signal conditioning devices, the output recorded
by the wireless system has precision identical to that offered by the commercial
cabled system. With the microcontroller, an embedded 4096-point FFT algorithm is
used to compute the Fourier transform of the acceleration data. As shown in Fig.
10(b), the first three dominant frequencies can easily be identified as 3.0, 4.3 and 5.0Hz,
which are very close to the bridge natural frequencies previously published [35].
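For reference, a compact radix-2 FFT of the kind used to obtain such a spectrum is sketched below in double-precision C; an embedded 4096-point implementation on an 8-bit microcontroller would more likely rely on fixed-point arithmetic and pre-computed twiddle factors.

```c
#include <complex.h>

/* In-place iterative radix-2 FFT (n must be a power of two). A desktop,
 * double-precision sketch rather than the embedded implementation. */
void fft(double complex *x, unsigned n) {
    const double PI = 3.14159265358979323846;

    /* bit-reversal permutation */
    for (unsigned i = 1, j = 0; i < n; i++) {
        unsigned bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;
        if (i < j) {
            double complex t = x[i];
            x[i] = x[j];
            x[j] = t;
        }
    }
    /* butterfly stages */
    for (unsigned len = 2; len <= n; len <<= 1) {
        double complex wlen = cexp(-2.0 * PI * I / (double)len);
        for (unsigned i = 0; i < n; i += len) {
            double complex w = 1.0;
            for (unsigned k = 0; k < len / 2; k++) {
                double complex u = x[i + k];
                double complex v = x[i + k + len / 2] * w;
                x[i + k]           = u + v;
                x[i + k + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}

/* Dominant frequencies can then be read from the peaks of cabs(x[k]),
 * where bin k corresponds to k * fs / n Hz (here fs = 200 Hz, n = 4096). */
```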
Once the dominant frequencies are determined, the complex Fourier transform
values in the frequency range of interest are wirelessly transmitted to the central
server, so that the operational deflection shapes (ODS) of the bridge under the truck
loading can be computed. Fig. 10(c) illustrates the ODS for the first three dominant
frequencies computed from the wireless sensor data. The ODS are not the bridge
mode shapes, since the external excitation produced by driving the truck along the
bridge is difficult to quantify accurately. Nevertheless, the ODS are dominated by the
corresponding modes and are typically good approximations to the mode shapes.
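One simple way to assemble an operational deflection shape from the transmitted spectral values is to normalize each sensor's complex Fourier coefficient at the chosen frequency by that of a reference sensor and keep the in-phase component, as sketched below; this is a common approximation stated here for illustration, not necessarily the authors' exact procedure.

```c
#include <complex.h>
#include <math.h>

/* Illustrative ODS assembly: given the complex Fourier coefficient X[j] of
 * each sensor j at one dominant frequency, normalize by a reference sensor
 * and keep the component in phase with it. */
void ods_at_frequency(const double complex *X, int n_sensors, int ref,
                      double *shape) {
    double complex Xref = X[ref];
    double ref_mag = cabs(Xref);
    if (ref_mag == 0.0) return;           /* degenerate reference: nothing to do */
    for (int j = 0; j < n_sensors; j++) {
        /* project X[j] onto the direction of Xref: signed in-phase amplitude */
        shape[j] = creal(X[j] * conj(Xref)) / ref_mag;
    }
}
```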
[Fig. 10 panels: (a) acceleration (g) versus time (s), 20–35 sec, at sensor location #17 for the wired and wireless systems; (b) Fourier magnitude versus frequency (0–10Hz) for the wired and wireless records at #17; (c) operational deflection shapes plotted along the bridge for the first three dominant frequencies.]
Fig. 10. Geumdang Bridge forced vibration data and frequency-domain analysis
Fig. 11. Example illustration of the WiSSCon system instrumented on a 3-story test structure
with one actuator
The laboratory setup for the structural control experiment is shown in Fig. 12. An
MR damper with a maximum force capacity of 20kN and a piston stroke of ±0.054m
is installed at the base floor. During a dynamic test, the damping coefficient of the
MR damper can be changed in real time by issuing an analog command signal
between 0 and 1V. This command signal controls the electric current of the
electromagnetic coil in the MR damper, which, in turn, generates the magnetic field
that sets the viscous damping properties of the MR fluid inside the damper. The
damper hysteresis behavior is characterized using a modified Bouc-Wen model [36].
During dynamic excitation, the control unit, whether cabled or wireless, has to maintain
the time history of the damper model so that the damper hysteresis is known at all
times and a suitable voltage can be determined for the MR damper.
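For orientation, the sketch below advances a basic Bouc-Wen hysteretic state with explicit Euler integration and evaluates the corresponding damper force; the modified, voltage-dependent model identified in [36] contains additional terms and calibrated parameters, so the structure and coefficients here are illustrative placeholders.

```c
#include <math.h>

/* Basic Bouc-Wen hysteresis update (explicit Euler). The modified,
 * voltage-dependent MR damper model identified in [36] includes additional
 * terms and calibrated parameters; the fields below are placeholders. */
typedef struct {
    double A, beta, gamma;  /* hysteresis shape parameters        */
    double n;               /* hysteresis exponent                */
    double alpha;           /* hysteretic force scale (N)         */
    double c0;              /* viscous damping coefficient (Ns/m) */
    double z;               /* evolutionary (hysteretic) state    */
} boucwen_t;

/* Advance the hysteretic state by one step of size dt given piston velocity
 * xdot, and return the total damper force. */
double boucwen_step(boucwen_t *m, double xdot, double dt) {
    double zn1  = pow(fabs(m->z), m->n - 1.0);
    double zn   = zn1 * fabs(m->z);
    double zdot = m->A * xdot
                - m->beta  * fabs(xdot) * m->z * zn1
                - m->gamma * xdot * zn;
    m->z += dt * zdot;
    return m->alpha * m->z + m->c0 * xdot;
}
```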
Fig. 12. Wireless structural sensing and control test with one MR damper installed between the
1st floor and the base floor of the structure
Time delay is an important issue in real-time feedback control. Three major compo-
nents constitute the time delay: sensor data acquisition, control decision calculation,
and actuator latency in applying the desired control force. Normally, the control
decision calculation time is the smallest of the three, while the actuator latency is the
largest. In the LQR formulation, the time delay from sensor data collection to control
force application is assumed to be zero, even though a non-zero time delay always
exists in practice. In active structural control, time delay may cause system instability,
in which case the control force could actually excite the structure. In semi-active
control, the actuators normally dissipate vibration energy and do not have the
capability to excite the structure. Nevertheless, a large time delay remains an important
issue in semi-active control, since it can degrade the performance of the system.
The difference between a wired control system and a wireless control system lies
mostly in the sensor data acquisition time. For the cabled control system, the time
delay due to data acquisition is estimated to be approximately 5ms. For the Max-
Stream 24XStream wireless transceivers, a single wireless transmission incurs a time
delay of about 20ms. In this experimental study, four wireless transmissions are
performed at each time step: a beacon signal sent by the control unit and 3 data packets
transmitted one at a time by the three wireless sensors. Therefore, the WiSSCon system
implemented upon the 24XStream wireless transceiver provides a control time step of
about 80ms, corresponding to an achievable sampling frequency of 12.5Hz. Besides
validating the concept of feedback wireless structural sensing and control, the
experimental study investigates the influence of this time delay difference.
Fig. 13 shows the maximum absolute inter-story drift of each floor obtained from tests
conducted using the El Centro earthquake record with its peak acceleration scaled to
1m/s². A detailed description of the structural control tests can be found in reference
[26]. For comparison, two passive control tests, with the MR damper voltage fixed at 0
and 1V respectively, are also shown in the figure. The results illustrate that LQR
control with both the cabled and wireless systems gives more uniform maximum
inter-story drifts over the three floors than the passive control tests. These preliminary
results also show that the wireless control system, despite its significantly larger time
delay, suffers only minor performance degradation. Investigations to improve the
wireless communication and minimize the time delay are underway.
[Fig. 13 plot: maximum inter-story drift for each floor, horizontal axis Drift (m) from 0 to 0.012.]
Fig. 13. Maximum inter-story drifts for tests with a scaled El Centro record as ground excitation
Acknowledgement
This research is partially funded by the National Science Foundation under grants
CMS-9988909 (Stanford University) and CMS-0421180 (University of Michigan),
and by the Office of Naval Research Young Investigator Program awarded to Prof.
Lynch at the University of Michigan. The first author is supported by an Office of
Technology Licensing Stanford Graduate Fellowship. Additional support was provided
by the Rackham Grant and Fellowship Program at the University of Michigan. Prof.
Chin-Hsiung Loh, Dr. Pei-Yang Lin, and Mr. Kung-Chun Lu at National Taiwan
University provided generous support for conducting the shake table experiments at
NCREE, Taiwan. The authors would also like to express their gratitude to Professors
Chung Bang Yun and Jin Hak Yi, as well as Mr. Chang Geun Lee, from the Korea
Advanced Institute of Science and Technology (KAIST), for access to the Geumdang
Bridge. During this study, the authors received much valuable advice on the PCB
layout from Prof. Ed Carryer at Stanford University. The authors appreciate the
generous assistance from the individuals acknowledged above.
References
1. Chang, P.C., Flatau, A., Liu, S.C.: Review Paper: Health Monitoring of Civil Infrastruc-
ture. Struct. Health Monit. 2 (2003) 257-267
2. Farrar, C.R., Sohn, H., Hemez, F.M., Anderson, M.C., Bement, M.T., Cornwell, P.J., Doe-
bling, S.W., Schultze, J.F., Lieven, N., Robertson, A.N.: Damage Prognosis: Current Status
and Future Needs. Report LA-14051-MS, Los Alamos National Laboratory, NM (2003)
3. Elgamal, A., Conte, J.P., Masri, S., Fraser, M., Fountain, T., Gupta, A., Trivedi, M., El
Zarki, M.: Health Monitoring Framework for Bridges and Civil Infrastructure. Proc. of the
4th Int. Workshop on Structural Health Monitoring, ed. F.-K. Chang. Stanford, CA (2003)
123-130
4. Yao, J.T.: Concept of Structural Control. ASCE J. of Struct. Div.. 98 (1972) 1567-1574
5. Soong, T.T., Spencer, B.F.: Supplemental Energy Dissipation: State-of-the-art and State-
of-the-practice. Engng. Struct. 24 (2002) 243-259
6. Chu, S.Y., Soong, T.T., Reinhorn, A.M.: Active, Hybrid, and Semi-active Structural Con-
trol: a Design and Implementation Handbook. John Wiley & Sons, NJ (2005)
7. Celebi, M.: Seismic Instrumentation of Buildings (with Emphasis on Federal Buildings).
Report No. 0-7460-68170, United States Geological Survey (USGS). Menlo Park, CA
(2002)
8. Solomon, I., Cunnane, J., Stevenson, P.: Large-scale Structural Monitoring Systems. Proc.
of SPIE Nondestructive Evaluation of Highways, Utilities, and Pipelines IV. SPIE Vol.
3995, ed. A.E. Aktan, S.R. Gosselin (2000) 276-287
9. Straser, E.G., Kiremidjian, A.S.: A Modular, Wireless Damage Monitoring System for
Structures. Report No. 128, John A. Blume Earthquake Eng. Ctr., Stanford Univ., (1998)
10. Cooklev, T.: IEEE Wireless Communication Standards: A Study of 802.11, 802.15, and
802.16. IEEE Press, NY (2004)
11. Kling, R.M.: Intel Mote: an Enhanced Sensor Network Node. Proc. of Int. Workshop on
Advanced Sensors, Struct. Health Monitoring, and Smart Struct.. Keio Univ., Japan (2003)
12. Lynch, J.P., Sundararajan, A., Law, K.H., Kiremidjian, A.S., Kenny, T., Carryer, E.: Em-
bedment of Structural Monitoring Algorithms in a Wireless Sensing Unit. Struct. Eng.
Mech. 15 (2003) 285-297
13. Arms, S.W., Townsend, C.P., Galbreath, J.H., Newhard, A.T.: Wireless Strain Sensing
Networks. Proc. of 2nd Euro. Work. on Struct. Health Monitoring. Munich, Germany
(2004)
14. Glaser, S.D.: Some Real-world Applications of Wireless Sensor Nodes. Proc. of SPIE
11th Annual Inter. Symp. on Smart Structures and Materials. SPIE Vol. 5391, ed. S.C.
Liu. San Diego, CA (2004) 344-355
15. Mastroleon, L., Kiremidjian, A.S., Carryer, E., Law, K.H.: Design of a New Power-
efficient Wireless Sensor System for Structural Health Monitoring. Proc. of SPIE 9th An-
nual Int. Symp. on NDE for Health Monitoring and Diagnostics. SPIE Vol. 5395, ed. S.R.
Doctor, Y. Bar-Cohen, A.E. Aktan, H.F. Wu. San Diego, CA (2004) 51-60
16. Ou, J.P., Li, H., Yu, Y.: Development and Performance of Wireless Sensor Network for
Structural Health Monitoring. Proc. of SPIE 11th Annual Inter. Symp. on Smart Structures
and Materials. SPIE Vol. 5391, ed. S.C. Liu. San Diego, CA (2004) 765-773
17. Shinozuka, M., Feng, M.Q., Chou, P., Chen, Y., Park, C.: MEMS-based Wireless Real-
time Health Monitoring of Bridges. Proc. of 3rd Inter. Conf. on Earthquake Engng. Nan-
jing, China (2004)
18. Spencer, B.F. Jr., Ruiz-Sandoval, M.E., Kurata, N.: Smart Sensing Technology: Opportu-
nities and Challenges. Struct. Control Health Monit. 11 (2004) 349-368
19. Wang, Y., Lynch, J.P., Law, K.H.: Wireless Structural Sensors using Reliable Communi-
cation Protocols for Data Acquisition and Interrogation. Proc. of 23rd Inter. Modal Anal.
Conf. (IMAC XXIII). Orlando, FL (2005)
20. Lynch, J.P., Loh, K.: A Summary Review of Wireless Sensors and Sensor Networks for
Structural Health Monitoring. Shock and Vibration Dig.. 38 (2005) 91-128
21. Lynch, J. P., Sundararajan, A., Sohn, H., Park, G., Farrar, C., Law, K.: Embedding Actua-
tion Functionalities in a Wireless Structural Health Monitoring System. Proc. of 1st In-
ter.Workshop on Adv. Smart Materials and Smart Struct. Technology. Honolulu, HI
(2004)
22. Grisso, B. L., Martin, L.A., Inman, D.J.: A Wireless Active Sensing System for Imped-
ance-Based Structural Health Monitoring. Proc. of 23rd Inter. Modal Anal. Conf. (IMAC
XXIII). Orlando, FL (2005)
23. Liu, L., Yuan, F.G., Zhang, F.: Development of Wireless Smart Sensor for Structural
Health Monitoring. Proc. of SPIE Smart Struct. and Mat.. SPIE Vol. 5765. San Diego, CA
(2005) 176-186
24. Casciati, F., Rossi, R.: Fuzzy Chip Controllers and Wireless Links in Smart Structures.
Proc. of AMAS/ECCOMAS/STC Workshop on Smart Mat. and Struct. (SMART’03).
Warsaw, Poland (2003)
25. Seth, S., Lynch, J. P., Tilbury, D.: Feasibility of Real-Time Distributed Structural Control
upon a Wireless Sensor Network. Proc. of 42nd Annual Allerton Conf. on Comm., Control
and Computing. Allerton, IL (2004)
26. Wang, Y., Swartz, A., Lynch, J.P., Law, K.H., Lu, K.-C., Loh, C.-H.: Wireless Feedback
Structural Control with Embedded Computing, Proc. of SPIE 11th Inter. Symp. on Nonde-
structive Evaluation for Health Monitoring and Diagnostics. San Diego, CA (2006)
27. Churchill, D.L., Hamel, M.J., Townsend, C.P., Arms, S.W.: Strain Energy Harvesting for
Wireless Sensor Networks. Proc. of SPIE 10th Annual Int. Symp. on Smart Struct. and
Mat. SPIE Vol. 5055, ed. by Varadan, V.K. and Kish, L.B. San Diego, CA (2003) 319-327
28. Roundy, S.J.: Energy Scavenging for Wireless Sensor Nodes with a Focus on Vibration to
Electricity Conversion. Ph.D. Thesis, Mech. Engng.. Univ. of California, Berkeley (2003)
29. Sodano, H.A., Inman, D.J., Park, G.: A Review of Power Harvesting from Vibration using
Piezoelectric Materials. Shock and Vibration Dig.. 36 (2004) 197-205
30. Lei, Y., Kiremidjian, A.S., Nair, K.K., Lynch, J.P., Law, K.H.: Time Synchronization Al-
gorithms for Wireless Monitoring System. Proc. of SPIE 10th Annual Int. Symp. on Smart
Struct. and Mat.. SPIE Vol. 5057, ed. S.C. Liu. San Diego, CA (2003) 308-317
31. Lynch, J.P., Wang, Y., Law, K.H., Yi, J.H., Lee, C.G., Yun, C.B.: Validation of Large-
Scale Wireless Structural Monitoring System on the Geumdang Bridge. Proc. of 9th Int.
Conf. on Struct. Safety and Reliability. Rome, Italy (2005)
32. Lu, K.-C., Wang, Y., Lynch, J.P., Loh, C.-H., Chen, Y.-J., Lin, P.-Y., Lee, Z.-K.: Ambient
Vibration Study of the Gi-Lu Cable-Stay Bridge: Application of Wireless Sensing Units.
Proc. of SPIE 13th Annual Symp. on Smart Struct. and Mat.. San Diego, CA (2006)
33. Lynch, J.P., Wang, Y., Lu, K.-C., Hou, T.-C., Loh, C.-H.: Post-seismic Damage Assess-
ment of Steel Structures Instrumented with Self-interrogating Wireless Sensors. Proc. of
8th National Conf. on Earthquake Engng. San Francisco, CA (2006)
34. Sohn, H., Farrar, C.: Damage Diagnosis using Time-series Analysis of Vibrating Signals.
J. of Smart Mat. and Struct.. 10 (2001) 446-451
35. Lee, C.-G., Lee, W.-T., Yun, C.-B., Choi, J.-S.: Summary Report - Development of Inte-
grated System for Smart Evaluation of Load Carrying Capacity of Bridges. Korea High-
way Corporation, Seoul, South Korea (2004)
36. Lin, P.-Y., Roschke, P.N., Loh, C.-H.: System Identification and Real Application of a
Smart Magneto-Rheological Damper. Proc. of 2005 Int. Symp. on Intelligent Control, 13th
Mediterranean Conf. on Control and Automation. Limassol, Cyprus (2005)
37. Franklin, G.F., Powell, J.D., Workman, M.: Digital Control of Dynamic Systems. Pearson
Education (2003)
38. Domer, B. and Smith, I.F.C.: An Active Structure that Learns. J. Comput. in Civil Engng.,
19 (2005) 16-24
39. Fest, E., Shea, K. and Smith, I.F.C.: Active Tensegrity Structure. J. Struct. Engng. 130
(2004) 1454-1465
40. Lynch, J.P., Law, K.H.: Decentralized Control Techniques for Large Scale Civil Structural
Systems. Proc. of the 20th Int. Modal Analysis Conference (IMAC XX), Los Angeles, CA
(2002)
41. Lynch, J.P., Law, K.H.: Decentralized Energy Market-Based Structural Control. Struct.
Engng. and Mech., 17 (2004) 557-572
42. Wang, Y., Swartz, R.A., Lynch, J.P., Law, K.H., Lu, K.-C., and Loh, C.-H.: Decentralized
Civil Structural Control using a Real-time Wireless Sensing and Control System. Proc. of
the 4th World Conf. on Struct. Control and Monitoring, San Diego, CA (2006)
Author Index