
Downloaded from ascelibrary.org by University of Liverpool on 08/03/23. Copyright ASCE. For personal use only; all rights reserved.

The Challenge of Computerizing Building Codes in a BIM Environment
N. Nawari1

1School of Architecture, University of Florida, P.O. Box 115702, Gainesville, FL 32611-5702; PH (352) 392-0205; FAX (352) 392-4606; email: [email protected]

ABSTRACT

Computerization of building codes and standards in connection with Building Information Modeling (BIM) represents a real challenge for the AEC industry. On one hand, building rules and regulations are written by professionals to be read and applied by people, since the reasoning and interpretation ability of the human brain is unlike anything implemented in computer systems. On the other hand, a smartphone app can outperform any engineer at solving a set of linear equations. However, many tasks that are easy for engineers, or humans in general, are surprisingly difficult for computers. Examples include the interpretation and expression of code and standard provisions that are characterized by subjective and descriptive rules. The perceptual abilities of engineers certainly surpass those of the fastest supercomputers. This has motivated many researchers to invest effort in creating computable representations of building codes and standards and their link to Building Information Modeling (BIM). This paper provides a critical review of the development of computable building-code rules that can be implemented in BIM-based automated rule-checking systems. It also addresses the complexity of these knowledge systems, the problems they pose for engineers and designers, and the methods for managing and advancing them.

INTRODUCTION

Computerizing the checking of rules and provisions of building codes and standards has interested many researchers and practitioners since the mid-sixties. For example, in 1966 Fenves investigated the application of decision tables to represent AISC standard specifications. He made the observation that decision tables, a then-novel if-then programming and program documentation technique, could be used to represent design standard provisions in a precise and unambiguous form. The concept was put to use when the 1969 AISC Specification (AISC 1969) was represented as a set of interrelated decision tables. Subsequently, Lopez et al. implemented the SICAD (Standards Interface for Computer Aided Design) system (Lopez and Wright 1985; Elam and Lopez 1988; Lopez et al. 1989). The SICAD system was a software prototype developed to demonstrate the checking of designed components, as described in application program databases, for conformance with design standards.
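The decision-table technique can be sketched in a few lines of code. The slenderness provision and its limit below are hypothetical, chosen only to illustrate the if-then form Fenves identified; they are not taken from any actual AISC provision.

```python
# Sketch of the decision-table technique for design standards: each row pairs
# a condition with an action, and rows are evaluated top to bottom.
# The slenderness provision and its limit are hypothetical, for illustration.

def check_slenderness(kl_over_r, limit=200.0):
    """Classify a member using a two-row if-then decision table."""
    decision_table = [
        (lambda x: x <= limit, "satisfies slenderness provision"),
        (lambda x: x > limit,  "violates slenderness provision"),
    ]
    for condition, action in decision_table:
        if condition(kl_over_r):
            return action

print(check_slenderness(150.0))  # satisfies slenderness provision
print(check_slenderness(250.0))  # violates slenderness provision
```

The appeal of the form is that every combination of conditions maps to exactly one action, which is what makes the representation precise and unambiguous.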
Computing in Civil Engineering (2012)

Garrett developed the Standards Processing Expert (SPEX) system (Garrett and Fenves 1987), using a standard-independent approach for sizing and proportioning structural member cross-sections. The system reasoned with a model of the design standard, represented using the SICAD system representation, to generate a set of constraints on a set of basic data items that represent the attributes of a design to be determined.
Further research effort was led by Singapore building officials, who began considering code checking on 2D drawings in 1995. In its next development, the effort switched to the CORENET system, working with IFC (Industry Foundation Classes) building models, in 1998 (Khemlani, 2005). In the United States, similar work has been initiated under the SmartCodes initiative. There are also several other research implementations of automated rule checking to assess accessibility for special populations (SMC, 2009) and for fire codes (Delis, 1995). The GSA and the US Courts have recently supported the development of design rule checking for federal courthouses, which is an early example of rule checking applied to automating design guides (GSA, 2007).

More focused research efforts on frameworks for the representation and processing of design standards for automated code conformance began two decades ago (Yabuki and Law 1992). Since that time, building models and the methods for rule checking have been developed, but effective SmartCodes systems are just beginning to emerge. In the 1990s, the introduction of the Industry Foundation Classes (IFC) led to early research on using this building model schema for building code checking. Han and others laid out a schema for a client–server approach (Han et al., 1998; Vassileva, 2000). They later developed a simulation approach for Americans with Disabilities Act (ADA) wheelchair accessibility checking (Han et al., 1999, 2002). These efforts set the stage for larger, more industrial-scale efforts. A comprehensive survey on the topic was reported by Fenves et al. (1995) and Eastman et al. (2009).
Because building rules and regulations are written by professionals to be read and applied by people, and the reasoning and interpretation ability of the human brain is unlike anything implemented in computer systems, the computerization of this process poses a real challenge to the AEC industry. Fortunately, recent advances in Artificial Intelligence research and Building Information Modeling (BIM) can provide key solutions. The next sections review some of the AI fundamentals that are relevant to the AEC knowledge domain.

Nature of human languages. Human languages are easy for children to learn, they can express any thought that any adult might ever conceive, and they are adapted to the limitations of human breathing rates and short-term memory (Sowa, 2007). This indicates that, with a finite vocabulary, they possess infinite extensibility of expression and an upper bound on the length of phrases. Together, these imply that most words in a natural language will have an open-ended number of senses, and thus vagueness and ambiguity are inevitable. Throughout history, many philosophers (e.g., Charles Sanders Peirce and Ludwig Wittgenstein) recognized this vagueness and ambiguity and concluded that these are not defects in language, but essential characteristics that permit it to express the variety of things and all aspects of objects that humans need to describe.
that human need to describe. For example, Peirce noted the difficulty of stating any

Computing in Civil Engineering (2012)


COMPUTING IN CIVIL ENGINEERING 287

general principle with absolute precision (Sowa, 2007): “It is easy to speak with
precision upon a general theme. Only, one must commonly surrender all ambition to
Downloaded from ascelibrary.org by University of Liverpool on 08/03/23. Copyright ASCE. For personal use only; all rights reserved.

be certain. It is equally easy to be certain. One has only to be sufficiently vague. It is


not so difficult to be pretty precise and fairly certain at once about a very narrow
subject”. This quotation summarizes the futility of any attempt to develop a precisely
defined ontology of everything, but it offers two useful alternatives: an informal
classification, such as a thesaurus or terminology, and an open-ended collection of
formal theories about narrowly delimited subjects. It also raises the questions of how
and whether these resources might be used as a bridge between informal natural
language and formally defined logics and programming languages.

Modeling languages. During the second half of the 20th century, various models of
language understanding were proposed and implemented in computer programs. All
of them have been useful for processing some aspects of language, but none of them
have been adequate for all aspects of language or even for full coverage of just a
single aspect.

Statistics: In the 1950s, Shannon’s information theory and other statistical methods
were popular in both linguistics and psychology, but the speed and storage capacity
of the early computers were not adequate to process the volumes of data required. By
the end of the century, the vastly increased computer power made them competitive
with other methods for many purposes. Their strength is in pattern-discovery
methods, but their weakness is in the lack of a semantic interpretation that can be
mapped to the real world or to other computational methods.

Syntactics: Chomsky’s transformational grammar and related methods dominated linguistic studies in the second half of the 20th century; they stimulated a great deal of theoretical and computational research, and the resulting syntactic structures can be adapted to other paradigms, including those that compete with Chomsky’s. But today, Chomsky’s argument (Chomsky, 1957) that syntax is best studied independently of semantics is at best unproven and at worst a distraction from a more integrated approach to language modeling. In the construction industry, ISO STEP is a good example of such modeling. The main issue with STEP is that its models have proven too complex and difficult to implement, and they can now be replaced more efficiently by web-based technologies.

Logic: By the 1970s, philosophical studies by Carnap and Tarski, among others, had led to formal logics with better semantic foundations and reasoning methods than any competing approach. However, those methods can only interpret sentences that have been deliberately written in a notation that looks like a natural language, but is actually a syntactic variant of the underlying logic.

Lexical Semantics: Instead of forcing language into the mold of formal logic, lexical
semantics deals with all features of syntax, vocabulary, and context that can cause sentences
to differ in meaning. The strength of lexical semantics is a greater descriptive adequacy and
sensitivity to more aspects of meaning than other methods. Its weakness is the lack of an agreed definition of the meaning of ‘meaning’ that can be related to the world and
to computer knowledge representation systems.

W3C Semantic Web (SW): Web technologies provide modeling alternatives such as the eXtensible Markup Language (XML) and the XML Schema Definition (XSD) language, which have effectively replaced the SPFF and EXPRESS languages. However, the problem with these languages is that they are limited to structure and do not really provide instruments for adding actual semantics in the form of concepts, properties, and rules. These limitations have led to the development of the Web Ontology Language (OWL), with RDF-XML (Resource Description Framework) as a syntax for content conforming to OWL-expressed ontologies. The result is, in essence, a fully generic, freely reusable data structure with knowledge specified in OWL. OWL is a fully web-based and distributed alternative to traditional ISO STEP technologies such as EXPRESS and SPFF. Semantic Web technologies like OWL and RDF-XML provide promising new modeling languages for the construction industry. However, their application in representing building codes and standards remains limited.

Neural Networks: Many researchers believe that neurophysiology may someday contribute to better theories of how humans generate and interpret natural language. That may well prove true, but the little that is currently known about how the brain works can hardly contribute anything to linguistic theory and knowledge representation today. Neural networks are statistical methods that share the strengths and weaknesses of other statistical methods; however, they bear little resemblance to the way actual brain neurons work.
Each of these approaches is based on a particular technology: mathematical
statistics, grammar rules, dictionary formats, or networks of neurons. Each of them
ignores those aspects of language for which the technology is ill adapted. For
humans, however, language is seamlessly integrated with every aspect of life, and
they don’t stumble over boundaries between different technologies. The greatest
strength of natural language is its flexibility and power to express any sublanguage
ranging from cooking recipes to stock-market reports and mathematical formulas.

BUILDING CODES COMPUTABLE MODEL

Building codes generally aim to organize, classify, label, and define the rules, events, and patterns of the built environment to achieve safety, efficiency, and economy. However, these best-laid plans are overwhelmed by inevitable change, growth, innovation, progress, evolution, diversity, and entropy. These rapid changes, which create difficulties for both young engineers and experienced professionals, are far more disruptive for the fragile traditional knowledge bases in computer systems. Although precise definitions and specifications are essential for solving problems in building design, many code provisions are not well defined and are highly subjective in nature. Furthermore, code provisions are characterized by continuous gradations and an open-ended range of exceptions, which make it impossible to give complete, precise definitions for concepts that are learned through experience.



For over two thousand years, efforts have been made to create intelligent classification systems; Aristotle's categories and his system of syllogisms for reasoning about those categories long remained the most highly developed system of logic and ontology (Sowa, 2004). The syllogisms are rules of reasoning based on four sentence patterns, each of which relates one class in the subject to another category in the predicate: (i) Universal affirmative: every truss is a frame. (ii) Particular affirmative: some trusses are space frames. (iii) Universal negative: no truss is a deep foundation. (iv) Particular negative: some space frames are not trusses.

Fascinating enough is the effort by Leibniz, who in 1666 tried to automate Aristotle's syllogisms by creating a computable model: “The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further argument, in order to see who is right.”
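Leibniz's "let us calculate" can be made literal for the four patterns above. The member sets below are invented for illustration; over finite categories, each syllogistic pattern reduces to a mechanical set operation.

```python
# The four syllogistic sentence patterns, checked mechanically over small
# finite categories. The member sets are invented examples, not real taxonomies.
trusses = {"roof truss", "space truss"}
frames = {"roof truss", "space truss", "portal frame"}
space_frames = {"space truss", "portal frame"}
deep_foundations = {"driven pile", "caisson"}

assert trusses <= frames                 # (i) universal affirmative: every truss is a frame
assert trusses & space_frames            # (ii) particular affirmative: some trusses are space frames
assert not (trusses & deep_foundations)  # (iii) universal negative: no truss is a deep foundation
assert space_frames - trusses            # (iv) particular negative: some space frames are not trusses
print("all four syllogistic patterns hold")
```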
At the same time, it is crucial to recognize the limitations of any computerized system by clearly indicating which parts of the codes and standards cannot be computerized.
The introduction of SmartCodes will greatly improve current design practice by simplifying access to code provisions and compliance checks. Representing building codes and standards in a computable and flexible model that accommodates and makes sense of the specific nature of this knowledge domain plays a key role; as Leibniz stated, “let us calculate without further ado”. By breaking through the precincts of code and standard provisions, design software, and Building Information Modeling, a solution to this seemingly insurmountable hurdle can be achieved.

SmartCode refers to the computable digital format of the building codes that allows automated rule and regulation checking without modifying a building design; rather, it assesses a design on the basis of the configuration of parametric objects and their relations or attributes. SmartCodes apply rule-based systems to a proposed design and report results in a format such as “PASS”, “FAIL”, “WARNING”, or “UNKNOWN” for conditions where the required information is incomplete or missing.
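The four-valued outcome can be sketched directly. The rule, object, and attribute names below are hypothetical, for illustration only; a real checker would read parametric objects and their attributes from a BIM model rather than from dictionaries.

```python
# A minimal sketch of the SmartCodes result scheme described above: a rule is
# evaluated against one model object, and missing information yields UNKNOWN
# rather than a silent pass or fail. Attribute names are hypothetical.

def check_rule(obj, attribute, predicate):
    """Evaluate one rule against a model object, returning a SmartCodes-style result."""
    if attribute not in obj:
        return "UNKNOWN"  # required information is incomplete or missing
    return "PASS" if predicate(obj[attribute]) else "FAIL"

door = {"clear_width_mm": 810}
# Hypothetical accessibility provision: clear door width must be at least 815 mm.
print(check_rule(door, "clear_width_mm", lambda w: w >= 815))  # FAIL
print(check_rule(door, "fire_rating", lambda r: r >= 60))      # UNKNOWN
```

Treating missing data as a distinct outcome, rather than folding it into FAIL, is what lets the checker distinguish a non-compliant design from an under-specified model.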
Recently, a number of researchers have investigated ontology-based approaches (Yurchyshyna et al. 2009) and semantic web information as possible computable frameworks (Pauwels et al. 2009) for computerizing building code rules. The first approach works on formalizing conformance requirements using the following methods (Yurchyshyna et al. 2009): (i) knowledge extraction from the texts of conformance requirements into formal languages (e.g., XML, RDF); (ii) formalization of conformance requirements by capitalizing on domain knowledge; (iii) semantic mapping of regulations to industry-specific ontologies; and (iv) formalization of conformance requirements in the context of the compliance-checking problem. The semantic web approach, on the other hand, focuses on enhancing the IFC model by using a description language based on a logic theory, such as those found in the semantic web domain.
The computable representation of building codes and standards requires a special-purpose ontology that is consistent with the general-purpose ontology set forth by the National BIM Standard (NBIMS). This special-purpose ontology must be able to handle the exceptions and uncertainties present in various building code provisions. The organization of building elements into categories and subcategories in a taxonomic hierarchy is a vital part of these ontologies. Although most code checking activities take place at the level of individual structural elements, much rule checking and reasoning begins at the level of categories. Categories also serve to assist in making predictions about building objects once they are classified. Furthermore, categories organize and simplify the knowledge base through inheritance. Thus, a system for mapping terms, definitions, and code provisions to existing classification tables will provide key solutions. OmniClass represents a good example of a general-purpose (upper) ontology for the construction industry. Figure 1 below illustrates the classification of some building elements using the categories-and-subcategories concept mentioned earlier.
    Substructure (21-01 00 00)
        Foundations (21-01 10)
            Standard Foundations (21-01 10 10)
                Wall Foundations (21-01 10 10 10)
                Column Foundations (21-01 10 10 30)
                Supplementary Components (21-01 10 10 90)
            Special Foundations (21-01 10 20)
                Driven Piles (21-01 10 20 10)
                Bored Piles (21-01 10 20 15)
                Caissons (21-01 10 20 30)
                Raft Foundations (21-01 10 20 60)
                Grade Beams (21-01 10 20 80)

Figure 1. Part of OmniClass Table 21 – upper ontology
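The role of inheritance in such a taxonomy can be sketched with classes modeled on the Figure 1 fragment. The rule names here are illustrative, not actual code provisions; the point is only that a provision attached to a category is inherited by every subcategory.

```python
# Sketch of taxonomic inheritance for rule checking, following the OmniClass
# fragment in Figure 1. The rule names are illustrative, not real provisions.

class Foundation:                          # OmniClass 21-01 10
    omniclass = "21-01 10"
    def applicable_rules(self):
        return ["frost-depth check"]       # inherited by every subcategory

class StandardFoundation(Foundation):      # OmniClass 21-01 10 10
    omniclass = "21-01 10 10"

class WallFoundation(StandardFoundation):  # OmniClass 21-01 10 10 10
    omniclass = "21-01 10 10 10"
    def applicable_rules(self):
        # Subcategories extend, rather than restate, the inherited rules.
        return super().applicable_rules() + ["minimum wall footing width"]

print(WallFoundation().applicable_rules())
# ['frost-depth check', 'minimum wall footing width']
```

This is the simplification the text describes: a provision stated once at the category level need not be repeated for each of the category's members.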


The current OmniClass that will be adopted by NBIMS Version 2 includes a limited number of tables (namely, Tables 13, 21, 22, 23, 32, and 36). However, the transformation of codes and standards requires the inclusion of all tables to establish a general-purpose ontology for the computable model of the building codes. Hopefully, this will be addressed in subsequent versions of the NBIMS.

In addition to the general-purpose ontology, a special-purpose ontology needs to be developed to cover the level of detail required for the computable models of the SmartCodes. This special-purpose ontology will play a role similar to the one the UniFormat II (ASTM E1557) classification played for the cost-estimating and project-management domains.

In the United States, the International Code Council (ICC) codes will be available in some form of XML. The ontology used in this model is based on the OmniClass classification system and the International Framework for Dictionaries (IFD). The dictionary is being developed as part of the IFD effort and, in the US, is being managed by the Construction Specifications Institute (CSI) in cooperation with the ICC.

BIM-MODEL CONTENT

With the introduction of Building Information Modeling, the production and dissemination of information has accelerated, but ironically, communication has become more difficult. When construction documents were printed on paper, an engineer could compare details from different consultants, even though they used different formats and terminology. But when everything is model-driven, designer, contractor, client, and vendor systems cannot interoperate unless their formats are identical.
The primary requirement in the application of SmartCodes is that object-based building models (BIM) must contain the information necessary to allow complete code checking. BIM objects normally have a family, a type, and properties. For example, an object that represents a structural column possesses a type and properties such as material (steel, wood, or concrete), sizes, and so on. Thus, the requirements for a building model adequate for code conformance checking are stricter than normal drafting requirements. Architects and engineers creating building models that will be used for code conformance checking must prepare them so that the models provide the needed information in well-defined, agreed-upon structures.
The GSA BIM Guides (GSA, 2009) provide initial examples of modeling requirements for simple rule checking. This information must then be properly encoded in IFC by software developers to allow proper translation and testing of the design program or the rule-checking software. IFC is currently considered one of the most appropriate schemas for improving information exchange and interoperability in the construction industry. Software applications have mainly concentrated on deriving additional information concerning specialized domains of interest. In order to automatically verify the information in an exchange process, the information must be detailed beyond the general level of the IFC standard. The code conformance domain places a new level of detail and requirements on the IFC model. This should be achieved by developing the appropriate Information Delivery Manuals (IDMs) and Model View Definitions (MVDs) for the Automated Code Conformance Checking (AC3) domain (Nawari 2011).
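The model-content requirement above can be sketched as a pre-check that runs before any code checking: verify that each object carries the properties its checking domain needs. The object types and property names below are hypothetical placeholders, not actual IFC attributes or MVD content.

```python
# Sketch of a model-content pre-check in the spirit of an MVD requirement:
# before rule checking, report the properties each object is missing.
# Object types and property names are hypothetical, not actual IFC attributes.

REQUIRED = {
    "structural_column": {"material", "width_mm", "depth_mm"},
    "door": {"clear_width_mm", "fire_rating"},
}

def missing_properties(obj_type, obj_props):
    """Return, sorted, the properties an object lacks for code checking."""
    return sorted(REQUIRED.get(obj_type, set()) - set(obj_props))

column = {"material": "concrete", "width_mm": 400}
print(missing_properties("structural_column", column))  # ['depth_mm']
```

A checker built this way can emit the UNKNOWN results discussed earlier instead of failing silently when a model author omits required content.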

CONCLUSIONS

Building rules and regulations are written by professionals to be read and applied by people, and the reasoning and interpretation ability of the human brain is unlike anything implemented in computer systems; the computerization of this process therefore poses a real challenge to the AEC industry. The computable model for code representation must possess enough elasticity and expressiveness to capture most of the provisions, much as a child grows from a simple stage to a more sophisticated one without relearning everything from scratch: each stage from infancy to adulthood adds new skills by extending, refining, and building on earlier representations and operations.

REFERENCES

Chomsky, N. (1957). “Syntactic Structures”, The Hague/Paris: Mouton.


Delis, E.A., and Delis, A. (1995). “Automatic fire-code checking using expert-system
technology”, Journal of Computing in Civil Engineering, ASCE 9 (2), pp.
141–156.
Eastman, C. M., Lee, J.-m., Jeong, Y.-s., and Lee, J.-k. (2009). “Automatic rule-based checking of building designs”, Automation in Construction, 18, pp. 1011–1033, Elsevier.
EDM (2009). “EXPRESS Data Manager”, EPM Technology, http://www.epmtech.jotne.com
Fenves, S. J. (1966). “Tabular decision logic for structural design”, J. Structural Engn, 92, pp. 473-490.

Fenves, S. J. and Garrett Jr., J. H. (1986). “Knowledge-based standards processing”, Int. J. Artificial Intelligence Engn, 1, pp. 3-13.

Fenves, S. J., Garrett, J. H., Kiliccote. H., Law. K. H., and Reed, K. A. (1995).
"Computer representations of design standards and building codes: U.S.
perspective.",The Int. J. of Constr. Information Technol., 3(1), pp. 13-34.
Garrett, J. H., Jr., and S. J. Fenves, (1987). “A Knowledge-based standard processor
for structural component design”, Engrg. with Computers, 2(4), pp 219-238.
GSA (2007). “U.S. Courts Design Guide”, Administrative Office of the U.S. Courts,
Space and Facilities Division, GSA,
http://www.gsa.gov/Portal/gsa/ep/contentView.do?P=PME&contentId=15102
&contentType=GSA_DOCUMENT.
Han, C., Kunz, J., and Law, K. (1997). “Making automated building code checking a reality”, Facility Management Journal (September/October), pp. 22–28.
Han, C., Kunz, J., and Law, K. (1999). “Building design services in a distributed architecture”, J. of Computing in Civil Engn., ASCE 13 (1), pp. 12-22.
Han, C., Kunz, J., and Law, K. (2002). “Compliance Analysis for Disabled Access”, Advances in Digital Government: Technology, Human Factors, and Policy, in: William J. McIver.
ISO 10303 ‘STEP’. Product Data Representation and Exchange, http://www.tc184-
sc4.org/
Khemlani, L. (2005). “CORENET e-PlanCheck: Singapore's automated code checking system”.
Lopez, L. A., and R. N. Wright (1985). “Mapping Principles for the Standards
interface for Computer Aided Design”, NBSIR 85-3115, National Bureau of
Standards, Gaithersburg, MD.
Lopez, L. A., S. Elam and K. Reed (1989). “ Software concept for checking
engineering designs for conformance with codes and standards”. Engn. with
Computers, 5, pp.63-78.
NIBS (2007): NBIMS (National Building Information Modeling Standard), Version
1, Part 1: “Overview, Principles, and Methodologies”, National Institute of
Building Sciences. http://www.nationalcadstandard.org/ (Nov. 2010).
Nawari, N. O. (2011). “Automated Code Conformance in Structural Domain”,
Proceeding of the 2011 ASCE Int. Workshop on Computing in Civil
Engineering, ASCE, pp.569-577.
OWL, Web Ontology Language, W3C. http://www.w3.org/TR/owlfeatures/.
Sowa, J. F. (2007). Chapter 2 in: Game Theory and Linguistic Meaning, A.-V. Pietarinen
Sowa, J. F. (2004). “Graphics and languages for the flexible modular framework”, in
K. E. Wolff, H. D. Pfeiffer, & H. S. Delugach, eds., Conceptual Structures at
Work, LNAI 3127, Springer-Verlag, Berlin, pp. 31-51.
SMC (2009). “Automated code checking for accessibility”, Solibri,
http://www.solibri.com/press-releases/solibri-model-checker-v.4.2.
Yabuki, N. and Law, K. H. (1992). "An integrated framework for design standards
processing." Tech. Rep. 67, Ctr. for Integrated Fac. Engrg., Stanford
University, Stanford, Calif.
