Umit Isikdag
Enhanced Building Information Models
Using IoT Services and Integration Patterns
SpringerBriefs in Computer Science
Series editors
Stan Zdonik
Shashi Shekhar
Jonathan Katz
Xindong Wu
Lakhmi C. Jain
David Padua
Xuemin (Sherman) Shen
Borko Furht
V.S. Subrahmanian
Martial Hebert
Katsushi Ikeuchi
Bruno Siciliano
Sushil Jajodia
Newton Lee
More information about this series at http://www.springer.com/series/10028
Umit Isikdag
Mimar Sinan Fine Arts University
Istanbul
Turkey
Have you ever thought of the Internet as a ‘Thing’? A physical object that you can
hold, measure the dimensions, visualize and so on. You may suggest that the
Internet is a combination of physically and non-physically existent ‘Things’ such as
communication rules, messages, information sent from A to B, which is also true.
But how many of you think of cables and satellites, when you send an e-mail or
start a video conference? Actually we use a ‘Thing’ to do that, a global ‘Thing’ that
has physical and non-physical components. However, regardless of the technology
behind it, we concentrate on whether it gets the job done, and mostly it does. Thus the main focus is not on the ‘Thing’ itself but on information, and on the success of sharing and exchanging information. Do you think this vision is enough? Maybe not; we also want to receive information as soon as something
happens. We want real-time information. Actually we do not care too much about
physical ‘Things’, but we do care about the states of ‘Things’. We are curious. We
would like to learn what is happening all around us. As soon as possible!
The key technologies we elaborate on in this book are the Internet of Things
(IoT), Web services and building information modelling. The first technology, IoT,
aims to answer the questions discussed until now. IoT does not care about the
existence of ‘Things’. ‘Things’ can be real, ‘Things’ can be virtual; what IoT really
focuses on is the state of ‘Things’. The approach concentrates on making every
physical and virtual ‘Thing’ a publisher of information, like the nerve cells in the
brain. The IoT approach enables ‘Things’ to publish information when a state
change occurs. For instance, in a home that implements the IoT approach, a door
will publish information such as ‘I am locked now!’, a light bulb will indicate ‘I am
in a morning blue color at the moment’. Is this all there is to the hype about the IoT? There is more. ‘Things’ will also become capable of taking
actions based on messages coming from other ‘Things’ or humans. For example,
you can use a ‘Thing’ (a cell phone) to control your home lighting while you are far
away, or you can turn your TV set off from another country when you think it is
time for the children to sleep. These ideas were the science fiction of yesterday, but
are the science of today; a reality that has been a part of our lives for just a few
years, but will be in our lives for many more. It is inevitable that the technologies
termed within the context of IoT will be a part of our lives. The same is true of the other two technologies that this book focuses on. Web services, for one. The Internet came
into our everyday life around 20 years back. At that time it was viewed as a new
way of speaking with friends, a new way of sending mail, a new way of marketing and selling goods and a new way of expressing oneself to the world. Over the past
20 years, although we are still confronted with issues of digital divide, things have
significantly changed. For instance, mobile devices are now of no use if they cannot
connect to the Internet. It is the same with tablet computers. The question is how we should interpret a situation in which a ‘Thing’ becomes useless when it cannot benefit from a certain technology, such as the Internet. Let us
take the analogy of electricity/water and dishwashers. ‘Things’ need to benefit from
utilities in order to work; however, once a technology comes to the level that a
‘Thing’ cannot work without that technology, the latter is no longer a technology but
a utility. The situation is the same today for the Internet. The Internet will become a
key utility in the future. From this perspective Web services can be thought of as
interaction endpoints of this utility. Today, there are architectural advancements on
the implementation of these endpoints (such as Representational State Transfer). In
fact it should be noted that these endpoints are not entry/exit points (such as plugs
for electricity), but they enable us to interact with (hardware/software) components
that make use of this utility. Thus, Web services are endpoints for interaction. It is
our choice actually to use these endpoints (or not) for interaction, as there are also
other choices that we can use such as sending messages from one component to
another, or from a human to the components. Message brokers are middleware tools
that help us to distribute these messages. Finally, building information modelling is another much-hyped technology that has been a buzzword in the construction industry for the past 15 years. These models emerged as a result of a thrust by software companies to tackle the problem of inefficient information exchange between different software applications and
to enable true interoperability. An industry standard schema (namely Industry
Foundation Classes) was developed to facilitate information exchange between
construction industry applications. Later, the industry noted that models produced
within a common schema could be utilized to enable shared use of information with
the help of shared databases. Thus, BIM became the data sharing technology, where
the most up to date and accurate models of a building are stored in shared central
databases. This opened new doors. Industry started to focus on making
pre-construction simulations using these models, accompanied by multiple stakeholders, which is now termed the nD modelling approach. Later, the information residing in the models was maintained following the construction phase, and the models started to act as the virtual ID cards of the buildings. In parallel, developments in city modelling led to information requirements from these models,
which have now become the information providers of the digital city. The city is a
living entity and city-level applications require information from ‘Things’ (i.e. real
and virtual) and from ‘Models’ in real time. Thus, today emerges the requirement
for real-time information regarding buildings, indoors and all other city elements in
order to efficiently monitor and manage a city. In essence, the construction industry
applications (such as smart buildings) and city monitoring/city management
Chapter 1
Building Information Models: An Introduction
Abstract A building information model (BIM) can be defined as the digital representation of a building that contains semantic information about the building
elements. The keyword BIM also defines an information management process
based on the collaborative use of semantically rich 3D digital building models in all
stages of the project’s and building’s lifecycle. A BIM is defined by its object
model schema. Industry Foundation Classes (IFC) is the most popular BIM standard (and schema) currently. This chapter starts by providing definitions of BIM
and the general characteristics of IFC models, elaborates on sharing/exchanging of
BIMs and model views, and concludes by discussing the role of BIMs in
enterprises.
1.1 Introduction
building. In the mid-1990s, Autodesk's DXF format became a de facto standard for
the exchange of 2D geometric information. The main drawback of the CAD outputs
was that the drawings in CAD documents only consisted of sets of polylines and
polygons which did not contain semantic and ontological (i.e. product) information
about the building and its components. Parallel to the developments in the CAD
domain, the efforts towards the representation of building product information
generated results such as the definition of classification systems for materials.
Classification systems such as OMNICLASS and UNIFORMAT later served as the
foundation for the idea of building product models (where semantic information is
stored together with geometric information). According to Tolman (1999), early
product models included general AEC reference model (GARM) (Gielingh 1988),
integration core model (ICM) and the integration reference model architecture
(IRMA). The idea behind the definition of building product models was to facilitate
the representation of building-related product information at the most appropriate
time and to the right project team member. Building product models had the following characteristics:
• They provided detailed geometric and semantic information about all building
elements in a tightly coupled form.
• They focused on addressing the problem of poor interoperability between
software applications in the construction industry.
• Most of them were defined based on ISO 10303 data definition guidelines.
ISO 10303, which is also known as STEP (STandard for Exchange of Product
model data), emerged in the early 1990s as a formal standard to exchange product
data in all production industries. The emergence of STEP was a result of the issues
associated with the shortcomings of CAD data translation. The distinction between
data sharing and exchange was clearly identified during the STEP development efforts; in
addition, the STEP standard identified four implementation levels for data storage
and exchange. These will be explained further in this chapter. The early efforts for
building product modelling continued with the development of the building construction core model (BCCM), which was later approved as Part 106 of the STEP
standard (ISO 10303). Another early effort in the area included the COMBINE
project, which was explained by Sun and Lockley (1997) and Eastman (1999).
Other important efforts in the area included computer integrated manufacturing of
constructional steelwork—CIMSteel and CIMSteel integration standards (CIS/2)—
explained in Eastman (1999), NIST CIS/2 Website (2005), and Eastman et al.
(2005), engineering data model (EDM) as explained by Eastman et al. (1991),
semantic modelling extension (SME) explained in Zamanian and Pittman (1999),
models developed in the integrated design environment (IDEST) project as
explained in Kim et al. (1997), Kim and Liebich (1999), RATAS and STEP Part
225 as explained in Eastman (1999). These efforts later continued with the introduction of the IFC and CIS/2 building product models, which formed the basis for
the paradigm that is known today as building information modelling. First, we look
at the definition of building information modelling; later we elaborate on the IFC
model architecture, the role of model views and the function of BIM in enterprises.
The current key efforts in the area of BIM are IFC and CIS/2, both of which are
defined using STEP (ISO 10303) description methods. In this section, we focus on
IFC, which is the most popular BIM standard (and schema) currently. IFC appeared
as a result of the effort of IAI/BuildingSmart as a common language to improve the
communication, productivity, delivery time, cost and quality throughout the design,
construction and maintenance of buildings. In the IFC model, each specification
(called a ‘class’) is used to describe a range of things that have common characteristics. IFC-based objects aim to allow AEC/FM professionals to share a project model while allowing each profession to define its own view of the objects contained within the model. In 2005, IFC became an ISO Publicly Available
Specification (as ISO 16739). Most AEC industry software today is capable of
importing and exporting its internal models as IFCs, and some of them are also
capable of acquiring information from an IFC model through the use of a shared
resource such as a model server database.
The IFC model architecture provides a modular structure for the development of
model components by use of ‘model schemata’. There are four conceptual layers
within the architecture (see Fig. 1.1) which use a strict referencing principle. Within
each conceptual layer a set of model schemata is defined.
1. The first conceptual layer (resource) provides resource classes used by classes at
the higher levels.
2. The second conceptual layer (core) provides a core project model and contains
the kernel and several core extensions.
3. The third conceptual layer (interoperability) provides a set of modules defining
concepts or objects common across multiple application types or construction
industry domains.
4. Finally, the fourth and the highest layer (domain) provides a set of modules
tailored for the specific construction industry domain or application type.
The architecture operates on a ‘gravity principle’. At any layer, a class may
reference a class at the same or at the lower layer, but may not reference a class
from a higher layer.
Resources are general-purpose or low-level concepts or objects that are independent of application or domain, but which rely on other classes in the model for
their existence. The core layer provides the basic structure of the IFC object model
and defines most general concepts specialized by higher layers of the IFC object
model. The main goal of the design of the interoperability layer is the provision of
schemata that define the concepts (or classes) common to two or more domain
models. These schemata enable interoperability between different domain models.
Domain layer models provide further model details within the scope of requirements for a domain process or for a type of application. An important purpose of
domain models is to provide the leaf node classes that enable information from
external property sets to be attached appropriately. Figure 1.1 shows a sample of
selected components/classes in each layer of the IFC 2x4 model. Readers are
advised to refer to IFC 2x4 documentation for the full model reference.
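As a small, hedged illustration of these concepts, the sketch below uses the open-source ifcopenshell library (a tool not introduced in this book) to open an IFC file and print the property sets attached to wall entities; the file name and data are placeholders, and the traversal shown is one common way of reaching property sets rather than the only one.

```python
# A sketch (not from the book) of inspecting IFC entities and their attached
# property sets with the open-source ifcopenshell library; "office.ifc" is a
# placeholder for any IFC model file.
import ifcopenshell

model = ifcopenshell.open("office.ifc")            # parse the STEP physical file

for wall in model.by_type("IfcWall"):              # domain-layer leaf entities
    print(wall.GlobalId, wall.Name)
    # Property sets are attached through IfcRelDefinesByProperties relations.
    for rel in wall.IsDefinedBy:
        if rel.is_a("IfcRelDefinesByProperties"):
            pset = rel.RelatingPropertyDefinition
            if pset.is_a("IfcPropertySet"):
                for prop in pset.HasProperties:
                    if prop.is_a("IfcPropertySingleValue") and prop.NominalValue:
                        print("  ", pset.Name, prop.Name, prop.NominalValue.wrappedValue)
```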
exports a snapshot of the data for others to use. Other software systems that import
the exchange file have effectively assumed the ownership of the data. In the sharing
model, there is a centralized control of ownership and there is a known master copy
of the data, i.e. the copy maintained by the information resource. STEP has four
different implementation levels derived from PDES implementation levels. Loffredo
(1998) mentioned these four levels as file exchange level, working form level
(SDAI API access), database level and knowledgebase level. The levels mentioned
here, except the last one, are used for the exchange and sharing of BIMs, but further novel methods are also in use today for BIM information sharing, as some of today's BIMs are defined/described without STEP description methods (i.e. by using an XML language that is not STEP compliant).
A BIM is defined by its object model. The object model of the BIM is the logical
data model that defines all entities, attributes and relationships. The object model
today is implemented in the form of EXPRESS or XSD schemas. The model data
(i.e. instances) is created by an application (e.g. CAD, analysis, etc.) and stored in
physical files or databases. As mentioned above, it is possible to share and
exchange BIMs using three implementation levels of STEP, if the model is defined
using STEP description methods (such as EXPRESS). If not, it will most likely be defined and persisted as a model in an XML file or in a relational or object
database, and the data will either be exchanged as XML files or sharing will be
realized using the XML database interfaces.
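As a minimal sketch of the XML-based exchange route mentioned above, the snippet below reads entity attributes from a simplified, hypothetical XML serialization using only the Python standard library; the element and attribute names are illustrative and do not follow the official ifcXML schema.

```python
# A sketch of reading BIM instance data from a simplified XML serialization;
# the element and attribute names below are illustrative only.
import xml.etree.ElementTree as ET

xml_data = """
<building id="B1">
  <wall id="W1" name="External Wall" fireRating="REI60"/>
  <wall id="W2" name="Partition Wall" fireRating="EI30"/>
</building>
"""

root = ET.fromstring(xml_data)
for wall in root.findall("wall"):
    print(wall.get("id"), wall.get("name"), wall.get("fireRating"))
```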
snapshot of the BIM in one stage of the project. Another type of model view is the
application/system-specific view. The application/system-specific view does not
have to be a subset of the base information model. By contrast, this view is an
information model of its own, defined according to the needs of the
application/system it works with. As mentioned in the literature, an information
model can be called an application/system-specific view based on the following
conditions:
1. The model should interact with a base information model.
2. The model should address the specific data needs of an application/system it is
developed for.
3. The model should address a similar information domain to that of the base information model.
4. The model should address the same information domain as the
application/system it is developed for.
In common practice, the model views (transient, persistent and application
specific) are generated and updated using STEP EXPRESS-X and XSL languages.
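A hedged sketch of the view-generation idea follows: the XSL route mentioned above can be emulated with the third-party lxml library, where a small stylesheet derives a toy "structural view" from an XML-serialized model; both the element names and the filtering rule are invented for illustration.

```python
# A sketch of deriving a simple model view from an XML-serialized model with
# an XSL transformation (lxml); element names are illustrative, not ifcXML.
from lxml import etree

model = etree.fromstring(
    "<model>"
    "<wall id='W1' loadBearing='true'/>"
    "<wall id='W2' loadBearing='false'/>"
    "<door id='D1'/>"
    "</model>"
)

# The toy view schema keeps only load-bearing walls (a 'structural view').
xslt = etree.fromstring("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/model">
    <structuralView>
      <xsl:copy-of select="wall[@loadBearing='true']"/>
    </structuralView>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
view = transform(model)
print(etree.tostring(view, pretty_print=True).decode())
```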
1.6 The Role of BIM in the Enterprise
In the past 20 years, several research projects envisioned the role of building
information models in enterprise (software) architectures. As mentioned above,
they emerged to facilitate interoperability of software, but later building information
modelling gained more popularity as a process of information management through
the lifecycle of a building or a facility. BIM paradigm goals such as efficient
information exchange, better coordinated design and construction, construction
process simulations in higher dimensions (4D time, 5D cost, …) are now being
realized not only by use of BIMs alone but together with many other supporting
tools and technologies. In the beginning of the 2000s, the role of BIM was perceived as being a common shared repository of information for the construction
enterprise where different software in use can acquire information from and update
information to the model. For example, the participants of the EU research project
ROADCON (ROADCON Project Deliverable 4 and 5.2 2001) explained that a
BIM can be created with an architectural design application and the structural
design can be carried out using the same model. Likewise, heating, ventilation and
air conditioning (HVAC) and electrical and lighting designs can be undertaken
using the same model. 4D simulations can be made to evaluate several different
phases of construction. The model will also contain information about materials and
their properties, and the facilities management (FM) services can benefit from the
model after the construction phase (see Fig. 1.2).
The envisioned role of BIMs is not only limited to being a shared information
resource; as explained in the project, many services will also support the role of
BIM in the overall construction enterprise. Different levels were identified in the
Fig. 1.2 Software interactions with a building information model (image in the middle is courtesy
of njaj at FreeDigitalPhotos.net)
References
Eastman, C.: Building Product Models: Computer Environments Supporting Design and
Construction. CRC Press, USA (1999)
Eastman, C., Bond, A.H., Chase, S.C.: Application and evaluation of an engineering data model.
Res. Eng. Des. 2(4), 185–208 (1991)
Eastman, C., Wang, F., You, S.F., Yang, D.: Deployment of an AEC industry sector product
model. Comput. Aided Des. 37(12), 1214–1228 (2005)
Gallaher, M.P., O’Connor, A.C., Dettbarn Jr., J.L., Gilday, L.T.: Cost analysis of inadequate
interoperability in the U.S. capital facilities industry. NIST Publication GCR 04-867 (2004).
Available online at http://www.bfrl.nist.gov/oae/publications/gcrs/04867.pdf
Gielingh, W.F.: General AEC Reference Model GARM, CIB Seminar Conceptual Modelling of
Buildings. Lund, Sweden (1988)
Hua, G.B.: A BIM based application to support Cost Feasible ‘Green Building’ concept decisions.
In: Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information
Modelling and Construction Informatics: Concepts and Technologies. IGI Global, Hershey
(2010)
Kemmerer, S.J.: STEP: the grand experience. NIST Special Publication 939 (1999). Available
online at http://www.nist.gov/msidlibrary/doc/stepbook.pdf
Kim, I., Liebich, T.: A data modelling framework and mapping mechanism to incorporate
conventional CAD systems into an integrated design environment. Int. J. Constr. Inf. Technol.
7(2), 17–33 (1999)
Kim, I., Liebich, T., Maver, T.: Managing design data in an integrated CAAD environment: a
product model approach. Autom. Constr. 7(1), 35–53 (1997)
Loffredo, D.: Efficient database implementation of EXPRESS information models. PhD thesis,
Rensselaer Polytechnic Institute, Troy, New York (1998). Available online at http://www.steptools.com/~loffredo/papers/expdb_98.pdf
NBIMS.: National BIM standard purpose. US National Institute of Building Sciences Facilities
Information Council, BIM Committee (2006). Available online at http://www.nibs.org/BIM/
NBIMS_Purpose.pdf
NIST CIS/2 Website: (2005). Available online at http://cic.nist.gov/vrml/cis2.html
Rebolj, D., Babic, N.C., PodBreznik, P.: Automated building process monitoring. In: Underwoord,
J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling and
Construction Informatics: Concepts and Technologies, IGI Global, Hershey (2010)
ROADCON Project Deliverable 4.: ICT Requirements of the European Construction Industry:
The ROADCON Vision (2001). Available online at http://cic.vtt.fi/projects/roadcon/docs/
roadcon_d4_short.pdf
ROADCON Project Deliverable 5.2.: ICT Requirements of the European Construction Industry:
The ROADCON Vision (2001). Available online at http://cic.vtt.fi/projects/roadcon/docs/
roadcon_d52.pdf
Schenk, A.D., Wilson, R.P.: Information Modelling: The EXPRESS Way. Oxford University
Press, New York (1994)
Solis, J.L.F., Mutis, I.: The idealization of an integrated BIM, lean, and green model (BLG). In:
Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling
and Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Spearpoint, M.: Extracting fire engineering simulation data from the IFC. In: Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling and Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Sun, M., Lockley, S.R.: Data exchange system for an integrated building design system. Autom.
Constr. 6(2), 147–155 (1997)
Tolman, F.: Product modelling standards for the building and construction industry: past, present
and future. Autom. Constr. 8(3), 227–235 (1999)
US General Services Administration BIM Guide.: GSA BIM Guide Series 01 (2006). Available
online at http://www.gsa.gov/bim
Zamanian, K.M., Pittman, J.H.: A software industry perspective on AEC information models for
distributed collaboration. Autom. Constr. 8(3), 237–248 (1999)
Chapter 2
The Future of Building Information Modelling: BIM 2.0
Abstract The first evolution of BIM was from being a shared warehouse of
information to an information management strategy. Now the BIM is evolving from
being an information management strategy to being a construction management
method. This change in interpretation of BIM is fast and noticeable. Four newly
emerging dimensions in management of building information towards transforming
BIM to BIM 2.0 focus on enabling an (i) integrated environment of (ii) distributed
information which is always (iii) up to date and open for (iv) derivation of new
information. The chapter starts by presenting recent trends in building information modelling and then elaborates on technologies that will enable BIM 2.0. BIM-based
management of the overall construction processes is becoming a major requirement
of the construction industry, and the final part of this chapter provides matrices that
can be used as a tool for facilitating BIM-based projects and process management.
2.1 Introduction
Over the last half-century ICT has evolved beyond the ‘personal’ computer to
become a strategic asset for business in delivering productivity improvements,
extending to provide socio-economic development and growth (European
Commission 2006). From the personal computer we have witnessed the emergence
of technological advancements such as business systems and applications, visualization, communications, the Internet, mobile/smart/Android devices, social networking and, most recently, virtualization and cloud computing as part of this
revolution. There is no doubt that the effects of the digital age have facilitated
considerable changes and improvements to the construction industry and in shaping
and modernizing the industry as we currently see in the twenty-first century when
compared to the ‘traditional’, ‘archaic’ and ‘draconian’ one from the dim and
distant past. It is evident that construction organizations are already in the process
of looking towards rapidly maturing technology approaches such as virtualization
and cloud computing in the provision of cheaper, more flexible and commoditized
ICT infrastructure services to directly drive business efficiencies (France et al.
2010). Other industries are demonstrating that combined cost savings of up to 35 % can be achieved through a range of modernization measures, including the consolidation of data centres and full utilization of virtualization technologies. The
advancements in information technologies have changed the way we interpret
design, analysis, construction management and facilities management. As discussed in the previous chapter, building information modelling has
emerged as a cure to the illness of poor interoperability in the industry, but now the
paradigm is accepted as a new method of information management, and a new
method of construction management. The optimistic view on building information
modelling argues that this methodology will be a “sine qua non” in the future of the
construction industry. The first evolution of BIM was from being a shared ware-
house of information to an information management strategy. Now BIM is evolving
from being an information management strategy to being a construction manage-
ment method. The evolution of BIM is fast and noticeable. Today, BIM-based
management of daily processes in construction is becoming a de facto standard for
large investments. Many projects in the US, Singapore, Dubai and the UK require
the existence of BIM-based processes and the involvement of BIM managers (a profession that has emerged in the past 10 years). The evolution of BIM from BIM 1.0 to BIM
2.0 can be investigated in two dimensions. The first dimension is the changing role
of information models (i.e. the shared information sources) from being a shared
database to something more complicated. The second dimension is the emerging
role of BIM as a new construction management method. We elaborate on these two
dimensions in the remainder of this chapter, but it would be good to look at the
research dimensions of BIM in the past 5 years as a starting point.
2.2 Research Dimensions of Building Information Modelling
Adoption The move from CAD-based thinking to the vision of BIM is much more
difficult as it involves a shift in fundamental data management philosophy. As
indicated by Bew and Underwood (2010), in a similar manner to the move from old
accounting packages to Enterprise Resource Planning (ERP), this transformation
includes the formal management of processes on a consistent, repeatable basis. Like
the ERP implementation, this too is a very difficult transition to make. The lack of
mature process management tools and methodologies for the projects has made this
transition more confusing. BIM adoption most likely occurs in phases (i.e. in an
almost Darwinian evolutionary way), but serious effort must be made to move from one phase to the next.
Maturity A key area in BIM is organizational readiness. If BIM is considered as a
set of new technology and methodologies supporting information management in
the construction industry, then maturity in terms of implementing and using BIM
(technology and methodologies) is critical to the success of BIM implementation.
Frameworks for measuring BIM maturity can greatly facilitate organizations in
positioning themselves against their competitors in terms of technological,
methodological and process maturity. Such a maturity framework is explained in
Succar (2010).
Education and Training Education and training are a sine qua non for successful BIM implementation and adoption, and for addressing issues/barriers such as culture.
As mentioned in Tanyer (2010) and as appeared in the Integrated Project Delivery
(IPD) efforts in the US, AEC professionals are beginning to move away from the
traditional way of design and project delivery towards a more integrated one.
Project-based collaborative learning environments such as in Stanford University
(PBLLab 2007), and e-learning environments such as ITC-Euromaster (2009) will
also facilitate (and be facilitated by) the use of BIM and collaborative design
approaches.
Real-life Cases BIM is not a subject of pure (laboratory type) research any more.
This has significantly evolved over the last few years with the implementation of
BIM methods and shared digital models in real-life projects increasing exponentially (Lostuvali et al. 2010; Underwood and Isikdag 2010a, b; Riese 2010). The
experience and lessons learned from real-life cases will contribute to the devel-
opment of BIM as a data model or as a project management methodology.
Industry-wide Adoption Research towards the positioning of BIM adoption
across disciplines in relation to their current status and future expectations and
based on such factors as the tools, people and processes is viewed as a key
requirement. For instance Gerrard et al. (2010) provided a bird’s-eye view of the
industrial adoption picture.
Lean Construction and Green BIM The aim of lean construction is to enable
continuous improvement of all construction processes in the building life cycle
(starting from design through the demolition of the building) (Solis and Mutis
2010). On the other hand, to address global concerns on environmental issues, the
construction industry now takes the initiative to build more ‘environment-friendly’
buildings, along with reducing its own carbon footprint, for example during the construction stage. BIM emerges here as a strong tool where green design, green
construction and lean construction can be enabled by the utilization of BIM-based
design, simulation and information management tools and methods.
Building and Geo-information Integration As mentioned by Peters (2010), Van
Oosterom et al. (2006) and Isikdag and Zlatanova (2008) there is an apparent need
for integrated geometric models and harmonized semantics between BIM and geo-information models for efficient city management.
Emergency Response Emergency response operations indoors require a high
amount of geometric, semantic and state information related to the building elements. Until very recently, egress models used in building evacuation were mainly based on 2D floor plans. Today BIM is capable of providing detailed geometric and semantic information related to buildings, from which floor plans, navigation graphs and indoor positioning methods are developed.
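As a small illustration of the navigation-graph idea, the sketch below builds a graph from hypothetical space-adjacency data (of the kind that could be extracted from a BIM) using the networkx library and computes an egress route; the room and exit names are placeholders.

```python
# A sketch of building an indoor navigation graph from (hypothetical) space
# adjacency data that could be extracted from a BIM; uses networkx.
import networkx as nx

# Each edge connects two spaces through a door or opening (illustrative data).
adjacency = [
    ("Room_101", "Corridor_1"),
    ("Room_102", "Corridor_1"),
    ("Corridor_1", "Stair_A"),
    ("Stair_A", "Exit_Ground"),
]

graph = nx.Graph()
graph.add_edges_from(adjacency)

# Shortest egress route from an office to the ground-floor exit.
print(nx.shortest_path(graph, "Room_101", "Exit_Ground"))
```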
Process Simulation and Monitoring The efforts in the area of 4D CAD are
making much use of 3D CAD models, but in recent years BIMs have superseded
3D CAD models in the visual simulation of construction processes. Analysis such
as clash detection can now be completed using BIM software. BIMs are also used in
monitoring construction progress and, as Rebolj et al. (2010) describe, activity
progress can be monitored directly by using a combination of data collection
methods which are based on the BIM, especially on the 4D model of the building.
As explained by London et al. (2010), future BIM approaches would require the shared
models in model servers to be linked with external systems in a heterogeneous
environment.
utilization of a data model that is based on agreed taxonomies. The final dimension
deals with providing the data model as an internationally agreed standard, for which
many types of software would be able to develop input and output plug-ins to
generate and read the contents of the exchanged model file; in addition this enables
databases to provide application programming interfaces for interacting with the
standard information model.
windows (i.e. being open/closed and so on), occupancies in rooms and conditions of
different systems working within a building/facility. These issues will be elaborated on in more detail in the following chapters of this book, in the context of IoT.
RESTful Web Services Service-oriented architectures and RESTful Web services
offer opportunities for making building information stateful (i.e. real time, accurate
and up to date). Vast amounts of information residing in BIMs, and micro (atomic) feeds from sensor networks, can be exposed as loosely coupled Web services, where generating data mashups from these resources becomes very straightforward.
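A minimal sketch of such a loosely coupled, RESTful endpoint is shown below using Flask; the URL scheme, element identifiers and the hard-coded model and sensor data are placeholders standing in for a model server and a live sensor feed, not an interface defined in this book.

```python
# A minimal sketch (Flask) of exposing building element state as a RESTful
# resource; the BIM attributes and sensor readings below are hard-coded
# placeholders standing in for a model server and a sensor feed.
from flask import Flask, jsonify

app = Flask(__name__)

BIM_ELEMENTS = {"W1": {"type": "IfcWindow", "storey": 3}}        # from the BIM
SENSOR_STATE = {"W1": {"open": False, "updated": "2015-06-01T12:00:00Z"}}

@app.route("/buildings/demo/elements/<element_id>/state")
def element_state(element_id):
    # Mash up static model data with the latest state reading for the element.
    data = dict(BIM_ELEMENTS.get(element_id, {}))
    data.update(SENSOR_STATE.get(element_id, {}))
    return jsonify(element=element_id, state=data)

if __name__ == "__main__":
    app.run(port=5000)
```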
Semantic Web If this mass of new information (derived from multiple resources)
can be restructured in compliance with Semantic Web standards and supported by well-built ontologies, i.e. formal specifications of conceptualizations which consist of a finite list of terms and the relationships between these terms (Antoniou and van
Harmelen 2008), semantic queries such as “Would you provide me the number of
working elevators and escalators in the Empire State Building between 12:00 and
14:00?”, “Would you provide me the average CO2 level in top 20 floors of 5 of the
highest buildings in London?” or “Please provide me the difference between
temperatures in my hotel room in Singapore Marina Bay and my office in Sydney” can be answered. The success rate in responding to such semantic queries will depend on (i) the level of integration of distributed building information, (ii) the level of success in deriving the information mass from multiple loosely coupled resources (which are exposed as Web services), and finally (iii) how well the query can be interpreted and how well reasoning/search and retrieval can be accomplished upon the interpreted query.
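The sketch below shows, in a heavily simplified form, how a query of this kind could be answered over building-state triples with the rdflib library; the vocabulary used (ex:Elevator, ex:status and so on) is invented for illustration and is not a published ontology.

```python
# A sketch of answering a semantic query over building-state triples using
# rdflib; the vocabulary is invented purely for illustration.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/building#> .
ex:Elevator1 a ex:Elevator ; ex:status "working" .
ex:Elevator2 a ex:Elevator ; ex:status "out_of_service" .
ex:Escalator1 a ex:Escalator ; ex:status "working" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

query = """
PREFIX ex: <http://example.org/building#>
SELECT (COUNT(?x) AS ?working) WHERE {
  { ?x a ex:Elevator } UNION { ?x a ex:Escalator }
  ?x ex:status "working" .
}
"""
for row in g.query(query):
    print("Working elevators/escalators:", row.working)
```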
2.5 BIM-Based Management of Construction Processes
The developments in the field of building information modelling have led the stakeholders in construction projects to re-engineer their traditional construction management processes into BIM-based design and construction processes. Although
there is no ISO standard for the management of BIM-based processes, a joint effort (known as BIM Project Execution Planning) has, over the last 5 years, produced noticeable scientific outputs. As stated on the project website (BPEPG 2015), the planning guide produced as a result of this research aims to provide a practical manual that can be used by project teams for designing their BIM strategy and developing a BIM Project Execution Plan. This guide provides a
structured procedure for creating a BIM Project Execution Plan. The four steps
within the procedure include:
1. Defining high-value BIM uses during project planning, design, construction and
operational phases
2. Using process maps to design BIM execution
3. Defining the BIM deliverables in the form of information exchanges
Actor/Role matrix (columns): Activity | Actor | Role | Sub-role | Eligibility of model ownership
Phase/Activity/LOD matrix (columns): Project phase | Activity | BIM LOD | BIM objects required
As also stated in PSU (2013), there is no single best method for BIM implementation on every project; each team must effectively design a tailored execution strategy by understanding the project goals, project characteristics and the capabilities of the team members. For the construction industry, there is still a long way
to go and much to do in terms of realizing the full potential of these emerging
technologies in line with the efficiencies and performance improvement that are
being witnessed in other sectors. However, as both the technologies along with the
industry (in their capability to embrace them) further mature, and as BIM-based
process and information management techniques get more advanced, progressive
improvements towards BIM 2.0 will continue to be made, enabled through
emerging technologies.
References
Antoniou, G., van Harmelen, F.: A semantic web primer. The MIT Press, Cambridge (2008)
Bew, M., Underwood, J.: Delivering BIM to the UK market. In: Underwoord, J., Isikdag, U. (eds.)
Handbook of Research on Building Information Modelling and Construction Informatics:
Concepts and Technologies. IGI Global, Hershey (2010)
BIM Server: Open Source BIM server. Available online at: http://www.bimserver.org (2010).
Accessed 05 Jan 2010
BIM Forum: BIM LOD specification. Available online at http://bimforum.org/wp-content/uploads/
2013/08/2013-LOD-Specification.pdf (2013)
BPEPG: BIM Project Execution Planning Guide Web Site. Available online at http://bim.psu.edu (2015)
Dado, E., Beheshti, R., Van de Ruitenbeek, M.: Product modelling in the building and
construction industry: a history and perspectives. In: Underwoord, J., Isikdag, U. (eds.)
Handbook of Research on Building Information Modelling and Construction Informatics:
Concepts and Technologies. IGI Global, Hershey (2010)
European Commission: ICT Uptake, Working Group 1. ICT Uptake Working Group draft Outline
Report. Available at: http://ec.europa.eu/enterprise/ict/policy/taskforce/wg/wg1_report.pdf
(2006)
France, K., Fox, S., Khosrowshahi, F., Underwood, J.: Building on IT Survey: Cost Reduction and
Cost Effectiveness. Construct IT For Business Report, Salford (2010)
Gerrard, A., Zuo, J. Zillante, G., Skitmore, M.: Building information modeling in the Australian
architecture engineering and construction industry. In: Underwoord, J., Isikdag, U. (eds.)
Handbook of Research on Building Information Modelling and Construction Informatics:
Concepts and Technologies. IGI Global, Hershey (2010)
Isikdag, U., Zlatanova, S.: Towards defining a framework for automatic generation of buildings in
CityGML using building information models. In: Lee, J., Zlatanova, S. (eds.) 3D
Geo-Information Sciences, Springer LNG&C, Berlin (2008)
ITC-Euromaster: The European Master in Construction Information Technology. Available online
at http://euromaster.itcedu.net/ (2009). Accessed 11 Dec 2009
London, K., Singh, V., Gu, N., Taylor, C., Brankovic, L.: Towards the development of a project
decision support framework for adoption of an integrated building information model using a
model server. In: Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building
Information Modelling and Construction Informatics: Concepts and Technologies. IGI Global,
Hershey (2010)
Lostuvali, B., Love, J., Hazleton, R.: Lean enabled structural information modeling. In:
Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling
and Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Riese, M.: Building lifecycle information management case studies. In: Underwoord, J., Isikdag,
U. (eds.) Handbook of Research on Building Information Modelling and Construction
Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
PBLLab: Stanford University Project Based Learning Laboratory. Available online at http://pbl.
stanford.edu (2007). Accessed 11 Dec 2009
Peters, E.: BIM and geospatial information systems. In: Underwoord, J., Isikdag, U. (eds.)
Handbook of Research on Building Information Modelling and Construction Informatics:
Concepts and Technologies. IGI Global, Hershey (2010)
PSU: The Uses of BIM: Classifying and Selecting BIM Uses. Available online at http://bim.psu.
edu/Uses/the_uses_of_BIM.pdf (2013)
Rebolj, D., Babic, N.C., PodBreznik, P.: Automated building process monitoring. In: Underwoord,
J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling and
Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Solis, J.L.F., Mutis, I.: The idealization of an integrated BIM, lean, and green model (BLG). In:
Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling
and Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Succar, B.: Building information modelling maturity matrix. In: Underwoord, J., Isikdag, U. (eds.)
Handbook of Research on Building Information Modelling and Construction Informatics:
Concepts and Technologies. IGI Global, Hershey (2010)
Suermann, P.C., Issa, R.R.A.: The US national building information modeling standard. In:
Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building Information Modelling
and Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Tanyer, A.M.: Design and evaluation of an integrated design practice course in the curriculum of
architecture. In: Underwoord, J., Isikdag, U. (eds.) Handbook of Research on Building
Information Modelling and Construction Informatics: Concepts and Technologies. IGI Global,
Hershey (2010)
Underwood, J., Isikdag, U.: A synopsis of the handbook of research on building information
modelling. CIB World Congress, Salford, 10–13 May (2010a)
Underwood, J., Isikdag, U.: Handbook of Research on Building Information Modelling and
Construction Informatics: Concepts and Technologies. IGI Global, Hershey (2010b)
Underwood, J., Isikdag, U.: Emerging technologies for BIM 2.0. Constr. Innov. 11(3), 252–258
(2011)
Van Nederveen, S., Beheshti, R., Gielingh, W.: Modelling Concepts for BIM. In: Underwoord, J.,
Isikdag, U. (eds.) Handbook of Research on Building Information Modelling and Construction
Informatics: Concepts and Technologies. IGI Global, Hershey (2010)
Van Oosterom, P., van Stotter, J., Janssen, E.: Bridging the worlds of CAD and GIS. In: Zlatanova, Prosperi (eds.) Large-scale 3D Data Integration: Challenges and Opportunities, pp. 9–36. Taylor & Francis, London (2006)
Chapter 3
Foundational SOA Patterns for Complex Information Models
3.1 Introduction
The term design pattern is used heavily in software engineering. Design patterns
prevent developers from re-inventing the wheel when they face a common problem
in software design. A design pattern defines a problem that frequently occurs in
software design and implementation, and then describes the solution to the problem
in such a way that it can be reused in many different situations. A pattern portrays a
commonly recurring structure of communicating components that solve a general
design problem within a particular context (Gamma et al. 1995; Yacoub and
Ammar 2003). As explained by Isikdag (2012), a pattern can be characterized as the template for a solution to a software design problem, or as a defined and recognized formalization of interaction between software components for fostering better
software design. The use of patterns in software design helps software developers in
design decisions, (i) when they face commonly observed problems in the design of
their new software or (ii) when they would like to introduce new ways of interaction
He (2003) indicated that two constraints exist for implementing Web services:
(i) Interfaces must be based on Internet protocols such as HTTP, FTP and SMTP
and (ii) except for binary data attachments, messages must be in XML. This has
also changed by the emergence of RESTful architectures. Two definitive charac-
teristics of Web services are loose coupling and network transparency (Pulier and
Taylor 2006). The foundational layers for the design principles of service-oriented
architectures (SOA) are defined in Gamma et al. (1995), Hohpe and Woolf (2003),
Linthicum (2003) and Fowler (2003). In his well-known textbook, Erl (2009)
defined SOA design patterns based on nine design principles explained in Erl
(2008), which will be summarized in this section.
Standardized Service Contract A service contract can be defined as a set of rules
that defines a service. These rules are represented in a data model for the Web
service (if such a data model exists). The contract can be interpreted as the
meta-model of the Web service. The model is used to express the purpose and
capabilities of the service. As indicated in Erl (2008), the standardized service
contract design principle is perhaps the most fundamental part of service orientation, in that it essentially requires that specific considerations be taken into account
when designing a service’s public technical interface and assessing the nature and
quantity of content that will be published as part of a service’s official contract.
In Simple Object Access Protocol (SOAP) architectures, the contract is explicit and represented in the form of WSDL (XML) files. In RESTful architectures (which are much more popular today) there is no explicit service contract represented with an XML schema and file, but for service discovery purposes it is recommended to prepare such a file or documentation to facilitate the discovery of the service.
the state data becomes more and more resource consuming either for the server or for middle-tier components. The service statelessness principle holds that Web services should not store and manage state information; instead, this information needs to be managed by other external components that are specifically designed for these purposes. Services should only be designed as stateful when this is a real architectural requirement. This principle forms the foundation of RESTful services.
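A tiny sketch of the statelessness principle, with all names hypothetical: the handlers below keep no state of their own and read or write the current state through an external store (a dictionary standing in for a database or cache shared by all service instances).

```python
# A sketch of the service statelessness principle: handlers hold no state of
# their own; every call reads the current state from an external store.
STATE_STORE = {"door-42": {"locked": True}}      # external, shared state

def get_door_state(door_id, store=STATE_STORE):
    # Any service instance can answer this request, because nothing is
    # remembered between calls inside the service itself.
    return store.get(door_id, {"locked": None})

def set_door_state(door_id, locked, store=STATE_STORE):
    store[door_id] = {"locked": locked}
    return store[door_id]

print(get_door_state("door-42"))
print(set_door_state("door-42", False))
```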
Service Discoverability Discoverability is defined as the ability of something to
be found. In our case, service discoverability refers to a service’s ability to be
discovered by another service or application. Discoverability is directly proportional to the accessibility of the service and also contributes to the usability of the
service. Discoverability of Web services can be enabled and facilitated by use of
global service registries such as UDDI. In fact, such a global registry of Web
services is not sufficient; it is also very important to generate services that are
self-discoverable. Meta-information about the service purpose and characteristics
needs to be embedded into Web services that will help search engines to discover
the service. Sites such as www.apis.io or www.programmableweb.com are
well-known examples of service search engines and can be used to discover Web
services defined for any purpose.
Service Composability In computer science, object composition refers to the
ability to combine core or simple objects into more complex objects. In the composition paradigm, simple objects act as building blocks of more complex objects. A similar approach can be applied in SOAs to generate composite services by using
or reusing core Web services as (service) components. To achieve this, Web services need to be defined in such a way that they are pluggable into other services.
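A hedged sketch of such a composition follows: a composite service reuses two core services by invoking their (hypothetical) HTTP endpoints with the requests library and merging the results into a single response; the URLs and the response structure are placeholders.

```python
# A sketch of service composability: a composite service reuses two core
# services by invoking their (hypothetical) HTTP endpoints and merging the
# results into one response. The URLs below are placeholders.
import requests

def composite_element_report(element_id):
    geometry = requests.get(
        f"http://example.org/geometry-service/elements/{element_id}").json()
    state = requests.get(
        f"http://example.org/state-service/elements/{element_id}").json()
    # The composite simply plugs the two core services together.
    return {"element": element_id, "geometry": geometry, "state": state}
```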
Service Interoperability Interoperability of software can be defined as the
capability of one type of software to function as the other, or as the capability of
different software to exchange and share data via a common set of exchange formats. The latter is known as data-level interoperability. Interoperability is also
possible at the service level when one service is capable of substituting the function
of another Web service, or when one Web service is able to operate using data
models that are consumed by the other Web service. Hence, in order to be considered interoperable, services should either have the ability to function as
the other, or have the ability to operate with the other services’ data model.
Interoperability of services is a key element in facilitating the integration of services, where one-to-n interaction between services is a common practice.
It is not easy to define what a complex information model is, but we can start from
simple information models and try to underline the difference between simple and
non-simple (complex) information models. Information models for most domains,
ranging from health care to plane ticket reservation systems make use of a single
schema that is defined for that specific domain (or application purpose). Once the
model is agreed on by the software developers, it is used to generate tables in a
shared database. When a need arises, different views of the model are generated using the database management system and stored inside the database.
The schemas of the information model and model views are stored and exchanged
very rarely outside the database management system environment. The information
management practice explained here is the common practice for the daily routines
in most domains in production and service industries. In this common practice,
schema exchange and data-level interoperability are not common requirements. On
the other hand, in domains where detailed semantic information coupled with
detailed geometric representations is of key importance (such as city modelling,
construction, ship and aircraft production and so on), information models become
complex. In these domains, information models are represented by schemas which
are agreed by the major stakeholders of the domain (also defined in the form of
standards). The model schemas usually refer to a meta-model or data modelling
standard (such as ISO 10303 EXPRESS or XML). The model schema forms the
core of the information model standards (such as IFC or CityGML schemas). Model
views are also represented by agreed schemas. Furthermore, some of these models
such as CityGML (OGC 2012) are capable of being extended using application domain extensions (ADEs), which can be referred to as extension schemas. In some
cases such as models defined in ISO 10303, the model definition language (such as
EXPRESS) needs to be recognized by the databases; thus, a meta–meta schema (i.e.
a schema that defines the data definition language (DDL), i.e. ISO 10303
EXPRESS) would also be required by the database. As all of the well-recognized
databases are capable of interpreting the XML schemas, there would not be such a
(meta–meta schema) need for XML-compliant information models. Based on these observations, a generalized definition of a complex information model would contain four components (see Fig. 3.1).
3.4 Service-Oriented Patterns
This section elaborates on patterns defined for enabling SOAs for systems using
complex information models in their data layer. Patterns defined in this section
focus on the provision and facilitation of Web services for complex information
models. In some patterns, there are observer-type services. Observer-type Web
services explained in this section are composite Web services, but they are illustrated as a single component in the figures to avoid confusing the reader. The reader should note that the observer services explained here are not simple services that can only be invoked by HTTP requests; in fact, they should be interpreted as a set of software components including an HTTP Web service. The observer services would
consist of software components required to observe the changes in the models and
present these changes to the other software components upon a request or as a result
of subscription. For further information on database and file change detection, which are key research topics in their own right, readers are advised to consult the related computer science literature.
The idea and background of this pattern is similar to the model view selector, as the
pattern works within an environment where there are multiple model views. In the
previously presented pattern, model views are presented as a whole dataset
depending on the selection of the consuming application. For instance, a design
application might require a design view of a BIM (where all entities related to
design will be transferred to the client), while an analysis application would require
an analysis view of the BIM (where all entities related to analysis will be transferred
to the client). In fact, in some cases the model views might become too large, and applications sometimes require only individual entities of the view. An example would be an information request from a design application while an architect is working on the detailed design of a single façade element (such as a company logo that will be placed on the building façade). In this situation, transfer of the
whole dataset of the view (i.e. all façade elements) is unnecessary and will cause
redundant use of hardware and network resources. The view entity extractor service
defined in this pattern would respond to the request of transferring an individual
entity of the view, to the service consumer. Similar to the previous pattern, the
service can be consumed by a service consumer API. The API can be embedded
within an application, or the output of the API can be served, or visualized through
a Web interface (Fig. 3.4).
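A minimal sketch of the view entity extractor as a Flask endpoint follows; the view names, entity identifiers and in-memory dictionaries are placeholders standing in for views held on a model server.

```python
# A sketch of the view entity extractor: instead of returning the whole model
# view, the service returns one requested entity of the view.
from flask import Flask, jsonify, abort

app = Flask(__name__)

MODEL_VIEWS = {
    "design": {
        "facade-logo-01": {"type": "IfcBuildingElementProxy", "material": "steel"},
        "wall-12": {"type": "IfcCurtainWall", "storey": 1},
    }
}

@app.route("/views/<view_name>/entities/<entity_id>")
def view_entity(view_name, entity_id):
    entity = MODEL_VIEWS.get(view_name, {}).get(entity_id)
    if entity is None:
        abort(404)
    return jsonify(view=view_name, id=entity_id, entity=entity)

if __name__ == "__main__":
    app.run(port=5001)
```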
The structure of this pattern is similar to the model view selector and model view
entity extractor as the pattern works within an environment where there are multiple
model views. In the previous pattern, the model view entity extractor service is
capable of presenting each individual entity of the model (such as presenting a door,
a window, a wall, etc., as a result of a request from a BIM model view). In fact in
some cases, business processes require a set of entities of a sub-model of the view
(i.e. a sub-view) to be presented as a result of a request. For instance, during the construction of a building, schedules are updated as the construction progresses. The update is done as the work progresses in a part of the building (e.g. the beams of floor 2 might be constructed on 2 October, and this information needs to be updated in the BIM model view that is related to scheduling). In this
situation, a sub-view would be required to be transferred to the client. As these
operations are usually done using mobile devices, the transfer of a sub-view would
also increase the efficiency of the system. Depending on the user request, the
sub-view generator service will generate a sub-view from the model view and
present this view as the service output. Similar to the previous pattern, the service
can be consumed by a service consumer API. The API can be embedded within an
application, or the output of the API can be served, or visualized through a Web
interface (Fig. 3.5).
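The filtering step at the heart of the sub-view generator can be sketched as follows; the schedule-view data and the filtering criteria (the beams of floor 2) are illustrative placeholders.

```python
# A sketch of the sub-view generator: a sub-view is derived from a model view
# by filtering its entities against the client's request.
SCHEDULE_VIEW = [
    {"id": "beam-201", "type": "IfcBeam", "storey": 2, "planned": "2015-10-02"},
    {"id": "beam-305", "type": "IfcBeam", "storey": 3, "planned": "2015-10-20"},
    {"id": "slab-201", "type": "IfcSlab", "storey": 2, "planned": "2015-10-05"},
]

def generate_sub_view(view, **criteria):
    # Keep only the entities whose attributes match every requested criterion.
    return [e for e in view if all(e.get(k) == v for k, v in criteria.items())]

print(generate_sub_view(SCHEDULE_VIEW, type="IfcBeam", storey=2))
```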
The view observer pattern is defined for an environment where there are multiple
model views. The pattern proposes a mediator between the model and the view(s). The previous three patterns are related to presenting model view(s) to the client
applications through a Web service and a consumer API. Once these applications
work on the model views, they will update the model views inevitably. If we think
of a situation where there are 3–4 applications working simultaneously with 3–4
different views and updating them synchronously, this might bring inconsistencies between the model views. For example, an architect might be working on the
design of a space in a room on model view A, while an engineer might propose a
curtain wall dividing this space into two subspaces on model view B. In this case,
the architect needs to be notified immediately regarding the proposed curtain wall.
In order to enable this, every change occurring in the model views needs to be listened for (or observed), and the model itself (from which the views are generated) needs to be
updated. The view observer service in this pattern is a composite service that
includes components for noticing the changes in the model views. The service
components listen for changes in multiple model views and, once a change occurs in one of the views, the service will issue a notification with the details of the change. These
notifications would then be consumed by a service consumer API (an observer of
the view observer), which is also capable of updating the information model
residing in a database (or in a file, which is a less common situation). Although not shown in the diagram, a database API or file I/O API may be required in order to update the database or the model file (Fig. 3.6).
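The sketch below is a simplified, in-process stand-in for the view observer: subscribers register callbacks and are notified of every change recorded against a model view; in the pattern itself this would be a composite Web service with real change-detection components rather than a single class.

```python
# A simplified, in-process stand-in for the view observer service: clients
# subscribe with callbacks and are notified of every change recorded against
# a model view.
class ViewObserver:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def record_change(self, view_name, entity_id, change):
        notification = {"view": view_name, "entity": entity_id, "change": change}
        for callback in self._subscribers:
            callback(notification)

def update_core_model(notification):
    # Service consumer API: would write the change back to the core model DB.
    print("Updating core model with:", notification)

observer = ViewObserver()
observer.subscribe(update_core_model)
observer.record_change("design-view-B", "space-3F-12", "curtain wall added")
```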
Once a change occurs in the core model, the model views need to be brought up to the
latest version of the model. This is accomplished using the view updater service.
The view updater is designed as a composite service whose components function as
an observer of the core model (also maintaining a log of changes on the DB side).
Once a change in the core model is made and noticed by the view updater service,
the service would then indicate which elements from
the core model need to be transferred to update each model view. The information
from the core model will be acquired through a database API that is interacting with
the model server database where the core model resides. A service consumer API
will be used to consume information from the view updater service and update the
model views related to the changes in the core model (Fig. 3.7).
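A hedged sketch of the observer side of such a view updater service is given below; the change-log table, the database file name and the per-view endpoints are illustrative assumptions, not prescribed by the pattern.

# Hedged sketch of a view updater loop, assuming a change-log table on the DB
# side and hypothetical per-view REST endpoints; table and endpoint names are
# illustrative, not taken from the book.
import sqlite3
import requests

MODEL_DB = "model_server.db"  # database where the core model resides
VIEW_ENDPOINTS = {
    "scheduling": "http://www.service.com/views/scheduling",
    "architecture": "http://www.service.com/views/architecture",
}

def pending_changes(conn: sqlite3.Connection):
    """Read core-model changes that have not yet been pushed to the views."""
    return conn.execute(
        "SELECT id, element_id, payload FROM change_log WHERE pushed = 0"
    ).fetchall()

def push_changes() -> None:
    conn = sqlite3.connect(MODEL_DB)
    for change_id, element_id, payload in pending_changes(conn):
        for name, endpoint in VIEW_ENDPOINTS.items():
            # The service consumer API updates each model view with the change.
            requests.put(f"{endpoint}/{element_id}", data=payload, timeout=10)
        conn.execute("UPDATE change_log SET pushed = 1 WHERE id = ?", (change_id,))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    push_changes()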
This pattern is similar to the previously explained extended model observer pattern.
In this pattern, the extended model view observer service observes the view(s) of
the extended model (it is a composite service that includes components for noticing
the changes in the extended model view). Once the extended model view is handled
by an application, changes are expected to occur in this view. At the time a change
occurs at the extended model view, the extended model view observer service will
notify its subscribers about this change. The most obvious subscriber of this service
would be a service consumer API. The main responsibility of the API in this case
would be updating the extended model once a change occurs in the extended model
view. The role of the API is exactly the same as the role of the API in view observer
pattern. In the view observer pattern, it was mentioned that the service consumer
API would be able to update the database where the model resides, or would be able
to update the model file. The situation is the same for the API in this pattern.
Although not provided in the diagram, the existence of a database API or file I/O
API can be required in order to update the database or model file (Fig. 3.9).
This pattern has common elements with the view updater pattern. It is mentioned in
the extended model view observer pattern that once a change occurs in one
extended model view, the extended model will get updated. In this situation, the
extended model becomes the most up to date and accurate model in the system. In
fact, other extended model view(s) are not aware of this change, and cannot be
considered as accurate views. The extended model view updater service is
designed as a composite service whose components function as an
observer of the core model (also maintaining a log of changes on the DB side).
Once a change occurs in the extended model, the extended model view updater
service would take the responsibility to broadcast the changes with the help of a
database API and service consumer API. This latter API will be used to consume
information from the extended model view updater service and update the extended
model views regarding the changes that occurred (Fig. 3.10).
The previous patterns in this section provide observer and updater functions with
different services. In fact, these services can be merged into a single service which
is known as a controller. The controller in the well-known design pattern MVC
covers both functions of observer and updater. Similarly, a Web service can mimic
the functionality of the controller for complex information models. In this pattern, a
model controller service utilizes a database API (i) in order to listen and broadcast
changes in the model and (ii) to update the model based on update requests coming
from the service consumer. The service consumer, like the model controller service, has two
functions: to broadcast the changes in the information model and to trigger the
update of the model using the model controller service. The pattern can also be
generalized to cover model views, extended models and extended model views,
generating model view controller (not to be confused with the well-known
M-V-C pattern), extended model controller and extended model view controller
patterns (Fig. 3.11).
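The following is a minimal sketch of a model controller service that merges the observer and updater functions behind one REST interface; the Flask-based implementation, route names and database schema are illustrative assumptions rather than a definitive realization of the pattern.

# Hedged sketch of a model controller service (observer + updater in one
# endpoint set), using Flask; routes, the change_log table and the elements
# table are illustrative assumptions.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_DB = "model_server.db"

@app.route("/controller/changes", methods=["GET"])
def broadcast_changes():
    """Observer role: report changes recorded in the model change log."""
    conn = sqlite3.connect(MODEL_DB)
    rows = conn.execute(
        "SELECT element_id, payload, changed_at FROM change_log ORDER BY changed_at"
    ).fetchall()
    conn.close()
    return jsonify([{"element": r[0], "data": r[1], "at": r[2]} for r in rows])

@app.route("/controller/update/<element_id>", methods=["PUT"])
def update_model(element_id):
    """Updater role: apply an update request coming from a service consumer."""
    conn = sqlite3.connect(MODEL_DB)
    conn.execute(
        "UPDATE elements SET payload = ? WHERE id = ?",
        (request.get_data(as_text=True), element_id),
    )
    conn.commit()
    conn.close()
    return jsonify({"updated": element_id})

if __name__ == "__main__":
    app.run(port=8080)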
Abstract The present can be regarded as the start of the Internet of Things
(IoT) era. IoT covers the utilization of sensors and near-field communication
hardware such as RFID or NFC, together with embedded computing devices. The
devices can range from cell phones to RFID readers, GPS devices to tablets,
embedded control systems in cars to weather stations. In an IoT environment, a
door would have the ability to connect with the fire alarm, or your chair would
communicate with your home lights, or a car would communicate with the parking
space. In the context of this book, we focus on single-board computers (SBCs) as
the main IoT hardware components for acquiring and presenting indoor informa-
tion. This chapter elaborates on different types of SBCs that can be used for
acquiring and presenting information regarding building elements, indoor equip-
ment and indoor spaces.
4.1 Introduction
In the future, the Internet will not only be a communication medium for people, it
will in fact be a communication environment for devices. Internet of Things (IoT) is
defined as a dynamic global network infrastructure with self-configuring capabili-
ties based on standards and interoperable communication protocols. Physical and
virtual things in an IoT have identities and attributes and are capable of using
intelligent interfaces and being integrated as an information network (Li et al.
2014). The overall concept is known as the IoT. In an IoT environment, a door
would have the ability to connect with the fire alarm, or your chair would com-
municate with your home lights or a car would communicate with the parking
space; the list can become longer and longer, and is only limited by your imagi-
nation. The present can be regarded as the start of the IoT era. IoT covers the
utilization of sensors and near-field communication hardware such as RFID or
NFC, together with embedded computing devices. The devices can range from cell
phones to RFID readers, GPS devices to tablets and embedded control systems in
cars to weather stations. In fact, within the context of this book, we will only focus
on the single-board computers (SBCs) that facilitate the provision of information
acquired from the environment, as (i) SBCs are quite reachable and easy to use and
test for development purposes when compared with embedded systems of industrial
automation, and (ii) patterns and communication methods explained in this book
are based on SBCs as the components of the hardware layer.
SBCs are devices that were developed as proof-of-concept and experimental tools.
These devices have the ability to acquire information from their surroundings by
sensors that are either embedded in or connected to them. SBCs form a solid hard-
ware infrastructure for facilitating the development of IoT software. The term
SBC is used to define computers in which the memory and the processor reside on
a single circuit board. Most SBCs have I/O interfaces where different kinds of sensors can
be plugged in. These computers usually do not have extension slots like regular
PCs, and the processors used in them are usually of low cost. The size of these
devices ranges from a matchbox to a playing card. Some of them have USB and
memory card interfaces. SBCs can run versions of embedded Linux and even
Windows, while some only have programmable microprocessors which provide
output to their proprietary workbench. Recent developments have shown that SBCs
can be used as Web servers, or even as a node in cloud clusters. At the time of
writing this book, there were more than 20 different types of SBCs produced by
different vendors in different parts of the world. In the following sections, we
summarize the key efforts in the field of SBCs, which today act as the foundational
layer for development of IoT concepts and IoT software layers.
One of the most well-known SBC series is the Arduino development boards.
Arduino boards were developed to act as development environments or affordable
embedded computers which have the ability to acquire information through
sensors and to control actuators that are connected to these boards. The Arduino
boards include a basic microcontroller and a development framework/workbench
for developing the software.
The boards can take inputs from various sensors such as heat, luminance,
pressure, magnetic field, proximity and so on, and can be used to control various
actuators for lights, motors, etc. Thus, the boards are widely used in robotics education.
They can be obtained from various dealers or can be built in-house by uti-
lizing the provided (open-hardware) schematics. The common development envi-
ronment for Arduino is the Processing IDE. As mentioned in the Arduino Web Site
(2015), the Arduino programming language is an implementation of Wiring, a
similar physical computing platform, which is based on the Processing multimedia
programming environment. There are different models of Arduino starting from
Uno to Yun (Figs. 4.1 and 4.2), in different sizes, equipped with different hardware
components and designed for different purposes, ranging from robotics to wearable
computing. The main advantage that makes Arduino so popular among its com-
petitors is its user-friendly software environment, multinational user support, price
and the simplicity of developing projects on the platform. Arduino IDEs run
cross-platform on multiple operating systems, and it is possible to extend the soft-
ware depending on user requirements. The Arduino hardware development efforts
are part of the open-hardware initiative, so it is possible to build up an Arduino
from scratch if you have a certain level of electronics knowledge. The Arduino Uno
is the most popular Arduino model among development communities, while the
recently released Arduino Yun is gaining increasing popularity, as it can be used as
a micro IoT server: information gathered from sensors can be served directly by the
Web server software installed on the Linux distribution residing on the Arduino Yun.
There are also many Arduino-compatible boards capable of being programmed
using the Arduino programming framework. Another interesting aspect of the
Arduino boards is their extensibility. The Arduino boards use the concept of
shields, which is similar to the concept of interfaces in object-oriented programming
languages; in fact, the interface here is a hardware component instead of a software
interface. A shield is a hardware component that, once plugged into an Arduino
board, makes the board capable of utilizing extra hardware resources. Examples
include the Ethernet interface, Wi-Fi
interface, a GPRS interface and a GPS interface. Further information about these
SBCs can be obtained from Arduino Web Site (2015).
Fig. 4.2 Different models and shields of Arduino (Arduino Web Site 2015). Source http://
creativecommons.org/licenses/by-sa/3.0/legalcode
4.3 BeagleBoard
Fig. 4.3 BeagleBoard hardware and capes (BeagleBoard Web Site 2015). Source http://
creativecommons.org/licenses/by-sa/3.0/legalcode
4.4 CubieBoard
4.5 Raspberry Pi
Fig. 4.5 Raspberry Pi 2 current model (Wikimedia Commons 2015). Source http://commons.
wikimedia.org/wiki/Raspberry_Pi#/media/File:Raspberry_Pi_2_Model_B_v1.1_top_new.jpg
4.6 Orange Pi
The UDOO board project was launched as a Kickstarter project and reached its
funding goal in 40 h. The UDOO is an SBC with an integrated Arduino-compatible
microcontroller. The main strength of the board comes from its ability to run the
Arduino development framework together with the Linux and Android operating
systems. Today, the Yun model of the Arduino has similar capabilities. The UDOO
board was developed with the aim of supporting (i) computer science education
and (ii) R&D projects related to IoT. The UDOO board also has the ability to work
with Arduino-compatible shields, and the Arduino programming framework can
also be used to program the board.
The Netduino board was developed in response to demand from .NET
developers who wanted to program Arduino-like SBCs using the .NET
environment. Although the Netduino board is similar to the Arduino, the two
SBCs differ in terms of microcontroller. The main difference of the Netduino
hardware is that it can be programmed using the Microsoft .NET development
environment. As there are many developers who are familiar with the .NET
framework, the Netduino SBCs provide them with an easy-to-use interactive
environment to start programming for IoT. Another advantage of Netduino is that
it is compatible with Arduino shields and can be extended using any of the
Arduino/Arduino-compatible shields. Currently, Netduino boards come in three
models (see Fig. 4.6). Netduino 2 does not have net-
working capabilities, but Netduino plus 2 and Netduino Go have the ability to
connect to the network/Internet. Recent developments have demonstrated that it is
Fig. 4.6 Netduino plus single-board computer (Wikimedia Commons 2015). Source http://
commons.wikimedia.org/wiki/Category:Netduino#/media/File:Netduino_Plus.jpg
possible to use Netduino boards as Web servers to serve web pages or to serve the
information acquired from the sensors that are connected to the Netduino. Further
information about these SBCs can be obtained from the Netduino Web Site (2015).
Fig. 4.7 Intel Galileo single-board computer (Intel Maker Web Site 2015). Grant of permission by Intel
Corporation
Intel Edison is an SBC developed to support prototyping for new product devel-
opment for the IoT. The main focus of Intel Edison is wearable computing (Fig. 4.8).
Further information about these SBCs can be obtained from the Intel Maker Web
Site (2015).
The Radxa Rock is recognized as one of the most powerful SBCs on the market,
capable of performing PC tasks efficiently. As indicated on the Radxa Rock web site,
the SBC is shipped with Android and Ubuntu/Linaro and has a dual-boot option on
the NAND flash (onboard storage). All PC accessories can work with this SBC.
Rockchip processors are also used in SBCs that are used to watch TV channels (Fig. 4.9).
Fig. 4.9 An android TV stick with rockchip (Wikimedia Commons 2015). Source http://
commons.wikimedia.org/wiki/File:MK809III_V1.0_130606_inside_front.jpg?uselang=tr
IoT software development research in the academic world today is greatly
facilitated by the existence and utilization of SBCs. These devices form
well-defined interfaces between the sensors and other software layers such as
middle-tier data acquisition components or integration portals. While these SBCs
form the hardware-backbone of the IoT, the integration efforts from the software
side are today dominated by integration portals and platforms. In the following
chapter of this book, we will focus on the software-backbone components of the
IoT, and specifically on the integration platforms. Further information about these
SBCs can be obtained from the Radxa Rock Web Site (2015).
Abstract Internet of Things (IoT) architectures do not consist of hardware alone.
The IoT hardware requires operating systems to work and also needs to
implement communication protocols to communicate with other devices and
humans. Furthermore, there are middleware components that facilitate communi-
cation and exchange of information between devices. In IoT architectures, inte-
gration portals play an important role in combining and integrating information
acquired from multiple devices and presenting this information to the users. This
chapter provides detailed information on the software side of IoT.
5.1 Introduction
The software side of the Internet of Things (IoT) is more complicated than the
hardware side. The hardware components include embedded systems, sensors and
SBCs. As mentioned in the previous chapter, IoT covers the utilization of sensors
and near-field communication hardware such as RFID or NFC, together with
embedded computing devices. In fact, in the previous chapter we only focused on
single-board computers as they are easy to reach and operate, and as patterns and
communication methods explained in this book focus on the use of these devices. In
this chapter, the software side of the IoT approach is summarized from a broader
perspective.
As depicted in Fig. 5.1, the software component of the IoT is formed by different
layers. The lowest software layer is the operating system(s) of the hardware com-
ponents. Both embedded systems and SBC hardware are mostly con-
trolled by an operating system. There are exceptions to this where the hardware
directly communicates with the middleware or a programming framework. The
protocols play a role in enabling communication between different software layers.
The middleware and development frameworks form the core layer of the IoT
software components.
Most well-known embedded operating systems run on the mobile phones of today.
There are three main operating systems in the field: Android, iOS and Windows
Phone. Android is a Linux-based operating system developed by Google. The OS’s
focus is enabling user interaction on touch screen devices. Apart from the mobile
phone, the Android OS is also used to control tablet computers, PCs, TV sets,
digital cameras, game consoles and cars. The source code of the Android OS is
open. The iOS is the mobile operating system for Apple devices. The iOS is used to
interact with Apple’s mobile phones and tablet computers. The Windows Phone
uses Windows operating systems for managing mobile phones. Starting from
Windows 8, tablet computers are able to run the Windows operating system,
which is designed for multiple device types (such as tablets, PCs and so on).
5.2.2 OpenWRT
OpenWRT is the operating system used with the most advanced model of the
Arduino, i.e. the Arduino Yun. The OpenWRT Web Site (2015) defines the OS as a
highly extensible GNU/Linux distribution for embedded devices. The main focus of
the OS is wireless routers. OpenWRT is free to use and open source and the system
is developed as a result of a community-driven effort. Apart from the wireless
routers that the OS is developed for and the Arduino Yun SBC, a 3D printing
project (Doodle3D) uses OpenWRT as the OS of its environment.
5.2.4 Raspbian
5.2.5 Contiki OS
5.2.6 RIOT OS
5.2.7 Tiny OS
TinyOS is a BSD-licensed operating system designed for low-power wireless devices which
(i) form the backbone of wireless sensor networks and (ii), when connected to
the Internet through M2M communication, form the backbone of an IoT architec-
ture. These low-power devices are used as smart devices in smart-city applications
and smart buildings. TinyOS also supports an IPv6 stack and the 6LoWPAN and
RPL protocols. A global community supports the development of TinyOS (TinyOS Web
Site 2015).
The RTOS acronym stands for Real-Time Operating System. As explained in the
Free RTOS Web Site (2015) “…most operating systems appear to allow multiple
programs to execute at the same time. This is called multi-tasking. In reality, each
processor core can only be running a single thread of execution at any given point
in time. A part of the operating system called the scheduler is responsible for
deciding which program to run when, and provides the illusion of simultaneous
execution by rapidly switching between each program. The scheduler in a
Real-Time Operating System (RTOS) is designed to provide a predictable (deter-
ministic) execution pattern. This is particularly of interest to embedded systems as
embedded systems often have real-time requirements. A real-time requirement is
one that specifies that the embedded system must respond to a certain event within
a strictly defined time (the deadline). A guarantee to meet real-time requirements
can only be made if the behaviour of the operating system’s scheduler can be
predicted (and is therefore deterministic).” FreeRTOS is an RTOS
that is designed to be small enough to run on a single microcontroller.
There are also some specific bundles where the hardware components require
specific software or frameworks to interact with the hardware nodes. This section
will present two examples of these bundles.
5.3.1 Spark.IO
The open-hardware IoT project OpenMote provides a set of core modules
and an interface board which can interface various sensors and operate autono-
mously as a node. The hardware utilizes the XBee standard to connect to other
nodes. As stated on the OpenMote Web Site (2015), the OpenMote hardware has
been designed to support various open-source software stacks specifically designed
for the IoT, that is, Contiki and OpenWSN. In addition, the Open Mote hardware is
also known to run FreeRTOS, a real-time operating system for embedded devices
and RIOT, another fully open-source RTOS designed for the IoT (OpenMote Web
Site 2015).
The TCP/IP protocol suite offers many possibilities for the enablement of IoT, but
in fact, in several different layers of the IoT network architecture, there are protocols
that specifically focus on enabling and facilitating the communication of nodes in
an IoT network. As presented in Fig. 5.2, protocols such as RPL and 6LoWPAN at
the network layer and COAP at the application layer are specifically defined for
facilitating M2M communication. There are also protocols that benefit from TCP/IP
application and transport layers for enabling message exchange between devices
such as MQTT and XMPP. In this section, we elaborate on IoT-specific commu-
nication protocols in the upper layers of the network architecture that support
enablement and utilization of IoT. It should be noted that (lower) physical layer
protocols such as IEEE 802.15.4, Zigbee, Bluetooth and NFC will not be elaborated
on in this section, as they are not within the scope of this book.
The acronym RPL stands for the Routing Protocol for Low-Power and Lossy
Networks (LLNs). According to the definition in the standard (RPL IETF
2015) LLNs consist largely of constrained nodes with limited processing power,
memory and sometimes energy. It is assumed that in Lossy Networks a vast number
of nodes are interconnected with unstable links supporting only low data rates. The
traffic patterns in Lossy Networks are point to multipoint or vice versa. The RPL
protocol is an IPv6 routing protocol that supports routing in Lossy Networks.
6LoWPAN is the acronym for IPv6 over low-power wireless personal area net-
works. The protocol originated with the idea of facilitating routing between
low-power devices with limited processing capacities in the IoT environment. The
idea behind this routing protocol was related to connecting devices with wireless
connectivity and low data rates. The idea applies to many nodes that can broadcast
and share information during a production process or within the smart-city grid.
The acronym MQTT stands for Message Queue Telemetry Transport. The MQTT
is a lightweight messaging protocol that is used over the TCP/IP protocol stack.
Similar to protocols explained in this section, this protocol is designed for
low-power networks with limited resources, and for networks that are assumed to
be less reliable. MQTT provides a publish/subscribe model and, as with any
publish/subscribe protocol, it employs a message broker. As the protocol is based
on top of TCP/IP, both client and broker need to have a TCP/IP stack. The aim of
the protocol is to minimize bandwidth use while assuring some degree of
delivery. The protocol can be used in M2M communication, specifically when
power consumption and bandwidth are key issues. TCP/IP port 1883 is used for
MQTT protocol communication.
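A minimal publish/subscribe sketch using the Eclipse Paho MQTT client for Python is given below; the broker address, the topic name and the paho-mqtt 1.x callback signatures are assumptions made only for illustration.

# Hedged sketch of MQTT publish/subscribe over TCP/IP port 1883, using the
# Eclipse paho-mqtt client (1.x style callbacks); broker and topic names are
# illustrative assumptions.
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"   # any reachable MQTT broker
TOPIC = "building/floor2/door/state"

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established, then
    # publish a state change, e.g. a door announcing 'I am locked now!'.
    client.subscribe(TOPIC)
    client.publish(TOPIC, "locked")

def on_message(client, userdata, msg):
    # A 'Thing' publishing its state change arrives here.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()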
Features of XMPP (the Extensible Messaging and Presence Protocol) are also beneficial for IoT architectures. XMPP extensions are used in facilitating com-
munication in IoT systems. Examples include the use of XMPP in automatic power
metre reading and in energy efficiency research.
The IoT middleware and frameworks form the core layer of the IoT software
components. The IoT middleware accomplishes the goal of enabling machine-
to-machine and machine-to-user/user-to-machine communication and interaction.
The middleware layer mainly focuses on orchestrating the exchange of information
and interactions (i) between devices and (ii) between devices and other software layers.
The IoT frameworks can have many functions ranging from providing
development/coding interfaces to hardware, to providing component containers to
interact with real or simulated (virtual) sensors. This section will focus on some key
examples of IoT middleware and frameworks.
Eclipse IoT provides a set of frameworks and services which can be regarded as
building blocks that sit on top of open standards and protocols (Eclipse IOT Web
Site 2015). Two well-known frameworks of Eclipse are Kura and Mihini. Kura is a
Java-based framework for IoT gateways. Kura APIs offer access to the underlying
hardware (serial ports, GPS, watchdog, GPIOs, I2C, etc.), management of network
configurations, communication with M2M/IoT integration platforms and gateway
management (Kura Web Site 2015). On the other hand, the Mihini project delivers
an embedded run time running on top of Linux that exposes a high-level API for
building machine-to-machine applications (Mihini Web Site 2015). Eclipse offers
two IoT service components, Smart Home and SCADA. The Smart Home com-
ponents focus on smart home and ambient-assisted living (AAL) solutions. The
project focuses on services and APIs for data handling, rule engines, declarative
user interfaces and persistence management. Eclipse SCADA is a set of development
libraries, interface applications, mass configuration tools, and front-end and back-end
applications to connect different industrial devices.
IoTSyS provides a middleware layer to enable the integration of objects in the IoT
architecture. It is focused on facilitating communication for embedded devices
addressed using IPv6. The focus of the middleware is on enhancing the interop-
erability of smart objects. The middleware utilizes 6LoWPAN, COAP and
XML interchange over Web services to interact with sensors and actuators. As
indicated in the IoTSyS Web Site (2015), the IoTSyS middleware aims at providing
a gateway concept for existing sensor and actuator systems found nowadays in
home and building automation systems, a stack which can be deployed directly on
embedded 6LoWPAN devices and further addresses security, discovery and scal-
ability issues.
The project name stands for open-source cloud solution for IoT. The aim of the
project is to provide an open-source middleware for acquiring information from the
sensor clouds regardless of the type of sensors used. As stated in the OpenIoT Web
Site (2015), OpenIoT focuses on the development of middleware for sensors and
sensor networks, ontologies, semantic models and annotations for representing
Internet-connected objects, along with semantic open-linked data techniques,
cloud/utility computing, including utility-based security and privacy schemes.
The OpenIoT middleware infrastructure allows flexible configuration and deployment of
algorithms for collecting and filtering information streams stemming from
Internet-connected objects.
5.5.6 Macchina.IO
Macchina.IO is a toolkit for developing IoT applications for SBCs that run a
Linux OS, such as the Arduino Yun or the Raspberry Pi. The framework provides a
modular JavaScript and C++ run time environment which enables information to be
acquired from multiple sensors and presented to cloud services. The framework
utilizes a RESTful architecture and the MQTT protocol for communication.
The integration portals can be defined as web sites that contain interfaces (i) to
facilitate M2M integration, (ii) to enable visualization of information obtained from
sensors, (iii) to facilitate user interaction with actuators, (iv) to provide a service
based on information acquired from sensors, (v) to update web resources based on
information acquired from the sensors. Thus, these sites aim to facilitate multiple
dimensions of IoT integration and can be termed as Integration Portals.
5.6.1 Xively
The portal, formerly called Pachube and later Cosm, is a platform as a service for
IoT. It is one of the pioneering Web integration platforms that provided APIs for
different hardware components and SBCs, for enabling M2M and user interaction.
Furthermore, the APIs are also useful in enabling discovery of devices and inte-
gration of information published by these devices.
As illustrated in Fig. 5.3 Xively has become a commercial platform that provides
directory services, data services and business services focused on IoT. The message
bus layer deals with the real-time routing and management of messages. The Xively API
is the core component of the Xively tools and forms a gateway between front-end
and back-end software components, mobile applications and IoT nodes (i.e. con-
nected objects). The Xively API supports reading and writing data via three resources:
feeds, datastreams and datapoints (Fig. 5.4).
As mentioned on the Xively Web Site (2015), a feed is a collection of channels
(datastreams). A feed’s metadata can optionally specify location, tags, whether it is
physical or virtual, fixed or mobile, indoor or outdoor, etc. Every device has exactly
one feed. A datastream is a bidirectional communication channel that allows for the
exchange of data between the Xively platform and authorized devices, applications
and services. Each datastream represents a specific attribute, unit or type of
information (a variable). A datapoint represents a single value of a
datastream at a specific point in time. It is simply a key-value pair consisting of a
timestamp and the value at that time. For example, requesting a datastream as a
PNG image with the following HTTP GET request will generate a customizable
graph of the datastream’s history as a .png file (Fig. 5.5).
[GET https://api.xively.com/v2/feeds/FEED_ID_HERE/datastreams/DATASTREAM_ID.png]
Fig. 5.5 An illustration of the graph of a datastream generated as a result of a GET request
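As a hedged illustration, the request above could be issued from Python as follows; the X-ApiKey header name and the placeholder feed and datastream identifiers are assumptions made for the sketch.

# Hedged sketch: retrieving the datastream history graph from the Xively API.
# The API key header name (X-ApiKey) and the placeholder IDs are assumptions.
import requests

FEED_ID = "FEED_ID_HERE"          # placeholder, as in the request above
DATASTREAM_ID = "DATASTREAM_ID"   # placeholder
API_KEY = "YOUR_XIVELY_API_KEY"   # hypothetical key

url = f"https://api.xively.com/v2/feeds/{FEED_ID}/datastreams/{DATASTREAM_ID}.png"
response = requests.get(url, headers={"X-ApiKey": API_KEY}, timeout=10)
response.raise_for_status()

with open("datastream_history.png", "wb") as graph_file:
    graph_file.write(response.content)  # the customizable graph of the datastream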
5.6.2 Paraimpu
Paraimpu describes itself as a social tool designed to connect things with the aim
of creating personal applications for the IoT. The portal focuses on three specific
directions. As mentioned in the Paraimpu Web Site (2015) these are (a) Connect
your things, such as sensors, motors, micro-controllers like Arduino, home appli-
ances, lighting systems or whatever you want, talk with the Web. (b) Compose and
inter-connect them together or to social networks to interact with them or publish
their feeds in social networks. (c) Share and let other people use produced data in
their own connections, enabling real, social, physical-virtual web mashups. The
Paraimpu workspace (illustrated in Fig. 5.8) is used to connect the information
gathered from sensors or SBCs (such as Arduino) to real actuators in SBCs or
virtual actuators such as Twitter post publisher. It is also possible to define M2M
communication services through the Paraimpu workspace.
5.6.3 Dweet.IO
A dweet can be published for a ‘Thing’ with a simple GET request such as
[GET https://dweet.io/dweet/for/my-thing-name?hello=world&foo=bar]
In order to get a dweet (i.e. device tweet) the following GET request would be
sufficient.
[GET https://dweet.io/get/latest/dweet/for/my-thing-name]
The JSON response would be as shown in Fig. 5.9.
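A minimal sketch of the same interaction from Python, following the two GET requests shown above (the thing name is the placeholder used in those requests):

# Hedged sketch of publishing and reading a dweet with Python.
import requests

THING = "my-thing-name"

# Publish a dweet (device tweet) carrying two key-value pairs.
requests.get(
    f"https://dweet.io/dweet/for/{THING}",
    params={"hello": "world", "foo": "bar"},
    timeout=10,
)

# Read back the latest dweet for the same thing.
latest = requests.get(f"https://dweet.io/get/latest/dweet/for/{THING}", timeout=10)
print(latest.json())  # JSON response similar to the one shown in Fig. 5.9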
5.6.4 Freeboard.IO
Abstract Two styles of Web services exist today: Simple Object Access Protocol
(SOAP) and REST. Representational State Transfer (REST) is often preferred over
the more heavyweight SOAP because REST does not consume as much bandwidth.
REST’s decoupled architecture makes it a popular building style for cloud-based
APIs, such as those provided by Amazon, Microsoft and Google. This chapter starts
by providing technical information about RESTful Web services. Following this,
RESTful design patterns for facilitating BIM-based software and Web service
architectures are presented.
6.1 Introduction
RESTful Web services are built directly on the HTTP protocol. A GET request in the HTTP protocol is a request where you can
pass a URI including some parameters. For instance,
http://www.somewebsite.org/showbuildingpart.php?building_id=1254&floor_id=12
is a valid GET request with parameters. In fact, the HTTP protocol offers more
than just the GET request. Another request commonly used by web browsers is the
POST request. The POST request sends information to a URI but, as opposed to the
GET request, does not include parameters as part of the URI. In a POST
request, the information that is required by the receiving end is sent within the headers
or body of the message. In fact, the HTTP protocol capabilities are not limited to
the GET and POST, but the HTTP protocol is able to send a PUT request to update
the REPRESENTATION of a web RESOURCE and DELETE request to delete a
resource. There are also other HTTP methods such as HEAD, TRACE, PATCH,
CONNECT or OPTIONS, but these will not be elaborated here. The REST
architectural principles indicate that the information on the Web can be managed
using the HTTP methods, in a similar way that one can manage the Create/Read/
Update/Delete (CRUD) operations for a data resource. In a RESTful architecture
HTTP methods GET, POST, PUT and DELETE are used to make CRUD-like
operations over the web RESOURCES. According to the REST architectural style a
Web service can be built by utilizing …
• RESOURCES (i.e. anything that is available digitally over the Web),
• IDENTIFIERS (i.e. URIs),
• REPRESENTATIONS (i.e. current state of the resources).
In a RESTful architecture RESOURCES, REPRESENTATIONS and
IDENTIFIERS can be described as below:
• RESOURCE → A logical object identified by an IDENTIFIER.
• IDENTIFIER → A globally unique ID that points to the RESOURCE.
• REPRESENTATION → The physical source of information that is pointed to by the
IDENTIFIER.
• A RESOURCE can have multiple representations but a single IDENTIFIER
which can only point to a single REPRESENTATION of a RESOURCE at a
single point in time.
Once a RESOURCE is created in a RESTful architecture, it is constant until it
is DELETED. The variability is in the REPRESENTATION, as the
REPRESENTATION can be UPDATED using the HTTP protocol methods. The
acronym REST stands for Representational State Transfer. In a RESTful archi-
tecture a REPRESENTATION existent in one STATE of a RESOURCE is
TRANSFERRED to a web client and this causes the change in the STATE of the
web client. It is vital to mention here that the SERVER is STATELESS in RESTful
architectures, which means that the CLIENT STATE is not known or maintained by
the SERVER, which provides great efficiency in RESTful architectures. Table 6.1
summarizes valid server-side RESOURCE REPRESENTATIONS by examples.
hierarchical in the sense that data has structural relationships. This is not a REST
rule or constraint, but it enhances the API. If we go back to our first example, the URI
called with an HTTP GET request
http://www.somewebsite.org/showbuildingpart.php?building_id=1254&floor_id=12
would have, in a RESTful API that implies a hierarchical naming convention, a
counterpart URI such as one of the following:
http://www.somewebsite.org/building_id/1254/floor_id/12
http://www.somewebsite.org/building/1254/12
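The following sketch illustrates how the four HTTP methods map onto CRUD-like operations over such hierarchical resource URIs; the JSON payloads and the exact resource paths are assumptions made only for illustration.

# Hedged sketch of CRUD-like operations over a web RESOURCE using the four
# HTTP methods, against the hierarchical URI style shown above; payloads and
# paths are illustrative assumptions.
import requests

BASE = "http://www.somewebsite.org"
floor_uri = f"{BASE}/building/1254/12"          # IDENTIFIER of a floor RESOURCE

# Create: add a new floor to the building.
requests.post(f"{BASE}/building/1254", json={"floor_id": 13, "name": "Floor 13"})

# Read: retrieve the current REPRESENTATION of the floor.
floor = requests.get(floor_uri, timeout=10).json()

# Update: change the REPRESENTATION of the floor.
requests.put(floor_uri, json={"name": "Mechanical floor"})

# Delete: remove the floor RESOURCE.
requests.delete(floor_uri)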
REST is a key architectural style in enabling interaction with different layers of
data. As one of the complex information models, BIM would benefit greatly from
RESTful data interchange, RESTful interactions and RESTful APIs. The following
sections will present advanced SOA/RESTful patterns to facilitate interaction with
BIMs. The patterns presented in the following sections are RESTful and utilize REST
for information interchange; the services presented in the patterns are RESTful APIs,
and it is assumed that a service consumer API is present and facilitates interaction
between the service itself and the service client(s).
The patterns explained in the following sections define software architectures that
consist of several layers. Although the patterns are defined for different purposes, the
software layers defined in these patterns have common characteristics. In order to
prevent repetition of these characteristics for each pattern, the common character-
istics of the architectural layers are illustrated in Fig. 6.1 and are explained below.
1. Data Layer: The patterns describe architectures where an extended BIM, a BIM
or a BIM view resides in an object or an object relational database (which are
shown with database symbols in the pattern illustrations). These databases are
the persistence environments where the persistent versions of the BIMs reside.
In some patterns the BIM instances are persisted in the form of a BIM file which
is encoded as an ISO 10303-21 (STEP physical file) ASCII file or an XML file (which are shown as
file symbols in pattern illustrations). The databases reside in a database server or
in cloud database server hardware. The file resides in a web server or a cloud
web server. These data stores together form the first layer of the architecture.
2. Database/File API Layer: The database/file API forms the second layer of the
architecture. The API acts as a database/file interface that will be used to query
and interact with the object/object-relational databases or the file, based on
requests coming from the run-time objects (i.e. the upper layer). For IFC BIMs
the variations of SDAI-based APIs and XML APIs can be used in this layer. The
database/file API can reside within the same hardware which the database
operates (i.e. in the database/cloud database server) or which the file resides in,
but it can also reside in a different server.
3. Run Time Objects Layer: The run time objects form the third layer of the
architecture and maintain the (i) transient data objects that are generated as a
result of service requests/interaction and (ii) transient objects of the business (or
pattern) logic which the service designer chooses to utilize in this layer. The
communication of this layer is bidirectional (i.e. would either be for information
provision to the service or for updating the data layer based on information
provided by the service) and the direction of communication depends on the
type of REQUESTs that are directed to the Web service. The object container
can be, for instance, the Common Language Infrastructure of the .NET cross-language,
cross-platform object model, or an EJB container within the Java VM. The service
designer can choose to distribute the business logic between the service objects
and the run time objects. The run time objects layer will be tightly coupled with
the database/file APIs and the service. The ratio of objects maintained in the run
time objects layer to objects maintained in the service layer would be defined by
the choice of the software architect. Both the run time objects layer (i.e. the
object container) and the service layer can be located in the same hardware or in
different hardware components/servers, including real and virtual servers. The
number of predicated requests/interactions between these layers per time frame
(i.e. hour/day/week etc.) and the network bandwidth required for communica-
tion between these layers will be the main determinants for this decision.
4. Service Layer: This layer consists of service component(s) that provide the
RESTful interface. This is the core service layer of the pattern. Each service
component presented in this layer will be implemented as a REST API. This
layer’s main function is to provide an interface to handle HTTP GET/POST/
PUT/DELETE REQUESTS, which means that this layer would form the end-
point for the service. All the requests from client side such as
HTTP GET http://www.service.com/model
HTTP POST http://www.service.com/model
HTTP PUT http://www.service.com/model
HTTP DELETE http://www.service.com/model
will be handled by this layer. Based on architectural decisions, this layer can
also contain server-side business logic along with service logic to handle HTTP
REQUESTs. The service discovery and metadata provision mechanisms are also
implemented in this layer. It is advised to maintain a dedicated real or virtual
server as the hardware for this layer to operate smoothly (a minimal endpoint
sketch for this layer is given after this list).
5. Service Consumer API: The service consumer API forms the next layer of the
architecture. The API is a general-purpose software component which is loosely
coupled with the REST API. The consumer API can reside on independent
hardware or on the client side. Unlike the run time objects layer,
this layer does not contain the client-side business logic (such as software
components for user experience or visualization support). The API here is solely
designated to facilitate communication between the client and the REST API.
This API acquires information from the REST API and this information is then
transferred to the client. The API can also be used to transfer information from
the client to the service side. Communication between the service consumer API
and the REST API needs to be efficient and most of the bandwidth requirement
of the overall architecture would be for this API. If an efficient communication
mechanism between these two APIs cannot be established, it can generate a
major bottleneck for the overall architecture.
6. The Client: The client is the software or a software component that is loosely
coupled with the RESTful Web service. In other words, the client would exist
and function perfectly without the existence of the Web service; similarly the
Web service presented in the patterns would exist and function perfectly without
the requirement of the existence of the client. The client can be a simple
visualization interface working on a PC, a CAD or analysis software, a
Web-based 3D visualization tool, a mobile device/tablet interface, an augmented
or virtual reality device user interface or any other user interface. The client
provides the means for the user to interact with the presented visualization of
the model (which can be either a 3D model or simple tabular data). The
visualization of the information that is acquired from the Web service
and the acquisition of the user input are the key functions of the client. Apart
from its main functionality and based on user requirements, the client can
provide voice or video communication with other clients. Following the gen-
eralized design pattern, the specialized patterns developed will be explained in
two parts, first a problem definition will be provided, and then the structure of
the service pattern and its role in solving the (early defined) problem will be
presented.
year or updated in the last 100 days). An interaction with this API can include
REQUESTS such as
HTTP GET http://www.service.com/model/filter/beams
HTTP GET http://www.service.com/modelview/filter/firstfloor
HTTP GET http://www.service.com/extendedmodel/filter/façade
HTTP GET http://www.service.com/model/filter/beams/updated/days/100
HTTP GET http://www.service.com/model/filter/walls/updated/years/1
The Problem Today, the design process in the construction industry requires tools for
synchronous collaboration between the stakeholders. For example, an architect, an
engineer and a customer would have the need to work on the design documents
(which are generated from the BIM) synchronously. In fact these parties are mostly
located in different places, and it is difficult to make efficient synchronous use of
design tools because of their location constraints. Thus there appears a need for
collaborative use of information systems over the Web. In fact, as different stake-
holders focus on different aspects of the process and need to interact with different
parts of the data store, problems occur in reaching information from multiple model
(data) sources such as the BIM itself, the extended model or the model view.
The Solution In order to address the problem, the REST façade pattern
provides a single interface to multiple components of the data layer, which acts
as a RESTful gateway to reach information in an extended BIM, a BIM or a BIM view
stored in databases or the model information residing in BIM files. The RESTful
architecture provides loose coupling between the client and the provided façade.
The architecture consists of six layers.
1. Data Layer: This layer implements generalized design pattern data layer, which
consists of an extended BIM, a BIM, a BIM view or a BIM file which can be
queried through the RESTful façade.
2. Database/File API Layer: The layer implements generalized design pattern
database/file API layer.
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer.
4. REST Façade Layer: The layer consists of a single service interface. The role of
the interface is to provide a service to enable interaction with the multiple data
sources through the Web (Fig. 6.3). The service will focus on queries targeted to
multiple data sources such as acquiring the garden furniture information from
the extended model while acquiring the outer installations of the building from
the model (BIM) itself. Another query can be for exploring utilities inside the
building together with utility elements in the garden (i.e. which are represented in
the extended model). An interaction with the REST API of this service can
include REQUESTS such as
HTTP GET http://www.service.com/façade/outside_elements
HTTP GET http://www.service.com/façade/all_utilities
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer.
4. RESTful Real-Time View Generator Layer: The layer consists of a single
service interface. The role of the interface is to provide a service that will
generate transient model views based on the requirements of the users. For
example, a mechanical engineer would like to visualize the utility elements or
HVAC elements of the second floor, or a civil engineer would like to check the
details of columns on the first floor, or the architect would like to check the
details of window elements. An interaction with the REST API of this service
can include REQUESTS such as
HTTP GET http://www.service.com/runtimeview/utilities/secondfloor
HTTP GET http://www.service.com/runtimeview/columns/firstfloor
HTTP GET http://www.service.com/runtimeview/windows
HTTP PUT http://www.service.com/runtimeview/columns/firstfloor
The service would also be able to handle the HTTP PUT/POST/DELETE
REQUESTS; for instance, the last example PUT REQUEST can be used to
update the BIM with the as-built information provided from the construction
site. In this case, the service (i.e. this layer) would interact with the run time
objects layer to update the BIM with the latest changes that occur at the con-
struction site.
5. Service Consumer API: This layer implements generalized design pattern
service consumer API layer.
6. The Client: This layer implements the client description in generalized design
pattern.
The Problem In the design and construction process the BIMs are updated fre-
quently and the stakeholders have access to the latest version of the BIM. As this
has been the key user requirement of the BIM-based construction management
processes, most efforts focus on providing the most up-to-date version of the model.
In fact, in many situations, specifically in the design process and less commonly in
the construction process, there appears a need for examining previous versions of
the model to compare the changes, identify what has changed and so on.
The Solution The RESTful memento pattern is focused on persisting the multiple
state(s) of the BIM in the data layer and restoring an old version of the model when
required by the user. In other words, the pattern focuses on enabling the backing up
of the previous versions of the BIM in a persistence environment and restoring
them. The memento service provided in this pattern would (i) generate a back-up
copy of the model with the timestamp as a response to a user REQUEST (i.e. on
demand) and would store this data in a model server database and (ii) would restore
a model from the generated copies based on user REQUEST. The RESTful
architecture provides loose coupling between the client and the provided façade.
The architecture consists of six layers (Fig. 6.5).
1. Data Layer: This layer implements generalized design pattern data layer, which
consists of a BIM that resides in a model server database.
2. Database API Layer: The layer implements generalized design pattern data-
base API layer.
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer.
4. RESTful Memento Layer: The layer consists of a single service interface. The
role of the interface is to provide a service to generate a copy of the overall BIM
based on a service call, and store this copy in a model server database (i.e.
where the current model resides). A BIM data warehouse can be built upon the
stored versions of the model in a later stage. An example call to the service
would involve a real-time back-up request or a batch request, or a scheduled
request which can be accomplished when system resources at the data layer
become free or at the scheduled time intervals. The service would also have the
capability to restore the model from one of the previous versions. An interaction
with the REST API of this service can include REQUESTS such as
HTTP GET http://www.service.com/memento/backupnow
HTTP GET http://www.service.com/memento/backup/weekly/saturday/22/30
HTTP GET http://www.service.com/memento/backup/monthly
HTTP GET http://www.service.com/memento/restore/version/date/10/20/2015
HTTP GET http://www.service.com/memento/restore/version/last
HTTP GET http://www.service.com/memento/restore/version/lastweek
5. Service Consumer API: This layer implements generalized design pattern
service consumer API layer.
6. The Client: This layer implements the client description in generalized design
pattern.
view, the controller service is notified by the service consumer API, which
would also provide information regarding what has changed in the view. The
controller service, once notified by the call of the service consumer API,
notifies the run time objects, which then manipulate the transient model objects
and the persistent model in the database. Following the persistence of the change
(or update) in the data layer, the run time objects interact with the controller
service REST API to update the views by sending the changes in the model, for
example, as a .json message. The REST API would then interact with the service
consumer API to update the views. In this pattern the container of the run time
objects would reside on a different platform, or even different hardware, from the service
container (Fig. 6.6). An interaction with the REST API of this service can
include REQUESTS such as
Client → Controller HTTP GET http://www.srv.com/controller/subscribe/id/2/
ip/22.11.11.22:7777
Service Consumer API → Controller HTTP PUT http://www.srv.com/
controller/update/model {data: json message}
Run time Objects → Controller HTTP PUT http://www.srv.com/controller/
update/view/2 {data: json message}
Run time Objects → Controller HTTP PUT http://www.srv.com/controller/
update/allviews {data: json message}
process there can be another request, i.e. REQUEST B, to the service 5 s after the
first request. In such a situation, as the call-back responder layer does not have to
wait for REQUEST A to be completed, it can send REQUEST B to the run time
objects layer to be processed. This second process can take 1 s to be completed,
and the response to REQUEST B can be provided at the sixth second, while the
response to REQUEST A is provided at the seventh second, i.e. after the
completion of the second request. REQUEST B in this situation is responded to
without any latency. An interaction with the REST API of this service can
include REQUESTS such as
HTTP GET http://www.service.com/callbackresponder/makeanalysis
HTTP GET http://www.service.com/callbackresponder/beams/secondfloor
real-time view generator. This layer also implements another data layer where a
relational or object relational database/cloud database (i.e. the identity store)
holds the user information which will be used for client authentication.
2. Database/File API Layer: The layer implements generalized design pattern
database API layer. The layer consists of another database API to enable
interaction between user and authenticator service.
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer. The layer does not exist on the authenticator side.
4. User Authenticator Layer: The layer consists of two service interfaces (i.e.
user authenticator, and RESTful service). The user authenticator service will
operate as an authentication gateway. A client will call the service with an
authentication request by providing a username and a password. The service will
then query the identity store and, if the information matches the records in the
identity store, it will respond with the URL of the RESTful service that the client
requested. The user authenticator service can be developed in such a way that it
will provide a set of URIs as a result of successful authentication. An interaction
with the REST API of this service will include a REQUEST such as the following
(a client-side sketch of this interaction is given after this list):
HTTP POST http://www.service.com/authenticator {data: json message}
5. Service Consumer API: This is a generalized API but also implements a
function for passing the URI of the RESTful service (sent by the user authen-
ticator) to the client.
6. The Client: Once the client finalizes the authentication and acquires the URI of
the REST API of the target service it implements the client description in
generalized design pattern.
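A minimal sketch of the client-side authentication flow described in item 4 above is given below; the credential payload and the response field carrying the service URI are assumptions made only for illustration.

# Hedged sketch of the client-side flow of the user authenticator pattern:
# POST credentials, receive the URI of the target RESTful service, then call
# it. The payload fields and the response structure are assumptions.
import requests

AUTH_ENDPOINT = "http://www.service.com/authenticator"

def authenticate(username: str, password: str) -> str:
    """Return the URI of the RESTful service granted to this client."""
    response = requests.post(
        AUTH_ENDPOINT,
        json={"username": username, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["service_uri"]   # assumed response field

if __name__ == "__main__":
    service_uri = authenticate("architect01", "secret")
    model = requests.get(service_uri, timeout=10).json()  # call the granted service
    print(len(model), "top-level entries received")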
The Problem Interactions with building information models do not only involve
data acquisition or data update functions at the fine granular object level. Sometimes
the data layer might be complex and fuzzy in distributed systems. For instance,
BIMs in use might not reside in a single database server, and the project may not
contain a single standard BIM such as an IFC, but multiple BIMs. There might be
different models defined with different schemas (i.e. Green Building XML, CIS2,
IFC …) which reside in different platforms. In these situations, management and
housekeeping at the back-end (i.e. the data layer) become vitally important. If the
back-end is not managed successfully the problems will lead to chaos where
information exchange and sharing would become a nightmare with so many models
defined with different schemas, conflicting views, files that are not persisted in model
servers and different versions of different models independently floating around in
the data layer.
The Solution The RESTful data management pattern introduces a set of Web services to facilitate data management tasks in a distributed system. The Web services introduced in the pattern can be thought of as a Swiss Army knife for managing information transformation tasks and facilitating data persistence in model servers. Two of the Web services concentrate on transforming information between two information models, while the other concentrates on persisting BIM file contents (which have arrived in the data layer as a result of data exchange) in model server databases. The main difference between the architecture presented here and the other patterns presented in this section is that the client layer includes data components (which can be regarded as the target data layer). The architecture consists of eight layers.
1. (Source) Data Layer: This layer is shown at the bottom of the diagram and implements generalized design pattern data layer, which consists of a BIM or a BIM file that can be queried through the Web services defined in this pattern.
2. (Source) Database/File API Layer: The layer is shown at the bottom of the diagram and implements generalized design pattern database/file API layer.
3. (Source) Run Time Objects Layer: The layer is shown at the bottom of the diagram and implements generalized design pattern run time objects layer (Fig. 6.9).
4. Model Management Service Layer: The layer consists of three different ser-
vice interfaces. The first one is the model transformer service which can be used
The Solution The view synchronizer pattern explained here is adapted from the UI mediator pattern (Erl 2009). The architecture of this pattern utilizes a view synchronizer service, a component that synchronizes information coming from multiple sources. The synchronizer service presented in this pattern has two functions: it synchronizes the information sent from multiple endpoints (REST APIs), and it also acts as a façade layer providing a single gateway to multiple REST endpoints. The architecture consists of seven layers (Fig. 6.10).
1. Data Layer: This layer implements generalized design pattern data layer, which
consists of a BIM residing in a model server database.
2. Database API Layer: The layer implements generalized design pattern data-
base API layer.
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer.
4. REST Endpoint Layer: The layer consists of multiple service interfaces which comply with the definitions in the generalized design pattern.
5. View Synchronizer Service Layer: The layer consists of a single service interface. The role of the interface is to provide the synchronized information derived from multiple REST endpoints (i.e. REST APIs). As the business logic of this layer is geared towards automatic synchronization, interaction with it is similar to the REST façade pattern (a minimal sketch of the synchronizer follows this list). An interaction with the REST API of this service can include REQUESTS such as
HTTP GET http://www.service.com/synchronized/outside_elements
HTTP GET http://www.service.com/synchronized/all_utilities
6. Service Consumer API: This layer implements generalized design pattern
service consumer API layer.
7. The Client: This layer implements the client description in generalized design
pattern.
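As referenced in layer 5 above, a minimal sketch of the view synchronizer service is given below, assuming Python with Flask and the requests library; the aggregated endpoint URLs and the response structure are invented for illustration.

# Minimal sketch of a view synchronizer acting as a façade over several REST endpoints.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed endpoint URLs; each is expected to return a JSON array of elements.
OUTSIDE_ELEMENT_ENDPOINTS = [
    "http://endpoint-a.example.com/walls/external",
    "http://endpoint-b.example.com/roofs",
]


@app.route("/synchronized/outside_elements", methods=["GET"])
def outside_elements():
    merged = []
    for url in OUTSIDE_ELEMENT_ENDPOINTS:
        try:
            merged.extend(requests.get(url, timeout=5).json())
        except requests.RequestException:
            continue  # a failing endpoint should not break the façade
    return jsonify({"elements": merged})


if __name__ == "__main__":
    app.run(port=8082)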
The Solution In a situation where a view (client) interacts with the model to change its state, it would be feasible to implement the RESTful MMVC pattern; in the situation considered here, however, the role of the client is only visualization, not interaction with the service. Erl (2009) proposes the event-driven messaging pattern for messaging services in a similar situation. The idea of the event manager proposed by Erl (2009) can be extended to define an event manager service that works with a publish/subscribe approach. Once the client subscribes to specific events, the event manager sends a notification to the client whenever such an event occurs (such as the state change of a door from open to closed).
1. Data Layer: This layer implements generalized design pattern data layer, which consists of a BIM residing in a model server database.
2. Database API Layer: The layer implements generalized design pattern data-
base API layer.
3. Run Time Objects Layer: The layer implements generalized design pattern run
time objects layer; in addition, components in this layer make calls to the
controller service layer to update the client.
4. Event Manager Layer: The layer consists of a single service interface. At the start of the sequence each client (view) subscribes to the event manager service by providing its GUID or IP address in order to receive state changes of the model. Once a state change is represented in the model (such as the start of the air conditioning units in the building), the run time objects interact with the event manager to update the views by sending the changes in the model, for example as a .json message; the REST API then interacts with the service consumer API to update the client (a minimal sketch of the event manager follows the example requests below). In this pattern the container of the run time objects would reside on a different platform, or even on different hardware, from the service container (Fig. 6.11). An interaction with the REST API of this service can include REQUESTS such as
Client → Controller HTTP GET http://www.srv.com/emanager/subscribe/id/2/
ip/22.11.11.22:7777
Run time Objects → Event manager → Service Consumer API
HTTP PUT http://www.srv.com/emanager/update/client {data: json message}
References
Erl, T.: SOA Design Patterns. Prentice Hall, New Jersey (2009)
Fielding, R.T.: Architectural styles and the design of network-based software architectures. Ph.D. thesis, Department of Information and Computer Science, University of California, Irvine (2000)
He, H.: What is service-oriented architecture? Online at http://webservices.xml.com/pub/a/ws/2003/09/30/soa.html (2003). Accessed 21 July 2004
Isikdag, U., Underwood, J.: Two BIM based web-service patterns: BIM SOAP façade and RESTful BIM. In: Construction in the 21st Century Conference, Istanbul, May 2009 (2009)
Pautasso, C., Zimmermann, O., Leymann, F.: RESTful web services vs. "big" web services: making the right architectural decision. In: WWW '08: Proceedings of the 17th International Conference on World Wide Web, pp. 805–814 (2008)
Pulier, E., Taylor, H.: Understanding Enterprise SOA. Manning Publications, Greenwich (2006)
RESTful API Tutorial: http://www.restapitutorial.com/lessons/restfulresourcenaming.html (2015)
Techtarget: Definition of REST. Available at: http://searchsoa.techtarget.com/definition/REST (2015)
Chapter 7
Sensor Service Architectures for BIM
Environments
7.1 Introduction
IBM’s Smarter Planet Video (ASmarterPlanet 2010) starts with the following
statements:
Over the past century, but accelerating over the past couple of decades, we have seen the
emergence of a kind of global data field, the planet itself. Natural systems, human systems
and physical objects have always generated an enormous amount of data, but we did not
use to be able to hear it, to see it, to capture it. Now we can, because all of the stuff is
instrumented, and it is all interconnected, so we can actually have an access to it, so in
effect, the planet has grown a central nervous system….Over the last 10 years devices are
being linked up together using networks, such as temperature sensors, flow rate sensors,
electricity measuring devices, and it will not be long or it may even have happened already
that, there are more things on the Internet than there are people on the Internet. That is
really what we mean by the Internet-of-Things
Ubiquitous computing and the Internet of Things (IoT) concepts are gathering
more attention day-by-day. From the perspective of information management,
realization of ubiquitous computing would lead to a focus shift (in data manage-
ment) from data acquisition to abstraction of acquired data, as in the future data will
significant advantages in (i) facilitating the interaction with IoT nodes and
(ii) consuming the information provided by IoT nodes. RESTful architectures were
elaborated in Chap. 6.
IoT nodes today are capable of providing real-time information about themselves on the Web. The information can be acquired from various sensors that are connected to the IoT device and broadcast through single-board computers (SBCs). Information broadcast by SBCs is usually in the form of XML or JSON documents. Several loosely coupled RESTful web architectures can be designed to reach and utilize this information. In addition, Usländer et al. (2011) indicated that virtual sensors (i.e. soft sensors that are used to gather and abstract data from diverse sets of sensor network nodes) can act as middleware between the sensors and the services in these architectures.
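As an illustration of how an SBC can expose its readings as a JSON document over HTTP, the following minimal sketch assumes Python with Flask running on the SBC; the node name, route and sensor driver are placeholders invented for the example.

# Minimal sketch of an SBC serving its sensor state as a JSON document.
from flask import Flask, jsonify

app = Flask(__name__)


def read_temperature():
    return 21.5   # placeholder for the real sensor driver call


@app.route("/state.json")
def state():
    return jsonify({"node": "sbc-01", "temperature_c": read_temperature()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)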
The client side of the architecture, as indicated in Usländer et al. (2010), can be composed of visualization, reporting and other sensor applications. Since SBCs in an IoT architecture generally have fewer resources than their clients, browsers and mobile phones have proven to be a good way of transferring some of the server workload to the client side (Guinard et al. 2011), and thus the development of rich client applications is also a possibility.
The Problem: The IoT approach mainly depends on machine-to-machine (M2M) communication. However, Web service technologies and architectural styles such as REST generally provide mechanisms for communication based on client actions (and method invocation requests); thus, a pull approach is common for information acquisition using Web services. In an IoT environment where thousands of devices connect to each other and exchange information, sometimes at very short intervals, a pull-based mechanism is not efficient (Fig. 7.1).
The Solution: The publish-subscribe approach explained in Chap. 5 forms the backbone of IoT middleware, as it provides an efficient mechanism for devices to share information with each other. Messaging protocols such as CoAP, XMPP and MQTT form the main elements of this approach. The foundational publish-subscribe pattern illustrates a typical architecture to enable and facilitate M2M communication in an IoT environment. In this pattern a message broker acts as a mediator between the IoT nodes. The IoT nodes (composed of single-board computers, SBCs) publish messages to a message broker (such as an MQTT server/broker), which then distributes these messages to the other IoT nodes that subscribe to the message broker. Protocols such as MQTT and MQTT message brokers are today in use with Arduino devices to facilitate M2M communication in home automation. For instance, a luminosity sensor attached to an Arduino SBC publishes a message to an MQTT broker when the light level in a room decreases below a certain threshold value; the broker then distributes this message to another SBC that has actuators to turn on the home lighting. Once this message is received by the second SBC, it activates the lights in the room. This is a basic example of how traditional publish-subscribe works for IoT nodes: the approach is simple, and its protocols are lightweight and very efficient in enabling device-to-device (M2M) communication.
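The luminosity example can be sketched as follows, assuming Python with the paho-mqtt client (1.x API); the broker address, topic name and threshold are invented for illustration, and on real SBCs the publisher and subscriber would normally run as separate scripts on separate devices.

# Minimal sketch of the publish/subscribe exchange between two IoT nodes via MQTT.
import json
import paho.mqtt.client as mqtt   # assumes the paho-mqtt 1.x Client API

BROKER = "broker.example.com"
TOPIC = "home/livingroom/luminosity"
THRESHOLD = 200  # lux, assumed threshold


# --- sensor node: publish a reading when the light level drops ------------
def publish_reading(lux_value):
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    if lux_value < THRESHOLD:
        client.publish(TOPIC, json.dumps({"lux": lux_value}))
    client.disconnect()


# --- actuator node: subscribe and switch the lights on --------------------
def turn_on_lights():
    print("lights on")   # placeholder for the actuator call


def on_message(client, userdata, message):
    reading = json.loads(message.payload)
    if reading["lux"] < THRESHOLD:
        turn_on_lights()


subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
# subscriber.loop_forever()  # start the blocking receive loop on the actuator node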
The Problem As explained in the previous pattern, message brokers are known as efficient middleware components for distributing messages to other devices; moreover, messages distributed by the brokers can also be consumed by everyday applications, web portals and mobile devices. Although the information acquired from most home automation sensors would not be confidential in general, there are
technique, the use of public key authentication between the IoT node, message
broker and feed encoder API and sending the message to the feed encoder API
without encryption is the other alternative.
The Problem The IoT nodes publish a large amount of data in a short time frame. The use of real-time data coming from sensors is important for other devices in M2M communication. Moreover, the emerging big data paradigm advocates that information acquired regarding the states of everyday objects, including city objects, buildings and indoor objects, is of key importance and needs to be analyzed to find patterns of behaviour and patterns of occurrences in cities. Thus, in order to enable this analysis, the information acquired needs to be persisted (stored) in databases (Fig. 7.3).
The Solution As the stored information needs to be accessed from multiple resources for big data analysis, online storage of the data is necessary. In order to store information acquired from the IoT nodes, cloud storage is an economically feasible alternative. The message-based cloud update pattern proposes a six-layer architecture. The first two layers of the architecture are formed by the IoT node and a message broker. The third layer is composed of a message consumer API, a general-purpose API that is the subscriber of the message broker middleware. Once a message is received from the IoT node, the message broker distributes it to the message consumer API. The second role of the message consumer API in this pattern is automatically updating the cloud database/files through the REST endpoint once it receives a message from the message broker middleware. As the message consumer API can be the subscriber of different brokers, and brokers might be the observers of many IoT nodes, the bandwidth between the IoT nodes and the message broker, and between the message broker and the cloud DB REST endpoint, should be high. The next layer in the stack is the REST endpoint. The REST endpoint is the general-purpose endpoint for interacting with the database API or the file in the cloud layer. The REST endpoint contains the business logic for the I/O operations regarding the file and also has the ability to communicate with the database API. Although a database API is mentioned as a component in this architecture, it is optional, and the REST endpoint can contain business logic to interact directly with the cloud DB. The database API layer contains the database API, an optional component which aims to facilitate the CRUD operations related to the cloud DB. As some cloud DBs in use today provide RESTful endpoints, the use of a specific database API is not always compulsory, but this layer can be required when a special-purpose database (with no RESTful interface) is planned to be used in the data layer. Finally, the data layer consists of a cloud DB and/or a file. The preferred cloud DB in this layer would be a spatial database, a graph database or an XML database, as these are capable of storing and presenting information coming from IoT nodes (together with geo-coordinates) with greater efficiency. A second reason for this choice is that spatial and graph databases provide a data structure that responds to spatial queries more efficiently, while XML databases provide structured information that facilitates big data analysis. The file storage would contain either an XML file or a plain-text CSV (comma separated values) file. As mentioned, XML files provide structured information for big data analysis, and plain text files are efficient forms of storage in terms of file size.
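A minimal sketch of the message consumer API in this pattern is given below, assuming Python with paho-mqtt (1.x API) and the requests library; the broker address, topic filter and cloud REST endpoint URL are invented for illustration.

# Minimal sketch of a message consumer that forwards broker messages to the cloud DB REST endpoint.
import json
import paho.mqtt.client as mqtt   # assumes the paho-mqtt 1.x Client API
import requests

BROKER = "broker.example.com"
TOPIC = "buildings/+/sensors/#"          # all sensor topics of all buildings (assumed topic scheme)
CLOUD_ENDPOINT = "http://cloud.example.com/readings"


def on_message(client, userdata, message):
    reading = json.loads(message.payload)
    reading["topic"] = message.topic
    # Push the reading into the cloud DB through its REST endpoint.
    requests.post(CLOUD_ENDPOINT, json=reading, timeout=5)


consumer = mqtt.Client()
consumer.on_message = on_message
consumer.connect(BROKER, 1883)
consumer.subscribe(TOPIC)
# consumer.loop_forever()  # start consuming messages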
The Problem In the case of direct acquisition of information from multiple IoT nodes, communication between clients and the IoT nodes can be enabled by the use of message brokers. A subscription to a message broker can be enabled over an API, but if there are many IoT nodes in the architecture, or the IoT nodes publish data too frequently, the API will be flooded by messages coming from multiple sensors. In this situation a periodic pull approach needs to be implemented in order to limit the data transfer from the IoT nodes (Fig. 7.5).
The Solution In order to provide a mechanism to regulate the information transfer from the IoT nodes to the client side, a periodic pull approach is implemented in the RESTful node façade pattern. The pattern presents an architecture composed of five layers. The first layer is composed of IoT nodes. Similar to the previously presented pattern, in this pattern the IoT nodes publish information acquired from sensors to HTTP endpoints. The HTTP endpoints contain simple XML, HTML or text files. The IoT node façade is a RESTful service layer; the service in this layer needs to contain programming logic to acquire information from multiple documents in the HTTP endpoint layer (i.e. fulfilling the façade role), and it also needs to be able to handle periodic pull (HTTP GET) requests from the upper layer. The service consumer API issues periodic pull (HTTP GET) requests based on user demand and user–client interaction. The client (which is the subscriber of the service consumer API) provides a GUI to the user and can also contain business logic to mash up information coming from multiple IoT nodes.
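A minimal sketch of the IoT node façade is given below, assuming Python with Flask and the requests library; the node endpoint URLs and the shape of the aggregated response are invented for illustration.

# Minimal sketch of an IoT node façade: one GET pulls the documents exposed by several nodes.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed node endpoints; each serves a simple XML or text document.
NODE_ENDPOINTS = {
    "node-kitchen": "http://192.168.1.20/readings.xml",
    "node-hall": "http://192.168.1.21/readings.txt",
}


@app.route("/nodes/latest", methods=["GET"])
def latest_readings():
    readings = {}
    for name, url in NODE_ENDPOINTS.items():
        try:
            readings[name] = requests.get(url, timeout=3).text
        except requests.RequestException:
            readings[name] = None   # node unreachable at this pull
    return jsonify(readings)


if __name__ == "__main__":
    app.run(port=8083)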
Fig. 7.6 BIM and IoT service façade. Grant of permission by Intel Corporation
database. The real-time BIM resides in the model server database, which forms the data layer of the architecture.
The Problem The software applications that consume information from BIM today are either CAD packages for design or software for the analysis of structures. As these tools work with models of buildings that do not yet exist, the need for real-time information from BIMs is currently not very apparent during design. However, city modelling and management applications, including city portals, will in the near future utilize information acquired from BIMs. These applications and portals will also have access to mash-ups of information published by IoT nodes. In this situation, building information and information regarding the current state of objects indoors and outdoors need to be integrated in these applications and portals (Fig. 7.8).
The Solution The pattern elaborated in this section proposes the use of rich clients which have the capability of interacting with multiple APIs of BIM and IoT nodes. The architecture described consists of six layers. The IoT nodes publish information to a message broker, and a general-purpose message consumer API serves this information on demand to the rich client. On the other dimension, BIMs (model entities) reside in model server databases; the database API is used to bridge the gap between the run time objects and the data layer. A callback responder service is a RESTful service that supports asynchronous communication with the BIM through the run time objects and the database API. As there can be many BIMs in the architecture, run-time efficiency (time performance) is supported by the use of callback functions in the service layer. On the other hand, the use of a message broker and lightweight messaging protocols such as MQTT also contributes to the time performance of the system. The service consumer API is a general-purpose API that the rich client utilizes to interact with the callback responder service.
Fig. 7.8 Rich client for BIM and IoT nodes pattern. Grant of permission by Intel Corporation
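A minimal sketch of such a rich client is given below, assuming Python with the requests library; both API URLs and the mash-up rule (matching on a room identifier) are invented for illustration.

# Minimal sketch of a rich client that mashes up IoT sensor data with BIM data.
import requests

SENSOR_API = "http://consumer.example.com/latest"           # assumed message consumer API
BIM_API = "http://bim.example.com/callbackresponder/rooms"  # assumed callback responder URI


def mash_up():
    sensor_data = requests.get(SENSOR_API, timeout=5).json()   # e.g. {"room-101": {...}}
    rooms = requests.get(BIM_API, timeout=5).json()             # e.g. [{"id": "room-101", ...}]
    merged = []
    for room in rooms:
        # Attach the latest live reading (if any) to the corresponding BIM room entity.
        merged.append({**room, "live": sensor_data.get(room["id"])})
    return merged


if __name__ == "__main__":
    for entry in mash_up():
        print(entry)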
Fig. 7.9 Real-time BIM callback pattern. Grant of permission by Intel Corporation
information coming from the IoT nodes, and thus integrating BIMs and IoT nodes at a lower layer and providing a BIM-based single entry point for presenting indoor information. The real-time BIM callback pattern presented in this section provides a mechanism for such an integration and consists of six layers. The IoT nodes publish information, as a result of state changes, to a message broker. In this case, a message consumer API is the subscriber of the message broker and is also responsible for interacting with the database API to update the BIM with information coming from the IoT nodes. The database API is also responsible for publishing real-time information from the BIM (on demand) when requested by the client application. The run time objects interact with the BIM to acquire information and present it through the callback service. The callback responder is a RESTful service designed to respond to queries in an asynchronous manner. The asynchronous nature of the RESTful service contributes to the time performance of the architecture. The service consumer API interacts with the callback responder service as a result of a request coming from the client. The client can be a web portal component, an application that presents information about building interiors, or an application focused on real-time city-level analysis (such as an application for calculating the real-time energy consumption of a city).
The Problem The communication protocols that enable device-to-device (M2M) communication form the backbone of IoT, and these protocols mainly utilize message-oriented middleware, such as message brokers, to facilitate the exchange of small amounts of data in a very timely manner. BIM applications, however, utilize service-oriented data sharing, and this difference in the middleware layer brings the need to integrate message-oriented and service-oriented architectures (SOAs) in order to unite information coming from both BIM and IoT nodes (Fig. 7.10).
The Solution In order to eliminate differences in communication approaches, protocols and middleware, the data sharing approaches can be integrated with wrapper Web services or at the client level (these approaches were presented in the previous two patterns). There is, however, another option, in which BIM entities mimic the IoT nodes and present their states, or changes in their states, using the lightweight protocols of IoT. The final pattern of the book presents an architecture where a series of virtual (soft) sensors are populated and used to represent information acquired from the BIM and to publish this information to the message brokers, so that the BIM elements become virtual 'Things' themselves. The pattern focuses on information acquisition and fusion. The architecture presented here is composed of five layers. The IoT nodes publish their states, or changes in their states, to a message broker. On the other hand, virtual sensors that act as the observers of every building element in the BIM periodically query the model entities, and once they notice a state change they publish a notification to the message broker. A message consumer API is the subscriber of the message broker, and once the changes are notified it informs the client about the changes in the BIM and IoT nodes.
Fig. 7.10 Real-time BIM callback pattern. Grant of permission by Intel Corporation
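A minimal sketch of one such virtual (soft) sensor is given below, assuming Python with paho-mqtt (1.x API) and the requests library; the model server endpoint, topic name and polling interval are invented for illustration.

# Minimal sketch of a virtual sensor: poll a BIM entity and publish a notification on state change.
import json
import time

import paho.mqtt.client as mqtt   # assumes the paho-mqtt 1.x Client API
import requests

ELEMENT_URI = "http://modelserver.example.com/api/doors/D-102"   # assumed BIM entity endpoint
TOPIC = "building-7/doors/D-102"                                   # assumed topic
BROKER = "broker.example.com"


def watch(poll_seconds=10):
    publisher = mqtt.Client()
    publisher.connect(BROKER, 1883)
    last_state = None
    while True:
        state = requests.get(ELEMENT_URI, timeout=5).json()
        if state != last_state:
            # State change detected in the BIM entity: notify subscribers via the broker.
            publisher.publish(TOPIC, json.dumps(state))
            last_state = state
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch()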
References
org/publications-renewable-energy/4064-distributed-detection-of-events-for-evaluation-of-
energy-efficiency-in-buildings. Accessed 17 Mar 2011
Guinard, D., Trifa, V., Mattern, F., Wilde, E.: From the internet of things to the Web of things:
resource oriented architecture and best practices. In: Uckelmann, D., Harrison, M.,
Michahelles, F. (eds.) Architecting the Internet of Things, pp. 97–129. Springer, New York
(2011). ISBN 978-3-642-19156-5
Kwon, J.W., Park, Y-M., Koo, S.-J., Kim, H.: Design of air pollution monitoring system using
ZigBee networks for ubiquitous-city. In: Proceedings of International Conference on
Convergence Information Technology (ICCIT 2007), http://www.computer.org/portal/web/
csdl/doi/10.1109/ICCIT.2007.361. Accessed 20 Jan 2010
Tse, W.L., Chan, W.L.: A distributed sensor network for measurement of human thermal comfort
feelings. Sens. Actuators: A 144(2), 394–402 (2008)
Usländer, T., Jacques, P., Simonis, I., Watson, K.: Designing environmental software applications
based upon an open sensor service architecture. Environ. Model Softw. 25(9), 977–987 (2010)
Chapter 8
Summary and Future Outlook
The book provides architectural approaches for (i) utilizing building information
models (BIMs) over the Internet and for (ii) enabling information fusion between
BIMs and Internet of things (IoT) elements. A BIM can be defined as a digital
representation of a building that contains semantic information about the building
elements. The BIM keyword also defines an information management process
based on the collaborative use of semantically rich 3D digital building models in all
stages of a project's and building's lifecycle. A BIM is defined by its object model schema; the Industry Foundation Classes (IFC) is currently the most popular BIM standard (and schema). Chapter 1 starts by providing definitions of BIM,
provides the general characteristics of IFC models, elaborates on sharing/exchange
of BIMs and on model views, and concludes by discussing the role of BIMs in
enterprises. The first evolution of BIM was from being a shared warehouse of
information to an information management strategy. Today, the concept of BIM is
evolving from being an information management strategy to a construction man-
agement method. This change in interpretation of BIM is fast and noticeable.
Transformation from BIM to BIM 2.0 focuses on enabling (i) an integrated environment of (ii) distributed information which is always (iii) up to date and open for (iv) the derivation of new information. Chapter 2 starts by presenting recent trends in building information modelling and later elaborates on technologies that will enable BIM 2.0. BIM-based management of the overall construction process is becoming a major requirement of the construction industry; thus the final part of this chapter provides matrices that can be used as a tool for facilitating BIM-based
project and process management. In domains where detailed semantic information
coupled with detailed geometric representations is of key importance (such as city
modelling, construction, aircraft industry, ship production and so on), information
models that represent these domains (such as BIM) have a complex model structure.
Chapter 3 provides generalized service-oriented design patterns to facilitate the
management of information models of complex structure. The chapter starts by
summarizing design principles of service orientation, and later provides
service-oriented architecture (SOA) patterns for managing complex information
models such as (but not limited to) BIM. The present can be regarded as the start of
the IoT era. IoT covers the utilization of sensors and near-field communication
hardware such as RFID or NFC, together with embedded computing devices. The
devices can range from cell phones to RFID readers, GPS devices to tablets,
embedded control systems in cars to weather stations. In an IoT environment, a
door would have the ability to connect with the fire alarm, or a chair would
communicate with home lights, or a car would communicate with the parking
space. This book focuses on single-board computers (SBCs) as IoT hardware
components for acquiring and presenting building and indoor information. Thus,
Chap. 4 elaborates on different SBCs which can be used for this purpose. IoT
architectures do not only consist of hardware. The hardware would need to have
operating systems to work, and to implement communication protocols to com-
municate with different hardware and humans. Furthermore, the middleware
components facilitate communication and exchange of information between these
devices. Integration portals play an important role in combining and integrating
information acquired from multiple devices and presenting this information to the
users. In this regard, Chap. 5 provides detailed information on the software com-
ponents of IoT platforms. Web services are the endpoints of the Web which enable
interaction with web objects. Two styles of Web services exist today: Simple Object
Access Protocol (SOAP) and REpresentational State Transfer (REST). The REST is
often preferred over the more heavyweight SOAP because REST does not leverage
much bandwidth. REST’s decoupled architecture made REST a popular building
style for cloud-based APIs, such as those provided by Amazon, Microsoft and
Google. Chapter 6 starts with providing technical information about RESTful Web
services. Following this, the chapter presents RESTful design patterns for facili-
tating BIM-based software and Web service architectures. The IoT approach pro-
poses a global wireless sensor/actuator network composed of everyday devices such
as home appliances, city furniture, mobile phones or vehicles. Everyday devices
would either be publishers of information, or subscribers of information coming
from other people and devices. The information provided by each device would
acquire information from the sensors located on each floor regarding the spreading
of the fire; in response, it can then invoke the Web services to interact with IoT nodes
which will then invoke the actuators to close the doors on certain floors to prevent
the spread of the fire to other floors. Furthermore, M2M autonomous interaction is
also possible and a node can collect information regarding the emergency situation,
and interact with another node to perform an action. This concept when implemented
might be regarded as a shift from an automated building to an intelligent building.
Human–building interaction might provide other opportunities in other emergency situations, such as floods; for example, sensors in the building can interact with the actuators to close doors to prevent some parts of the building from being flooded by water. However, if there are people in these parts of the building, they could be trapped, unable to get out. In this situation, the people in the rooms can interact with the nodes (which control the sensors and actuators) to let themselves out of that part of the building. In
summary, the ability to consume information from sensors, and the ability to control
the actuators provides unique opportunities by enabling human–building interaction
in emergency response operations.