
AI Methods in Concurrent Engineering

Raimar J. Scherer

Dresden University of Technology, Faculty of Civil Engineering


Mommsenstraße 13, 01069 Dresden, Germany
[email protected]

Abstract. An AI-based hierarchical framework for concurrent engineering is
suggested. It is based on the research carried out on product modelling,
interoperability by mapping and matching, activity control, workflow management
and information logistics in virtual enterprises, and on the research done in AI
application to conceptual structural design and product information systems for the
electronic market. This research has resulted in several prototypes in the past four
years. By extrapolating the results achieved in the single prototype areas, this
framework is expected to reduce the lead time of briefing, design and construction
to the range of 50 %. The paper is focused on the cognitive architecture in which the
AI methods are embedded and on the information logistics part with the newly
suggested electronic board. The framework has to support object-oriented as well as
object-centred, document-centred, process-centred and production-centred working
environments with different working cultures. This needs multiple presentations of
data, data transformation, data condensation and dedicated AI methods.

Introduction

Real cognitive engineering tasks, whether for design or for project management,
are too complex to be formalised by one single method or modelling paradigm.
Approaches that are able to combine multiple AI methods are considered more
suitable. Thus, an intelligent tool should be considered as the sum of its cognitive
architecture and the algorithms implemented in this architecture.
Intelligent tool = cognitive architecture + basic AI algorithms
This reveals that a dedicated problem has to be analysed according to two
criteria: the general cognitive aspects determining the cognitive architecture as the
upper layer, and the specific cognitive steps mapped on selected AI algorithms as
the basic layer. The intelligent tool becomes a system, if a user is incorporated as a
part of the system. Adaptability of the tool to the user's preferences, intention and
behaviour is the most important aspect here. This needs AI methods.
Intelligent system = human + intelligent tool

Tasks are parts of activities and activities are parts of processes. Therefore
intelligent tools and the dedicated users have to be co-ordinated by integration and
communication, which again need AI methods. Because of the distributed and
concurrent nature of most building construction processes, this is first of all a
logistic problem, i.e. the problem of information logistics.
Intelligent process = information logistics + intelligent systems
The information logistics architecture consists of several intelligent agents
based on AI methods, which have to generalise and condense the individual
incoming information in order to partition, archive, retrieve and distribute the right
information to the right person at the right time. It is not only a simple through-put
task with one or several new addressees, but it demands cognitive management
work, which needs a responsible human - the project manager. The information
logistics system should support the manager by taking over routine management
and logistic tasks and prepare decision making.
Concurrent engineering system = project manager + intelligent processes
In a virtual enterprise we can identify four main processes: the management
process, the business process, the technical process and the commerce process. The
horizontal co-ordination and integration of these four processes as well as the
vertical co-ordination and integration of the various activities down to each single
resulting individual task together with the appropriate integration of the various
individuals involved is not only an information technology task but demands also
the extensive use of AI methods with very specific configuration requirements.
Therefore, an architectural approach for intelligent hybrid AI tools is first
introduced and later on in this paper a scenario of a concurrent engineering system
is discussed, which is organized according to the above given four processes and
based on the hierarchical components of a concurrent engineering system.

Intelligent Hybrid Tools


Due to the complexity of real-world problems intelligent tools should be
considered as the combination of different AI methods in a cognitive architecture.
For the purpose of adapting AI technology to AEC we propose a process that starts
with the adaptation of basic AI algorithms to the engineering domain, combines
these in generic cognitive architectures, and transfers these architectures into real-
world prototypes. The necessary domain-dependent transformations of the basic
AI algorithms and architectures can, to a wide degree, be generalised for
knowledge based systems. The reasons for this are the specific, but common
characteristics of building and construction problems:
• Building and construction usually deal with one-of-a-kind products but in
almost all cases each product is a member of a product family. Thus the need
for fast evaluation of alternatives and re-design is very important.
• In many parts of building and construction design numerical computation is
essential, and a large set of mathematically based methods already exists.

• In all areas of building and construction the design process relies, to a high
degree, on rules of thumb.
• Project specifications are, in general, not fully given in detail. They often
remain on a general level and are vague and conflicting in the details.
As a result of the adaptation, a set of basic domain specific algorithms can be
obtained and can serve as conceptual building blocks for a variety of intelligent
tools.

Cognitive Architectures
Cognitive architectures have been the subject of active research in cognitive
science and Artificial Intelligence. A broad set of architectures has been invented,
either based on requirements for general intelligence and problem-solving
behaviour [1,2,3,4] or with respect to autonomous robot and vehicle behaviour
[5,6]. Hybrid architectures can combine the developed basic algorithms as building
blocks, depending on the associated cognitive abilities necessary for their
corresponding problem class (fig. 1). If the intelligent tool is configured as a
toolbox [7] the combination of the AI building blocks needs specific civil
engineering knowledge from the practical domain of the application, provided
either by the provider of the intelligent tool or by the user.


Figure 1: Combination of AI building blocks by cognitive architectures

Depending on abstract and generic classifications of the problems in terms of
strategic, tactic and reactive reasoning [16], cognitive architectures for knowledge
based systems in building and construction can be distinctly characterised as
follows [8]:
Reflex behaviour. A system that acts on the basis of connected input/output
patterns. This behaviour is sufficient for design tasks that do not require
anticipation or consideration of past experience.

Utility-based behaviour. Apart from pure inference capability, it might also be
desirable for a system to judge the value of information, the actions and the
sensitivity of decisions to small changes in the assessments made. This is an
architectural characteristic that is suited for design systems frequently involving
high-level interaction with the user.
Planning behaviour. In problem solving, a system makes guesses by choosing
actions and evaluating resulting states. Since this leads to huge branching factors
and thus is inapplicable for some domains, often a direct linkage between actions
and states and the discarding of independent parts of the world is necessary. This is
achieved with planning methods.
Decision-theoretic behaviour. A system that makes decisions by choosing, from a
set of considered actions, the action that has the best expected outcome. In that
sense, this behaviour is an integration of probability theory and utility theory (see
the sketch after this list of behaviours).
Learning capability. A system that is able to improve itself in order to increase
its problem solving performance. This is desirable behaviour in domains that are
difficult to formalise in a priori knowledge, because either the knowledge is too
data-state dependent or the knowledge to be provided is too user-dependent, i.e.
the tool behaviour has to adapt to the user's preference.
Communication. A system that can communicate about its state and
knowledge in a formalised way to other agents or to the user.
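To make the decision-theoretic behaviour concrete, the following minimal Python sketch chooses the action with the highest expected utility. The action names, probabilities and utility values are invented purely for illustration and do not stem from any of the cited prototypes.

    # Decision-theoretic behaviour: pick the action with the best expected outcome.
    # Each action maps to a list of (probability, utility) pairs for its outcomes.
    actions = {
        "precast_floor": [(0.7, 80.0), (0.3, 20.0)],   # hypothetical values
        "in_situ_floor": [(0.9, 60.0), (0.1, 40.0)],
    }

    def expected_utility(outcomes):
        # Expected utility is the probability-weighted sum of the outcome utilities.
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda name: expected_utility(actions[name]))
    print(best, expected_utility(actions[best]))   # -> precast_floor 62.0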
The definition of the methodology for assigning and implementing intelligent
tools with dedicated cognitive architectures involves three main steps:
1. Define problem categories.
Design and construction problems are to be generalised in common categories that are
comparable in respect to the necessary cognitive skills and behaviour. This requires, as a
very first step, to define a suitable granularity on which problems and sub-problems are
to be characterised.
2. Develop suitable cognitive architectures for the identified categories.
It is also necessary to categorise cognitive behaviour and architectures, which can be
matched with problem categories. The starting point here is behaviour facets.
3. Define a formalised approach for adapting a selected architecture to a given
problem.
Cognitive architectures and behaviour facets, in general, will not be reusable in an
identical way in different problems, although these problems are classified in the same
category. In order to obtain best performance, domain-specific adaptation of the generic
building blocks for a dedicated application will be necessary.
Areas of application should be key areas with a great amount of routine
cognitive labour, where pure mathematical tools fail because of imprecise
information, e.g. imprecise values of the necessary parameters, or where information
is only available as rules of thumb, such as in preliminary design, uncertain material
parameters in geotechnical engineering or uncertain cost values.

Intelligent Systems for Concurrent Engineering Support


Concurrent engineering technology has been the subject of intensive research in
recent years [10,11,12]. The main objectives of concurrent engineering
methodology are to support design for value and considerably reduce the lead time
by overlapping briefing, design, bidding and product and supplier selection (fig. 2).
Alongside the emerging information technology (IT) methods for integration and
communication, electronic commerce can add an additional important dimension
to the concurrent engineering system if we are able to integrate detailed technical
product and cost information available on the electronic market in the briefing and
design phase tools.
Figure 2: Reduced lead time through IT-based concurrent engineering (state-of-the-art
sequential phases versus future overlapped briefing, design, construction and electronic
commerce phases over time)

Vision
We can imagine the following visionary scenario. The early consulting session
between the investor and the architect-salesman will be based on virtual reality,
deriving benefit from typified building blocks attached with functional
requirements and cost values which can be assembled in a VR environment to
already provide the investor at the very beginning with an understandable
impression and reliable cost values. Cost-intensive and architecturally or
functionally critical building elements can be figured out and downloaded from
suppliers' catalogue servers for fast and precise alternative conceptual design
studies. Therefore consequences and impacts on the design and investment costs
can be discussed at the very beginning with high precision.
The virtual design team including a cost consultant will be set up by Internet
bidding in order to find the best team. The virtual design team will work in a
concurrent engineering manner using the electronic market place to obtain the
technical information about building components and services, whilst the cost
designer will control the costs and organise the bidding, negotiations and
contractual agreements with the suppliers via the Internet electronic market place.
In parallel, the investor will be kept informed by the architect-salesman about
the progress of the design and can, for example, interact via video conference for
critical parts with the virtual design team. Thus, in the future, the design process
will simultaneously be driven by the investor's requirements, the technical
knowledge provided by the design team and the products and services offered on
the electronic market without any time delay.
Approach
An information logistics system called co-ordination board is developed for the
information integration of the concurrent processes (fig. 3). There the information
is not simply collected and distributed, but it is also condensed, merged, re-
classified and transformed into active objects serving as reactive and pro-active
agents to support the project manager and trigger follow-up actions.
Figure 3: Cross-section of the processes to be co-ordinated (management, technical,
business and commerce processes) and the resulting data issues (product data, activity
data, cost and delivery data)

The client-server-agent architecture is centred around the Internet/Intranet-
based co-ordination board, which can operate on top of a middleware layer (e.g. a
CORBA implementation) and an advanced information logistics layer [19]. The
co-ordination board is structured into two layers - a wrapper layer and a
classification layer. The objective of the wrapper layer should be to enhance the
basic technical data with meta data describing the context, the status, permitted
variations, constraints and dependencies, permitted notational transformations and
interoperability information. The objective of the classification layer is to condense
and classify dynamically the wrapped design data into high-level objects with the
help of the object-centred approach based on description logic [14].
The system will be driven by the investor's requirements for the technical
process, and the availability of the required products on the market. The goal is to
get both these two sets of requirements to match. The connection of the technical
process and electronic commerce results in several electronic market links in the
following design phases:

Briefing phase: - Fast and concise catalogue look-up
(download of technical information and standard prices)
- Selection of the design team
Design phase: - Detailed catalogue look-up
(download of 'intelligent' technical components)
- Cost information on services and components
(pre-bidding and pre-negotiation)
Tendering: - Bidding and negotiation for services and components.

The Information Logistics Process

The information logistics system has to link together data, information and
intention. The project manager must be able to control the process and therefore
the status of the activities has to be transparent to all project participants in a
presentation corresponding to their dedicated views and responsibility in the
project. As a result, two major objectives can be identified:
Firstly, a communication architecture is needed that carries out the compression
and presentation of information for all participants across the technical design,
business and commerce domains. This defines the data-oriented view.
Secondly, the data manipulation and communication must be coupled with the
management process in order to monitor the execution of the planned project
workflow and adapt it to the current alternatives. For this purpose, software agents
shall automatically trigger small routine activities, whereas responsibility-
dependent tasks are left to human actors. This defines the activity process-oriented
view.
Both objectives demand interoperability of different degrees and a high level of
semantics in order to identify the necessary activities. We claim that besides the
sharing of data and objects also a common understanding and interpretation must
be supplied in order to allow co-operative work. From the viewpoint of ontologies
[15], a theory for common object representations for the co-ordination board has to
be derived. We call this co-ordinational interoperability because it should
provide the co-ordinated access and interpretation of semantic objects. The key
point here is that integration can not be reduced to common data structures and
transportation of data across networks. High level interoperability is the sharing of
interpretable and useful information between multiple partners. This implies that,
in general, information must be condensed when moving from the specific context
of an actor to differing contexts of other actors.
Below co-ordinational interoperability, there are the platform interoperability
and the notational interoperability. Platform interoperability aims at integrating
heterogeneous computer hard- and software and notational interoperability is the
effort to standardise the syntax notation and complete it by mapping procedures as
suggested in STEP [17].

Figure 4: Architecture of the co-ordination board (information logistics between users:
worktasks and versions enter the wrapper layer, the classifier layer condenses them, and
agent reactions feed back into the workflow activity control)

Co-ordinational and notational interoperability


In order to realise co-ordinational and notational interoperability, we have
introduced the concept of an electronic co-ordination board. This approach will be
discussed from a data-oriented and a process-oriented view. The data-oriented view is
shown in fig. 4 in the data transportation and compression process from the bottom to
the top, and the process-oriented view is given by the direction of workflow control
from left to right.
Data view. In respect to the data view, the co-ordination board shall provide
the interoperability mechanisms to compress data objects resulting from arbitrary
software tools to common-understandable objects on a high semantic level. This
compression is a data transformation process that requires data transportation and
transformation on dedicated levels of interoperability. We define this
transformation as a two-step process modelled by the wrapper and the classification layer.
The wrapper layer can be categorised as an advanced means to achieve
notational interoperability. The basic functionality is to wrap the native result data
of the software tools in a commonly parseable description - the meta information.
Since the objective is information compression for decision support it is not
necessary to map all elements of the result data to a common neutral description;
instead they are selectively extracted. This is done with the help of a priori
definitions of formats, tools, keywords and data structures configured in the wrapper
layer and controlled by the context of the actual worktask. The wrapped result data
and associated meta information serve as input to the classification layer.
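As an illustration of this selective extraction, the following Python sketch wraps the native result data of a tool with meta information according to an a priori keyword configuration. The configuration entries, field names and worktask context are invented for illustration; they do not reproduce the ToCEE wrapper implementation.

    # A priori wrapper configuration: per tool, which result keywords to extract
    # (hypothetical entries, for illustration only).
    WRAPPER_CONFIG = {
        "structural_analysis": {"keywords": ["max_deflection", "utilisation"]},
    }

    def wrap(tool, native_lines, worktask_context):
        # Selectively extract only the configured keyword lines ...
        extracted = {}
        for line in native_lines:
            key, _, value = line.partition("=")
            if key.strip() in WRAPPER_CONFIG[tool]["keywords"]:
                extracted[key.strip()] = value.strip()
        # ... and enhance them with meta data describing context and status.
        return {
            "meta": {"tool": tool, "worktask": worktask_context["id"],
                     "actor_role": worktask_context["role"], "status": "delivered"},
            "data": extracted,
        }

    result = wrap("structural_analysis",
                  ["max_deflection = 12.4", "iterations = 38", "utilisation = 0.81"],
                  {"id": "WT-017", "role": "structural engineer"})
    print(result["data"])   # only the configured keywords survive the wrapping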
The classification layer can be interpreted as the level on which co-ordinational
interoperability is provided. There, native data are reduced to their basic, commonly
understandable characteristics. This means classification from the object-oriented
point of view. In contrast to a traditional class-centred approach, the process of
categorising information on the classification layer after the instantiation of the
corresponding objects requires an object-centred approach [14], which allows
dynamic classification and re-classification based on the actual properties of an
object instance applying methods of description logic. The resulting classified
information objects should be accessible by various client software. The
information they contain will be on the one side the native result data and on the
other side background knowledge that is inherited from the activity context
through classification. These information objects serve as a high-level
representation of project and domain knowledge to support decision making.
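A minimal sketch of such object-centred dynamic classification: instead of fixing the class at instantiation time, concepts are defined by conditions over the actual properties, and an object is (re-)classified whenever its properties change. The concept names and conditions below are invented stand-ins for the description-logic subsumption used in the cited work.

    # Object-centred classification: concepts are predicates over properties,
    # evaluated against the current state of an instance (hypothetical concepts).
    CONCEPTS = {
        "CostCriticalElement": lambda p: p.get("cost", 0) > 100_000,
        "LoadBearingWall":     lambda p: p.get("type") == "wall" and p.get("load_bearing"),
    }

    def classify(properties):
        # Return every concept whose defining condition the instance satisfies.
        return {name for name, cond in CONCEPTS.items() if cond(properties)}

    obj = {"type": "wall", "load_bearing": True, "cost": 40_000}
    print(classify(obj))            # {'LoadBearingWall'}
    obj["cost"] = 150_000           # a design change arrives on the board ...
    print(classify(obj))            # ... and triggers dynamic re-classification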
Activity process view. Some of the classified information objects act as agents
themselves. Their reactive behaviours modify the planned workflow in response to
actual data and process information on the board (fig. 5). Thus the board will also
serve for activity control, i.e. project management. The pre-planned work-flow
serves as input configuration data on both levels of the board.
The context of a worktask allows characterisations of task, role, actors and
software tools. These characterisations serve as a valuable information source for
the wrapper in order to derive meta information. Thus the configuration of the
wrapper layer needs input data from activity control on the level of worktasks.

Figure 5: Interaction cycle of activities (A) controlled by the project manager and
co-ordination board (B) for concurrent engineering.

On the classification layer, the received data shall be compressed to objects on a
high semantic level. In order to derive a meaningful classification in a given project
context, the classification rules must be adjusted to the context of the actual project
activities, inherent goals and intentions. In the terminology of workflow modelling,
the level that considers goals and intentions is the level of activities with
corresponding process templates configured in the business process.
The objects that result from the classification are, in the data view, condensed
information objects representing the status of product data for the project
participants. With the methods which the objects inherit when they are classified,
and in consequence of which they evolve to active objects, reactions to object
configurations on the board are implemented. These influence the activity model and
support decision-making of the project manager.
Platform Interoperability
For the interoperability of distributed system components, we have developed
in the ESPRIT project ToCEE a mechanism called Uniform Project Resource
Locator (UPRL) [18]. The main extensions of UPRL are summarized in table 1.
They allow project-wide object addressing and extend the standard WWW
addressing technique towards co-ordinated concurrent access to shared object-
oriented models and towards client applications that manipulate server-side
documents and objects, based on a formal interface definition [13, 19].

                          URL                     UPRL
Server                    WWW server              UPRL object server
Client                    browser, robot          browser, robot, helper application
Response content          HTML, multi media       HTML, multi media, objects
Access methods            browsing,               browsing, full text search,
                          full text search        object queries
Spec. of request          -                       EXPRESS-C language
semantics
Semantic reflection       -                       Concept registry
Semantic multi-server     -                       Concept registry + Server
integration                                       Interoperability Protocol (SIOP)

Table 1: Comparison of URL and UPRL

Interoperability of helper applications and servers is achieved by mapping
object addresses to URLs. The basic URL mechanism is extended for addressing
project-relevant objects and includes modelling of (inheritable) object behaviour,
access authentication (role-dependent visibility of objects and object behaviour),
parallel execution of object behaviour, and transparent access to all meta data.
All objects of the environment can be addressed by UPRLs of a dedicated
Information Logistics server (fig. 6) although they may be physically located and
managed by other servers. Systems can retrieve and manipulate data of a server by
sending requests to the server across the network and specifying an object address
and name at run-time. Models can be physically distributed across several servers,
based on a Server Interoperability Protocol specification (SIOP). All requests to
UPRL objects can be co-ordinated by a common request broker corresponding to
the concepts of the CORBA technology.
For UPRL servers, additional client applications, such as CAD systems, can be
included on the client side, which can actively manipulate server-side
documents and objects. These client applications can be extended with a respective
middleware interface. It can be implemented either on the basis of a Java class
library developed in the ToCEE project, or, for other target implementation
languages, by using HTTP libraries.
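The paper does not spell out the UPRL syntax, so the following Python sketch assumes a plausible URL-like form (scheme, server, model path, object id in the fragment, operation in the query) purely to illustrate how project objects could be addressed through such an extended locator. The concrete address layout and operation names are assumptions, not the ToCEE specification.

    from urllib.parse import urlsplit, parse_qs

    def resolve_uprl(uprl):
        # Decompose an assumed UPRL of the form
        #   uprl://<server>/<project>/<model>#<object-id>?op=<operation>
        # into its addressing parts (fragment syntax chosen for illustration).
        parts = urlsplit(uprl)
        fragment, _, query = parts.fragment.partition("?")
        return {
            "server":    parts.netloc,             # physical object server
            "model":     parts.path.lstrip("/"),   # project-wide model path
            "object":    fragment,                 # globally identifiable object
            "operation": parse_qs(query).get("op", ["get"])[0],
        }

    addr = resolve_uprl("uprl://ils.tocee.example/officeA/structural#wall-017?op=Map")
    print(addr)   # {'server': 'ils.tocee.example', 'model': 'officeA/structural', ...}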

Figure 6: Partitioning models across several servers [19,27] (the layers of the ToCEE
modelling framework - meta, kernel, neutral, aspect and application layer - are logically
integrated on the meta and instance levels via the Information Logistics Request Broker
and physically distributed over document, product model and process servers)

The Business Process

The part of the business process we are interested in is the co-ordination of
teamwork for concurrent engineering and electronic commerce. The work of the
members of the virtual enterprise is organised in terms of worktasks (fig. 7), which
are globally identifiable and linked to roles, required input, expected or delivered
results (documents, product model views, product data objects) and project time
schedules.

Figure 7: Business Process Model (process templates are instantiated from a process
template repository into activities and worktasks)

Models are needed which support the users during the selection of correct
activity input versions and the notification about which activities have to be
performed when, by whom and for which reason. The data have to be presented
according to the users' preference for design-, production- or process-oriented
working.
Role and Actor Model
The collaborative work environment we propose supports the activities with
background knowledge about the roles, obligations and contractual relationships of
the different players. The object-oriented role model is used to enrich
communication with a semantic model of senders and receivers. The roles may
have multiple aspects, because they are embedded in different environments
(fig. 8). These environments determine the working culture and the kind of data
represented. The role model includes devolution of responsibility, authentication,
user preferences, tools to be used and notification services. The generic role model
has to be adaptive to the project-specific configuration of the roles. The
instantiation of a consistent role model is a non-trivial data transformation [20]
even if it is based on an object-oriented contract model of the specific project. The
system supports both multiple actors per role and multiple roles per actor. For the
latter, we have introduced a technique which we call "organizational role
abstraction" in order to address different functions of an organization not only by
actors, but also by roles, e.g. the HVAC expert of company C. Organizational role
abstraction makes it possible to hide the details of the execution of an activity from
other participants.

Figure 8: Role model (project roles and marketplace roles)

Activities and worktasks


It should be possible for a project manager to define and monitor project
communication on the basis of responsibilities. His model is therefore role- and
process-centred. In our approach a project manager describes the activities of the
roles in terms of worktasks and an activity model is built up from those atomic
worktasks (fig. 7). Such an activity driven architecture extends the different
communication activities of actors with an explicit semantic level, which keeps
track of the causality of communication, by grouping atomic worktasks to
activities and checking their relationships to predefined process templates, which
are defined by a project manager and based on overall enterprise strategies. Data
management is required to handle and archive the correct versions of documents in
a shared document management system linked as views to the Product Model.
Monitoring role and actor interactions with Software Agents
Transparent and predictable knowledge about expected activities is needed to
provide actors with the best available information when they schedule their
activities. We investigate knowledge-based approaches, where dependencies are
represented as constraints, so that we can detect the status of worktasks, e.g.
scheduled, in execution, pending or finished by constraint propagation. Agents will
trigger reactions to modify or respectively correct the planned workflow.
Examples of such agents are specific simple software demons that carry out
version control, verify if a given worktask is successfully completed, detect
worktasks which take an unexpectedly long time or identify worktasks which are
blocked due to missing input.
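A minimal sketch of such a monitoring demon in Python: worktask statuses are derived by propagating the dependency constraints over the task network. The task ids, dates and the simplified status vocabulary are invented for illustration and do not reproduce the ToCEE agents.

    from datetime import date

    # Worktask network: dependencies express the constraint "all inputs finished".
    worktasks = {
        "WT-01": {"needs": [],        "finished": True,  "due": date(1998, 3, 1)},
        "WT-02": {"needs": ["WT-01"], "finished": False, "due": date(1998, 3, 10)},
        "WT-03": {"needs": ["WT-02"], "finished": False, "due": date(1998, 3, 20)},
    }

    def status(task_id, today):
        # Propagate the input constraints to derive the worktask status.
        task = worktasks[task_id]
        if task["finished"]:
            return "finished"
        if not all(worktasks[n]["finished"] for n in task["needs"]):
            return "blocked"      # missing input, detected by the demon
        if today > task["due"]:
            return "overdue"      # takes an unexpectedly long time
        return "in execution"

    for tid in worktasks:
        print(tid, status(tid, date(1998, 3, 15)))
    # WT-01 finished, WT-02 overdue, WT-03 blocked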
Implementation of the Activity Model
For the integration of the activity control model special interfaces to the co-
ordination board and the two processes controlled by worktasks, the technical and
the electronic commerce process, have to be developed. They are closely based on
AI methods. For instance, worktasks define events on the co-ordination board and
vice versa. A filtering agent has to be developed which extracts those events on the
co-ordination board, which are candidates for worktasks which update the activity
model, monitored by a project manager. This should be based on a generic
software agent. The responsibility for including agents remains in the hands of the
project manager or the persons to whom he grants rights to include such agents.
The interface to the Technical Process includes, for instance, an interface to the
top level semantic categories of the design product, such as "building", "building
part", "wall", "column", which allows it to partition large sets of worktasks into
partial models and define advanced retrieval services, e.g. "all worktasks related to
windows", and a workflow aware transaction management, e.g. locking, rollbacks
of the product model server. The interface to Cost Control should allow the
transfer of explicit cost information as input and output of worktasks and
management of the relationship to a resource model, e.g. by updating accounts.

Figure 9: Process Wizard tool with worktask window and with an attached diagram
detailing the dependencies and actor roles.

In the ToCEE project, such a business process environment, where the
interfaces are based on a formal interface definition language like CORBA-IDL, has
already been conceptually developed [19] and a prototype implementation called
Process Wizard is on the way.
activities of the ToCEE demonstration scenario. Each worktask of a user role is
modelled as a node in the process network and the dependencies between them are
represented as arrows. Advanced modelling techniques, such as conditional
execution of worktasks and recursion, are supported by the modelling tool. In this
context a process definition methodology was developed to achieve a parametric
description of worktask patterns, based on process templates. The environment has
a layered process architecture, built up from:
- requests and responses as atomic process events, as provided by the distributed
middleware (request broker),
- data transactions, triggered by requests, as the atomic operations on product
model data, but where not all consistency rules for model data are enforced,
- orders as units of inter-personal communication, creating worktasks for the
project actors, related to aggregations of documents and transactions, e.g.
"check consistency", and
- process templates as the most general level of reusable a priori process
knowledge, modelling reusable patterns of human decisions and worktask
dependencies.
The following process templates are provided [19]:
- Atomic templates, which define orders for worktasks
- Composed templates, which permit recursions, loops and reusable template modules.
Based on these different possibilities to combine worktasks, process templates are
generated, containing parametric descriptions of sets of worktasks.
For each project, a set of process templates can be maintained in a project
specific process template repository. After the selection of a template, the process
management tool creates the actual worktasks through an intermediate
transactional layer, which can be used for concurrent access to the process model.
The main dependencies between object classes of the process model are shown in
fig. 10. During this overall process, the process management tool continuously
updates the worklists for the different users, which contain exactly those worktasks
which are relevant for one user, showing date, status, data repository ID and
dependencies on other worktasks. If a user finishes a worktask, he assigns his
results to the process management tool which analyses possible follow-up
activities for other users.
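The following Python sketch illustrates this follow-up analysis: when a worktask is reported finished, the tool recomputes which dependent worktasks have become executable and regenerates the per-user worklists. Task ids, roles and the data layout are invented for illustration.

    # Worktask records (assigned user, input worktasks, finished flag) -
    # hypothetical project data for illustration.
    tasks = {
        "design_slab":   {"user": "architect",    "needs": [],              "done": False},
        "check_statics": {"user": "engineer",     "needs": ["design_slab"], "done": False},
        "cost_update":   {"user": "cost_planner", "needs": ["design_slab"], "done": False},
    }

    def finish(task_id):
        # A user assigns his result; the tool analyses possible follow-up activities.
        tasks[task_id]["done"] = True
        return [t for t, rec in tasks.items()
                if not rec["done"] and all(tasks[n]["done"] for n in rec["needs"])]

    def worklist(user):
        # Exactly those open worktasks that are relevant for one user.
        return [t for t, rec in tasks.items() if rec["user"] == user and not rec["done"]]

    print(finish("design_slab"))   # ['check_statics', 'cost_update'] become executable
    print(worklist("engineer"))    # ['check_statics']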
However, several extensions are still necessary. For instance, the first operation
level (fig. 10) needs extensions to ensure the high security and reliability needs of
electronic commerce and business transactions. The second level has to be
extended to support both technical product semantics and business semantics in an
integrated manner. Document management can be included as an extension of
classical transactions by also supporting binary large objects (BLOBs) as basic
data types, maintaining all documents created in the system. The authorship and
authorisation of documents can be transparently managed. The third level has to be
extended towards cost aware activities, by explicitly modelling cost attributes and
evaluating them along all interaction paths. On the fourth level, a new resource
model is needed, with relationships to aggregated costs and account updating.

Figure 10: Basic interaction between process-related object classes (transactions ensure
the integrity of shared data during concurrent access; operations specify arbitrary
objects - documents, product objects, views - as results of worktasks)

The Technical Process

We envisage the technical process as driven by the requirements of the investor,
the availability of technical products on the electronic market, the know-how of the
design team and the sophistication of the tools. In general, the technical process
will be integrated in the co-ordination board via the work tasks of the business
process (see fig. 7).
Each individual technical work task will thus represent one distinct part of the
consecutive changes of the state of the evolving product data. The goal of each
task is to provide a feasible technical solution whereby the required functionality
and behaviour of the designed building are achieved in a measurable way. The
actor responsible for a certain specific task will typically work with the product
data representing his specific view. He needs tools and services that can support
the reliable access to a common product data repository, including view
transformation methods, version management and recognition of conflicts with
other designers. Since the technical solutions most often include a great variety of
externally supplied products, it is important to ensure a close interaction with the
electronic commerce activities, i.e. search on the electronic market, finding,
negotiating, purchasing and supplying the right products and product data.
Such a holistic approach should enable:
• early consideration of the investor's specific requirements, i.e. including the
selection of specific product components already at the briefing phase,
• step by step satisfaction of the requirements by means of gradual refinement of
critical control parameters, such as cost and supply time, by means of a gradual
reduction of initially broad ranges for the values of these parameters, modelled
through fuzzy attributed objects, to concretely instantiated entities,
• continuous awareness of the attractive alternative products offered on the
electronic market, in order to incorporate them into the design.

We support the technical process by a hierarchically structured product data
model of the building (fig. 6), based on STEP [17] and alternatively on the IFC
model [21]. The product data will be stored in a shared distributed database and
accessed through a high-level interface based on SDAI [17]. The anticipated
overall architecture of the technical process system and its relation to the overall
workflow is shown in fig. 11. The Product Data Server details are given in [22].

Figure 11: Architecture of the IT components for the technical process (distributed
product data repositories accessed via interoperability tools by the technical design
tools (CAD, AID), embedded in the business process workflow)

Interoperability Methods
In order to achieve the interoperability on the technical data level, methods are
needed that can enable the transformation of the data from one designer's view to
another, at the same time ensuring proper management of all locally made changes
and of the resulting product data versions. Accordingly mapping and matching
tools based on STEP have been developed in the ESPRIT project COMBI [23] and
were upgraded in the ESPRIT project ToCEE [22]. In order to achieve flexibility,
operational interoperability, semantic interoperability and functional
interoperability must be taken into account.

ENTITY TC_Model SUBTYPE OF (TC_IfcRoot);
    contents           : TC_ModelContents;
    underlying_schemas : SET [1:?] OF TC_ModelSchema;
    accessRights       : OPTIONAL SET [1:?] OF TC_AccessRight;
OPERATIONS
    Map (VAR TargetRefs : LIST OF TC_ModelSchema);
    Match (compVersion : INTEGER;
           VAR ChangedContent : TC_InfoContainer);
    CheckConsistency (OPTIONAL Targets : LIST OF TC_Model;
                      VAR SyncRequest : BOOLEAN;
                      VAR Result : BOOLEAN);
    Commit (VAR Result : BOOLEAN);
    GetProductObjects (VAR ProdObjRefs : SET OF TC_Product);
END_ENTITY;

Figure 12: Part of the TC_Model entity specification in EXPRESS-C



Operational interoperability is part of the Information Logistics process and is
sub-structured into co-ordinational, platform and notational interoperability. On
this level the operations involving semantic and functional interoperability are
already formally defined, although they have to be performed by dedicated
interoperability tools. An example for such definition is the concept TC_Model
(fig. 12), representing a whole model instantiation, as implemented in the ToCEE
project.
Semantic interoperability is defined as the ability of the conceptual model
schemata to share common concepts of the different technical data models. It
involves two complementary approaches:
- Static model harmonisation, i.e. use of the inherent features of the modelling
paradigm like generalisation/specialisation at system design time,
- Model mapping, i.e. use of specifications and methods for the transformation of
the modelling objects of one schema to another at run-time, where model
harmonisation alone is insufficient.
Model mapping is a one-directional process that fully or partially transforms the
classes and instances contained in a source model to a new target model. The result
of a mapping operation can be a modified schema (useful at system design time),
or a new context (instantiation) of the target model (needed at run-time). For the
formal specification of the inter-model equivalences an appropriate mapping
language is needed, in order to avoid hard-coding of the model transformations for
each particular case. As a result of extensive research efforts, a wide range of
languages has been developed for such high-level specifications in recent years,
e.g. EXPRESS-M, EXPRESS-V, EXPRESS-X, VML, KIF [24,25]. However,
none of these languages is especially designed to support a modelling framework,
such as the IFC project model [21], which strongly relies on the use of a common
lean kernel model. Therefore, it can be more beneficial to use instead a more
specialised language, such as CSML [26], developed in the COMBI project, which
is able to support the kernel model approach and thus seems to exhibit better run-
time performance compared to the other methods.
The mapping process we developed encompasses the following steps (fig. 13):
• parsing, including the analysis of the source and the target models and the
mapping specifications, and the generation of the needed meta data structures,
• analysis of the instances of the source model and their representation on the
common ontology level,
• expansion of the instances and relations of the source model into the target,
• reduction of the generated new target model instances to eliminate redundancy.
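To illustrate the run-time effect of such a mapping specification without committing to any of the cited languages, the following Python sketch applies a small set of declarative inter-schema equivalences to expand source instances into target instances and then reduces the result to eliminate redundancy. The entity names, attribute equivalences and the duplicate criterion are invented for illustration.

    # Declarative inter-schema equivalences (hypothetical source/target names),
    # standing in for a mapping specification written in e.g. CSML or EXPRESS-X.
    MAPPING_RULES = [
        {"source": "Wall",   "target": "StrucElement",
         "attrs": {"thickness": "width", "storey": "level"}},
        {"source": "Column", "target": "StrucElement",
         "attrs": {"diameter": "width", "storey": "level"}},
    ]

    def map_model(source_instances):
        # Expansion: transform each source instance according to the rules.
        target = []
        for inst in source_instances:
            for rule in MAPPING_RULES:
                if inst["entity"] == rule["source"]:
                    target.append({"entity": rule["target"], "id": inst["id"],
                                   **{t: inst[s] for s, t in rule["attrs"].items()}})
        # Reduction: eliminate redundant target instances (same entity and id).
        seen, reduced = set(), []
        for inst in target:
            key = (inst["entity"], inst["id"])
            if key not in seen:
                seen.add(key)
                reduced.append(inst)
        return reduced

    print(map_model([{"entity": "Wall", "id": "w1", "thickness": 0.3, "storey": 2}]))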

Functional interoperability is defined as the capacity to support, at run-time,
the data modifications in the actual populated data. These features of the model
management services are needed when one model has to be updated and checked
against the constraints defined in its underlying schema, or when changes in one
model have to be propagated and checked against the constraints of one or more
other discipline-specific aspect models. This can be done with the help of two
intelligent tools: model matching and consistency checking.
Figure 13: Principal schema of the data model transformations using the interoperability
tools for model mapping (analysis of the source classes, instances, relations and the
source/target inter-schema equivalences; class- and instance-level transformations
yielding a new instantiation of the target model), matching (version comparison, dynamic
identification of new/modified objects, propagation of the changes, re-classification and
merging into a new version of the target model data) and consistency checking
(reasoning and code compliance checking with user-defined rules after mapping or
instantiation)

Model matching involves a context-dependent analysis of the newly obtained target
model data, including comparison with older target model versions. The matching
operations are thus applied only on the target model schema. In our approach they make
use of knowledge-based rules for dynamic object identification and classification, which
work on specially introduced meta-level object representations [27].
The matching process we are developing encompasses the following steps (fig. 13):
• identification of the changed instances in the target model,
• classification of the changed instances w.r.t. the concepts of the ontology by
using the subsumption algorithm of the description logic approach,
• propagation of the changes over the whole data structure of the target model, if
necessary repeatedly applying the two steps above.
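A minimal Python sketch of these three steps: two model versions are compared, the changed instances are re-classified (here by a trivial rule standing in for description-logic subsumption), and the change set is propagated along the relations until a fixpoint is reached. All identifiers and the relation structure are invented for illustration.

    def match(old, new, relations):
        # Step 1: identify changed or added instances in the target model.
        changed = {i for i in new if old.get(i) != new[i]}
        # Step 2: re-classify the changed instances (stand-in for subsumption).
        reclassified = {i: ("modified" if i in old else "new") for i in changed}
        # Step 3: propagate the changes over the whole data structure, repeating
        # the identification for dependent instances until nothing new appears.
        frontier = set(changed)
        while frontier:
            dependents = {d for i in frontier for d in relations.get(i, [])}
            frontier = dependents - changed
            changed |= frontier
        return changed, reclassified

    old = {"w1": {"width": 0.3}, "s1": {"span": 5.0}}
    new = {"w1": {"width": 0.4}, "s1": {"span": 5.0}, "c1": {"width": 0.3}}
    print(match(old, new, {"w1": ["s1"]}))   # w1 modified, c1 new, s1 affected via w1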
The objective of consistency checking is to prove the validity of a new model
version on the basis of 'after-add' rules applied to the new target model context.
Such rules would normally not be included in the model specification itself, and
can address issues such as compliance to certain codes or regulations, user
requirements etc. Hence, this type of local consistency checking operations will be
based both on the object-oriented definition of the target model classes and on a
complementary rule-based extension of the model itself, represented in a separate
specification schema. We are currently investigating this approach in the
framework of the ToCEE project. In COMBI we have already established a
prototype called PROMINENT [9] for semantic and some functional
interoperability based on a three-layer product model: the kernel, the domain-
dependent aspect and the application-dependent application models. The snapshot
in fig. 14 visualizes three aspect models of the COMBI demonstration building.

Figure 14: Integration of structural analysis (left), foundation design (right) and structural
system design (middle) by semantic and functional interoperability

Technical Product Information System (T-PINS)


The efficient search for appropriate products on the electronic market needs a
dedicated Technical Product Information System, which should be integrated in
the design tools. Such a system should act as an intelligent advisory system
supporting the designer's cognitive work on several levels [28].
On the first level the input will not be provided by pure data, but on a higher
knowledge level, i.e. by design criteria complemented with allowable ranges for
certain control parameters. This is comparable to shopping, where we first want to
be advised by a skilled person's judgement about the product criteria. At this level
no instantiated product data are needed, but the search object is enriched by
knowledge through dynamic classification.
On the second level the input will be comprised of incomplete, i.e. only partially
attributed, product data, representing a feasible, but not yet fully detailed design
solution. Here the product selection will be done by comparing the available
suppliers' products with the specified design data with the help of a case-based
reasoning method and context knowledge extracted from a design environment
database which will be prototyped for certain typical cases.
On the third level the real product components will be considered. The search
can be carried out with a standard Internet product broker system, such as the
GLENET system [29]. This layer has to deal with different ontological problems,
e.g. the presence of synonyms (the same semantics under differently named
concepts) and homonyms (different semantics under one and the same named
concept). Such problems can be tackled with the interoperability methods
mentioned before.
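As an illustration of the second level described above, the following Python sketch ranks catalogue products against a partially attributed design object: only the attributes the designer has already fixed enter a weighted similarity measure, in the spirit of the case-based reasoning mentioned there. The attribute names, weights and catalogue entries are invented for illustration.

    # Supplier catalogue (hypothetical entries) and attribute weights.
    CATALOGUE = [
        {"name": "window_A", "u_value": 1.1, "width": 1.2, "price": 410.0},
        {"name": "window_B", "u_value": 1.6, "width": 1.2, "price": 290.0},
    ]
    WEIGHTS = {"u_value": 3.0, "width": 1.0, "price": 0.01}

    def similarity(query, product):
        # Compare only the attributes the design has specified so far;
        # attributes not yet instantiated are simply ignored.
        score = 0.0
        for attr, value in query.items():
            score += WEIGHTS[attr] * abs(product[attr] - value)
        return -score      # smaller weighted distance = higher similarity

    query = {"u_value": 1.2, "width": 1.2}    # incomplete design solution
    best = max(CATALOGUE, key=lambda p: similarity(query, p))
    print(best["name"])   # window_A is closest to the specified attributes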
Technical Design Tools
The design tools used in the technical process can be general-purpose tools,
such as CAD systems or analysis software, and tools or technical agents built on
the basis of AI methods.
The added value from the use of intelligent technical tools for the technical
design process is already proved by the prototype of a design assistant for
preliminary structural design, developed within the COMBI project [16]. The
structural design of a standard office building was reduced to 10 % of the normal
design time. This tool is able to support the designer in his synthesis tasks and not
just in analysis tasks. It is based on the methodology of intelligent systems as
described in chapters 1 and 2. Its layered cognitive architecture is:
The strategic level. Decisions of a general nature are made and the abstraction
level is high. Reasoning on this level heavily involves projection and anticipation.
In the system architecture, strategy is managed by a planning component.
The tactic level. The degree of freedom is limited by corresponding strategic
decisions. The used models are increasingly detailed. In the system architecture a
concept of so-called "tools" is used to model tactic design actions that relate to the
definition of a focused set of known design parameters.
The reactive level. On this level instances and models are more detailed.
Actions can be interpreted as reactions to other actions or to detected conflicts. The
main AI method for this is constraint propagation.
Technical agents built on AI methods can tackle some routine design tasks in
order to reduce the cognitive load of the designers. They do not have to be part of a
design tool, but one may imagine that they are downloaded from the Internet. Such
a category of routine tasks is the continuous checking of the consistency of the
actual design data. Agents could autonomously check for conflicting design
decisions and inform the project participants when possibly critical design states
are detected. At present, agent development is focused on the domain of geometry,
and later on it should be extended to code checking and functional criteria. This task
gets immediately more complicated when intervals of values corresponding to
contributions of different designers have to be considered.
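To illustrate the last point, a minimal Python sketch of an interval-based consistency check: each designer contributes an allowed interval for a shared design parameter, and the agent reports a conflict as soon as the intersection of all contributions becomes empty. The parameter name and the interval values are invented for illustration.

    def check_parameter(contributions):
        # Intersect the allowed intervals contributed by the different designers.
        lo = max(low for low, _ in contributions.values())
        hi = min(high for _, high in contributions.values())
        if lo > hi:
            # Empty intersection: the agent informs the project participants.
            return None
        return (lo, hi)

    # Allowed ranges for the slab thickness [m] from three designers (hypothetical).
    contributions = {
        "architect":  (0.20, 0.28),
        "structural": (0.24, 0.35),
        "services":   (0.22, 0.26),
    }
    print(check_parameter(contributions))   # (0.24, 0.26) - still consistent
    contributions["services"] = (0.18, 0.22)
    print(check_parameter(contributions))   # None - conflicting design decisions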

The Electronic Commerce Process

Several electronic commerce links with the technical process are needed, at the
same time distinguishing between technical look-ups and legally pre-binding and
binding commercial interactions. The links are provided in different design phases,
as described before. The necessary tools are briefly described below.

The looking up and downloading of technical information as well as standard
prices can be handled with the T-PINS system as described in the previous chapter.
Bidding, tendering, negotiation and procurement contracts can be handled by a
Commercial Product Information System - C-PINS, and a Supplier Product
INformation System - S-PINS. The envisioned overall architecture of the IT
components for support of the electronic commerce process and their relation to
the overall workflow are shown in fig. 15. In anticipation of the fact that many
advanced IT applications for cost calculation and electronic commerce exist and
more are emerging on the software market, the research focus in the context of
concurrent engineering methodology should be based strongly on the necessary
interoperability methods. The generic interoperability tools that must be developed
in order to service such costing control applications can use basically the same IT
methods as the respective tools for the technical process.
For data integration, the STEP- or IFC-based building object model has to be
extended with specific high-level "electronic commerce" objects in order to treat the
electronic commerce process on the same semantic level as the technical and the
business processes. The electronic commerce objects can be accessed through an
electronic commerce product data repository, which can also be used by the
Technical Product Information System T-PINS.

Figure 15: Overview of the components of the Electronic Commerce process (linked to
the business process workflow)

The Commercial Product Information System (C-PINS) for the support of
Internet-enabled tendering, bidding, negotiation and procurement must be closely
linked to the T-PINS system. It should consist of a tendering module based
on SGML/HTML-enhanced tender texts, a bidding module, a negotiation module
which enables confidential price discussions with suppliers and a procurement
module which is supported by an object-oriented contract model, such as the one
developed in the ToCEE project. This model can be used to formalise the contract
issues in order to allow rigorous support of contractual responsibilities by the
system and to facilitate the preparation of contracts with the suppliers. In addition,
the procurement module can provide a sample library of standard forms for typical
contract parts which will be selected automatically by an agent, triggered through
status changes of the "contract" objects. Filtering, sorting and evaluation of the
electronic commerce data can be provided by generic AI-based agents.
The Supplier Product Information System (S-PINS) provides the specific
additional functionality needed from the point of view of the suppliers. It should
include: a module facilitating the promotion of the suppliers' products on the
electronic market, a module for bidding offers, and a product data interface
(STEP). The module for promoting the supplier's products should be based on
STEP/SGML-based forms and provides interactive aids for publishing product
information (in the form of texts, tables, technical drawings, photographic images
etc.), and for presenting the supplier's profile (qualifications, portfolio) on the
WWW.

Conclusion

Information technology will strongly encourage the construction industry to
migrate from current document-centred fast-tracking strategies to a concurrent
engineering way of working. Classic information technology methods, such as data
integration, electronic communication, Internet and Intranet technologies and
workflow methods are only one substantial part of a concurrent engineering
environment. Another, yet no less important, part comprises the AI tools. However,
these tools will appear in the overall framework not as readily observable large
components, but rather in a very fine-granular manner, distributed over the whole
system. AI methods will appear in an atomic manner and, as was demonstrated,
will have to be built into cognitive architectures, demanding specific
configurations for each dedicated task. This means that each applied AI method
will need a lot of configuration and application-dependent effort to exhibit its full
power, but on the other side the measurable benefit of each single built-in AI
method might be almost negligible on the macro scale of a concurrent engineering
system. Only when the sum of the whole AI impact on the overall system is
measured can their important role in the design of complex and powerful IT
environments be understood.
The objectives in IT research and development have already shifted from the
investigation of client-server systems to client-server-agent systems, where agent
development subsumes the application of AI methods. However, this is mostly
attempted on the platform level. What is still missing is the shift from the man-
machine interface to the man-machine-agent interface, which means that an
intelligent system = human + intelligent tool and an intelligent tool = a cognitive
architecture + configured AI methods.

Acknowledgements
The support of the Commission of the European Community for the ESPRIT
project contracts No. 6609 COMBI (10/93-12/95) and No. 20587 ToCEE (01/96-
12/98) is gratefully acknowledged. My gratitude goes especially to my research
assistant Peter Katranuschkov and to my PhD students Rainer Wasserfuhr and
Markus Hauser for their contributions to this research work, the results of which
are described in this article.

References
[1] Laird J.: Preface of the special section on integrated cognitive architectures. SIGART
Bulletin, 2, 1991.
[2] Mitchell M., J. Allen, P. Chalasani, J. Cheng, O. Etzioni, M. Ringuette and J.C.
Schlimmer: Theo: A framework for self-improving systems. In K. VanLehn (ed),
Architectures for Intelligence. Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.
[3] Forbus D. and D. Gentner: Similarity-based cognitive architecture. SIGART Bulletin,
2, 1991.
[4] Carbonell J.C., C.A. Knoblock and S. Minton: Prodigy: An integrated architecture for
planning and learning. In K. VanLehn (ed), Architectures for Intelligence. Lawrence
Erlbaum Associates, Hillsdale, NJ, 1991.
[5] Brooks A.: How to build complete creatures rather than isolated cognitive simulators.
In K. VanLehn (ed), Architectures for Intelligence. Lawrence Erlbaum Associates,
Hillsdale, NJ, 1991.
[6] Maes P.: The agent network architecture (ANA). SIGART Bulletin, 2, 1991.
[7] Hauser M. and R.J. Scherer: Automatic knowledge acquisition in the reinforcement
design domain. In Choi C.-K., C.-B. Yun and H.-G. Kwak (eds), Proc. 8th Int. Conf. on
Computing in Civil and Building Engineering, pp. 1407-1412, Seoul, Korea, August 1997.
[8] Russell S. and P. Norvig: Artificial intelligence - a modern approach. Prentice Hall,
New Jersey, 1995.
[9] Scherer R.J. and P. Katranuschkov: Integrated product model centred design in a
virtual design office. To appear in: Proc. of Information Technology for Balanced
Automation Systems in Manufacturing, Prague, August 1998.
[10] Kusiak A.: Concurrent engineering: automation, tools and techniques. John Wiley &
Sons, 1993.
[11] Prasad B.: Concurrent engineering fundamentals (integrated product and process
organization). Prentice-Hall, Englewood Cliffs, NJ, 1996.
[12] Huovila P., L. Koskela and M. Lautanala: Fast or concurrent - the art of getting
construction improved. Proc. 2nd Int. Workshop on Lean Construction, Santiago, Chile,
1994.
[13] Wasserfuhr R. and R.J. Scherer: Information management in the concurrent design
process. Proc. Int. Colloquium IKM'97, Weimar, February 1997.
[14] Hakim M.: Modelling evolving information about engineering design products. PhD
thesis, Dept. of Civil Engineering, Carnegie Mellon University, 1993.
[15] Gruber T.R.: Towards principles for the design of ontologies used for knowledge
sharing. In Guarino N. and R. Poli (eds): Formal Ontology in Conceptual Analysis and
Knowledge Representation. Kluwer Academic Publ., Deventer, Netherlands, 1993.
[16] Hauser M. and R.J. Scherer: Application of intelligent CAD paradigms to preliminary
structural design. Artificial Intelligence in Engineering 11 (Special Issue: Structural
Engineering Applications of Artificial Intelligence), pp. 217-229, Oxford, 1997.
[17] ISO 10303-1 IS: Product Data Representation and Exchange - Part 1: Overview and
fundamental principles. ISO TC 184/SC4, Geneva, 1994.
[18] Scherer R.J.: Overview of requirements and vision of ToCEE. Public Annual Report
1996, EU-ESPRIT project No. 20587 ToCEE, TU Dresden, Germany, Feb. 1997.
[19] Wasserfuhr R. and R.J. Scherer: Process models for information logistics in the con-
current building life cycle. In K.S. Pawar (ed): Proc. 4th Int. Conf. on Concurrent
Enterprising, Nottingham, Oct. 1997.
[20] Scherer R.J.: Legal framework for a virtual enterprise in the building industry. In K.S.
Pawar (ed): Proc. 4th Int. Conf. on Concurrent Enterprising, Nottingham, Oct. 1997.
[21] IFC Release 1.5 Final Version: IFC object model for AEC projects. IAI Publ.,
Washington DC, Sept. 1997.
[22] Hyvärinen J., P. Katranuschkov and R.J. Scherer: ToCEE: Concepts for the product
model and interoperability management tools. Deliverable F2-1, EU-ESPRIT project
No. 20587 ToCEE, TU Dresden, July 1997.
[23] Katranuschkov P.: COMBI: Integrated Product Model. In Scherer R.J. (ed): Proc. 1st
European Conf. on Product and Process Modelling in the Building Industry, Balkema
Publ., Rotterdam, Netherlands, 1995.
[24] Verhoef M., Y. Liebich and R. Amor: A multi-paradigm mapping method survey. Proc.
CIB W78-TG10 Workshop on Modelling of Buildings through their Life-Cycle, pp.
233-247, Stanford University, CA, 1995.
[25] Genesereth M. and R. Fikes: Knowledge interchange format 3.0, Reference manual.
Tech. Report Logic-92-1, Comp. Science Department, Stanford University, CA, 1992.
[26] Katranuschkov P. and R.J. Scherer: Schema mapping and object matching: a STEP-
based approach to engineering data management in open integration environments.
Proc. CIB W78 Workshop, Bled, Slovenia, June 1996.
[27] Katranuschkov P. and R.J. Scherer: Framework for interoperability of building product
models in collaborative work environments. In Choi C.-K., C.-B. Yun and H.-G. Kwak
(eds): Proc. 7th Int. Conf. on Computing in Civil and Building Engineering, pp. 627-632,
Seoul, Korea, August 1997.
[28] Scherer R.J.: A Product Information System with an Adaptive Classification Structure.
In J. Gausemeier (ed): Proc. Int. Symposium on Global Engineering Networking,
Antwerpen, Part I, pp. 69-78, Paderborn, 1997.
[29] Rethfeld U.: GEN vision and major concepts. In Proc. 1st European Workshop on Global
Engineering Networking, Paderborn, February 1996.
