Knowledge Engineering and Management
The CommonKADS Methodology
A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England
© 1999 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.
This book was set in ... by ... and was printed and bound in the United States of America.
Schreiber, August T.
Knowledge engineering and management: the CommonKADS methodology /
A. Th. Schreiber ... [et al.].
p. cm.
Includes bibliographical references and index.
ISBN 0-262-19300-0 (hc. : alk. paper)
1. Expert systems (Computer science) 2. Database management.
I. Schreiber, A. Th.
QA76.76.E95K5773 1999
006.3’32--dc21 99-1680
CIP
Contents
Preface vii
2 Knowledge-Engineering Basics 13
2.1 Historical Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 The Methodological Pyramid . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Model Suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 Process Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 Some Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.7 Bibliographical Notes and Further Reading . . . . . . . . . . . . . . . . 24
4 Knowledge Management 69
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Explicit and Tacit Knowledge . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 The Knowledge Management Cycle . . . . . . . . . . . . . . . . . . . . 71
4.4 Knowledge Management Has a Value and Process Focus . . . . . . . . . 72
Bibliography 423
Readership
This book is intended for practitioners and students in information systems engineering as
well as in knowledge and information management. We assume that you are willing to
consider new ways of managing the increasing complexity of information in applications
and organizations. In reading this book, it will be helpful if you have some background in
information systems, have some understanding of information analysis or business process
modeling, or have experience in the area of information management. The material of this
book has proved useful for courses, tutorials, and workshops for industrial practitioners, as well as for advanced undergraduate and first-year graduate students in various information systems-related disciplines.
1. For information analysts and knowledge engineers, we show how knowledge anal-
ysis constitutes a valuable and challenging extension of established development
approaches, in particular of object-oriented approaches such as UML.
2. For knowledge managers, we show how a seamless transition and integration can
be achieved from business analysis to IT systems modeling and design — a feature
absent in almost all business process approaches as well as systems engineering
methodologies.
3. For software engineers, we show how conceptual modeling of information and
knowledge naturally provides the necessary baseline structures for reusable software
architecture, systems design and implementation.
4. For IT project managers, we show how one can solve the eternal dilemma of balancing management control versus flexibility, in a structured way that is directly based on a quality systems-development methodology.
Throughout the book, these points are illustrated by extensive case studies, taken from real-life application projects carried out in the various industries we have worked with over the years.
As a guide to readers with different specific interests, the first chapter contains a de-
tailed road map to help you select those parts of the book that are most interesting and
relevant to you.
Additional Material
This book contains the consolidated baseline of the CommonKADS methodology. The
material in this book is sufficient for readers to start useful work on knowledge-intensive
applications. There is a wealth of additional material available, which could not be in-
cluded in this book. For those who want to learn more about CommonKADS, this material
is accessible through the following address:
http://www.commonkads.uva.nl
including:
- Exercises related to the material discussed in this book.
- Case studies of applications.
- Access to sample running systems.
- Texts about additional modeling techniques, such as a dedicated formal specification language for knowledge systems.
- Catalogs of knowledge model elements developed in previous projects.
- Pointers to support tools for CommonKADS, such as diagramming tools, elicitation support tools, CASE tools, parsers for the languages used, etc.
Background
CommonKADS is the product of a series of international research and application projects
on knowledge engineering that date back as far as 1983. Historically, knowledge systems
developed mainly through trial and error. The methodological aspects received little atten-
tion, despite a clear need expressed by industry practitioners for guidelines and techniques
in order to structure and control the development process. Accordingly, system developers
and managers greatly appreciated the steps made by CommonKADS to fill this gap.
Over the years, the methodology has been gradually extended as a result of the feed-
back from practitioners and scientists. Practical use of CommonKADS showed that many
system projects fail because of a technology-push approach. An organization can only successfully implement information and knowledge technology if both the system’s role and
its potential impact on the organization are made explicit, and are agreed upon before and
during system development. Thus, the introduction of knowledge-oriented methods and
techniques for organizational analysis represents a major advance. Organizational analysis aims at creating an application-pull situation. Such an approach provides assurance to users, clients, and stakeholders that a new system will actually solve a real problem or take advantage of a real opportunity within the organization. Other useful additions to the
methodology deal with the modeling of complex user-system interaction, with the intro-
duction of new specification techniques, and with the definition of a flexible, risk-driven, and configurable life-cycle management approach that replaces the classical but overly rigid waterfall model for information system projects.
Experiences
Early on, companies began using the knowledge technology products provided by Com-
monKADS. This contributed greatly to the products’ success. As early as 1986, the Dutch
company Bolesian Systems, now part of the large European software firm Cap Gemini,
exploited the first version of CommonKADS and refined it into their in-house method for
knowledge system development. They have built a very large number of commercial sys-
tems, mostly in the financial sector. More recently, the Everest company is making use of
CommonKADS in a similar manner. Many banks and insurance companies in The Nether-
lands have systems developed with CommonKADS in daily use for assessing loan and
mortgage applications. In Japan, several big companies including IBM are using Com-
monKADS in their in-house development, for example to increase software architecture
reusability. A well-known application in the UK is the credit-card fraud detection program
developed by Touche Ross Management Consultants for Barclay Card. All the “Big Six”
world-wide accounting and consultancy firms have integrated smaller or larger parts of
CommonKADS into their proprietary in-house development methods.
CommonKADS also frequently serves as a baseline for system development and re-
search projects, such as European IT programme and national government projects. Fur-
thermore, the CommonKADS methods are nowadays in use for purposes other than system
development, such as knowledge management, requirements capture and business process
analysis. The US-based Carnegie Group, for example, has applied CommonKADS in this
way in a project for US West. Likewise, the Unilever company uses CommonKADS as its
standard both for knowledge-intensive systems development and for knowledge manage-
ment.
The Authors
Because it is difficult to write a textbook with many different authors, we decided early on
that it would be best if only two authors would actually write the text, and that the others
would contribute to the drafts. Of course, the material contains ideas from, and is based on the work of, all authors. Accordingly, Guus Schreiber has been responsible for the general editing
process and for Chs. 5-7 and 10-14, whereas Hans Akkermans wrote most of Chs. 1-4, 9
and 15. Nigel Shadbolt contributed Ch. 8, Robert de Hoog wrote part of Ch. 4, and Anjo
Anjewierden checked the CommonKADS Conceptual Modelling Language examples and
contributed Appendix B.
Acknowledgments
Many persons, companies and institutions have contributed to the CommonKADS knowl-
edge methodology in its various stages of development.
The TRACKS workshop participants, who acted as an international industrial review
board for the book, are gratefully acknowledged for their extensive reviews of the mate-
rial. Their suggestions have led to many improvements in the book. We are particularly
grateful to Unilever Research Laboratories in Vlaardingen (The Netherlands), also for their
financial support. This book has profited from the input and extensive feedback from many
colleagues at Unilever. The Aion implementation described in Chapter 12 was developed by Leo Hermans and Rob Proper of Everest. The diagnosis model in Chapter 6 is based on input from Richard Benjamins. The scheduling model in Chapter 6 was kindly provided
by Masahiro Hori of IBM-Japan. Brice LePape of the ESPRIT office of the European
Commission has supported the development of CommonKADS from the beginning. In
the final stages we received funds from the Commission through the TRACKS project
(ESPRIT P25087) in order to organize a number of workshops to get feedback on the
draft book. This project also supported the construction of the web-site with additional
material. Rob Martil, then at Lloyd’s Register, now at ISL, contributed to the develop-
ment of the CommonKADS model suite and to the ideas about project management. Many ideas about knowledge management (Chapter 4) were developed in cooperation with Rob van der Spek of CIBIT, while teaching CommonKADS to many students with a conventional systems development background at CIBIT helped in clearly focusing ideas about the worldview behind CommonKADS. Joost Breuker was one of the founding fathers of KADS and developed, together with Paul de Greef and Ton de Jong, the typology for transfer functions (Chapter 5). Peter Terpstra contributed to the foundations of the work on
knowledge system design and implementation (Chapter 11). The work of Klas Orsvarn (SICS) and Steve Wells (Lloyd’s Register) is acknowledged as an inspiration for the description of the knowledge modeling process (Chapter 7). Wouter Jansweijer and other people working on the KACTUS project contributed to the ideas underlying the material on advanced knowledge modeling (Chapter 13). The first author, Guus Schreiber, is grateful for the feedback given by his course students in “Data and knowledge modeling,” who had to study from draft material, and whose experiences have proved valuable in increasing the quality of the book. The second author, Hans Akkermans, would like to thank Rune Gustavsson (University of Karlskrona/Ronneby) for his support and feedback on courses and workshops given in Sweden based on the draft textbook, and Fredrik Ygge, Alex Ratcliffe, Robert Scheltens, Pim Borst, Tim Smithers, Amaia Bernaras, Hans Ottosson, Ioa Gavrila, Alex Kalos, Jan Top, Chris Touw, Carolien Metselaar, Jos Schreinemakers, and Sjaak Kaandorp for providing case study material and/or giving their feedback on Chapters 1, 3, 4, 9, and 15. Jacobijn Sandberg provided feedback based on the use she made of CommonKADS in other projects. Machiel Jansen helped to develop the classification model in Chapter 6. Most figures in this book were drawn with Jan Wielemaker’s MODELDRAW program.
The research on methodology described in this book has been supported by a number of projects partially funded by the
ESPRIT Programme of the Commission of the European Communities, notably projects P12, P314, P1098 (KADS), P5248
(KADS-II), P8145 (KACTUS) and P25087 (TRACKS). The partners in the KADS project were STC Technology Ltd. (UK), SD-Scicon plc. (UK), Polytechnic of the South Bank (UK), Touche Ross MC (UK), SCS GmbH (Germany), NTE NeuTech
(Germany), Cap Gemini Innovation (France), and the University of Amsterdam (The Netherlands). The partners in the KADS-II
project were Cap Gemini Innovation (France), Cap Gemini Logic (Sweden), Netherlands Energy Research Foundation ECN (The
Netherlands), ENTEL SA (Spain), IBM France (France), Lloyd’s Register (UK), the Swedish Institute of Computer Science (Swe-
den), Siemens AG (Germany), Touche Ross MC (UK), the University of Amsterdam (The Netherlands) and the Free University
of Brussels (Belgium). The partners in the KACTUS project were Cap Gemini Innovation (France), LABEIN (Spain), Lloyd’s
Register (United Kingdom), STATOIL (Norway), Cap Programmator (Sweden), University of Amsterdam (The Netherlands),
University of Karlsruhe (Germany), IBERDROLA (Spain), DELOS (Italy), FINCANTIERI (Italy) and SINTEF (Norway). The
partners in the TRACKS project were Intelligent Systems Lab Amsterdam (The Netherlands), AKMC Knowledge Management
BV (The Netherlands), Riverland Next Generation (Belgium), and ARTTIC (Germany). This book reflects the opinions of the
authors and not necessarily those of the consortia.
The quintessential raw materials of the Industrial Revolution were oil and
steel. Well, more than 50% of the cost of extracting petroleum from the earth
2 Chapter 1
Knowledge has thus come to be recognized and handled as a valuable entity in itself.
It has been called “the ultimate intangible.” Surveys consistently show that top executives
consider know-how to be the single most important factor in organizational success. Yet,
when they are asked how much of the knowledge in their companies is used, the typical
answer is about 20%. So, as an observer from a Swiss think tank said, “Imagine the
implications for a company if it could get that number up just to 30%!” This book offers
some concepts and instruments to help you achieve that.
The value of knowledge can even be expressed in hard figures. James Brian Quinn has
made an extensive study of the key role of knowledge in modern organizations in his book
Intelligent Enterprise (1992). Even in manufacturing industries, knowledge-based service
capabilities have been calculated to be responsible for 65% to 75% of the total added value
of the products from these industries. More generally, writers on management estimate
that intellectual capital now constitutes typically 75% to 80% of the total balance sheet
of companies. Today, knowledge is a key enterprise resource. Managing knowledge has
therefore become a crucial everyday activity in modern organizations.
These developments have fundamentally changed the importance and role of knowl-
edge in our society. As Peter Drucker, in his book Post-Capitalist Society (1993), says:
The change in the meaning of knowledge that began 250 years ago has trans-
formed society and economy. Formal knowledge is seen as both the key per-
sonal resource and the key economic resource. Knowledge is the only mean-
ingful resource today. The traditional “factors of production” — land (i.e.
natural resources), labour and capital — have not disappeared. But they have
become secondary. They can be obtained, and obtained easily, provided there
is knowledge. And knowledge in this new meaning is knowledge as a utility, knowledge as the means to obtain social and economic results.
Prologue: The Value of Knowledge 3
The Industrial Revolution revolutionized manual labor. In the process, it brought about
new disciplines, such as mechanical, chemical, and electrical engineering, that laid the sci-
entific foundation for this revolution. Likewise, the Information Society is currently rev-
olutionizing intellectual labor. More and more people are becoming knowledge workers,
while at the same time this work is undergoing a major transformation. New disciplines
are emerging that provide the scientific underpinnings for this process. One of these new
disciplines is knowledge engineering. Just as mechanical and electrical engineering offer
theories, methods, and techniques for building cars, knowledge engineering equips you
with the scientific methodology to analyze and engineer knowledge. This book teaches
you how to do that.
Data Data are the uninterpreted signals that reach our senses every minute by the zil-
lions. A red, green, or yellow light at an intersection is one example. Computers are
full of data: signals consisting of strings of numbers, characters, and other symbols
that are blindly and mechanically handled in large quantities.
Information Information is data equipped with meaning. For a human car driver, a red
traffic light is not just a signal of some colored object; rather, it is interpreted as an
indication to stop. In contrast, an alien being who had just landed on Earth from
Figure 1.1
Distinctions between data, information, and knowledge. (The figure is a table listing, for each of the three notions, a characteristic and an example; for data: uninterpreted, raw signals such as “...---...”.)
outer space, and happened to find itself on a discovery tour in its earth shuttle near the Paris périphérique during the Friday evening rush hour, will probably not attach the same meaning to a red light. The data are the same, but the information is not.
Knowledge Knowledge is the whole body of data and information that people bring to
bear to practical use in action, in order to carry out tasks and create new information.
Knowledge adds two distinct aspects: first, a sense of purpose, since knowledge is
the “intellectual machinery” used to achieve a goal; second, a generative capability,
because one of the major functions of knowledge is to produce new information.
It is not accidental, therefore, that knowledge is proclaimed to be a new “factor of
production.”
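The three distinctions can be caricatured in a few lines of code. The following is a deliberately simple sketch of ours, not part of the CommonKADS method; all names and the mapping are illustrative, built on the book's traffic-light example.

```python
# Illustrative sketch (not from the book): data, information, and knowledge,
# using the traffic-light example. All names here are hypothetical.

# Data: an uninterpreted signal.
signal = "red"

# Information: data equipped with meaning by an interpreter. A human driver
# maps the signal to an instruction; the alien visitor lacks this mapping,
# so for it the same data carries no meaning.
driver_interpretation = {"red": "stop", "yellow": "prepare to stop", "green": "go"}
alien_interpretation = {}  # no meaning attached to any signal

def interpret(signal, interpretation):
    """Turn data into information, if the interpreter can."""
    return interpretation.get(signal)  # None = data without meaning

# Knowledge: information brought to bear in action, with a sense of purpose
# and the capability to generate new information.
def act_at_intersection(signal, interpretation):
    meaning = interpret(signal, interpretation)
    if meaning == "stop":
        return "brake"          # purposeful action...
    return "proceed with care"  # ...and newly produced information

print(interpret("red", driver_interpretation))   # "stop"
print(interpret("red", alien_interpretation))    # None: same data, no information
print(act_at_intersection("red", driver_interpretation))  # "brake"
```

The sketch makes the relativity of the distinction concrete: the same `signal` is information for one interpreter and mere data for another, which is exactly the context dependence discussed below.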
Figure 1.1 summarizes the distinctions usually made between data, information, and
knowledge. However, there is a second and very different answer to the question of what
constitutes a suitable definition of knowledge, namely, Why bother? In our everyday prac-
tical work, most of us recognize quite well who the knowledgeable people are and what
knowledge is when we see it in action. And this is usually good enough for our purposes.
The alien traveling on the crowded highways surrounding Paris and ignoring traffic signs
will not strike many of us as being very knowledgeable. We don’t really need any formal
definitions for that.
There are good reasons for such an answer, even beyond pragmatics. In many acknowl-
edged scientific disciplines, the practitioners often have a hard time answering analogous
questions. We might ask (probably in vain) various categories of scientists to give a pre-
cise and formal definition of the central object of their science, say, of life, civilization, art,
intelligence, evolution, organizational culture, the economic value of intangible assets...
Engineers and physicists, we bet, will often give inadequate or incomplete answers to the question, what exactly is energy? (not to mention entropy). This does not prevent them, however, from building reliable bridges, cars, computers, or heating installations. Seen in this
light, there is nothing special or mystical about knowledge.
An important reason that the question, What is knowledge? is difficult to answer
resides in the fact that knowledge very much depends on context. One of the authors of
this book, for example, is a first-rate bridge player. To some of the others, all his knowledge
does not really make much sense, because they know little more about bridge than that it
is a game involving four players and 52 cards. Other authors happen to have a background
in quantum physics, so they could explain (if you really wanted to know) about excited
nuclear states and the Heisenberg uncertainty relations. To others, this is just data, or
perhaps more accurately, just uncertainty. For all authors, all this is utterly irrelevant in
the context of writing this book. Thus, one person’s knowledge is another person’s data.
The borderlines between data, information, and knowledge are not sharp, because they are
relative with respect to the context of use.
This observation concerning the context dependence of knowledge is found, in differ-
ent terminology, across different study fields of knowledge. In knowledge engineering, it
has become standard to point out that knowledge is to a large extent task- and domain-
specific. This book offers a range of practical but general methods to get a grip on the
structure of human knowledge, as well as on the wider organizational context in which
it is used. Only through such a nontechnology-driven approach can we build advanced
information systems that adequately support people in carrying out their knowledge work.
Drucker refers here to disciplines like mechanical engineering, physics, and chemistry
that developed out of the craft of, say, building steam engines.
We see that the same is happening in our time in relation to information and knowl-
edge. From the craft of building computers, software programs, databases and other sys-
tems, we see new scientific disciplines slowly and gradually evolve such as telematics,
algorithmics, information systems management, and knowledge engineering and manage-
ment.
Knowledge engineering has evolved from the late 1970s onward, from the art of build-
ing expert systems, knowledge-based systems, and knowledge-intensive information sys-
tems. We use these terms interchangeably, and call them knowledge systems for short.
Knowledge systems are the single most important industrial and commercial offspring of
the discipline called artificial intelligence. They are now in everyday use all around the
world. They are used to aid in human problem-solving ranging from, just to name a few of
the CommonKADS applications, detecting credit card fraud, speeding up ship design, aid-
ing medical diagnosis, making scientific software more intelligent, delivering front-office
financial services, assessing and advising on product quality, and supporting electrical net-
work service recovery.
What are the benefits of knowledge systems? This is a valid question to ask, since over
the years there have been high hopes, heavily publicized success stories, as well as clear-
cut disappointments. Therefore, we will cite the results of a recent empirical study, carried
out by Martin et al. (1996). Two questions were addressed: (1) What benefits are expected from the use of knowledge systems? and (2) Are the expected benefits from an investment in knowledge systems actually realized? To answer these questions, survey data were
collected from persons in industry and business, and on this basis the variables linked to
knowledge system benefits were explored from the viewpoint of those working with them.
A summary of the empirical data is given in Table 1.1. The numbers represent fre-
quencies, i.e., the number of times an item was mentioned by the respondents in the sur-
vey. The top three benefits are (1) faster decision-making; (2) increased productivity; and
(3) increased quality of decision-making. Generally, anticipated benefits are indeed real-
ized. The authors of the survey point out, however, that this occurs in varying degrees
(percentages quoted range from 57% to 70%). Faster decision-making is more often felt
to be a result of knowledge system utilization than an increase either in decision quality
or in productivity. Thus, knowledge systems indeed appear to enhance organizational ef-
fectiveness. Although they are employed for a range of purposes, they seem to contribute
Table 1.1
Survey data on anticipated and realized benefits from knowledge systems. Numbers indicate how often the survey respondents mentioned each category of benefits.
a range of benefits:
- Knowledge engineering enables one to spot the opportunities and bottlenecks in how organizations develop, distribute, and apply their knowledge resources, and so gives tools for corporate knowledge management.
- Knowledge engineering provides the methods to obtain a thorough understanding of the structures and processes used by knowledge workers — even where much of their knowledge is tacit — leading to a better integration of information technology in support of knowledge work.
- Knowledge engineering helps, as a result, to build better knowledge systems: systems that are easier to use, have a well-structured architecture, and are simpler to maintain.
This book explains in detail how to carry out structured knowledge management, knowl-
edge analysis, and associated knowledge-intensive system development. It is comprehen-
sive. The book covers all relevant aspects ranging from the study of organizational benefits
to software coding. Along the road, the methods are illustrated by practical examples and
case studies. The sum constitutes the CommonKADS standard for knowledge analysis
and knowledge-system development. Below we briefly discuss the contents of each chap-
ter. In the next section you will find a road map for reading this book, depending on the
type of reader (knowledge manager, knowledge analyst, knowledge implementor, project
manager).
Chapter 2 describes the baseline and rationale of CommonKADS, in particular its
model-driven approach. This chapter contains some basic terminology used in this field.
In Chapter 3 we pay attention to the first part of the analysis process: the modelling of
the context or “environment” of a knowledge-intensive task we are interested in. We have
learned that one cannot emphasize the need for context modelling enough, because the
success of your application depends on it. The knowledge analysis at this level is still
coarse-grained and is typically at the level of knowledge management. For this reason the
next chapter deals with the issues related to knowledge management. This chapter contains
an activity model for knowledge management. Together, Chapters 2, 3, and 4 provide a
good introduction for readers interested primarily in knowledge management and coarse-
grained knowledge analysis.
In Chapter 5 you will find an introduction to the major topic of this book: the methods
for fine-grained knowledge modelling. Through a simple intuitive example you will learn
the main ingredients needed. In Chapter 6 you will learn that a nice thing about knowledge
analysis is that you do not have to build everything from scratch. For most knowledge-
intensive tasks there are a number of reusable knowledge structures that give you a head
start. There is a parallel here with design patterns in object-oriented analysis, but you will
find the knowledge patterns to be more powerful and precise, in particular because they are
grounded on a decade of research and practical experience. Chapter 7 concludes the first
set of three chapters dedicated entirely to knowledge modelling by presenting you with
practical how-to-do-it guidelines and activities for knowledge modelling. In Chapter 8 we
present a number of elicitation techniques that have proved to be useful in the context of
knowledge analysis. This chapter contains practical guidelines for conducting interviews,
as well as many other techniques.
In Chapter 9 we turn our attention to communication aspects. Knowledge systems
communicate with humans and with other systems. More and more, our systems act as
software agents in close interaction with other agents. This chapter provides you with the
tools for modelling this interactive perspective. Together, Chapters 3 through 9 contain
the baseline of the CommonKADS analysis methods. Chapter 10 illustrates the use of
these methods through a case study of a small and easy-to-understand sample application
Figure 1.2
Road map for reading this book. Legend: cross = core text; bullet = recommended; circle = support material. (The figure is a matrix mapping the following chapters onto the reader types: Knowledge-Engineering Basics (Ch. 2), The Task and Its Organizational Context (Ch. 3), Knowledge Model Components (Ch. 5), Template Knowledge Models (Ch. 6), Knowledge Model Construction (Ch. 7), Knowledge Elicitation Techniques (Ch. 8), Modelling Communication Aspects (Ch. 9), Case Study: The Housing Application (Ch. 10), Designing Knowledge Systems (Ch. 11), Knowledge System Implementation (Ch. 12), Advanced Knowledge Modelling (Ch. 13), and UML Notations Used in CommonKADS (Ch. 14).)
Figure 2.1
The basic architecture of the first generation of expert systems: application knowledge as a big bag of domain facts and rules, controlled by a simple reasoning or inference engine.
Figure 2.2
A short history of knowledge systems.
Knowledge-Engineering Basics 15
Figure 2.3
The building blocks of a methodology: the worldview or “slogans,” the theoretical concepts, the methods for using the methodology, the tools for applying the methods (CASE tools and implementation environments), and the experiences gained through use of the methodology (case studies, application projects, user feedback). Feedback flows down along the pyramid. Once a worldview changes, the foundation falls away from under an approach, and the time is ripe for a paradigm shift.
2.3 Principles
The CommonKADS methodology offers a structured approach. It is based on a few basic
thoughts or principles that have grown out of experience over the years. We briefly sketch
the fundamental principles underlying modern knowledge engineering.
16 Chapter 2
Figure 2.4
The old “mining” view of knowledge engineering.
Many software developers have an understandable tendency to take the computer system as the dominant reference point in their analysis and design activities. But there are two important reference points: the computational artefact to be built and, more importantly, the human side: the real-world situation that knowledge engineering addresses by studying experts, users, and their behavior at the workplace, embedded in the broader organizational context of problem-solving. In the CommonKADS approach, the latter is
the foremost viewpoint. The knowledge-level principle, first put forward by Alan Newell
(1982), states that knowledge is to be modelled at a conceptual level, in a way independent
of specific computational constructs and software implementations. The concepts used
in the modelling of knowledge refer to and reflect (that is, model) the real-world domain
and are expressed in a vocabulary understandable to the people involved. In the Com-
monKADS view, the artefact design of a knowledge system is called structure-preserving
design, since it follows and preserves the analyzed conceptual structure of knowledge.
It goes without saying that knowledge and problem-solving are extremely rich phenomena. Knowledge may be complex, but it is not chaotic: knowledge appears to have a
rather stable internal structure, in which we see similar patterns over and over again. Al-
though the architecture of knowledge is clearly more complicated than depicted in the rule-
based systems of Figure 2.1, knowledge does have an understandable structure, and this is
the practical hook for doing successful knowledge analysis. Conceptually, knowledge-level
models help us understand the universe of human problem-solving by elaborate knowledge
typing. An important result of modern knowledge engineering is that human expertise can
be sensibly analyzed in terms of stable and generic categories, patterns, and structures of
knowledge. Thus, we model knowledge as a well-structured functional whole, the parts
of which play different, restricted, and specialized roles in human problem solving. We
will encounter this concept of limited roles of knowledge types and components in many
different forms throughout this book. If you want the answer to what knowledge is, this is
the way you’ll find it in this book, at the level of and in terms of an engineering science.
Figure 2.5
The CommonKADS model suite (concept models: Chapters 5 and 9; artefact/design model: Chapters 11-12).
Project management follows a spiral approach that enables structured learning, whereby intermediate results or “states” of the CommonKADS models act as signposts to what steps to take
next. In determining these steps, the notions of objectives and risks play a crucial role.
Why? Why is a knowledge system a potential help or solution? For which problems?
Which benefits, costs, and organizational impacts does it have? Understanding the
organizational context and environment is the most important issue here.
What? What is the nature and structure of the knowledge involved? What is the nature
and structure of the corresponding communication? The conceptual description of
the knowledge applied in a task is the main issue here.
How? How must the knowledge be implemented in a computer system? How do the
software architecture and the computational mechanisms look? The technical as-
pects of the computer realization are the main focus here.
All these questions are answered by developing (pieces of) aspect models. Com-
monKADS has a predefined set of models, each of them focusing on a limited aspect,
but together providing a comprehensive view.
Organization model The organization model supports the analysis of the major fea-
tures of an organization, in order to discover problems and opportunities for knowl-
edge systems, establish their feasibility, and assess the impacts on the organization
of intended knowledge actions.
Task model Tasks are the relevant subparts of a business process. The task model an-
alyzes the global task layout, its inputs and outputs, preconditions and performance
criteria, as well as needed resources and competences.
Agent model Agents are executors of a task. An agent can be human, an information
system, or any other entity capable of carrying out a task. The agent model describes
the characteristics of agents, in particular their competences, authority to act, and
constraints in this respect. Furthermore, it lists the communication links between
agents in carrying out a task.
Knowledge model The purpose of the knowledge model is to explicate in detail the
types and structures of the knowledge used in performing a task. It provides an
implementation-independent description of the role that different knowledge com-
ponents play in problem-solving, in a way that is understandable for humans. This
makes the knowledge model an important vehicle for communication with experts
and users about the problem-solving aspects of a knowledge system, during both
development and system execution.
Design model The above CommonKADS models together can be seen as constituting
the requirements specification for the knowledge system, broken down into different
aspects. Based on these requirements, the design model gives the technical system
specification in terms of architecture, implementation platform, software modules,
representational constructs, and computational mechanisms needed to implement the
functions laid down in the knowledge and communication models.
Together, the organization, task, and agent models analyze the organizational environ-
ment and the corresponding critical success factors for a knowledge system. The knowl-
edge and communication models yield the conceptual description of problem-solving func-
tions and data that are to be handled and delivered by a knowledge system. The design
model converts this into a technical specification that is the basis for software system implementation. This process is depicted in Figure 2.5. Note, however, that not all models always have to be constructed; this depends on the goals of the project and on the experience gained in running it. Thus, a judicious choice is to be made by the
project management.
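The division of labor among the six models can be sketched as a small data structure. The following Python sketch is ours, not a CommonKADS notation; the class name, field names, and the one-word "describes" summaries are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the six CommonKADS models and the question
# (why / what / how) each one helps to answer.
@dataclass
class AspectModel:
    name: str
    question: str                      # "why", "what", or "how"
    describes: list = field(default_factory=list)

MODEL_SUITE = [
    AspectModel("organization", "why", ["problems", "opportunities", "impacts"]),
    AspectModel("task", "why", ["task layout", "inputs/outputs", "resources"]),
    AspectModel("agent", "why", ["executors", "competences", "communication links"]),
    AspectModel("knowledge", "what", ["knowledge types and structures"]),
    AspectModel("communication", "what", ["agent-to-agent communication"]),
    AspectModel("design", "how", ["architecture", "platform", "modules"]),
]

def models_for(question: str):
    """Return the names of the models that address a given question."""
    return [m.name for m in MODEL_SUITE if m.question == question]
```

For instance, `models_for("why")` yields the three context models (organization, task, agent), mirroring how the text groups them.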
Accordingly, a CommonKADS knowledge project produces three types of products or
deliverables:
As a final note, we want to emphasize that knowledge systems and their engineering
are not life forms totally unrelated to other species of information systems and manage-
ment. In what follows, we will see that CommonKADS has been influenced by other
methodologies, including structured systems analysis and design, object orientation, orga-
nization theory, process reengineering, and quality management. For example, the selling
point of object orientation is often said to be the fact that objects in information systems
model real-world entities in a natural fashion. This has clear similarities to the knowledge-
level principle discussed above. (And the consequences of the limited-role concept, intro-
duced later on, will show that there is more to information systems than objects alone!)
Thus, there is a gradual transition. CommonKADS has integrated elements of other ex-
isting methodologies, and also makes it possible to switch to other methods at certain
points. This is in line with the modern view of knowledge systems as enhancements em-
bedded in already existing information infrastructures, instead of stand-alone expert sys-
tems. Hence, CommonKADS-style knowledge engineering is to be seen as an extension
of existing methods: it is useful when tasks, processes, domains, or applications become
knowledge intensive.
Figure 2.6
Graphical view of the six process roles in knowledge engineering and management (knowledge provider/specialist, knowledge engineer/analyst, knowledge system developer, knowledge user, project manager, knowledge manager).
as well as techniques for eliciting data about knowledge-intensive tasks from domain spe-
cialists.
Knowledge user A knowledge user makes use directly or indirectly of a knowledge sys-
tem. Involving knowledge users from the beginning is even more important than in regular
software engineering projects. Automation of knowledge-intensive tasks invariably affects the work of the people involved. For design and implementation it is important to ensure that users interact with the system through their own interface representations. The
knowledge engineer also needs to be able to present the analysis results to the potential
knowledge users. This requires special attention. One of the reasons for the success of
CommonKADS has always been that the knowledge analysis is understandable to knowl-
edge users with some background in the domain.
risk the project manager runs is the elusive nature of knowledge-related problems. Therefore, requirements monitoring is of prime importance during the lifetime of the project. The context models of Chapter 3 play a key role in that.
Domain A domain is some area of interest. Example domains are internal medicine and
chemical processes. Domains can be hierarchically structured. For example, internal
medicine can be split into a number of sub-domains such as hematology, nephrology, cardiology, etc.
Task A task is a piece of work that needs to be done by an agent. In this book we are
primarily interested in “knowledge-intensive” tasks: tasks in which knowledge plays
a key role. Example tasks are diagnosing malfunctions in internal organs such as a
kidney, or monitoring a chemical process such as oil production.
Agent An agent is any human or software system able to execute a task in a certain
domain. For example, a physician can carry out the task of diagnosing complaints
uttered by patients. A knowledge system might be able to execute the task of moni-
toring an oil production process on an oil rig.
Application An application is the context provided by the combination of a domain and
a task carried out by one or more agents.
Application domain/task These two terms are used to refer to the domain and/or task
involved in a certain application.
Knowledge(-based) system The term “knowledge-based system” (KBS) has been used
for a long time and stems from the first-generation architecture discussed in the
previous chapter, in which the two main components are a reasoning engine and a
knowledge base. In recent years the term has been replaced by the more neutral term
“knowledge system.” It is worthwhile pointing out that there is no fixed borderline
between knowledge systems and “normal” software systems. Every system contains
knowledge to some extent. This is increasingly true in modern software applica-
tions. The main distinction is that in a knowledge system one assumes there is some
explicit representation of the knowledge included in the system. This raises the need
for special modelling techniques.
Expert system One can define an expert system as a knowledge system that is able to
execute a task that, if carried out by humans, requires expertise. In practice the term
is often used as a synonym for knowledge(-based) system. We do not use this term
anymore.
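The terminology above maps naturally onto a few simple types. The following is an illustrative Python encoding of our own devising, not a CommonKADS formalism; all class and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Domain:
    """Some area of interest, e.g. internal medicine or chemical processes."""
    name: str

@dataclass(frozen=True)
class Task:
    """A piece of work that needs to be done by an agent."""
    name: str
    knowledge_intensive: bool      # does knowledge play a key role?

@dataclass(frozen=True)
class Agent:
    """Any human or software system able to execute a task in a domain."""
    name: str
    kind: str                      # "human" or "software"

@dataclass(frozen=True)
class Application:
    """The context provided by a domain plus a task carried out by agents."""
    domain: Domain
    task: Task
    agents: tuple

# The oil-rig monitoring example from the text, encoded in these types.
monitoring = Application(
    domain=Domain("chemical processes"),
    task=Task("monitor oil production", knowledge_intensive=True),
    agents=(Agent("rig knowledge system", "software"),),
)
```

The point of the sketch is only that an application is the *combination* of a domain and a task carried out by one or more agents, as the definitions state.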
automating expert tasks. “Automation” is misleading for two reasons. First, knowledge-
intensive tasks are often so complex that full automation is simply an ill-directed ambition,
bound to lead to wrong expectations and ultimately to disappointment. At the same time,
knowledge systems can be an active rather than passive help – in contrast to most current
automated systems – precisely because they store knowledge and are able to reason about
it. On this basis, they can much more actively act and interact with the user. Therefore,
the appropriate positioning of knowledge systems is not that of automating expert tasks. Zuboff (1988) accordingly speaks of “informating” rather than “automating” work. Indeed, knowledge systems are better seen as agents that
actively help their user as a kind of intelligent support tool or personal assistant. In this
way, they have their partial but valuable role in improving the overall business process in
collaboration with their users.
Therefore it is essential to keep track of the organizational environment in which a
knowledge system has to operate. Already at an early stage the knowledge engineer has
to take measures to ensure that a knowledge system will be properly embedded in the
organization. Traditionally, much of the effort of information and knowledge engineers
was directed at getting the technical aspects under control. Now that knowledge and in-
formation technology have achieved a good degree of maturity and diffusion, this is no
longer the main focus. Many factors other than technology determine the success or failure of a knowledge system in an organization. A knowledge system must perform its task well according to set standards, but it must also be acceptable and friendly to the end user, interoperate with other information systems, and fit seamlessly into the structures, processes, and quality systems of the organization as a whole.
It is fair to say that practical experience has shown that often the critical success fac-
tor for knowledge systems is how well the relevant organizational issues have been dealt
with. Many failures in automation have resulted, not from problems with the technology
but from the lack of concern for social and organizational factors. Yet, many system-
development methodologies focus on the technical aspects and do not support the analysis
of the organizational elements that determine success or failure.
The CommonKADS methodology offers the tools to cater to this need. These tools for
task and organization analysis achieve several important goals:
1. Identify problems and opportunities: Find promising areas where knowledge sys-
tems or other knowledge management solutions can provide added value to the or-
ganization.
2. Decide about solutions and their feasibility: Find out whether a further project is
worthwhile in terms of expected costs and benefits, technological feasibility, and
needed resources and commitments within the organization.
3. Improve tasks and task-related knowledge: Analyze the nature of the tasks involved
in a selected business process, with an eye on what knowledge is used by the responsible agents in order to carry them out successfully, and what improvements may be made.
The CommonKADS task and organization analysis has a very tight fit to these goals. Although their aim is a very critical one – uncovering the key success factors
of knowledge systems and preparing the needed organizational measures – the methods
themselves are easy to understand and simple to use, as we show below.
Along the above lines, a comprehensive picture of the organizational situation in which
a knowledge system must operate is built up. For the first study, on scope and feasibil-
ity, CommonKADS offers the organization model for the description and analysis of the
broader organizational environment.
For the second study, on impacts and improvements, CommonKADS offers the task
and agent models. This study is more focused and detailed. It zooms in on the relevant
part of the organization. The task model focuses on those tasks and task-related knowledge
assets that are directly related to the problem that needs to be solved through the knowledge
system. These tasks are allocated to agents characterized through the agent model.
For both studies, their first part (1a and 2a) is oriented toward modelling and analy-
sis, whereas the concluding parts (1b and 2b) integrate the model results for the express
purpose of managerial decision-making.
Below we discuss how to carry out these studies through the CommonKADS orga-
nization, task and agent models. All steps in developing these models can be taken by
employing a set of practical and easy-to-use worksheets and checklists.
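A worksheet can be treated as a fixed set of named slots that are filled in during the study. The sketch below is a hypothetical representation of ours; the three OM-1 slot names are paraphrased from this chapter and are an assumption, not the worksheet's official field list:

```python
# Hypothetical worksheet representation: a worksheet is a fixed set of
# named slots, filled in step by step during the organizational study.
OM1_SLOTS = ("problems_and_opportunities", "organizational_context",
             "potential_solutions")

def make_worksheet(slots):
    """Create an empty worksheet with one entry per slot."""
    return {slot: None for slot in slots}

def missing_slots(ws):
    """Slots still to be filled in before wrapping up the study."""
    return [slot for slot, value in ws.items() if value is None]

om1 = make_worksheet(OM1_SLOTS)
om1["problems_and_opportunities"] = ["backlog in client decision-making"]
```

A checklist function like `missing_slots` captures the worksheets' practical role: they act as comprehensive checklists, even when the analyst deliberately leaves some slots empty.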
Figure 3.1
Overview of the components of the CommonKADS organization model (OM-1: problems, opportunities, potential knowledge solutions, and the general context such as mission, strategy, environment, and CSFs; OM-2: description of the focus area, covering structure, process, people, culture and power, resources, and knowledge assets; OM-3: process breakdown; OM-4: knowledge assets).
Several types of stakeholders have an interest in a knowledge project:
Knowledge providers: The specialists or experts in whom the knowledge of a certain
area resides.
Knowledge users: The people that need to use this knowledge to carry out their work
successfully.
Table 3.1
Worksheet OM-1: Identifying knowledge-oriented problems and opportunities in the organization.
Knowledge decision-makers: The managers that have the position to make decisions
that affect the work of either the knowledge providers or the knowledge users.
Identifying these people and their roles at an early stage helps to quickly focus on
the appropriate business processes, problems, and opportunities. Usually, knowledge
providers, users, and decision-makers are very different persons with very different in-
terests. Interviewing them helps you to understand what is at stake for them in relation to
your knowledge project. Divergent views and conflicts of interests are common in organi-
zations, but it takes effort to understand them. Without such an understanding, however, a
good knowledge solution is not even possible.
The second part of the organization model concentrates upon the more specific, so-
called variant, aspects of the organization. Here, we cover aspects such as how the business
process is structured, what staff is involved, what resources are used, and so on. These
components of the organization model may change (hence “variant”) as a result of the
introduction of knowledge systems. As an aid to the analysis, Table 3.2 gives a worksheet
(numbered OM-2). It indicates which important components of the organization to consider.
We note that this analysis relates to a single problem-opportunity area, selected out of the
list produced previously (in worksheet OM-1). It might be the case that this step has to be
repeated for other areas as well.
The process component in OM-2 plays a central role within the CommonKADS organization-analysis process, as we will also see in the next worksheet. A good guideline is to construct a UML activity diagram of the business process, and to use this diagram as the filler of the process slot in worksheet OM-2. Figure 3.2 shows a simplified business process of a company designing and selling elevators, described with the use of an activity diagram. A nice feature of activity diagrams is that we can locate the process in parts of the organization, and can include both the process flow and the information objects involved. Readers not familiar with the notation can read Section 14.2, which provides a short description of its main ingredients.
Table 3.2
Worksheet OM-2: Description of organizational aspects that have an impact on and/or are affected by chosen knowledge solutions.
The process is also specified in more detail with the help of a separate worksheet. The
business process is broken down into smaller tasks, because an envisaged knowledge sys-
tem always carries out a specific task – and this has to fit properly into the process as a
whole. Often, some process adaptations are needed by changing tasks, or combining or
connecting them differently. To investigate this aspect better, Table 3.3 presents a work-
sheet (numbered OM-3) to specify details of the task breakdown of the business process.
A rough indication is given of how knowledge-intensive these tasks are and what knowledge is used. You might find it difficult to establish the knowledge-intensiveness of a task at this point, but after reading more about knowledge-intensive task types in Chapter 6 you will
have background knowledge to help you in this respect.
Also, an indication is given of the significance of each task, e.g. on a scale of 1-5.
There are no hard rules for assessing task significance, but it is typically a combination of
effort/resources required, criticality, and complexity.
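An OM-3 row per task, plus some rule for turning effort, criticality, and complexity into a 1-5 significance score, is enough to make this concrete. The text gives no fixed rule, so the plain rounded average below is our own illustration, and the example rows and ratings are invented:

```python
# Hypothetical OM-3 entries: one row per task in the process breakdown.
def significance(effort, criticality, complexity):
    """Combine three 1-5 factor ratings into a rounded 1-5 score.

    The simple average is an illustrative choice; there are no hard
    rules for assessing task significance.
    """
    return round((effort + criticality + complexity) / 3)

om3_rows = [
    {"task": "intake",          "knowledge_intensive": True,
     "significance": significance(3, 4, 2)},
    {"task": "decision-making", "knowledge_intensive": True,
     "significance": significance(4, 5, 5)},
    {"task": "archiving",       "knowledge_intensive": False,
     "significance": significance(2, 2, 1)},
]
```

Whatever rule is chosen, the point of the score is comparative: it helps single out the tasks, such as decision-making here, that deserve further knowledge analysis.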
The business process is modelled down to the level of detail at which we can make decisions about what to do with a task, e.g., construct a knowledge model to automate or explicate that task.
Figure 3.2
Business process of a company designing and selling elevators, specified through a UML activity diagram (with swimlanes for the customer, the sales department, and the design department; activities include processing customer information, deciding about the design type, cost calculation, elevator design, and writing the tender).
Let’s turn now to the “knowledge” element in worksheet OM-2. Evidently, knowledge
is the single most important aspect of the organization to analyze here in detail. Accord-
ingly, Table 3.4 provides a worksheet (numbered OM-4) to describe knowledge assets. This
worksheet provides the specification of the knowledge component of the CommonKADS
organization model. Later on, this specification will be further refined, first in the task
model and very extensively (of course) in the knowledge model. This piecemeal approach
gives more opportunities for flexibility in knowledge project management.
Thus, the knowledge asset worksheet (OM-4) is meant as a first-cut analysis. The
perspective we take here is that those pieces of knowledge are significant as an asset that
Table 3.3
Worksheet OM-3: Description of the process in terms of the tasks it is composed of, and their main characteristics.
Table 3.4
Worksheet OM-4: Description of the knowledge component of the organization model and its major characteristics.
(Table columns: Name, Agent, and Task, each cross-referencing worksheet OM-3, plus four yes/no columns with room for comments.)
Table 3.5
Worksheet OM-5a: Checklist for the feasibility decision document (Part I).
are in active use by workers within the organization for the purpose of a specific task or
process. An important issue in this part of the study is to single out dimensions in which
knowledge assets may be improved, in form, accessibility in time or space, or in quality.
This analysis is not only important in knowledge-systems engineering, but perhaps even
more so in knowledge management actions in general.
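The improvement dimensions named above (form, place, time, quality) correspond to the yes/no columns of worksheet OM-4. A minimal sketch, assuming those four column names and using an invented example asset from the social security case:

```python
# Hypothetical OM-4 entry: a knowledge asset plus four yes/no checks.
# The field names right_form/right_place/right_time/right_quality are
# our paraphrase of the worksheet's yes/no columns.
def improvement_dimensions(asset):
    """Dimensions in which this knowledge asset may be improved."""
    checks = ("right_form", "right_place", "right_time", "right_quality")
    return [c for c in checks if not asset[c]]

benefit_rules = {
    "name": "benefit decision rules",
    "possessed_by": "testers and regulations expert",   # agent (cf. OM-3)
    "used_in": "decision-making",                       # task (cf. OM-3)
    "right_form": False,    # buried in volumes of laws and regulations
    "right_place": True,
    "right_time": False,    # long elapsed times before decisions
    "right_quality": True,
}
```

A "no" in any column flags a candidate improvement, which is exactly the kind of first-cut signal the worksheet is meant to deliver.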
Now, after carrying out the steps represented in the worksheets of Tables 3.1–3.4,
we have all the information ready related to the CommonKADS organization model of
Figure 3.1. The final step is to wrap up the key implications of this information in a
document, on the basis of which commitments and decisions by management are made. At
this stage of a knowledge system project, decision-making will focus on:
What is the most promising opportunity area for applications, and what is the best
solution direction?
What are the benefits versus the costs (business feasibility)?
Table 3.6
Worksheet OM-5b: Checklist for the feasibility decision document (Part II).
Are the needed technologies for this solution available and within reach (technical
feasibility)?
What further project actions can successfully be undertaken (project feasibility)?
Tables 3.5 and 3.6 present an extensive and self-contained checklist for producing
the feasibility decision document (worksheet OM-5). This completes the CommonKADS
organizational analysis. The further stages focus more on the features of specific tasks,
pieces of knowledge, and individuals involved. But before going into these topics, we will
first further illuminate the above organization analysis by an illustrative case example.
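The three feasibility questions of worksheet OM-5 can be read as a simple decision gate. The sketch below is our simplification: treating a project as a "go" only when no dimension blocks it is an assumption, not a CommonKADS rule, and the function and dimension names are ours:

```python
# Illustrative feasibility gate over the three OM-5 questions:
# business feasibility, technical feasibility, project feasibility.
def feasibility_report(assessment):
    """assessment maps each dimension to True (ok) or False (not ok).

    Returns the dimensions that block a go decision; an empty list
    means all three feasibility questions were answered positively.
    """
    return [dim for dim, ok in assessment.items() if not ok]

blockers = feasibility_report(
    {"business": True, "technical": True, "project": False})
```

In practice the decision document holds much more nuance than booleans, but the gate captures the structure: all three kinds of feasibility must be argued before committing to further project actions.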
In this section we illustrate the above organization model study with a real-life case.
In the Netherlands, the administration of a range of social security benefits is carried out by
municipalities. The most important ones are general assistance benefits. The latter category
is an end-of-the-line type of benefit, in the sense that if no other regulations apply, a person
may ultimately apply for this type of benefit. At the time of the project, in the municipality
of Amsterdam, approximately 60,000 people were supported by these general assistance
benefits. In order to qualify for this financial assistance, each applicant is screened in great
detail. The rules for this are codified in, or can be derived from, several volumes of laws and regulations.
In Amsterdam, a considerable backlog in dealing with (the growing numbers of) clients
had accumulated over the years. This led to long queues in the offices, as well as long
elapsed times between initial client intake and final decision. At the level of the directorate
of the responsible municipal service, this backlog created concerns over the efficiency
of the work being done. Moreover, the clients themselves started to complain about the
delays, and these complaints found their way into the local media. In this context, the
secretary of the directorate suggested the use of knowledge systems to help reduce the
backlog. It is highly important to stress the initial hypothesis because it shows how crucial
modelling organizational features is. Briefly, the initial problem/opportunity formulation
was:
Because the applicable laws and regulations are so complex, it takes a long
time for the staff involved to reach a decision. If we can assist these peo-
ple with a knowledge system that stores the needed legal decision-making
knowledge, the decision process can be speeded up, so that more clients can
be served in the same time and the application backlog will be significantly
reduced.
Thus, at the beginning a very clear idea existed about the problem area, the direction
of the solution, and the benefits for the organization at large. Although we give this part
of the case study in a narrative form, it is obvious how the above information constitutes
fillers for the invariant components of the CommonKADS organization model, according
to the problems and opportunities worksheet (Table 3.1, worksheet OM-1).
The next step in the study is to consider the various aspects of the organization model,
as indicated in Figure 3.1 and particularly in the variant component worksheet (Table 3.2,
worksheet OM-2). For the social security service case, we briefly discuss the main ele-
ments and results below.
Figure 3.3
The structure component in the social security service case.
Structure The formal organizational structure of the social security service is given in
the form of an organization chart, as presented in Figure 3.3. The service consists of a
central office and a branch office in each of the sixteen boroughs of the municipality of
Amsterdam. The structure of each of these branch offices is the same. The central office
has a mixture of line and staff departments. In addition, the chart includes the computer
center of the municipality of Amsterdam, although it is strictly speaking not a part of
the service organization (hence the dotted lines). However, it is important to include the
connection, since the computer center performed a sizable amount of work for the social
security service, and (at that time) every municipal institution was formally required to use
this center for all its computer work.
Figure 3.4
Various power relationships in the social security service case (among the directorate, its secretary, the branch director, team members, and the regulations expert).
People In a complex organization, there are many different people playing many different
organizational roles, requiring very different levels of expertise. Given the brief for the
project (see the problem-opportunity component of the organization model), only a very
limited area has been taken into account, mostly staff members that are directly involved
in some way in the decision-making process.
The major roles played by people in the organization in our case can be found in
Figure 3.3.
Culture and power Power relations among the main people in the organization are
shown in Figure 3.4. This figure shows not only formal relationships of authority between
people but also informal influencing relationships.
To get a grip on these aspects is often not easy, because informal relationships between
disparate actors may be difficult to detect. Three types of power relationships are shown
in Figure 3.4. Strong official lines of formal authority are indicated by solid lines. These
relations are formally laid down in the organization and its hierarchy. For example, the
branch director is the boss of a branch office, and as such has formal authority over chiefs
and testers in this office. Rather strong informal power relations are shown in the figure
by means of dashed lines. These relations have often slowly grown over time, and have
come to be viewed as more or less regular. For example, the regulations expert from the
central office can convene meetings at which all testers of the branch offices are present.
Moreover, he can use these meetings to launch certain quality-control campaigns. This
can be done almost without interference from the branch director, in spite of his formal
authority over the tester in the branch office. Finally, weak informal relations of influence
are the hardest to uncover, because they reflect occasional but sometimes very important
links between persons in the organization (see dotted lines in Figure 3.4). This influence is
mainly exercised through informal meetings and telephone calls.
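The three relationship types of Figure 3.4 amount to a graph with typed edges. A minimal sketch; the actor names come from the case description, but the particular edge list is our illustrative reading, not a reproduction of the figure:

```python
# The three relationship types from Figure 3.4, as typed edges.
FORMAL, STRONG_INFORMAL, WEAK_INFORMAL = (
    "formal", "strong-informal", "weak-informal")

# Illustrative edges only; Figure 3.4 itself is not reproduced here.
power_relations = [
    ("branch director", "chiefs", FORMAL),
    ("branch director", "testers", FORMAL),
    ("regulations expert", "testers", STRONG_INFORMAL),
    ("secretary", "directorate", WEAK_INFORMAL),
]

def influencers_of(actor, relations):
    """Who exerts some form of influence over a given actor, and how?"""
    return [(src, kind) for src, dst, kind in relations if dst == actor]
```

Querying the graph makes the case's point visible: the testers answer formally to the branch director but are also reached, informally, by the regulations expert.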
Resources For the present project, the following resources were deemed to be most rel-
evant:
Computers: In the service as a whole, at the time of the project, only a limited number of computers were available. All computing was done by the central computer center (see Figure 3.3). In each of the branch offices there were a few terminals
connected to the central computer. In some branch offices, local experiments with
personal computers had started to take over routine work, such as producing letters
of notification.
Office space: Some branch offices were inadequately housed, leading to insufficient
facilities for doing the client intake work.
Process, knowledge As follows from the brief for the project, the main focus was the
knowledge for the decision-making process about benefits by the social security service.
These aspects are treated separately in the next subsection.
Generally, we note that a flexible use of the organization model and its representation
techniques gives the best results. It is neither always helpful nor necessary to fill in all slots
of all organization model components and worksheets. This should only be done if the
information bears relevance to conclusions and has implications for action. However, this
selectivity must be a conscious decision on the part of the knowledge engineer, whereby
the given worksheets provide guidance as comprehensive checklists. In addition, the form
of representing the collected information will generally vary. Short pieces of text, e.g.,
filling in slots of worksheets, are useful, but as we have seen sometimes simple diagrams,
charts, or pictures are much more clear and effective. Thus, the knowledge engineer should
feel free to pick the most appropriate form. The criterion here is what means will be the
most effective in communicating with the persons for whom the study is carried out.
The process and knowledge components of the organization model are modelled with the
help of separate worksheets (Table 3.3, worksheet OM-3, and Table 3.4, worksheet OM-4,
respectively). Now, we will give the most important results, in various forms, for the pro-
cess breakdown in tasks and the associated knowledge assets in the social security service
case.
Process breakdown in tasks On the basis of interviews with key personnel, among oth-
ers the secretary of the directorate, the following main parts (tasks) of the overall process
were identified.
Intake: This task refers to obtaining all relevant information about a client, for ex-
ample, age, address, additional sources of income, various aspects of the personal
situation of the client. Direct person-to-person contact is commonly involved in the
intake work.
Archiving: Keeping and maintaining files and documents for all clients throughout
the lifecycle of their being clients of the social security service.
Decision-making: Taking the decision, based on the data concerning the personal
situation of the client (as obtained from the intake work) and the applicable laws and
rules, whether the client qualifies for a benefit, as well as deciding about the amount
of money he or she is entitled to.
Notifying: Informing the client about the decisions made. Without a written notifi-
cation a decision has no legal status.
Reporting: Writing an internal report about the client. This report serves, for exam-
ple, as input for paying.
Paying: Making the actual payment to the client.
Quality control: Controlling whether the decisions made are correct in view of the
applicable laws and regulations. This control task is carried out “after the fact.” It is
based on sampled cases from the decision-making task, as laid down in the reporting
task.
An overall view of what the process looks like, in terms of the tasks it is composed
of and their mutual dependencies, is shown in Figure 3.5. Some tasks, such as archiving,
occur at several points within the process. There is a distinction between the primary
process and the supporting tasks (see the two “compartments” in Figure 3.5).
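The mutual dependencies just described can be captured, for instance, as a simple successor mapping. The flow below is our reading of the task descriptions, offered only as an illustrative sketch, not a faithful copy of Figure 3.5:

```python
# Primary process flow of the social security service, as a successor mapping.
# This is an illustrative reading of the task descriptions in the text;
# "archiving" and "quality control" are the supporting tasks.
primary_flow = {
    "intake": ["decision-making"],
    "decision-making": ["notifying", "reporting"],
    "notifying": [],
    "reporting": ["paying"],
    "paying": [],
}
supporting_tasks = ["archiving", "quality control"]

def downstream(task, flow):
    """Return all tasks reachable from `task` in the primary flow."""
    seen, stack = set(), [task]
    while stack:
        for successor in flow[stack.pop()]:
            if successor not in seen:
                seen.add(successor)
                stack.append(successor)
    return seen

# Everything that depends, directly or indirectly, on the intake results:
print(sorted(downstream("intake", primary_flow)))
```

Such a mapping makes it easy to check, for example, which tasks are affected when the quality of the intake results changes.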
The initial focus of the project was on the decision-making task, but it has now become
clear that this task is directly linked to the intake and notifying tasks, and that in addition
there is likely to be some interaction with the archiving task.
Knowledge assets and task significance From the study it became clear that only some
of the tasks were knowledge intensive, namely intake, decision-making, and quality control.
In intake, other competences are also important, particularly interpersonal and
communication skills. As the project’s focus was on speeding up decision-making, it was a
straightforward step to investigate in more detail the knowledge underlying the decision-making
task. Given the above process results, it was also natural to look at the intake and
notifying tasks, since there are direct input-output dependencies with decision-making.

Figure 3.5
Activity diagram of the tasks in the business process of the social security service.

Figure 3.6
Task significance: workload in the social security service case, expressed in percentage of total time spent.
After some initial knowledge acquisition it became clear that there were at least two
aspects of decision-making that were insufficiently understood, and that might therefore
compromise the construction or functioning of the envisaged knowledge system. First, clients
sometimes cheat about their data in order to qualify for a benefit. Detecting this is a highly
sensitive process that relies strongly upon all kinds of non-verbal cues. Personnel doing
the intake were very good at interpreting these cues. A knowledge system, however, will
have a very hard time distinguishing between such true and (slightly) false client
data.
Second, civil servants have the (understandable) tendency to sometimes adjust the
client data somewhat when they feel a client is justified in getting the benefit but the official
rules do not cover the special case. Again, a knowledge system would entirely miss this
point of fudging client data. It would produce advice that is, strictly speaking, correct, but
that does not take into account special circumstances. This would make at least some of
the proposed decisions hard to accept for the responsible decision-makers. Both the cheat
and fudge factors represent a delicate gray area in decision-making. It lies outside the
competence of a knowledge system, consequently restricting its scope and usability.
The Task and Its Organizational Context 43
Finally, as a check on the initial project hypothesis that the problem source was related
to the decision-making task, a field study was undertaken to estimate the actual workload
for the various tasks within the process. For two weeks, the work of the people in a
number of sampled branch offices was followed closely. During this investigation it was
recorded how often decision-making problems occurred as a result of the complexities of
the regulations, and how this was reflected in the average workload. The results are shown
in Figure 3.6.
A striking result of this analysis is that the major workload is not due to the complexity
of decision-making. Over 60% of the time is spent in archiving and reporting. This was the
result of the paper-based archives in use at that time in the social security service. Much
time had to be spent in finding lost client files, and overcoming or bypassing all kinds of
bureaucratic hurdles and procedures.
In order to assess the relative task significance (cf. Table 3.3, worksheet OM-3), these
observations are of prime importance. They yield one clear quantitative measure of the
relative significance of tasks in the process. If we take time spent as an indicator of process
cost (which is probably quite adequate in this case), it is evident that the cost and inefficiency
drivers are in archiving and reporting, rather than in decision-making. Even if
decision-making could be fully automated (which is judged to be highly unrealistic given
the nature of the knowledge assets), the maximum gain would be about 10% of the total
process time (as seen from Figure 3.6). Much more modest improvements within archiving
and reporting, more realistic and easier to achieve, say on the order of 10%, would already
result in comparable gains relative to the overall process. Thus, focusing on these tasks
is much more likely to speed up the total process and reduce backlogs.
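The arithmetic behind this comparison can be sketched in a few lines. The time shares used below are illustrative assumptions in the spirit of Figure 3.6, not the actual survey figures:

```python
# Back-of-the-envelope gain estimate. The time shares are assumed,
# illustrative values (not the actual data behind Figure 3.6).
time_share = {
    "intake": 0.10,
    "decision-making": 0.10,
    "archiving": 0.35,
    "reporting": 0.30,
    "other": 0.15,
}

def overall_gain(task, local_improvement):
    """Fraction of total process time saved by improving one task locally."""
    return time_share[task] * local_improvement

# Fully automating decision-making saves at most its entire time share:
print(overall_gain("decision-making", 1.0))   # -> 0.1, about 10% of the total

# A modest 10% improvement within archiving and reporting together:
combined = sum(overall_gain(t, 0.10) for t in ("archiving", "reporting"))
print(round(combined, 3))                     # -> 0.065, a comparable gain
```

The point of the sketch is that the overall gain is bounded by a task's share of the total process time, which is why modest improvements to the dominant tasks rival full automation of a small one.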
Given the above results of the organization model study, ample material is now available
for well-founded decision-making on feasibility and scope. Very briefly, following the
format given for this in worksheet OM-5, the main proposed conclusions and decisions are
as follows.
Business feasibility From the study it is clear that building a knowledge system for
decision-making will not, in itself, solve the problem. Higher benefits in speeding up the
overall process can be expected by focusing on improvements in archiving and reporting
instead. Quantitative indicators have been given above.
The knowledge-system solution would require only limited changes in the organization:
a more decentralized PC-based computer infrastructure, and associated changes within
the individual offices. Also, the position of the testers would clearly change, while people
at the intake might lose some of their discretionary power in making “gray area” decisions.
If archiving and reporting is chosen as the target area, the impact on the organization
is likely to be much more important. As can be seen by comparing the structure and
process/task components of the organization model, several different departments play a
role, and moreover these tasks recur at different places in the overall process. Even the
external computer center, largely outside the control of the organization, would have to be
involved.
Technical feasibility The main technical risk associated with the knowledge system,
according to the above analysis, is how to deal with the gray aspects of decision-making
(improper “cheat” or “fudge” data), or perhaps better, how to leave them to humans. These
provide very good examples of tacit knowledge in the organization, hard to explicate and
formalize in a computational fashion.
The alternative solution, focusing on improving archiving and reporting, did not appear
to pose technical risks, at least in the first stage (we mention in passing that some did surface later on).
Project feasibility For the knowledge-system solution, the technical risk combined with
the limited expected benefits in terms of time saved raises the question whether it is wise
to continue in the initially suggested direction.
On the other hand, a project targeted at archiving and reporting would need, as a first
step, to ensure participation and commitment from the various actors, or to downscale the
desired procedural changes to a more restricted and local level to start with.
Proposed actions Based on the organization model results, the best proposal obviously
is to redirect the project from a knowledge system for decision-making, to simplifying the
workflow and procedures related to archiving and reporting.
Therefore, it was proposed to refrain from building a knowledge system for decision
support, and to start working on the bottlenecks in archiving, which were now perceived,
as a result of the organization model study, as the most crucial ones in dealing with the
application backlog.
This case study shows how important it is to pay attention to organizational factors at
an early stage, and it shows the capability of the CommonKADS methodology to clarify
these factors in a step-by-step manner. This is even more pressing when the results turn
out to be different from what one expected at the beginning, as was the case here. However,
the experience described in the case study is not at all uncommon in practice. It points to an
important lesson of knowledge management: as knowledge in an organization
is often tacit, one should not be surprised when it turns out to be quite different from what
one initially expected.
Figure 3.7
Overview of the CommonKADS task model.
to zoom in on the features of the relevant tasks, the agents that carry them out, and on
the knowledge items used by the agents in performing tasks. All these aspects refine the
results from the organization model. For their description CommonKADS offers the task
and agent models. The outcome of this study is detailed insight into the impact of a knowl-
edge system, and especially what improvement actions are possible or necessary in the
organization in conjunction with the introduction of a knowledge system.
The notion of task, although important, has different connotations. As a commonsense
concept, a task is a human activity aimed at achieving some purpose. In the above organizational study
it has been viewed in the (not incompatible) sense of a well-defined subpart of a business
process. The notion of task has also emerged as a crucial one in the theory and methodology
of knowledge systems and of knowledge sharing and reuse. Later we will see that it is
a core technical concept in the modelling of expertise.
Thus, we need a link between the notion of task in the human and organizational sense
of the word, and the more information systems-oriented concept we will employ later on.
The CommonKADS task model serves as this linking pin between the organizational as-
pect and the knowledge-system aspect of a task. In this perspective, the following definition
is suitable.
A task is a subpart of a business process that:
represents a goal-oriented activity adding value to the organization;
handles inputs and delivers desired outputs in a structured and controlled
way;
consumes resources;
requires (and provides) knowledge and other competences;
is carried out according to given quality and performance criteria;
is performed by responsible and accountable agents.
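For tool builders, the clauses of this definition can be rendered as a simple data structure. The field names below are our own illustrative mapping of the definition, not an official worksheet layout:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task as defined above: a goal-oriented subpart of a business process."""
    name: str
    goal: str                                           # goal-oriented, value-adding activity
    inputs: list = field(default_factory=list)          # inputs handled
    outputs: list = field(default_factory=list)         # desired, controlled outputs
    resources: list = field(default_factory=list)       # resources consumed
    knowledge: list = field(default_factory=list)       # knowledge and competences required
    quality_criteria: list = field(default_factory=list)
    agents: list = field(default_factory=list)          # responsible, accountable performers

# Example instance: the decision-making task from the social security case.
decide = Task(
    name="decision-making",
    goal="decide whether a client qualifies for a benefit, and for what amount",
    inputs=["client data from intake", "applicable laws and rules"],
    outputs=["benefit decision"],
    knowledge=["social security regulations"],
    agents=["civil servant"],
)
```

Each field corresponds to one clause of the definition, which makes the definition directly checkable against a concrete task description.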
Table 3.7
Worksheet TM-1: Refined description of the tasks within the target process.
other knowledge about time aspects. Depending on the type of control, this as-
pect is commonly represented by means of either state diagrams (in case control is
dominated by external events or is strongly asynchronous) or by means of activity
diagrams (in case of (mostly) synchronous internal control). A quick introduction to
state diagrams can be found in Section 14.3.
Table 3.8
Worksheet TM-2: Specification of the knowledge employed for a task, and possible bottlenecks and areas for
improvement.

Note that most of the time these descriptions already exist, at least partially. We also
want to point out that the CommonKADS knowledge model exploits a similar multidimensional
view of knowledge modelling. These three dimensions are clearly reflected in the
items in the CommonKADS task model indicated as dependency/flow, objects handled,
and time/control. Hence, the CommonKADS task model provides an integrative link with
accepted standard methodology for information modelling and analysis.
Next, knowledge and competence is a key item in our task model, and
for this reason it is again modelled by means of a separate worksheet TM-2, presented
in Table 3.8. It constitutes a refinement of the data from worksheet OM-4 (Table 3.4) on
knowledge assets. As with the other worksheets, it is rather self-explanatory. It has a
highly important function, since it concentrates in detail on bottlenecks and improvements
relating to specific areas of knowledge. Hence, this analysis is not only worthwhile for
knowledge systems but is also a useful step in knowledge management in general, to achieve
superior use of knowledge by the organization.

Table 3.9
Worksheet AM-1: Agent specification according to the CommonKADS agent model.
Much of this information can be obtained by simple and direct questions to the people
involved. Examples are: How often do you carry out this task? How much time does
it take? Who depends on your results? Whom do you talk to in carrying out this task?
What do you need in order to start with it? What happens to the organization if it goes
wrong? What may go wrong, and what do you do then? How do you know that the task
is successfully concluded? Such questions are best asked with the help of concrete task
examples. With the answers you can write down a task scenario. Scenario techniques are
very helpful in getting a practical understanding, and later on they are useful in validating
the information and setting up a system test plan.
The above steps in the impact and improvement study were dominated by the viewpoint
of the tasks to be carried out. It is also useful to consider the information from the rather
different perspective of individual agents (staff workers; sometimes information systems
can also be viewed as agents). This is done in the CommonKADS agent model, displayed
in Table 3.9 by means of a rather straightforward worksheet AM-1. The purpose of the
agent model is to understand the roles and competences that the various actors in the orga-
nization bring with them to perform a shared task. The information contained in the agent
specification is for a large part a rearrangement of information already existing in previ-
ous worksheets. However, the present arrangement may be useful to better judge impacts
and organizational changes from the viewpoint of the various agents. It also yields input
information for other CommonKADS models, especially the communication model.
To show graphically how agents participate in (new) tasks carried out by a (new) sys-
tem, it is useful to construct a UML use case diagram. This diagram shows what services
are provided by a “system” to the agents involved. Use case diagrams are helpful when
presenting potential solutions to stakeholders. A brief introduction to use case diagrams can
be found in Section 14.5.

Table 3.10
Worksheet OTA-1a: Checklist for the impacts and improvements decision document (Part I).
Finally, with the worksheets of Tables 3.7–3.8 and Table 3.9 we have collected all
information related to the CommonKADS task and agent models (see also Figure 3.7). The
remaining step is to integrate this information into a document for managerial decision-
making about changes and improvements in the organization. For this purpose, Tables 3.10
and 3.11 present a complete checklist (worksheet OTA-1).
Proposed actions for improvement are accompanying measures, but are not part of the
knowledge-systems work itself. However, they are highly important for ensuring com-
mitment and support from the relevant players in the organization. The major issues for
decision-making here are:
Are organizational changes recommended, and if so, which ones?
What measures have to be implemented regarding specific tasks and the workers
involved? In particular, what improvements are possible regarding the use and availability
of knowledge?
Do these changes have sufficient support from the people involved? Are further facilitating
actions called for?
What will be the further direction of the knowledge-system project?

Table 3.11
Worksheet OTA-1b: Checklist for the impacts and improvements decision document (Part II).
This completes the CommonKADS organization-task-agent analysis. Even without
building knowledge systems, it is likely that this analysis brings to the surface many mea-
sures and improvements that lead to better use of knowledge by the organization.
Table 3.12
Worksheet OM-1: problems, organizational context and possible solutions for the PARIS ice-cream project.
In the preparatory phase of the PARIS study, a variety of potential application ideas were
listed by the project team, including different types of knowledge systems (design, as-
sessment, manufacturing), ways to prevent loss of skills due to retirement, and new func-
tionalities of existing conventional IT support systems. As a fundamental principle, any
knowledge project must have active and direct support from the (in this case, ice-cream)
business itself. Therefore, initial and open interviews were held with various business ex-
ecutives in order to establish the main directions of the PARIS project and the business
support for them.
Figure 3.8
A vision for ice-cream knowledge management, seen as an organizational learning feedback loop.
OM-2: description of focus area in the organization After the initial phase of stake-
holder interviewing, studies were done by surveying several ice-cream factories in different
countries of Europe, as well as in the United States. The selected focus area, new product
development, appears to be a key business process for continued market success. The
second worksheet, OM-2 in Table 3.13, describes some results of this part of the study. A
general picture of the organizational structure of an ice-cream factory is given in Figure 3.10.
Product development is the focal business process, and it has a number of major sequential
stages running from product idea generation to product postlaunch review. However, it
is clear from the “people” slot that many different functional areas are involved in product
development, even including legal staff. These strong cross-functional aspects are very
relevant to devising knowledge management actions.

Figure 3.9
The PARIS stakeholder-driven project approach.
Table 3.13
Worksheet OM-2: description of variant organization aspects of an ice-cream factory.

Figure 3.10
A typical organization structure of an ice-cream company.

OM-3: breakdown of the product development business process This worksheet OM-3,
Table 3.14, describes the main tasks, giving a breakdown of the “process” slot in the
previous worksheet OM-2. Each task listed is concluded by a go/no-go decision before the
next product development task may commence. For any new product introduction on the
market (many dozens every year), all tasks must have been concluded successfully, in the
indicated sequential order. All tasks are in their own way knowledge intensive, especially
the feasibility phase and, to a lesser extent, the planning phase. This is because the subtasks
in these phases necessitate a more experimental approach and therefore tend to be iterative;
the more knowledge is applied in the first cycle, the fewer iterations are needed. We note
that here only the top-level task breakdown of the product development process has been
presented. Every task mentioned in worksheet OM-3 is in its turn decomposed into a dozen
or so subtasks. In the PARIS project they were specified in similar worksheets, but in this
case study we can only consider a small fragment of them.

Table 3.14
Worksheet OM-3: top-level task breakdown for the ice-cream product development process.
OM-4: example knowledge assets in the ice-cream domain The fourth worksheet,
OM-4, gives a description of the main knowledge assets in the part of the organization we
are focusing on. In Table 3.15 a small fragment is shown of the knowledge asset analysis
related to the feasibility task within the product development process. The task typically
requires knowledge from a number of different areas, and moreover it is performed by staff
from different departments. Thus, communication and sharing of knowledge is highly im-
portant, the more so because ice-cream products are becoming increasingly complex. This
Table 3.15
Worksheet OM-4: an excerpt from the knowledge assets analysis.

Table 3.16
Worksheet OM-5: first decision document, comprising various feasible knowledge-improvement scenarios for
product development.
ing scenario (make explicit the effects of processing on product properties), an optimization
scenario (provide procedures to optimize one or more parameters in the whole product
formulation), a supply chain scenario (design a system to follow one ice-cream brand through
the whole process chain from raw materials sourcing to final product storage and distribution),
a knowledge transfer scenario (create methods for quicker dissemination of research
knowledge to the business units), and so on.
As an outcome of this part of the study, a decision was taken to further refine, as-
sess, and prioritize the suggested knowledge-improvement scenarios, and to select a first
knowledge module for rapid development and demonstration. This further detailing and
decision-making was done on the basis of task/agent modelling.
From the ice-cream organization model study it became clear that all tasks in product de-
velopment are knowledge-rich, but this conclusion turned out to be particularly strong for
the feasibility phase task within the product development process. Therefore, we consider
the task model for this feasibility task in greater detail. The task model has two associated
worksheets. The first, TM-1, gives a refined task decomposition and analysis. The second
worksheet, TM-2, takes a closer look at the knowledge items involved in the task. Both
worksheets are similar to, but more detailed than, the corresponding worksheets OM-3 and
OM-4 of the organization model.

Table 3.17
Worksheet TM-1: analysis of the ‘feasibility phase’ task within the ice-cream product development business
process.
TM-1: business task decomposition and analysis Worksheet TM-1 in Table 3.17
zooms in on the feasibility task, numbered 2 in the product development process breakdown
of worksheet OM-3 (Table 3.14). The task structure of the feasibility phase is depicted in
Figure 3.11, which again brings out the cross-functional nature of ice-cream product de-
velopment.
TM-2: detailed knowledge bottleneck analysis In the task model we also take a closer
look at the knowledge assets involved in the task. Worksheet TM-2 is used for this purpose
(it speaks of knowledge items, which are just further detailed knowledge assets of smaller
grain size; there is no principal difference). In this worksheet we characterize a knowledge
item in terms of attributes related to its nature, form, and availability. In the feasibility
phase task, many different knowledge items are involved. We already saw a few examples
in worksheet OM-4 (Table 3.15). For every knowledge item in a task, a separate worksheet
TM-2 is needed. In Table 3.18 we show one instance of this worksheet for the knowledge
item “consumer desires.”

Figure 3.11
Flow diagram for the subtasks of the feasibility phase task within ice-cream product development.
As remarked previously in this chapter, the agent model rearranges organization and
task information from the perspective of the implications for a specific agent or actor. For
space reasons, we do not discuss the ice-cream agent model here. Instead, based on the
TM-1 and TM-2 task model results it was possible to rank and prioritize the different
knowledge-improvement scenarios listed in worksheet OM-5 (Table 3.16). The results of
the scenario comparison are given in Table 3.19.
Table 3.18
Worksheet TM-2: knowledge item characterization for “consumer desires”.
in worksheet OTA-1, underpinned by the organization, task, and agent models, and related
support documentation (interviews, data, reports), the CommonKADS context analysis is
concluded, and ready for decision-making. In the ice-cream case study, the context analysis
documents were complemented by a follow-up draft project charter and contract (see the
final part of Figure 3.9). These summarized the impacts of the proposed knowledge system
module on the organization from a business perspective, in a form suitable for presentation
to and assessment by the project stakeholders.
Table 3.19
Comparison of knowledge-improvement scenarios in the ice-cream case, based on task and knowledge asset
analysis.
Table 3.20
Worksheet OTA-1: summary of organizational recommendations and actions in the ice-cream case.
Some PARIS afterthoughts After this feasibility and improvement study, the decision
was taken to build a knowledge system module as proposed. The focus of this PARIS
system was on ice-cream processing knowledge for product developers, and several trials
were run with end users. A case study confirmed that the system could lead to a reduction
of needed tests in product development, thus saving time in line with the original goal. A
less anticipated result of PARIS was that less experienced developers were quite pleased
with the system, as a result of having so much “knowledge at your fingertips.” In hind-
sight, one of the knowledge engineers involved in the PARIS project concluded that the
fundamental stakeholder-driven approach was a crucial choice. No system project can do
without organization analysis and support if it intends to be successful. One of the conclusions
drawn from the PARIS project is that this stakeholder-oriented approach must be
proactively continued during systems design and even after installation and handover.
After all, organizations are dynamic entities, people regularly move to different jobs, and so
the organizational support for IT systems activities must be actively maintained all along
the way.
1a. identifying problem/opportunity areas and potential solutions, and putting them into
a wider organizational perspective;
1b. deciding about economic, technical, and project feasibility, in order to select the
most promising focus area and target solution;
The CommonKADS organization model provides the tool for this scoping and feasibility
analysis. Subsequently, an impact and improvement study for the selected target
area and solution is undertaken:
2a. gathering insights into the interrelationships between the task, the agents involved,
and the use of knowledge for successful performance, and what improvements may
be achieved here;
2b. deciding about organizational measures and task changes, to ensure organizational
acceptance and integration of a knowledge-system solution.
As tools for this part of the analysis, CommonKADS offers the task and agent models.
Building all of these models is done by following a series of small steps supported by
practical and easy-to-use worksheets and checklists. In this way, a comprehensive picture of
how an organization uses its knowledge is built up. Constructing such a knowledge atlas
of the organization starts with an understanding of the broader organizational context.
Then, we progressively zoom in on the promising knowledge-intensive organizational processes,
guided by previous modelling results all along the line. This enables, as well as
requires, flexible knowledge project management. A pictorial overview of the process of
organizational context modelling is given in Figure 3.12.
Accordingly, organization and task analysis constitutes in our opinion a key profes-
sional competence of knowledge engineers. From practical experience there are some
good guidelines for the process of carrying out an organization and task study:
Identify the stakeholders (knowledge providers, users, decision-makers) of your
project at an early stage. Interview them, also separately. Learn to understand their
perspectives and interests, to the extent that you can explain them to others.
Consider the support that exists in the organization for proposed knowledge solu-
tions. Clearly differentiate the interests from different stakeholder groups. You may
do this by making an evaluation matrix, with the list of stakeholder groups as one
dimension, and the list of possible knowledge solutions as the other. Explicate for
yourself the different criteria that each stakeholder group will use to judge, support,
or resist proposed changes and solutions. Be sensitive to the fact that some of this
may be tacit: the unwritten rules of the game in the organization.
Ask concrete, factual questions in interviews to clarify how tasks are carried out
and what they require. Ask for and use concrete examples. You really understand a
business process or task if you are able to write a script or scenario for it.
Business process analysis is an important activity in knowledge projects, because
here often lies the key to improvements. Process models can be expressed understandably
by means of various types of flow diagrams or charts, such as IDEF diagrams
or UML activity diagrams. Distinguish between the primary process, leading
to the main product or service of an organization, and secondary processes that have
a support role.
It is very helpful to indicate in process models what subprocess is carried out by
what part of the organization. This can be achieved by making a matrix where the
subprocesses are put along the horizontal axis, and the organization subparts along
the vertical axis. Subsequently adding the flow connections between subprocesses
then shows very clearly the working relationships between the subparts of the organization
in the overall process.

Figure 3.12
A road map for carrying out knowledge-oriented organization and task analysis.
A similar matrix technique is helpful in evaluating how big or small the support
within the organization is for alternative knowledge solutions. Put the organization
subparts or stakeholders on the horizontal axis, and the evaluation criteria (as pro-
posed by different stakeholders) on the vertical axis. Then put in the matrix cells the
associated score, e.g., on a five-point scale (1 = very much against, 3 = neutral, 5 =
strongly in favor). Such an evaluation matrix explicates very well the different views
on alternative solutions and simplifies their ranking. An example of this technique is
presented in a published CommonKADS case study (Post et al. 1997). Include the
existing situation as it is the main yardstick for comparison for most people.
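A minimal sketch of this evaluation-matrix technique; the stakeholder groups, candidate solutions, and scores below are invented purely for illustration:

```python
# Evaluation matrix: stakeholder groups x candidate knowledge solutions,
# scored on the five-point scale above (1 = very much against ... 5 = strongly in favor).
# All names and scores are invented placeholders.
solutions = ["existing situation", "knowledge system", "workflow redesign"]
scores = {
    "management":          {"existing situation": 3, "knowledge system": 4, "workflow redesign": 5},
    "intake staff":        {"existing situation": 4, "knowledge system": 2, "workflow redesign": 4},
    "computer department": {"existing situation": 3, "knowledge system": 3, "workflow redesign": 2},
}

def rank(solutions, scores):
    """Rank solutions by total support across all stakeholder groups, highest first."""
    totals = {s: sum(row[s] for row in scores.values()) for s in solutions}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for solution, total in rank(solutions, scores):
    print(f"{solution}: {total}")
```

Summing per column gives a first ranking; in practice one would also inspect the individual cells, since a solution with a high total but one strongly opposed stakeholder group may still fail.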
Focus on the added value of what you are doing. Always ask yourself the question:
What difference would it make if we did this and this? Prioritize the things that give
the most value with the least effort. Be on the lookout for small but visible results,
also at an early stage.
Keep it simple. Organization and task analysis is a vast area. The CommonKADS
steps and methods give a good framework, but you should not do everything. Pick
out the steps and pieces that are most useful to you in your project. Use the rest
of the CommonKADS methodology as a checklist so that you don’t miss important
things. This selective approach is a cornerstone of CommonKADS project management,
discussed later in Chapter 15.
The results of the analysis as described in this chapter provide important inputs to
other CommonKADS models, namely, the communication model (especially the agent
information) and the knowledge model (in particular the task structure). In addition, the
techniques and results of the present analysis can be imported to activities outside the
knowledge-systems area. We have indicated the integrative links with quality assurance,
process improvement, and conventional information-systems analysis. In the last case,
for example, the task model provides a top-level information model (covering information
object structure, function, and control), as we find it in information engineering and object-
oriented methodologies.
Finally, the analysis in this chapter is extremely worthwhile in itself. Far beyond
knowledge systems, it offers many practical insights into knowledge management in gen-
eral, to achieve higher value and leverage from the knowledge in the organization.
The CommonKADS model for knowledge-oriented organizational analysis was first de-
veloped by de Hoog et al. (1996). A full-blown and instructive case study not discussed in
this book can be found in Post et al. (1997). The practical and useful worksheet techniques
we present in this book are based directly on the further developments by the knowledge
management unit of Unilever.
The CommonKADS approach intentionally combines and integrates ideas coming
from various areas in organizational analysis and business administration. It has, for ex-
ample, been influenced by soft systems methodology (Checkland and Scholes 1990), espe-
cially in its thinking on how to come to a clear and agreed picture of what the real problems
and opportunities in an organization are. In this regard, it is also useful to consult litera-
ture on organizational learning, such as Argyris (1993). A good reader on many aspects
of organizational strategy is Mintzberg and Quinn (1992). A standard text on organizational
culture is Schein (1992). Interesting reading for knowledge engineers and managers is
Scott-Morgan (1994), showing that not only is knowledge often tacit, but also that there
are many social rules for decision-making and management within organizations.
CommonKADS aims to integrate organization process analysis and information anal-
ysis. In many knowledge projects they are very hard to separate anyway. Practical ap-
proaches to business process modelling and reengineering are proposed, e.g., in Johansson et al. (1993) and Watson (1994). The latter makes a clear link between thinking on
business process reengineering and improvement, and total quality management which is
reviewed concisely in Peratec (1994). Currently, most information systems methodologies
are very limited in their consideration of wider organizational feasibility and benefits as-
pects. This still even holds for the very recent object-oriented approaches (Eriksson and
Penker 1998). One of the very few exceptions is James Martin’s Information Engineering
approach (Martin 1990).
In job, task, and workplace analysis, there is, of course, also much existing work rele-
vant to knowledge engineering and management, from the areas of organizational behavior
(Harrison 1994), human resource management (Fletcher 1997), and ergonomics (Kirwan
and Ainsworth 1992). Ideas and techniques from these areas you will find reflected in
the CommonKADS task and agent modelling. All in all, CommonKADS contains a state-
of-the-art organization and workplace analysis method, with a special emphasis on the
knowledge aspects and on its integration with modern information modeling.
4
Knowledge Management
4.1 Introduction
Organization and task analysis are knowledge-engineering activities that directly hook up
with business administration and managerial aspects. A recent field that has emerged in
business administration is knowledge management. It takes knowledge as a central subject
for organizational decision making in its own right, and attempts to deal with the manage-
ment control issues regarding leveraging knowledge. In this chapter, we give a brief sketch
of some central concepts in knowledge management and indicate how they are related to
features of the CommonKADS methodology. Understanding these basics is important for
knowledge engineers and system builders, because knowledge engineering and knowledge
management touch or even overlap each other at several points.
[Figure: a two-by-two matrix of knowledge conversion, from tacit or explicit knowledge to tacit or explicit knowledge; the quadrants are socialization (tacit to tacit), externalization (tacit to explicit), combination (explicit to explicit), and internalization (explicit to tacit).]
Figure 4.1
Nonaka’s model of the dynamics of knowledge creation, built upon the distinction between explicit and tacit
knowledge.
1. from tacit to tacit knowledge (= socialization): we can teach each other by showing
rather than speaking about the subject matter;
2. from tacit to explicit knowledge (= externalization): knowledge-intensive practices
are clarified by putting them down on paper, formulating them in formal procedures,
and the like;
3. from explicit to explicit knowledge (= combination): creating new knowledge by combining and restructuring existing pieces of explicit knowledge;
4. from explicit to tacit knowledge (= internalization): explicit knowledge is absorbed through practice and experience, becoming part of a person’s tacit knowledge.
[Figure: the knowledge-value chain as a sequence of activities: identify, plan, acquire/develop, distribute, foster use, control quality, maintain, and dispose.]
Figure 4.2
Activities in knowledge management and the associated knowledge-value chain.
According to these authors, organizational knowledge creation continuously needs all four
types of knowledge production. The aim of knowledge management is to properly fa-
cilitate and stimulate these knowledge processes, so that an upward, dynamic spiral of
knowledge emerges. In such a view, knowledge engineering as discussed in this book is
a methodology especially useful in “externalization,” that is, converting tacit into explicit
knowledge. This is a unique feature of knowledge engineering, because there is hardly
any other mature scientific methodology capable of externalizing tacit knowledge. Also,
the combination of knowledge is well supported in knowledge engineering, e.g., through
libraries of reusable task and domain models. The importance of tacit knowledge is nowa-
days widely acknowledged in knowledge engineering and management.
In managing knowledge as a resource, a number of basic activities are distinguished by many authors (see Figure 4.2).
Identify internally and externally existing knowledge.
Plan what knowledge will be needed in the future.
Acquire and/or develop the needed knowledge.
Distribute the knowledge to where it is needed.
Foster the application of knowledge in the business processes of the organization.
Control the quality of knowledge and maintain it.
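The six activities above form a chain that can be rendered as a simple ordered structure. The sketch below is only a didactic rendering of the list; the data structure and the cyclic wrap-around are our own additions, not CommonKADS notation:

```python
# The knowledge-value chain of Figure 4.2, rendered as an ordered list
# of (activity, description) pairs. Activity names follow the text.
KNOWLEDGE_VALUE_CHAIN = [
    ("identify", "identify internally and externally existing knowledge"),
    ("plan", "plan what knowledge will be needed in the future"),
    ("acquire/develop", "acquire and/or develop the needed knowledge"),
    ("distribute", "distribute the knowledge to where it is needed"),
    ("foster", "foster the application of knowledge in the business processes"),
    ("control/maintain", "control the quality of knowledge and maintain it"),
]

def next_activity(current):
    """Return the activity following `current`, wrapping around cyclically."""
    names = [name for name, _ in KNOWLEDGE_VALUE_CHAIN]
    return names[(names.index(current) + 1) % len(names)]

print(next_activity("distribute"))  # foster
```

The wrap-around reflects that the chain is executed repeatedly rather than once.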
Figure 4.3 sketches the wider context of knowledge and its management within the orga-
nization. As outlined previously, knowledge is a prime enabler to successfully carry out
the business processes within the organization, which in turn create value for the recipi-
ents of its products and services. The formulation of a knowledge-management strategy
follows the opposite, outside-in direction. It starts by considering the value-creation goals
of the organization, and how this value is delivered by the organization’s business pro-
cesses. Knowledge assets are those bodies of knowledge that the organization employs
in its processes to deliver value. The knowledge management question then is what ac-
tions are useful for increasing the leverage of the knowledge underlying these processes.
Knowledge engineering as discussed in this book is one of the instruments available for
this purpose.
A very wide range of managerial actions to enhance the flow and leverage of knowl-
edge are conceivable. Many case studies, such as those done in industries in Japan, show
the importance of creating multifunctional and cross-disciplinary teams to build a richer
[Figure: within its environment, the organization runs knowledge-supported processes that create value; knowledge management and the KM strategy are formulated outside-in, from value creation back to the processes and the knowledge underlying them.]
Figure 4.3
Knowledge management in relation to the business processes and value creation by the organization.
knowledge base for innovative product design. In some cases, knowledge is concentrated
within special expertise centers in order to achieve a sufficient critical mass, e.g., in emerg-
ing advanced technology areas. In other cases, knowledge is spread out, by reallocating
specialist knowledge from headquarters to small local offices by means of decision support
systems: this has been done, for example, by banking and insurance companies in Europe,
in order to better and more quickly serve the local customer with financial services such
as loans and mortgages. Research organizations rethink and redesign their “knowledge logistics,” seeking new ways of transferring their knowledge to target groups and taking advantage of the new opportunities for attractive visualization of information on the Internet
and its World Wide Web. Here, information gathering is supported by intelligent software
agents that assist us as a kind of knowledge broker. US-based internationally operating
enterprises have installed knowledge repositories, for example, in the form of a distributed
database of projects carried out and lessons learned, in order to strengthen worldwide man-
agement consulting. Other forms of organizational memory enhancing knowledge sharing
exist in libraries of reusable model fragments, information, and software components, to
facilitate assembly of new information systems (this book devotes a special chapter to this
topic), speed up engineering design studies, and reduce time to market. Automotive com-
panies have created new knowledge feedback loops by organizing special regular meetings
with their car dealers and customers, the results of which are then used in car redesign.
Experiences with knowledge management have produced a number of general lessons about knowledge-based thinking. Here are a number of them.
The new truism: knowledge is a key asset, but it is often tacit and private. The
knowledge manager’s challenge is to deal with the fact that knowledge is an organi-
zational asset, and at the same time mostly resides in individual people. Moreover,
in contrast to assets like plants and buildings, human knowledge assets are mobile
and may easily walk away at 5 o’clock (to their home or to the nearest competi-
tor). Knowledge management and engineering actions should therefore not have a
mechanistic or bureaucratic nature. Instead, they have to be people-oriented.
Knowledge is not what you know but it is what you do! The notion of asset can
be a bit deceiving, as it has a passive “just-sitting-there” connotation like a plant. In-
stead, we repeatedly stress the nature of knowledge as a potential for action. Knowl-
edge can realize its value only when it is used. What knowledge is depends on the
context of use.
Creating knowledge pull instead of information push: knowledge management
has an outside-in, value, and process focus. The information society and its new
technological capabilities have a tendency to overload us with information. The in-
formation society may well begin to develop the signs of a new disease: information
infarction. Knowledge management needs to counteract this danger by introducing
selectivity and enhancing focus, a point already discussed (cf. Figure 4.3). Basic
communication of information is not sufficient, and may even lead to overload, if it
is not supplemented with goal-oriented sharing of experience and expertise. Knowl-
edge management often has a bottom-up orientation in order to become practically
successful, creating and sustaining pull derived from ongoing application project
needs.
Knowledge transfer is not just handing over something: there is no such thing
as a knowledge-burger. Knowledge has traditionally been viewed as an attribute
of competent people, rather than as an entity in itself. The latter view (that we also
adhere to in this book) is quite recent, linked as it is to reflections about the impact
of the Information Age. It is a step forward, but the knowledge-as-a-substance view
also has its limitations and dangers. Knowledge is not like a hamburger you can
just produce at one place and hand over at another. Many failed knowledge and
technology transfer projects are witness to the fact that you cannot treat knowledge
as a thing you throw over the wall.
The knowledge exchange mechanism: knowledge sharing = communication +
knowledge recreation. Much more appropriate than a simplistic transport or sender-
receiver view on knowledge transfer, is the idea of knowledge sharing. Transfer is
better thought of in terms of coproduction or comakership of knowledge. This is
reason to stress the importance of multifunctional and multidisciplinary teamwork
in knowledge-intensive organizations. Similar experiences are reported by so-called
virtual organizations, where different companies at different locations form a net-
work to achieve a joint goal. In knowledge engineering, experience has led to dis-
carding the old idea that knowledge can be “mined” as jewels out of the expert’s
head. Rather, knowledge engineering is a constructive and collaborative activity in
which modelling of knowledge is central.
Knowledge management is about facilitating knowledge sharing by people. It is
about increasing their connectivity. This is what you will hear many experienced
knowledge managers say. Simple bottom-up measures will often do the job. Most
see knowledge management as a lightweight activity that balances soft and hard as-
pects. It has a facilitator role, helping to create knowledge pull, instead of installing
rigid structures ultimately giving rise to information overload.
Although this only sketches a high-level picture, it does give the general flavor of what
knowledge management is.
[Figure: two-level diagram. The knowledge-management level, driven by organizational goals and viewing knowledge as a resource in a value chain, issues knowledge-management actions upon, and receives reported experiences from, the knowledge object level, which consists of knowledge assets, organizational roles, and business processes.]
Figure 4.4
Knowledge management, like other management tasks, can be seen as a metalevel activity that acts on an object
level.
The aim of knowledge management is to ensure that the right knowledge is available:
at the right time;
at the right place;
in the right shape;
with the needed quality;
against the lowest possible costs.
Knowledge as a resource has certain properties which make this management task
rather different from managing physical, tangible resources (see Wiig et al. (1997b) for
an enumeration of these properties). This justifies the existence of a separate discipline
of knowledge management and knowledge engineering. Knowledge management initiates
and executes knowledge-management actions which operate on the knowledge object level,
consisting of knowledge assets, organizational roles, and business processes. It monitors
the achievements through reports and observations.
To make knowledge management a viable enterprise, more flesh must be added to the
skeletal model in Figure 4.4. This means describing a process model for the management
level and an “object model” for the object level. Note that this is very similar to what
[Figure: a cycle of three activities: CONCEPTUALIZE (identify knowledge, analyse strengths/weaknesses), REFLECT (identify improvements, plan changes), and ACT (implement changes, monitor improvements).]
Figure 4.5
Knowledge management consists of a cyclic execution of three main activities: conceptualize, reflect, and act.
[Figure: agents participate in a business process and possess knowledge assets, which the business process requires. Annotations: the organization model (worksheet OM-4) gives a coarse-grained description of the knowledge assets (form, nature, time, location); the task model (worksheet TM-2) records knowledge bottlenecks; the knowledge model gives a fine-grained knowledge specification.]
Figure 4.6
Knowledge-management actions are defined in terms of three objects: agents that possess knowledge assets and
participate in the business process. The notes indicate which parts of the CommonKADS models describe these
objects.
In Figure 4.6 the three components making up the knowledge object level — agents, business processes, and knowledge assets —
are shown. The knowledge-management actions indicated in Figure 4.4 will effect changes
in one or more of these components: in practice most actions will affect all three. These
actions will aim at improvements in one or more of the quality criteria for resources mentioned above. If we take the housing application, which will be elaborated in Chapter 10,
we can see the building of that knowledge system as a knowledge-management action
which will increase, for example, the quality of the knowledge and its availability in terms
of time and place. The system will change the agents (some assets move from people to
software, new agents are introduced), the business process (the way requests are handled),
and features (form, nature) of the knowledge asset (housing allocation knowledge).
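The effect of such an action on the object level can be sketched in a toy model. All class, agent, and asset names below are our own illustration of the housing example, not CommonKADS notation:

```python
# A toy model of the knowledge object level: agents possess knowledge
# assets and participate in a business process. A knowledge-management
# action ("build a knowledge system") changes all three components.
from dataclasses import dataclass, field

@dataclass
class KnowledgeAsset:
    name: str
    form: str  # e.g., "in people's heads", "software"

@dataclass
class Agent:
    name: str
    assets: list = field(default_factory=list)

@dataclass
class BusinessProcess:
    name: str
    participants: list = field(default_factory=list)

# Before the action:
allocation = KnowledgeAsset("housing allocation knowledge", "in people's heads")
clerk = Agent("housing clerk", [allocation])
process = BusinessProcess("handle housing requests", [clerk])

# The action changes the agents (a software agent is introduced),
# the asset's form, and the business process (a new participant).
system = Agent("housing knowledge system", [allocation])
allocation.form = "software"
process.participants.append(system)

print([a.name for a in process.participants])
```

Even this small sketch makes visible that one action touches agents, process, and asset at the same time, which is exactly the point made in the text.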
A closer look at this simple housing example shows that the components affected by
the knowledge-management actions (the object level) to a large extent coincide with what
is modelled by the CommonKADS models addressing the context of a knowledge sys-
tem. This is indicated in Figure 4.6 by the notes attached to agents, business process, and
knowledge assets. They refer precisely to those elements from the CommonKADS model
suite which can be linked to knowledge management. The organization model will show
the resulting change in people and (possibly) structure; the agent model will deal with new
agents; the organization model and task model will reflect the change in the process and
the resolved knowledge bottlenecks. The housing application does not lead to a new fine-
grained knowledge specification, but in many cases this will happen. This in turn can be
part of Nonaka’s model of knowledge creation: the move from tacit to explicit knowledge (see
Figure 4.1). This implies, in our terminology, that the knowledge has changed its form.
What has been said here emphasizes the seamless linking of knowledge management
and knowledge engineering. However, it should be kept in mind that, although linked, they
are still essentially different because they are attached to organizational roles with a dif-
ferent scope, purview, and discretion. Confusing knowledge management and knowledge
engineering is not a good idea. Building knowledge systems (whether based on explicit
knowledge representation or on machine-learning techniques) is not to be portrayed as
coinciding with knowledge management.
In our view, knowledge systems are some of the tools for knowledge management.
They offer potential solutions to knowledge resource problems detected, analyzed, and
prioritized by knowledge management. The resulting action, building a knowledge system,
is “delegated” to knowledge engineering. This is visualized in Figure 2.6 where there is
a clear distinction between the “knowledge manager” and the “project manager.” If you
wonder, in Chapter 15, where the management reports from the project manager are sent to, then the answer is: the knowledge manager. These are the reports shown in Figure 4.4.
However, a strong point of the CommonKADS methodology is that the models indi-
cated in Figure 4.6 can be shared between the knowledge manager and the project man-
ager, and in a wider sense between all agents shown in Figure 2.6. In this way they create a
common ground, thus counteracting one of the most destructive tendencies in any human
endeavor: not understanding what the other means. In addition, it reduces duplicated work: parts of the models can be filled in either from the knowledge-management side or from the knowledge-engineering side, and since both sides need them for their work, whatever has already been filled in by one side need not be redone by the other.
Thus, the main link between knowledge management and knowledge engineering is
found at the knowledge object level. For the management level the similarities are far less,
which is to be expected, since managing knowledge is definitely not the same as managing
a knowledge-system development project.
The main model for the management-level activities in knowledge management is the
cycle depicted in Figure 4.5. Below we briefly discuss the three main activities in this
cycle.
4.5.2 Conceptualize
The main goals of the conceptualize activity are to get a view on the knowledge in the
organization and its strong and weak points. The first goal will be served by filling the
knowledge object level, while the second can be supported by bottleneck analysis based
on a closer inspection of the properties of the knowledge assets involved. In carrying out
this activity most of the guidelines from Chapter 3 can be applied. However, from the
knowledge-management perspective a few must be added.
Guideline 4-4: NEVER RELY ON A SINGLE SOURCE WHEN TRYING TO LINK KNOWLEDGE TO AGENTS
Rationale: People don’t always know what other people know. A simple technique
for dealing with this issue is network analysis: asking people where they turn to
when they have a problem they can’t solve.
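A minimal sketch of such a network analysis (the survey answers below are invented): collect who-asks-whom answers and count how often each person is named, which points to likely knowledge carriers that a single source might miss.

```python
from collections import Counter

def knowledge_hubs(answers):
    """Count how often each person is named as someone others turn to.

    `answers` maps each person to the colleagues they say they consult
    when they have a problem they cannot solve themselves.
    """
    return Counter(name for consulted in answers.values() for name in consulted)

# Invented survey data for illustration.
answers = {
    "anna": ["carol"],
    "bob": ["carol", "dave"],
    "carol": ["dave"],
    "dave": [],
}
print(knowledge_hubs(answers).most_common(2))  # carol and dave are each named twice
```

People with a high count are candidate knowledge carriers; people named by nobody may still hold knowledge, which is precisely why a single source should never be trusted on its own.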
procedures one can achieve better insights into this area. As a minimum, require and
deliver justifications for opinions of the type, “This knowledge is indispensable to
the organization.”
4.5.3 Reflect
Guideline 4-8: AVOID THE TRAPDOORS OF “SOLVING THE WRONG PROBLEM” AND “SELECTING THE WRONG SOLUTION”
Rationale: Everybody knows it, but still... For some reason there is a tendency
to associate knowledge management with information technology, and this perni-
cious misconception spawns a bias toward solutions relying entirely on information
technology. Take a look from the other side; a simpler, more effective, and cheaper
solution might be there.
Rationale: Again, almost a truism, but the necessary companion of the previous
one. Life, and in particular organizations and knowledge, are too complex to believe
that one single measure will lead you into paradise. Don’t believe it when someone
tells you that your knowledge-sharing and -exchange problems can be solved by
installing program XYZ on your network. It probably will generate more problems
than it solves.
Rationale: The juiciest fruits are hardest to grasp. Only rarely will your improve-
ments also be the ones that are easiest to implement. When planning your improve-
ments be very aware of risks. There may be reasons to reject the preferred improve-
ments because the risks are too high. Keep a keen eye on unexpected side effects.
4.5.4 Act
Acting in the framework presented here means initiating the agreed-upon improvement
plans and monitoring their progress. As knowledge management is tangential to many
other management concerns in an organization, one should be very conscious of the bound-
aries of discretion involved. Knowledge management carries the seeds of becoming every-
thing, which of course in the end will reduce it to nothing. Bordering disciplines are human
resource management, knowledge engineering, information technology, and organizational
consultancy, to name a few.
A simple example can clarify where boundaries can be set. From a knowledge-
management perspective it may have been decided that in order to solve a knowledge
problem or grasp a knowledge opportunity, some of the personnel have to be trained in a
new knowledge area. In our view, organizing this training program (finding training staff,
scheduling courses, allocating personnel) is the job of the human resources department (or
if there is no such department, the person playing the role of human resources manager),
whereas monitoring the progress of the courses and the effects on the knowledge in the
organization belongs to the “act” activity. In the same vein, the relation with knowledge engi-
neering can be described; see again the differentiation in roles as shown in Figure 2.6. For
the “act” part of the knowledge-management cycle some guidelines can be formulated.
Guideline 4-12: GO FOR MEASURABLE OBJECTIVES
Rationale: The intangible nature of knowledge makes it easy to talk in vague terms
about results. However, proper monitoring can only be done when there are clear
yardsticks against which progress can be measured.
Guideline 4-13: THINGS DO NOT RUN THEMSELVES
Rationale: Assign clear responsibilities and give clear briefs. Check progress frequently. It is a mistake to believe that all is said and done once an action has been initiated.
Knowledge engineering as discussed in this book offers many useful concepts and
methods for knowledge management. To name a few:
Knowledge-oriented organization analysis helps to quickly map out fruitful areas for
knowledge-management actions. The methods presented are very suitable for quick
knowledge scans or audits, or for one-day workshops with responsible managers.
Task and agent analysis has proved to be very useful for clarifying knowledge bot-
tlenecks in specific areas. It is not uncommon that these turn out to be different
from the accepted wisdom in the organization. Techniques like these are relevant
to business process redesign and improvement where knowledge work is involved.
Because CommonKADS provides a gradual transition between business and infor-
mation analysis, this is also key to a better integration of information technology into
the organization.
Knowledge engineering places strong emphasis on the conceptual modelling of
knowledge-intensive activities. The often graphical techniques have proved to be
very useful in clarifying the major tacit aspects of knowledge, in a (nontechnical,
nonsystems) way enabling and stimulating fruitful communications with a variety
of people (managers, specialists, end users, customers) who often do not have a
background in information technology.
The accumulated experience of knowledge engineering shows that there are many re-
curring structures and mechanisms in knowledge work. This has, for example, led to
libraries of task models that are applicable across different domains. This approach
offers many useful insights into constructing the reusable information architectures
and software components that are increasingly needed in modern IT-based organiza-
tions.
Therefore, knowledge engineering has several different applications. The building of
knowledge systems is only one of them, albeit an important one. CommonKADS has also
been used in knowledge-management quick scans and workshops, in IT strategy scoping
and feasibility projects, and it further gives a sound support in the early stages of require-
ments elicitation and specification in systems projects. In all applications of knowledge
engineering, the conceptual modelling of knowledge at different levels of detail is a central
topic. This is the subject to which we now turn.
Knowledge management has received enormous attention over the last few years. This
has led to “guru” books like Stewart’s and Drucker’s (see Chapter 1), books and articles
focusing on guidelines and techniques, of which there are still only a few (Wiig 1996, Sveiby
1997, Edvinsson and Malone 1997, Wiig et al. 1997a, Tissen et al. 1998), books and
articles with case studies (too many to mention), and “old wine in new bottles” publications
(see, for example, the conference proceedings of PAKM ’96 (Wolf and Reimer 1996)).
The basic approach in this chapter is taken from van der Spek and de Hoog (1994) (see
also Wiig et al. (1997b), which was inspired by notions borrowed from CommonKADS).
Some of the first theoretical notions can be found in van der Spek and Spijkervet (1994) and
van der Spek and de Hoog (1994). The first explicit link between knowledge management
and CommonKADS can be found in Benus and de Hoog (1994).
The theory of organizational knowledge processes, built upon the distinction between
tacit and explicit knowledge, is discussed extensively in Nonaka and Takeuchi (1995),
which also contains several interesting case studies on knowledge creation in industrial
innovation processes. Some books out of the wave of recent general writings on knowledge
management from the perspective of business administration are Davenport and Prusak
(1998) and Tissen et al. (1998); see also the reading notes to Chapter 1. Many works
emphasize the value orientation that knowledge management should have. The concept
of the business-value chain was developed by Porter (1985). The knowledge-value chain
as sketched in this chapter was taken from Weggeman (1996). That study is part of a
collection containing a wide range of views from different fields concerning knowledge
management, including information technology aspects. For a discussion of the relation
between knowledge engineering and knowledge management, see Wielinga et al. (1997).
5
Knowledge Model Components
[Figure: two object classes, person (attributes age, income) and loan (attributes amount, interest), connected by a has-loan relation, with example items labeled as information and as knowledge.]
Figure 5.1
Two object classes in the loan domain with some corresponding information and knowledge items.
Knowledge can often be used to infer new information. To stay with the previous
example, a subclass link can be used to inherit information from the super-class to the
subclass. This generative property of knowledge has been used by some as the feature
distinguishing between knowledge and information, but in practice this criterion is hard to apply. For example, is a formula for computing the sales tax knowledge or information? In this book we take the
(somewhat simplified) position that there is no hard borderline between knowledge and
information. Knowledge is “just” complex information. In the remainder of this section
we discuss in more detail the nature of this complexity.
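The generative property mentioned above can be made concrete with a small sketch. The class names and attributes below are invented for illustration; the point is only that a subclass link lets us infer information for the subclass from its superclass:

```python
# A subclass link used to infer ("inherit") information: Mortgage is
# declared a subclass of Loan, so anything stated for loans in general
# also holds for mortgages, without restating it.
class Loan:
    requires_interest_payment = True

class Mortgage(Loan):
    # No statement about interest here: it is inferred via the subclass link.
    secured_by_property = True

print(Mortgage.requires_interest_payment)  # True, inferred from Loan
```

The single general statement on `Loan` generates a piece of information about every subclass, which is the sense in which knowledge can be used to infer new information.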
Before diving into the detailed issues related to modelling knowledge, let’s take a simple
example.
Consider a financial application concerned with providing loans to people. Two classes
for this domain are shown in Figure 5.1 together with some typical attributes. The figure
also illustrates the difference between information and knowledge. Information is typi-
cally that a person X has a loan Y. We would model this information with an information
type, in this case something like a has-loan relation between person and loan. The figure also contains three statements that we intuitively would call knowledge. For example, all persons applying for a loan should be at least 18 years old. Note that the intuitive definition of knowledge as “information about information” holds here: the statements tell us something about the information stated above. The knowledge fragments tell us something about persons and loans in general, and not just about particular person-loan instances.
Figure 5.2
CommonKADS moves away from the idea of one large knowledge base. Instead, we need to identify parts of the
knowledge base in which the knowledge fragments (e.g., rules) share a similar structure.
If we look a bit closer at the knowledge fragments in Figure 5.1, we can see that there
are patterns in there. For example, the two “rules” about the relation between the amount
of the loan and the height of the person’s income have the same basic structure. One of
the challenges for any knowledge-engineering methodology is to find appropriate ways of
modelling knowledge in a schematic way. We do not just want to list all the possible pieces
of knowledge, just as we do not list the contents of a database.
What we do not want is one large flat knowledge base containing all the rules. Instead,
we are striving for a fine-grained structure in which we divide the knowledge base up into
small partitions (e.g., rule sets) that share a similar structure (see Figure 5.2). This is a
requirement for any form of useful knowledge analysis, validation, and maintenance. In
this chapter we will see how this goal of structuring knowledge can be achieved.
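One minimal way to picture this partitioning (the rule-set names and rules below are invented, loosely following the loan example of Figure 5.1): group rules that share the same basic structure into small, named rule sets instead of one flat rule list.

```python
# A toy partitioned knowledge base: rules with a similar structure are
# grouped into small rule sets rather than one large flat rule list.
knowledge_base = {
    "applicant-requirements": [
        lambda person, loan: person["age"] >= 18,
    ],
    "amount-constraints": [
        # Both rules relate the loan amount to a threshold: a shared structure.
        lambda person, loan: loan["amount"] <= 4 * person["income"],
        lambda person, loan: loan["amount"] >= 1000,
    ],
}

def violated_rule_sets(person, loan):
    """Return the names of rule sets containing at least one violated rule."""
    return [
        name
        for name, rules in knowledge_base.items()
        if not all(rule(person, loan) for rule in rules)
    ]

applicant = {"age": 17, "income": 20000}
request = {"amount": 50000, "interest": 0.05}
print(violated_rule_sets(applicant, request))  # ['applicant-requirements']
```

Because violations are reported per rule set rather than per anonymous rule, this structure already hints at why partitioning helps analysis, validation, and maintenance.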
[Figure: the organization, task, and agent models delimit a knowledge-intensive task; the knowledge model provides the requirements specification for the reasoning functions, which feeds into the design model.]
Figure 5.3
Schematic view of the role of the knowledge model in relation to the other models.
reasoning task (e.g., assessment, configuration, diagnosis). The knowledge model does not contain any implementation-specific terms. These are left for the design phase.
It is essential that during analysis implementation-specific considerations are left
out as much as possible. For example, when we talk during analysis about “rules,” we
mean the rules that the human experts talk about. Whether these natural rules are actually
represented in the final system through a “rule” formalism is purely a design issue, and not
considered relevant during analysis. This clear separation frees the analyst from worries
about implementation-specific decisions. It requires, of course, that the analyst has
a means of knowing that the knowledge models she writes down are “designable.” This
issue is addressed in more detail in Chapter 11.
The knowledge model has a structure that is in essence similar to traditional analysis
models in software engineering. The reasoning task is described through a hierarchical de-
composition of functions or “processes.” The data and knowledge types that the functions
operate on are described through a schema that resembles a data model or object model.
The notations are, on purpose, similar to the ones found in other contemporary methods,
such as the ones used in UML. There are, of course, a number of crucial differences. At the
end of this chapter we include a special section discussing in detail the differences between
the CommonKADS knowledge model and analysis models in general software engineer-
ing. Experienced software engineers might first want to read that section before moving
on.
A knowledge model has three parts, each capturing a related group of knowledge struc-
tures. We call each part a knowledge category.
The first category is called the domain knowledge. This category specifies the domain-
specific knowledge and information types that we want to talk about in an application. For
example, the domain knowledge of an application concerning medical diagnosis would
contain definitions of relevant diseases, symptoms, and tests, as well as relationships be-
tween these types. A domain knowledge description is somewhat comparable to a “data
model” or “object model” in software engineering.
The second part of the knowledge model contains the inference knowledge. The infer-
ence knowledge describes the basic inference steps that we want to make using the domain
knowledge. Inferences are best seen as the building blocks of the reasoning machine. In
software engineering terms the inferences represent the lowest level of functional decom-
position. Two sample inferences in a medical diagnosis application could be a hypothesize
inference that associates symptoms with a possible disease, and a verify inference that iden-
tifies tests that can be used to ascertain that a certain disease is indeed the factor that causes
the observed symptoms.
The third category of knowledge is the task knowledge. Task knowledge describes
Figure 5.4
Overview of knowledge categories in the knowledge model. At the right some examples of knowledge items in a
medical diagnosis domain. (The figure shows three layers: task knowledge — task goals, task decomposition, task control — with DIAGNOSIS as a sample task; inference knowledge — basic inferences, roles — with hypothesize and verify as sample inferences; and domain knowledge — domain types, domain rules, domain facts — with Symptom, Disease, and Test as sample types.)
what goal(s) an application pursues, and how these goals can be realized through a decom-
position into subtasks and (ultimately) inferences. This “how” aspect includes a description
of the dynamic behavior of tasks, i.e., their internal control. For example, a simple med-
ical diagnosis application could have DIAGNOSIS as its top-level task, and define that it
can be realized through a repeated sequence of invocations of the hypothesize and verify
inferences. Task knowledge is similar to the higher levels of functional decomposition in
software engineering, but also includes control over the functions involved.
Figure 5.4 gives a brief overview of the three knowledge categories, as well as some
sample knowledge elements in each category. In the following sections, we discuss each
of the three categories in more detail.
As we will see in the next section, we model domain knowledge with a notation similar
to a UML class diagram. A difference is that in knowledge modelling we do not model
functions (i.e., operations, methods) within information objects. Thus, we will see that
the notion of concept is almost the same as a UML class, but without any operations. This
difference is due to the special role that functions (i.e., inferences and tasks) have within
knowledge modelling. The reader is referred to the final section of this chapter for a more
detailed discussion.
Knowledge Model Components 91
Knowledge base A knowledge base contains instances of the types specified in a do-
main schema. A major difference between a knowledge system and, for example, a
database application is that in database applications, one is, during analysis, seldom
interested in the actual facts that have to be placed in the database. In a knowledge
system, a knowledge base typically contains certain pieces of knowledge, such as
rules, which are of interest (albeit to a limited extent, as we will see later
on). In knowledge modelling we typically distinguish multiple knowledge bases
containing different types of knowledge (e.g., instances of different rule types).
In the remainder of this section we show how one can specify a domain schema and
a knowledge base. As illustrations we use examples derived from a simple application
concerning the diagnosis of problems with a car. Figure 5.5 shows in an intuitive fashion
some pieces of knowledge that are found in this domain.
The knowledge model provides a set of modelling constructs to specify a domain schema
of an application. Most constructs are similar to the ones encountered in modern O-O
Figure 5.5
Knowledge pieces in the car-diagnosis domain. (The figure shows numbered dependency arrows among nodes such as fuse inspection = broken, battery dial = zero, gas dial = zero, gas in engine = false, power = off, engine behavior = does not start, and engine behavior = stops.)
data models. In particular we follow as much as possible the notations provided by the
UML. A synopsis of the UML class-diagram notation can be found in Section 14.4. As
the UML description is written as a self-contained section, there is some overlap with the
descriptions in this chapter.
In addition to the class diagram, constructs are included to cover modelling aspects
that are specific to knowledge-intensive systems. In practice, the three main modelling
constructs are CONCEPT, RELATION, and RULE-TYPE. In addition, several other con-
structs are available such as SUPER/SUBTYPE OF and AGGREGATE/PART. A basic set
of constructs is introduced in this chapter. Chapter 13 describes advanced modelling con-
structs, and discusses issues such as reuse of domain knowledge.
Concept A CONCEPT describes a set of objects or instances which occur in the appli-
cation domain and which share similar characteristics. The notion of concept is similar
to what is called “class” or “object class” in other approaches. A difference with object-
oriented approaches is that we do not include functions (i.e., operations, methods) in the
concept descriptions.
Examples of concepts in the car domain could be a gas tank and a battery. Concepts
can be both concrete things in the world, like the examples above, or abstract entities such
as a car design.
Characteristics of concepts can be described in various ways. The simplest way is
to define an ATTRIBUTE of a concept. An attribute can hold a VALUE: a piece of in-
formation that instances of the concept can hold. These pieces of information should be
atomic, meaning that they are represented as simple values. Thus, a concept cannot have
an attribute containing an instance of another concept as its value. Such things have to be
described using other constructs (typically relations; see further on).
For each attribute, a VALUE TYPE needs to be defined, specifying the allowable values
for the attribute. Standard value types are provided such as boolean, number (real, integer,
natural), number ranges, and text strings, as well as the possibility to define sets of symbols
(e.g., “normal” and “abnormal”). The value type UNIVERSAL allows any value. A full
listing of standard value types can be found in the appendix. By default, attributes have
a cardinality of 0–1, meaning that for each instance an attribute can optionally store one
value. Other types of cardinality have to be defined explicitly.
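As an aside, the mapping from attributes with symbol value types to ordinary code is straightforward. The sketch below is our own illustration in Python, not the book's CML notation; the value set {zero, low, normal} for the gas-dial is assumed for the example:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# A value type defined as a set of symbols. The set {zero, low, normal}
# is assumed for illustration; in CML the value type is defined
# separately in the domain schema.
class DialValue(Enum):
    ZERO = "zero"
    LOW = "low"
    NORMAL = "normal"

# The concept gas-dial with a single attribute "value". The default
# cardinality 0-1 means the attribute may optionally hold one value,
# hence Optional with a default of None.
@dataclass
class GasDial:
    value: Optional[DialValue] = None

dial = GasDial()             # attribute not yet filled in
dial.value = DialValue.ZERO  # one atomic value, as attributes require
```

Note how the "atomic value" requirement shows up naturally: the attribute holds a simple symbol, never an instance of another concept.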
Two sample concept definitions with attributes are given in Figure 5.6. As can be
seen in this figure, we use both textual and graphical representations for knowledge-model
components. The textual representation is the “baseline,” and may contain details that are
not easy (or not necessary) to represent graphically.
Graphically, concepts are shown as a box consisting of two parts. The concept name
is written in bold in the upper half. Attributes are listed in the lower half, together with the
names of their value types. The textual specification shows that the type for the attribute
value of gas-dial is defined explicitly as a separate VALUE TYPE. This is typically useful
if one expects more than one concept attribute to use this value type. If concepts occur
at more than one place in diagrams, the attribute compartment can be omitted the second
time.
Figure 5.7 shows some other concepts, in this case connected to an apple-classification
problem. In the right-hand part of this figure a number of instances of apple-class are
shown. For instances the UML notation for objects is used: the name of the instance plus
the name of the concept it belongs to are written in bold and underlined.
Concepts are usually the starting point for domain modelling. In Chapter 7 we discuss
guidelines for identifying concepts. One important reason for defining something as a
separate concept and not as an attribute of another concept is that it deserves to have its
own “existence” independent of other concepts. Identification of concepts cannot be done
in a neutral way: what is considered a concept depends on the context provided by the
application domain. This scoping provided by the application (both by the domain itself
and by the task) enables keeping the domain-modelling process “do-able.” If this context
does not exist, inexperienced knowledge engineers either model too much or, alternatively,
do not produce anything because they are scared off by the complexity that comes with a
widely applicable schema.
Figure 5.6
Graphical and textual specification of concepts and their attributes and corresponding value types. A concept is
graphically represented as a box consisting of two parts. The concept name is written in the upper half. Attributes
are listed in the lower half, together with the names of their value types. The textual specification shows an explicit
value-type definition.
Figure 5.7
The concepts “apple” and “apple-class.” In the right-hand part of this figure a number of instances of apple-class
are shown. For instances the UML notation for objects is used: the name of the instance plus the name of the
concept it belongs to are written in bold and underlined. (In the figure, the apple concept has the attributes color: {yellow, yellow-green, green}, rusty-surface: boolean, greasy-surface: boolean, and form: {flat, high}, and is linked to apple class by a has-class relation.)
Relation Relations between concepts are defined with the RELATION or BINARY-
RELATION construct. Relations can be used in the standard entity-relationship (E-R)
fashion, but can also be used for more complicated types of modelling. Relations are
defined through a specification of ARGUMENTS. For each argument the CARDINALITY
(sometimes called “multiplicity”) can be defined. The default cardinality is 0–1, meaning
that the participation in the relation is optional. In addition, one can specify a ROLE for an
argument, identifying the role the argument plays in the relation.
Relations can have any number of arguments. However, the bulk of relations have pre-
cisely two arguments. Therefore, a specialized construct BINARY-RELATION is provided.
Relations may also have attributes, just like concepts. Such attributes hold values that
depend on the relation as a whole, and not on any one of its arguments independently. The standard
example of such an attribute is the wedding-date of the married-to relation between two
people.
Binary relations can be shown graphically in a number of ways. The simplest form is
just to draw a line between two concepts and label it with the name of the relation. An ex-
ample of this is shown in Figure 5.8a. The relation ownership holds between instances of
car and instances of person. The number close to the concept box indicates the cardinality:
a car can be owned by at most one person; a person can own any number of cars.
If the name of a binary relation is of a directional nature, an arrow may be added to indicate
the direction. This is the case in Figure 5.8b. The relation owned-by has a direction from
the car to the person who owns it. In the case of directional relation names, there may
also be a need to introduce an inverse relation (e.g., owns). Generally speaking, it is
best to choose nondirectional relation names as much as possible. Nondirectional names
emphasize the static characteristics of a relationship, and are thus the least likely to change
when the functionality of the application changes.
If a relation has attributes of its own, or if the relation itself takes part in other re-
lations, this simple relation representation is not sufficient. In that case we omit the text
label and draw the relation in a fashion similar to concepts: namely as a box with attributes
connected with a dashed line to the relation line. This graphical representation is shown in
Figure 5.8c. This particular representation is also called reification of a relation. Mathe-
matically speaking, one treats a relation tuple as a single, complex object. Relation reifi-
cation is a powerful modelling mechanism that can be used in many knowledge-intensive
domains.
The textual description follows quite naturally from the graphical one. An example is
shown below:
CONCEPT car;
END CONCEPT car;
CONCEPT person;
END CONCEPT person;
BINARY-RELATION owned-by;
Figure 5.8
Three graphical representations of a binary relation for the car example. Part a) shows the simple nondirectional
representation. The name of the relation is written as a label next to the line connecting the relation arguments.
The numbers indicate the minimum and maximum cardinality of the relation argument. Part b) shows the same
relation, but with a directional name. An arrow is included to reflect the directional nature. Part c) shows a more
complex representation, in which the relation becomes an object in its own right. The relation box is attached by
a dashed line to the relation line. The relation box may contain attribute definitions. Also, the relation itself may
be involved as an argument in other relations.
INVERSE: owns;
ARGUMENT-1: car;
CARDINALITY: 0-1;
ARGUMENT-2: person;
CARDINALITY: ANY;
ATTRIBUTES:
purchase-date: DATE;
END BINARY-RELATION owned-by;
This sample specification captures a relation type that is a mix of Figure 5.8b+c. This
relation is a binary relation and thus has exactly two arguments. The relation name is
directional (owned-by, from car to person), and thus the INVERSE slot is used to indicate
the inverse relation name (owns, from person to car). Note that the implied meaning of the
cardinality slot of an argument is that it specifies the number of times that one instance
of the argument may participate in the relation with one particular related object. In the
graphical representation it is common to draw it the other way around: the fact that a car
can be owned by at most one person is indicated at the “person” side of the relation (see
Figure 5.8). The sample relation also has an attribute purchase-date. This is necessary
because the attribute value is dependent on both arguments car and person.
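To see what reification means operationally, here is a minimal Python sketch (our own illustration, not CML; the plate number and class names are invented) in which the owned-by tuple becomes an object of its own, carrying the purchase-date attribute that depends on both arguments:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Car:
    plate: str  # illustrative attribute, not part of the book's spec

@dataclass
class Person:
    name: str
    # cardinality ANY: a person may own any number of cars
    ownerships: List["Ownership"] = field(default_factory=list)

# The reified relation: one tuple of owned-by, treated as a single
# complex object. purchase_date belongs to the tuple, since it depends
# on both the car and the person.
@dataclass
class Ownership:
    car: Car
    owner: Person
    purchase_date: date

john = Person("John")
deal = Ownership(Car("XX-11-YY"), john, date(1999, 3, 1))
john.ownerships.append(deal)
```

Because the tuple is now an object, it can itself take part in further relations, which is exactly what reification buys us.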
Relations with three or more arguments are shown with a different notation. The relation
name is placed in a diamond-shaped box, with arguments linked to the diamond. An
example of a four-place relation is shown in Figure 5.9. In this figure we try to model an
observation in a medical context. The relation has four arguments: (1) the agent making
the observation, (2) the patient for which the observation is made, (3) the location in which
the observation is made (hospital ward, outpatient clinic), and (4) the type of observable
(e.g., skin color, heart sounds). Again, the relation is represented here as a reified relation.
This is necessary because the relation itself has three attributes: namely the observed value
and a time stamp (date plus time).
Analysts usually try to reduce relations with three or more arguments to binary rela-
tions. This can be done if one of the relation arguments fully depends on another relation
argument. This is not the case in the observation relation: each argument is necessary to
uniquely identify a relation instance. For example, the same observable could be observed
at the same location for a certain patient by two different agents (e.g., a doctor and a nurse).
In this particular example we could even turn it into a five-place relation by introducing a
time-stamp as an extra concept (replacing the time and date attributes in the relation itself).
Again, such a decision depends on whether one views a time-stamp as being a “first-class”
object in its own right in the context of this application (see the discussion before on
concepts and attributes).
Reified relations provide powerful forms of abstraction. The resulting concept can
be treated in a similar way as “normal” concepts. From a formalist point of view, reified
relations are in effect second-order relations. Reified relations occur in any domain
with a certain degree of complexity (see, e.g., the application relation in Chapter 10).
Figure 5.9
A four-place relation modelling an observation about a patient in a hospital setting. In these multiargument
relations there should not be a relation argument which completely depends on another argument. (The arguments in the figure are agent (name, position), patient (name, diagnosis), location (department, hospital), and observable (type); the reified observation relation carries the attributes value, date, and time.)
Subtypes are shown graphically through thick unlabeled lines, with an arrow pointing
to the super-type. Figure 5.11 shows subtypes of concepts in the car domain. There are
two hierarchies, one for car-state and one for car-observable. This distinction between
observables and states is a distinction made in many diagnostic applications. The states
are further divided into states that we can notice in some way (and thus can give rise to
a complaint) and states that are completely internal to the system (invisible-car-state).
This figure in fact represents information about the nodes in Figure 5.5 plus a number of
super-types that give additional meaning to each node.
Note that for most subtypes in Figure 5.11 no new attributes are added. Instead, the
value set of an inherited attribute such as status is restricted. Typically, three types of
specialization can be introduced when creating a subtype:
1. New feature Add a new attribute or a new participation in a relation.
2. Type restriction Restrict the value set of an attribute or the types of related concepts.
3. Cardinality restriction Restrict the number of attribute values or the number of par-
(The figure shows residence as a super-type with house and apartment as subtypes, alongside their textual specifications:)
CONCEPT house;
DESCRIPTION:
"a residence with its own territory";
SUB-TYPE-OF: residence;
ATTRIBUTES:
square-meters: NATURAL;
END CONCEPT house;
CONCEPT apartment;
DESCRIPTION:
"part of a larger estate";
SUB-TYPE-OF: residence;
ATTRIBUTES:
entrance-floor: NATURAL;
lift-available: BOOLEAN;
END CONCEPT apartment;
Figure 5.10
Subtype relations are shown graphically through thick unlabeled lines, with an arrow pointing to the super-type.
In the textual description, the subtype specification should be part of the subconcept.
Figure 5.11
Subtype relations between concepts in the car domain. (The figure divides the concepts into invisible car states and visible car states; among the subtypes shown are fuse inspection (value: {normal, broken}), power (status: {on, off}), battery (status: {normal, low}), engine behavior (status: {normal, does-not-start, stops}), fuel tank, and gas in engine.)
ticipations in a relation.
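The second kind of specialization, type restriction, can be pictured in ordinary code. In this hypothetical Python sketch (the value sets loosely follow Figure 5.11, but the enforcement mechanism is entirely our own), a subtype narrows the inherited value set without adding any attribute:

```python
from enum import Enum

class Status(Enum):
    NORMAL = "normal"
    BROKEN = "broken"
    DOES_NOT_START = "does-not-start"
    STOPS = "stops"

class CarState:
    # Super-type: the full value set of the status attribute is allowed.
    allowed = set(Status)

    def __init__(self, status: Status):
        if status not in self.allowed:
            raise ValueError(f"{status} not allowed for {type(self).__name__}")
        self.status = status

class EngineBehavior(CarState):
    # Type restriction: no new attribute is added; only the value set
    # of the inherited "status" attribute is narrowed.
    allowed = {Status.NORMAL, Status.DOES_NOT_START, Status.STOPS}
```

Constructing EngineBehavior with a value outside the restricted set raises an error, mirroring what a restricted value set means in the schema.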
Sometimes it is useful to introduce subtypes even without any specialized features. This
is done if a term acts as a central concept in an application domain. Such subtypes can be
seen as “blown-up attributes,” because technically speaking they can always be replaced
by introducing an attribute, where the subtypes appear as possible values. These modelling
issues are discussed in more detail in Chapter 7.
In Chapter 13 we introduce more sophisticated methods for defining sub- and super-
types. In particular, we allow for multiple subtype hierarchies along different dimensions,
where each dimension represents a different “viewpoint” on a concept. The need for mul-
tiple hierarchies turns up in many real-life applications.
Rule type So far, the reader may have wondered what the difference is between a domain
schema and a traditional data model. However, the situation becomes more complex when
we want to model the directed lines in Figure 5.5. These lines represent dependencies
between car concepts. If we want to represent these dependencies in a schematic form
(without listing all the instances), how can we do this? Take two examples of dependencies
between car states that can be derived from this figure (represented in a simple intuitive
logical language):
fuel-supply.status = blocked → gas-in-engine.status = false
battery.status = low → power.value = off
These dependencies are a sort of natural rules, indicating a logical relationship between
two logical statements. The logical statements in such rules are typically expressions about
an attribute value of a concept. These rules are thus a special type of relation. The relation
is not (as usual) between concept instances themselves, but between expressions about
concepts.
In describing a domain schema for an application there is usually a need to describe
such rules in a schematic way. For example, we would like to describe the general structure
of the dependencies in Figure 5.5. The same is true for the knowledge rules in the beginning
of this chapter in Figure 5.1. To model the structure of such rules we provide a RULE-
TYPE construct. The rule type for modelling the two rules listed above would look like
this:
RULE-TYPE state-dependency;
ANTECEDENT: invisible-car-state;
CARDINALITY: 1;
CONSEQUENT: car-state;
CARDINALITY: 1;
CONNECTION-SYMBOL:
causes;
END RULE-TYPE state-dependency;
A rule-type definition looks a bit like a relation, where the antecedent and the con-
sequent can be seen as arguments. But the arguments are of a different nature.
Antecedents and consequents of a rule type are not concept instances, but represent ex-
pressions about those instances. For example, the statement that invisible-car-state is the
antecedent in this rule type means that the antecedent may contain any expression about
invisible-car-state. Examples of instances of antecedent expressions are fuel-tank.status =
empty and battery.status = low.
The rule type state-dependency models six of the arrows in Figure 5.5, namely lines
2–3 and 6–9 (see the numbers in the figure), precisely those between concepts of type car-
state. The other three dependencies do not follow the structure of the state-dependency
rule type. The dependencies 1, 4, and 5 connect an invisible car state to an expression about
a car-observable. These rules represent typical manifestations of these internal states. The
rule type below models this rule structure:
RULE-TYPE manifestation-rule;
DESCRIPTION: "Rule stating the relation between an internal state
and its external behavior in terms of an observable value";
ANTECEDENT:
invisible-car-state;
CONSEQUENT:
car-observable;
CONNECTION-SYMBOL:
has-manifestation;
END RULE-TYPE manifestation-rule;
The rule-type construct enables us to realize the requirement posed earlier in this chap-
ter, namely to structure a knowledge base into smaller partitions (e.g., rule sets) which
share a similar structure (cf. Figure 5.2). A rule type describes “natural” rules: logical
connections that experts tell you about in a domain. The rules need not (and usually are
not) strictly logical dependencies such as implications. Often, they indicate some heuristic
relationship between domain expressions. For this reason, we specify for each rule type
a SYMBOL that can be used to connect the antecedent and the consequent, when writ-
ing down a rule instance. The examples of dependencies mentioned earlier in this section
would look like this as instances of the state-dependency rule type:
fuel-supply.status = blocked
CAUSES
gas-in-engine.status = false;
battery.status = low
CAUSES
power.value = off;
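Viewed operationally, such rule instances are plain data that a reasoner can chain over. A small Python sketch (ours; the tuple encoding of expressions is an assumption, not the book's semiformal syntax) applying the two state dependencies above:

```python
# Each rule instance is (antecedent, connection symbol, consequent),
# where an expression is a (concept.attribute, value) pair. This tuple
# encoding is our assumption; the book writes rules semiformally.
rules = [
    (("fuel-supply.status", "blocked"), "causes", ("gas-in-engine.status", "false")),
    (("battery.status", "low"), "causes", ("power.value", "off")),
]

def derive(facts):
    """Apply every state-dependency rule once to a set of known facts."""
    derived = set(facts)
    for antecedent, _symbol, consequent in rules:
        if antecedent in derived:
            derived.add(consequent)
    return derived

facts = derive({("battery.status", "low")})
# ("power.value", "off") is now among the derived facts
```

The connection symbol is carried along but not interpreted here, reflecting the point that these rules express heuristic relationships rather than strict implications.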
Note that the examples of rule types exploit the subtype hierarchy of Figure 5.11 to
provide types for the antecedent and the consequent of the causal rules.
Figure 5.12
Graphical representation of a rule type. A directed line is drawn from the antecedent(s) to the connection symbol,
and from the connection symbol to the consequent. The rule-type name is placed in an ellipse and connected by
a dashed line to the connection symbol. The numbers indicate the cardinality (the minimum/maximum number
of expressions in the antecedent/consequent). (The figure depicts the two rule types of the car domain: state dependency, connecting invisible car state via causes to car state, and manifestation rule, connecting invisible car state via has manifestation to car observable, each with cardinality 1 on both sides.)
Figure 5.12 shows the graphical representation of a rule type, using the two rule types
of the car domain as examples. A directed line is drawn from the antecedent(s) to the
connection symbol, and from the connection symbol to the consequent. The rule-type name
is placed in an ellipse and connected with a dashed line to the connection symbol. The
dashed-line notation is used because of the similarity between a rule type and a “relation
as class”: instances of both are complex entities. The numbers connected to the lines
indicate the cardinality. The cardinality is used to put restrictions on the minimum and
maximum number of expressions in the antecedent and/or consequent. In this case the
rules must have precisely one condition and conclusion.
In this way we can build a number of rule types for a domain that capture in a schematic
way knowledge types that we find useful to distinguish. In Figure 5.13 you see a rule type
for the knowledge rules presented earlier this chapter as a challenge. Below the schema
itself the actual rules are listed as “instances” of this rule type. For the moment we assume
Figure 5.13
Rule type for the loan knowledge. (The figure shows a rule type named loan constraint, with the actual rules listed below the schema as instances.)
an intuitive informal representation of these instances. Later (see Chapter 13) we introduce
more precise syntax for writing down rule-type instances.
Please note that the notion of “rule” as we use it here is not connected in any way to
implementation-specific rule formalisms. It might be the case that such a formalism turns
out to be an adequate representation, but there is no guarantee nor a need for this to be true. The
rule types are an analysis vehicle and should capture the structural logical dependencies
that occur in an application domain, independent of their final representation in a software
system.
A domain schema describes domain-knowledge types, such as concepts, relations, and rule
types. A knowledge base contains instances of those knowledge types. For example, in
the car-diagnosis domain we could have a knowledge base with instances of the rule type
state-dependency. Figure 5.14 shows how we can define such a knowledge base. A
knowledge-base specification consists of two parts:
1. The USES slot defines which domain-knowledge types have their instances stored in the knowledge base; each entry has the form “type FROM schema,”
KNOWLEDGE-BASE car-network;
USES:
state-dependency FROM car-diagnosis-schema,
manifestation-rule FROM car-diagnosis-schema;
EXPRESSIONS:
/* state dependencies */
/* manifestation rules */
Figure 5.14
The knowledge base “car-network” contains instances of the rule types state-dependency and manifestation-rule.
where the latter part defines in which domain schema the type is defined. In the
car example we have only one schema, but we will see in Chapter 13 that in more
complex applications there is often a need to introduce multiple domain schemas.
2. The EXPRESSIONS slot contains the actual instances. These rule instances can be
described in a semiformal way, where the connection symbol is used to separate the
antecedent expression from the consequent. Alternatively, a formal language can be
used here. It should be noted, however, that knowledge bases may easily change in
form and extent during analysis, so it is important to avoid excessive formalization
of the rule instances in cases where the knowledge type has not been verified and
validated yet.
Figure 5.14 shows a sample knowledge base, containing the causal model of the car
application. It uses the two rule types defined earlier. The instances in Figure 5.14 corre-
spond to the knowledge pieces listed in Figure 5.5.
The fact that a notion like a knowledge base exists is a typical characteristic of knowl-
edge modelling. For a database one would not dream of writing part of the actual data set
that will be stored in the database during analysis. In knowledge modelling, these instances
are of interest: they contain the actual knowledge on which the reasoning process is based.
This does not mean that our knowledge bases have to be completed during analysis.
Often, the knowledge engineer will be satisfied in the early phases of development with a
partial set of instances, and complete the knowledge bases once the knowledge model is
stable enough.
The separation of “domain schema” and “knowledge base” means that we have to
reinterpret the term “knowledge acquisition” as consisting of at least two steps: (1) defining
a knowledge type such as a rule type, and (2) eliciting the instances of this type and putting
them in a knowledge base. There is often a feedback loop between these two steps, where
the type definition can be seen as a hypothesis about the format of certain knowledge
structures in a domain, and the knowledge-elicitation process functions as a verification
or falsification of this hypothesis by answering the question: can we elicit (a sufficient
amount of) knowledge of this form?
The techniques that can be used for knowledge elicitation vary considerably, ranging
from manual methods (e.g., interview techniques) to automated learning techniques. A
discussion of this topic can be found in Chapter 8.
The inference knowledge in the knowledge model describes the lowest level of functional
decomposition. These basic information-processing units are called “inferences” in knowl-
edge modelling. An inference carries out a primitive reasoning step. Typically, an infer-
ence uses knowledge contained in some knowledge base to derive new information from
its dynamic input.
Why do we give primitive functions such a special status? A major reason is that
inferences are indirectly related to the domain knowledge. This feature is realized through
the notion of a knowledge role, as we will see further on in this section. This indirect
coupling of inference and domain knowledge enables us to reuse inference descriptions, as
we will see at length in Chapter 6.
In software engineering it is common to request a process specification for every leaf
function. The nature of the specification is usually left open: either procedural (algorithm,
This approach provides us with a guideline for deciding when to stop functional
decomposition, a frequently occurring problem in system analysis. The guideline is in
essence very simple: be satisfied with the grain size of your set of leaf functions, if and
only if these inferences provide you with an understandable reasoning trace. This guide-
line builds on a property that many knowledge-intensive systems share: these systems need
to explain their information-processing behavior in order for the results to be acceptable to
the user.
The main feature that distinguishes an inference from a traditional “process” or “function”
is the way in which the data on which the inference operates are described. Inference I/O
is described in terms of functional roles: abstract names of data objects that indicate their
role in the reasoning process. A typical example of a role is “hypothesis”: a functional
name for a domain object that plays the role of a candidate solution.
We distinguish two types of roles, namely dynamic roles and static roles. Dynamic
roles are the run-time inputs and outputs of inferences. Each invocation of the inference
typically has different instantiations of the dynamic roles.
Let’s take an example inference. Assume we have a cover inference that uses a causal
model to find explanations that could explain (“cover”) a complaint about the behavior of
the car. Such an inference would have two dynamic roles: (1) an input role complaint,
denoting a domain object representing a complaint about the behavior of the system, and
(2) an output role hypothesis, representing a single candidate solution.
Static roles, on the other hand, are more or less stable over time. Static roles specify
the collection of domain knowledge that is used to make the inference. For example, the
above-mentioned inference cover could use the state-dependency network described in the
previous section to find candidate solutions.
Figure 5.15 shows a sample textual specification of the cover inference and its dynamic
and static roles. The first part of the specification shows how the inference roles (both the
dynamic and static ones) are bound to the domain. Objects of domain type visible-state
can play the role of complaint. In our miniexample, this means that only instances of
engine-behavior can be complaints (see Figure 5.11). The role hypothesis can be played
by all invisible-states. The static role causal-model maps to the state dependencies in the
knowledge base car-network.
Knowledge Model Components 107
KNOWLEDGE-ROLE complaint;
TYPE: DYNAMIC;
DOMAIN-MAPPING: visible-state;
END KNOWLEDGE-ROLE complaint;
KNOWLEDGE-ROLE hypothesis;
TYPE: DYNAMIC;
DOMAIN-MAPPING: invisible-state;
END KNOWLEDGE-ROLE hypothesis;
KNOWLEDGE-ROLE causal-model;
TYPE: STATIC;
DOMAIN-MAPPING: state-dependency FROM car-network;
END KNOWLEDGE-ROLE causal-model;
INFERENCE cover;
ROLES:
INPUT: complaint;
OUTPUT: hypothesis;
STATIC: causal-model;
SPECIFICATION:
"Each time this inference is invoked, it generates a candidate
solution that could have caused the complaint. The output thus
should be an initial state in the state dependency network
which causally “covers” the input complaint.";
END INFERENCE cover;
Figure 5.15
The inference “cover” has two dynamic roles: the input role “complaint” and the output role “hypothesis.” The
output is supposed to be a causal explanation of the complaint.
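The role bindings of Figure 5.15 can be mimicked in ordinary code. Below is a minimal Python sketch — purely illustrative, since CommonKADS itself uses CML, and the state names and dependencies are invented stand-ins for the car example. The point is that the cover function mentions only role names, while module-level data supplies the domain mapping.

```python
from dataclasses import dataclass

# Illustrative stand-ins for the car domain (Figure 5.11 is not
# reproduced here, so these names are assumptions).
@dataclass(frozen=True)
class State:
    name: str
    visible: bool          # visible-state vs. invisible-state

@dataclass(frozen=True)
class StateDependency:
    cause: State           # an internal (invisible) state ...
    effect: State          # ... and a state it can bring about

fuel_tank_empty = State("fuel-tank-empty", visible=False)
battery_low = State("battery-low", visible=False)
engine_no_start = State("engine-does-not-start", visible=True)

# Static role "causal-model", bound to the knowledge base car-network.
car_network = [
    StateDependency(fuel_tank_empty, engine_no_start),
    StateDependency(battery_low, engine_no_start),
]

def cover(complaint, causal_model):
    """Dynamic input role: complaint (a visible-state).
    Output role: hypothesis (invisible-states that causally
    'cover' the complaint via the static role causal-model)."""
    return [dep.cause for dep in causal_model
            if dep.effect == complaint and not dep.cause.visible]

hypotheses = cover(engine_no_start, car_network)
```

Here both invisible states are returned as hypotheses, because both causally cover the complaint.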
Figure 5.16
a): The sample inference “cover” has three roles, each of which is bound to domain objects that can play this role.
Here, knowledge modelling differs significantly from standard methods, where the name of the domain object
types would have been directly associated with the function. Part b) shows the standard data-flow diagram (DFD)
representation.
exploited in Chapter 6.
In the knowledge model we abstract from the communication with other agents: users,
other systems. The emphasis lies in the structure of the reasoning process. Inferences
provide the lowest level of functional decomposition in this process.
However, one cannot completely leave out the interaction with the external world.
Some of these interactions play a role in the reasoning process itself, for example, ob-
taining additional observations in a diagnostic process. For this reason, we introduce the
notion of transfer function. A transfer function is a function that transfers an information
item between the reasoning agent described in the knowledge model and the outside world
(another system, some user). Transfer functions are black boxes from the knowledge-
model point of view: only their name and I/O are described. Detailed specifications of the
transfer functions should be placed in the communication model (see Chapter 9).
Transfer functions have standard names. These names are based on the two proper-
ties that transfer functions have: who has the initiative and who is in possession of the
information item being transferred? Based on these properties we distinguish four transfer
functions:
1. Obtain The reasoning agent requests a piece of information from an external agent.
The reasoning agent has the initiative. The external agent holds the information item.
2. Receive The reasoning agent gets a piece of information from an external agent. The
external agent has the initiative and also holds the information item.
3. Present The reasoning agent presents a piece of information to an external agent.
The reasoning agent has the initiative and also holds the information item.
4. Provide The system provides an external agent with a piece of information. The
external agent has the initiative. The reasoning agent holds the information item.
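The two dimensions behind these standard names can be captured in a small lookup table. The sketch below (Python, purely illustrative) makes the typology explicit:

```python
from enum import Enum

class Initiative(Enum):
    SYSTEM = "reasoning agent"     # the system takes the initiative
    EXTERNAL = "external agent"    # the external agent does

class Holder(Enum):
    INTERNAL = "reasoning agent"   # the system holds the information item
    EXTERNAL = "external agent"    # the external agent holds it

# The four standard transfer functions, keyed by (initiative, holder).
TRANSFER_FUNCTIONS = {
    (Initiative.SYSTEM, Holder.EXTERNAL): "obtain",
    (Initiative.EXTERNAL, Holder.EXTERNAL): "receive",
    (Initiative.SYSTEM, Holder.INTERNAL): "present",
    (Initiative.EXTERNAL, Holder.INTERNAL): "provide",
}

def classify(initiative, holder):
    """Name the transfer function for a given combination of dimensions."""
    return TRANSFER_FUNCTIONS[(initiative, holder)]
```

For example, `classify(Initiative.SYSTEM, Holder.EXTERNAL)` yields "obtain": the system asks, the external agent has the item.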
Figure 5.17 shows this typology of transfer functions based on the dimensions “ini-
tiative” and “information holder.” obtain and receive are the transfer functions most com-
monly found in knowledge models. In particular, obtain is frequently used. receive appears
in many real-time tasks and is typically associated with asynchronous control. Figure 5.18
shows the specification of the transfer function to obtain the actual finding in the car appli-
cation. The transfer function is a black box from the knowledge-model point of view and
defines just the input-output roles and the type of a transfer function.
Together, the inferences form the building blocks for a reasoning system. They define the
basic inference actions that the system can perform and the roles the domain objects can
                         system initiative    external initiative
external information          obtain               receive
internal information          present              provide
Figure 5.17
Typology of transfer functions based on the dimensions “initiative” and “information holder”.
TRANSFER-FUNCTION obtain;
TYPE:
OBTAIN;
ROLES:
INPUT: expected-finding;
OUTPUT: actual-finding;
END TRANSFER-FUNCTION obtain;
Figure 5.18
Specification of the transfer function to obtain the actual finding. The transfer function is a black box from the
knowledge-model point of view and defines just the input-output roles and the type of a transfer function (obtain,
receive, present, or provide).
play. The combined set of inferences specifies the basic inference capability of the target
system. The set of inference steps can be represented graphically in an inference structure.
Figure 5.19 shows an example of such an inference structure for the car-diagnosis
problem. In an inference structure the following graphical conventions are used:
Rectangles represent dynamic roles. The name of the role is written in the rectangle.
Ovals represent inferences. Arrows are used to indicate input-output dependencies
between roles and inferences.
A rounded-box notation is used to indicate a transfer function. An example is the
Figure 5.19
Inference structure for a simple diagnosis application.
obtain function in Figure 5.19.
A static role name is written between two thick horizontal lines. This representation
is purposely similar to data stores in DFDs, as static roles incorporate the same
“storage” notion. Static roles are connected via a directed line to the inference in
which they are used.
Including static roles is traditionally optional in inference structures, where the main
emphasis lies on the dynamic data-flow aspects. We usually include static roles in
the inference structure, certainly during the construction process.
A role constitutes a functional name for a set of domain objects that can play this
role. Some inferences operate on or produce one particular object, others work on a
set of these objects. This can lead to ambiguities in inference structures, for example,
if one inference produces one object and another inference works on a set of these
objects, possibly generated by some repeated invocation of the first inference. The
graphical notation allows for making this distinction explicit: if an arrow is labeled
with the word set, it indicates that the input or output should be interpreted as a set
of objects playing this role.
Figure 5.19 shows examples of the graphical conventions discussed above. The cover
inference takes as input the dynamic role complaint and produces a hypothesis as output.
[In Figure 5.20 the roles of Figure 5.19 are annotated with domain-specific examples: the complaint “engine does not start,” the static roles “state-dependency rules” and “manifestation rules,” the actual finding “gas dial = normal,” and the comparison result “not equal.”]
Figure 5.20
Inference structure in which the roles are annotated with domain-specific examples.
The causal-model is used as a static role by this inference. The predict inference delivers
an expected finding for the hypothesis, typically some observation that could act as sup-
port evidence for this hypothesis. The inference structure also contains a transfer function
obtain (cf. the rounded-box notation) for “getting” the actual finding. The third inference
is a simple comparison of the actual finding with the expected finding. The result is some
equality value.
An inference structure is an abstract representation of the possible steps in the reason-
ing process. To make it a bit less abstract one can also construct an annotated inference
structure. In this figure all the roles are annotated with domain-specific examples. Fig-
ure 5.20 shows such an annotated version of an inference structure.
It should be noted that, although there are some similarities between inference struc-
tures and DFDs, the differences are also significant. The static roles typically correspond
to data stores in DFDs; the dynamic roles would need to be represented as data flows.
Also, control flows are absent in inference structures. Actors are not shown in inference
structures, as those would be part of the communication model.
The inference structure summarizes the basic inference capabilities of the prospective
system. It also defines the vocabulary and dependencies for control, but not the control
itself. This latter type of knowledge is specified as task knowledge.
What do we want to achieve by applying knowledge? We mention some typical goals:
We want to assess a mortgage application in order to minimize the risk of losing
money.
We want to find the cause of a malfunction of a photocopier in order to restore
service.
We want to design an elevator for a new building.
Task knowledge is the knowledge category that describes these goals and the strate-
gies that will be employed for realizing goals. Task knowledge is typically described in
a hierarchical fashion: top-level tasks such as DESIGN-ELEVATOR are decomposed into
smaller tasks, which in turn can be split up into even smaller tasks. At the lowest level of
task decomposition, the tasks are linked to inferences and transfer functions.
Two knowledge types play a prominent role in the description of task knowledge: the
TASK and the TASK-METHOD. A TASK defines a reasoning goal in terms of an input-
output pair. For example, a DIAGNOSIS task typically has as input a complaint, and
produces as output a fault category plus the supporting evidence. A TASK-METHOD de-
scribes how a task can be realized through a decomposition into subfunctions plus a control
regimen over the execution of the subfunctions. The TASK and the TASK-METHOD can
best be understood as respectively the “what” view (what needs to be done) and the “how”
view (how is it done) on reasoning tasks.
Figure 5.21 shows a graphical representation of the hierarchical structure of task
knowledge. In this case, a top-level task DIAGNOSIS is decomposed by a task method
DIAGNOSIS-THROUGH-GENERATE-AND-TEST. This leads to the four subfunctions we already encountered in the previous section: three inferences and one transfer function. In
most real-life models, one level of decomposition is insufficient. In that case, a top-level
task is decomposed into several new tasks, which again are decomposed through other methods, and so on. At the lowest level of decomposition, the inferences and transfer functions
appear. Tasks that are not decomposed further into other tasks are called primitive tasks;
the other tasks are called composite tasks.
5.6.1 Task
A TASK defines a complex reasoning function. The top-level task typically corresponds to
a task identified in the task model (cf. Chapter 3). The specification of a task tells us what
the inputs and the outputs of the task are. The main difference with traditional nonleaf
function descriptions in DFDs is that the data manipulated by a TASK are described in a
domain-independent way. For example, the output of a medical diagnosis task would not
be a “disease” but an abstract name such as “fault category.”
Figure 5.21
Task decomposition example for the car-diagnosis application showing the two main task-knowledge types: “task” and “task method.” Only a single level of task decomposition is present, so diagnosis is defined here as a
primitive task, which decomposes directly into leaf functions.
Figure 5.22 shows a simple specification of the DIAGNOSE task. The GOAL and
SPECIFICATION slots give an informal textual description of, respectively, the goal of the
task and the relation between task input and output. Note that there is no domain-dependent
term to be found in the definition. The specification talks about a “system” about which
we have received a “complaint.” As we shall see, this type of definition may sometimes be
a bit harder to read because of its generic character, but it enables us to employ powerful
forms of reuse.
Task I/O is, just like inferences, specified in terms of functional role names. There are,
TASK car-diagnosis;
GOAL:
"Find a likely cause for the complaint of the user";
ROLES:
INPUT:
complaint: "Complaint about the behavior of the car";
OUTPUT:
fault-category: "A hypothesis explained by the evidence";
evidence: "Set of observations obtained during the
diagnostic process";
SPEC:
"Find an initial state that explains the complaint and is
consistent with the evidence obtained";
END TASK car-diagnosis;
Figure 5.22
Specification of the car-diagnosis task.
Each task should have a corresponding task method that describes how the task is realized in terms of subtasks and/or inferences.¹
¹ …knowledge to choose dynamically a particular method. This leads to a much more complicated but also more flexible system. We come back to this issue in Chapter 13.
TASK-METHOD diagnosis-through-generate-and-test;
REALIZES: car-diagnosis;
DECOMPOSITION:
INFERENCES: cover, predict, compare;
TRANSFER-FUNCTIONS: obtain;
ROLES:
INTERMEDIATE:
expected-finding: "The finding predicted,
in case the hypothesis is true";
actual-finding: "The finding actually observed";
CONTROL-STRUCTURE:
REPEAT
cover(complaint -> hypothesis);
predict(hypothesis -> expected-finding);
obtain(expected-finding -> actual-finding);
evidence := evidence ADD actual-finding;
compare(expected-finding + actual-finding -> result);
UNTIL
"result = equal or no more solutions of over";
END REPEAT
IF result == equal
THEN fault-category := hypothesis;
ELSE "no solution found";
END IF
END TASK-METHOD diagnosis-through-generate-and-test;
Figure 5.23
Example task method for the car-diagnosis task. This method follows a generate-and-test strategy.
which the set of candidate solutions to a diagnosis problem is stored. In Figure 5.23 a
sample task method for the task DIAGNOSE is given.
The method in Figure 5.23 decomposes the diagnose task into four subfunctions: three
inferences and one transfer function. The control specifies a generate-and-test strategy:
1. At the start of the task the inference cover is invoked to generate a set of candidate
solutions (the differential) on the basis of the original complaint.
2. Subsequently, the candidate solution is tested to see whether it is consistent with
other data. This test consists of specifying an expected finding for the hypothesis
(the inference predict), obtaining an actual value from the user or some other ex-
ternal agent (the transfer function obtain), and finally comparing the actual and the
expected finding to see whether these are equal.
3. The observations made by obtain are added to the evidence, the second output of
the DIAGNOSE task which after execution of the task holds all the additional data
gathered.
4. If the comparison delivers a difference, the cover inference is invoked again, and the process is repeated.
5. The DIAGNOSE task is finished if either compare returns an equal value (in which
case the hypothesis becomes the solution) or the cover inference fails to provide a
new hypothesis, in which case the method fails to find a solution.
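The steps above can be rendered as a minimal Python sketch of the generate-and-test control structure. The dictionaries and the answers are invented stand-ins for the static roles, and the passed-in function plays the obtain transfer function; this is an illustration, not the CML semantics.

```python
def diagnose(complaint, causal_model, manifestation_model, obtain):
    """Sketch of the generate-and-test control structure of Figure 5.23.
    The dicts stand in for the static roles; 'obtain' asks an external
    agent for an actual finding."""
    evidence = []
    # cover: generate candidate solutions for the complaint, one by one
    for hypothesis in causal_model.get(complaint, []):
        # predict: the finding we expect if the hypothesis is true
        observable, expected_value = manifestation_model[hypothesis]
        # obtain: get the actual finding from the external agent
        actual_value = obtain(observable)
        evidence.append((observable, actual_value))
        # compare: equal means the hypothesis is supported
        if actual_value == expected_value:
            return hypothesis, evidence   # fault category + evidence
    return None, evidence                 # no solution found

# Invented example data for the car application.
causal_model = {"engine-does-not-start": ["fuel-tank-empty", "battery-low"]}
manifestation_model = {"fuel-tank-empty": ("gas-dial", "empty"),
                       "battery-low": ("lights", "dim")}
answers = {"gas-dial": "normal", "lights": "dim"}

fault, evidence = diagnose("engine-does-not-start", causal_model,
                           manifestation_model, answers.get)
# fault is "battery-low"; evidence holds both observations made
```

The first hypothesis (fuel-tank-empty) is rejected because the gas dial reads normal; the second survives its test and becomes the fault category.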
The strategy sketched is of course just one possibility. In this particular example, the
method apparently assumes that there exist observations that can verify the existence of a
hypothesis. In Chapter 6 we will see a somewhat more comprehensive diagnostic strategy.
In the appendix a full description of the pseudocode language is given. Here, it suf-
fices to say that the imperative pseudocode for control structures typically consists of the
following elements:
A simple “procedure” call, i.e., an invocation of a task, an inference, or a trans-
fer function. Note that we only use the dynamic roles as arguments for inference
invocations, because these vary over time.
Data operations on role values: e.g., assign a value to a role, add a set of values to a
role, and so on.
The usual control primitives for iteration and selection: repeat-until, while-do, if-
then-else.
The conditions used in these control statements (e.g., until …) are typically statements about values of roles (“the differential is empty”).
There are two special types of conditions. First, one can ask of an inference whether
it can produce a solution, or whether it is capable of producing new solutions. An
example of the use of new-solution can be found at the start of the control structure
in Figure 5.23.
Secondly, one can ask whether an inference produces a solution with a particular
input. This predicate is called has-solution. This is particularly useful for inferences
that can fail, such as tests and verifications.
These two special conditions are a direct consequence of the specific way in which
the control of inferences should be viewed from a task perspective. We assume
that within the execution of a certain task an inference has a memory, and that each
invocation will produce a new value (in the context of this task). An inference fails
if it can produce no more solutions.
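In programming terms, an inference with this kind of memory behaves like a generator. The sketch below (Python, with an invented one-entry causal model) shows new-solution semantics as generator exhaustion:

```python
def cover(complaint, causal_model):
    """A generator gives the inference the 'memory' described above:
    each invocation yields a NEW candidate solution, and exhaustion
    corresponds to new-solution becoming false (the inference fails)."""
    yield from causal_model.get(complaint, [])

# Invented example data.
causal_model = {"engine-does-not-start": ["fuel-tank-empty", "battery-low"]}
candidates = cover("engine-does-not-start", causal_model)

first = next(candidates, None)    # a new solution: "fuel-tank-empty"
second = next(candidates, None)   # another new solution: "battery-low"
third = next(candidates, None)    # None: no more solutions, inference fails
```

has-solution would correspond to checking whether the generator yields anything at all for a given input.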
An alternative for the pseudocode is to model the method control with the help of an
activity diagram. This UML notation is described in Section 14.2. Figure 5.24 shows the
method control for the car-diagnosis task. Some people will probably prefer this graphical
notation.
During knowledge-model construction, the task decomposition often changes a num-
ber of times. What was viewed as an inference early on might later be viewed as a task
which itself can be decomposed. The main guideline to be followed here is:
Figure 5.24
Alternative representation of method control using an activity diagram.
This guideline is based on the fact that inferences are treated as black boxes. The behavior
of the inference is assumed to be self-explanatory if one looks at the inputs and outputs.
Note that the black box view of inferences is only true in the context of the knowledge
model. The inference might well be realized in the final system by a complex computation
technique. In fact, inferences provide us with a potentially powerful abstraction technique
in the analysis stage, which helps us to shift much of the burden to the design phase. We
come back to this issue in the chapter on the process of knowledge-model construction
(Chapter 7).
In the rest of this book we will use typographic conventions to indicate knowledge-model
component types such as task, role, inference, and so on. The conventions are listed in
Table 5.1.
Table 5.1
Typographic conventions used in this book for knowledge-model components. The category “metatype” is used
to indicate elements of the language itself.
There are four crucial differences between the CommonKADS knowledge model and more
general analysis approaches. These differences all arise from the specific nature of knowl-
edge analysis. We discuss these differences in some detail, because a clear insight into
these differences considerably simplifies understanding the knowledge-model components.
Difference 1: “data model” contains both data and knowledge The “data model” of a
knowledge model contains elements that are usually not found in traditional data models.
As we saw in the car example, the representation of even simple pieces of knowledge poses
specific problems. This results from the fact that knowledge can be seen as “information
about information.” It implies that parts of the “data model” describe how we should
interpret or use other parts. For example, if we have data types for observations and
diseases of patients, we could also want to describe a domain-knowledge type that allows
us to infer the latter from the former. This requires specialized modelling tools, in particular
the construct RULE-TYPE discussed in this chapter.
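As an illustration of “information about information,” the sketch below (Python, with invented medical rules) represents rule instances as data that describe how observation data may be used to infer diseases:

```python
# Plain data: observations about a patient.
observations = [("temperature", "high"), ("throat", "red")]

# Knowledge about that data: each rule instance states how observations
# may be used to infer a disease (the rules themselves are invented).
manifestation_rules = [
    {"if": [("temperature", "high"), ("throat", "red")], "then": "angina"},
    {"if": [("temperature", "high"), ("cough", "yes")], "then": "bronchitis"},
]

def infer_diseases(observations, rules):
    """Apply the rule instances to the observation data."""
    return [rule["then"] for rule in rules
            if all(condition in observations for condition in rule["if"])]
```

Here `infer_diseases(observations, manifestation_rules)` returns only "angina", because the bronchitis rule's conditions are not all observed; the rules are data, but data that governs the use of other data.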
with standard functional decompositions that have proved useful for a particular task type.
Example knowledge-intensive task types are assessment, planning, and diagnosis.
The availability of a catalog of functional decompositions is a powerful tool for the
system analyst, as we will see in Chapter 6. However, this feature of knowledge modelling requires that all functions are described in a domain-independent terminology. This means that the input/output of functions in a knowledge model is not described in terms
of data model elements, but in terms of task-oriented “role” names. These “roles” act as
placeholders for data-model elements. Effectively, roles decouple the description of the
static information structure on the one hand and the functions on the other hand.
Decoupling of functions and data makes a knowledge model more complex, but it
enables exploitation of powerful forms of reuse. The function-data decoupling is the main
area in which CommonKADS differs from object-oriented approaches (see Figure 5.25).
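The payoff of the decoupling can be shown in a few lines of Python (both networks are invented): because the inference body mentions only role names, rebinding the roles to another domain reuses it unchanged.

```python
# The inference mentions only role names (complaint, hypothesis,
# causal-model); no domain type appears in its body.
def cover(complaint, causal_model):
    return [cause for cause, effect in causal_model if effect == complaint]

# Rebinding the roles to different domain knowledge reuses the
# inference as-is (all domain content here is illustrative).
car_network = [("battery-low", "engine-does-not-start"),
               ("fuel-tank-empty", "engine-does-not-start")]
medical_network = [("influenza", "fever")]

car_hypotheses = cover("engine-does-not-start", car_network)
medical_hypotheses = cover("fever", medical_network)
```

The same function serves a car-diagnosis and a medical-diagnosis domain; only the role-to-domain bindings differ.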
Difference 4: knowledge model abstracts from communication aspects The knowledge model abstracts from all issues concerning interaction with the outside world. These interactions are described in the communication model (see Chapter 9).
Keeping these differences in mind will hopefully enable the reader to understand the rationale underlying the knowledge model, and to see how its elements relate to general software-engineering concepts.
If we take a somewhat broader view, we can indicate the position CommonKADS takes
in what can be called the “data-function” debate. This term refers to the point made by
the advocates of object-oriented analysis approaches. They reject traditional functional-decomposition approaches, such as Structured Analysis, in which the “data” are secondary to the functions. O-O advocates claim that the information structures (“data”) are much more stable (and thus less likely to change) than the functions of a system, and that therefore,
Figure 5.25
Schematic view of the data-function debate. In the Yourdon approach, functional decomposition is the starting point of analysis; in the modern object-oriented approaches the “data” are the initial focus of attention. CommonKADS takes an intermediate position, assuming both data and function descriptions can be stable and reusable.
in O-O analysis, the data view is the entry point for modelling an application domain.
In CommonKADS we take a position between these two approaches. As you will see in Chapter 6, we assume that the functional decompositions, as well as the static information and knowledge structures, can be stable and potentially reusable. For this reason, CommonKADS employs a data-function decoupling through the inference roles.
This dual approach will also be apparent from the guidelines in Chapter 7, where we advise
you to do an initial task analysis and a domain conceptualization in parallel.
Figure 5.25 gives a schematic view of the various positions taken in the data-function
debate.
There are several ways in which knowledge models can be used to support the knowledge-
modelling process. A potentially powerful approach is to reuse combinations of model
elements. When one models a particular application, it is usually already intuitively clear
that large parts of the model are not specific to this application, but re-occur in other do-
mains and/or tasks. CommonKADS (as do most other approaches to knowledge mod-
elling) makes use of this observation by providing a knowledge engineer with a collection
of predefined sets of model elements. These catalogs can be of great help to the knowledge
engineer. They provide the engineer with ready-made building blocks and prevent her from
“reinventing the wheel” each time a new system has to be built. In fact, we believe that
124 Chapter 6
these libraries are a conditio sine qua non for improving the state of the art in knowledge
engineering.
In this chapter we have included a number of simple partial knowledge models of the
task template type (see below). These models have proved to be useful in developing a
range of common straightforward systems. We expect these to be of use when modelling
a relatively simple knowledge-intensive task. The full collection of reusable models is
much larger, although rather heterogeneous and represented in nonstandard ways. We give
references in the text where you can find these; the knowledge engineer tackling more
complex problems might find them useful input.
There is a parallel between task templates on the one hand and the notion of “design
patterns” (Gamma et al. 1995) in O-O analysis on the other hand. Task templates are
design patterns for knowledge-intensive tasks. We are bold enough to claim that you will
find these “knowledge” patterns to be more powerful and precise than those in use in O-
O analysis, in particular because they are grounded on a decade of research and practical
experience.
Figure 6.1
Hierarchy of knowledge-intensive task types based on the type of problem being solved.
In a synthetic task, the system does not yet exist: the purpose of the task is to construct a system description. The
input of a synthetic task typically consists of requirements that the system to be constructed
should satisfy.
Analytic and synthetic tasks are further subdivided into a number of task types, as can
be seen in Figure 6.1. This subdivision of tasks is based on the type of problem tackled
by the task. For example, a “diagnosis” problem is concerned with finding a malfunction
that causes deviant system behavior. A diagnosis task is a task that tackles a diagnostic
problem. Although in theory, “problem” and “task” are distinct entities, in practice we
use these terms interchangeably. We often use a term such as “diagnosis” for a diagnostic
problem as well as for the task of solving this problem. Tables 6.1 and 6.2 provide an overview of the main features of, respectively, the analytic and synthetic task types.
Synthesis tasks Design is a synthetic task in which the system to be constructed is some
physical artifact. An example design task is the design of a car. Design tasks in general
can include creative design of components, as is usual in car design. Creative design is too
hard a nut to crack for current knowledge technology. In order for system construction to
be feasible, we generally have to assume that all components of the artifact are predefined.
This subtype of design is called configuration design. Building a boat from a set of Lego
blocks is a well-known example of a configuration-design task. Another example is the
configuration of a computer system.
Assignment is a relatively simple synthetic task, in which we have two sets of objects
between which we have to create a (partial) mapping. Examples are the allocation of offices
to employees or of airplanes to gates. The assignment has to be consistent with constraints
(“Boeing 747 cannot be placed on a certain gate”) as well as conform with preferences
(“KLM airplanes should be parked in Terminal 1”).
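A toy version of such an assignment task can be sketched in Python. The airplanes, gates, constraint, and preference below are all invented, and the mapping is simplified to one-to-one:

```python
from itertools import permutations

# Illustrative data: two airplanes, three gates.
planes = ["B747", "A320-KLM"]
gates = ["T1-G1", "T2-G5", "T1-G2"]

def violates(plane, gate):
    # Hard constraint: a Boeing 747 cannot be placed on gate T2-G5.
    return plane == "B747" and gate == "T2-G5"

def preference(plane, gate):
    # Soft preference: KLM airplanes should be parked in Terminal 1.
    return 1 if "KLM" in plane and gate.startswith("T1") else 0

def assign(planes, gates):
    """Exhaustive search for a mapping that is consistent with all
    constraints and scores best on the preferences."""
    best, best_score = None, -1
    for chosen in permutations(gates, len(planes)):
        mapping = dict(zip(planes, chosen))
        if any(violates(p, g) for p, g in mapping.items()):
            continue                      # reject inconsistent mappings
        score = sum(preference(p, g) for p, g in mapping.items())
        if score > best_score:
            best, best_score = mapping, score
    return best
```

With this data, `assign(planes, gates)` parks the 747 on T1-G1 (the constraint rules out T2-G5) and the KLM airplane on T1-G2 (satisfying the preference). Real assignment methods use knowledge-based search rather than enumeration, of course.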
Template Knowledge Models 127
Table 6.1
Overview of analytic task types.
Planning shares many features with design, the main difference being the type of
system being constructed. Whereas design is concerned with physical object construction,
planning is concerned with activities and their time dependencies. Examples of planning
tasks are travel planning and the planning of building activities. Again, automation of
planning tasks is usually only feasible if the basic plan elements are predefined. Because
of their similarity, design models can sometimes be used for planning and vice versa.
Scheduling often follows planning. Planning delivers a sequence of activities; in
scheduling, such sequences of activities (“jobs”) need to be allocated to resources dur-
ing a certain time interval. The output is a mapping between activities and time slots,
while obeying constraints (“A should be before B”) and conforming as much as possi-
ble with the preferences (“lectures by C should preferably be on Friday”). Scheduling is
therefore closely related to assignment, the major distinction being the time-oriented char-
acter of scheduling. Examples of scheduling are the scheduling of lectures at a university
department and job-shop scheduling in a process line of a factory.
For completeness, we mention modelling as a synthetic task type. In modelling, we
construct an abstract description of a system in order to explain or predict certain sys-
tem properties or phenomena. Knowledge modelling itself is an example of a modelling
task. Another example is the construction of a simulation model of a nuclear accident.
Modelling tasks are seldom automated in a system, but are sometimes used in the context
of knowledge management. A real-life example we have been involved with is the con-
struction of a knowledge model of the modelling expertise of a retiring expert in nuclear
accident simulations.
Task type    Input                 Output                  Knowledge                  Features
Assignment   Two object sets,      Mapping set 1 ->        Constraints,               Mapping need not be
             requirements          set 2                   preferences                one-to-one.
Planning     Goals,                Action plan             Actions, constraints,      Actions are (partially)
             requirements                                  preferences                ordered in time.
Scheduling   Job activities,       Schedule = mapping      Constraints,               Time-oriented character
             resources, time       activities -> time      preferences                distinguishes it from
             slots, requirements   slots of resources                                 assignment.
Modelling    Requirements          Model                   Model elements,            May include creative
                                                           template models,           "synthesis".
                                                           constraints, preferences
Table 6.2
Overview of synthetic task types.
General Characterization Describes the typical features of a task: goal, input, output,
terminology. Also, some remarks are made about the relation with other task types.
Default method A method for a task type is described in terms of roles, subfunctions,
and a description of the internal control (through a control structure). We show an
inference structure for the functions at the lowest level of decomposition. However,
the reader should be aware that these inferences are of a provisional nature, as in
practice it might well be necessary to decompose one or more inferences and thus to
view them as tasks with internal complexity (see also the discussion in Chapter 5 of
the distinction between tasks and inferences).
Typical variations Some frequently occurring variations of the default method are de-
scribed. For example, in the default classification method an abstraction task could
be included. We do not show the changed diagrams and specifications for each vari-
ation, but these should be straightforward to construct.
Typical domain schema Each method makes assumptions about the nature of the un-
derlying domain knowledge. For example, the classification we describe makes as-
sumptions about knowledge linking classes to observed features. We describe these
assumptions in a tentative domain schema. Please note that the word “domain” in
this latter term should be regarded with some caution: the schema can, by definition,
contain no domain-specific types! The schema can best be viewed as requirements
on the domain schema that the knowledge engineer has to construct for the applica-
tion.
The issue of the relation between a method-related domain schema and a domain-
specific schema is discussed in more detail in Chapter 13.
You might find the template models in this chapter abstract, or wonder to what systems
these models lead. For this reason we have made exemplar systems available through the
CommonKADS website (see the preface) which demonstrate implementations of these
models. For example, demonstrators can be found there of apple classification, house
assessment (following the knowledge model of Chapter 10), and configuration of personal
computer systems.
The major part of this chapter presents the catalog of templates. At the end we come
back to the issue of using these templates in an application.
6.3 Classification
General Characterization
Goal Classification is concerned with establishing the correct class (or
category) for an object. The object should be available for inspec-
tion. The classification is based on characteristics of the object.
Typical example Classification of an apple. Classification of the minerals in a rock.
Terminology object: the object of which one finds the class or category, e.g., a
certain apple.
class: a group of objects that share similar characteristics, e.g., a
Granny Smith apple.
Default method A first decision one has to make is whether one chooses a data-driven
or a solution-driven method. The data-driven approach starts off with some initial fea-
tures, which are used to generate a set of candidate solutions. A solution-driven method
starts with the full set of possible solutions and tries to reduce this set on the basis of the
information that comes in.
In most simple applications the solution-driven approach works best. In the first step
we generate a full set of candidate solutions, e.g., all potential apple classes. Then we prune
this set by gathering information about the object. We specify a characteristic that we are
interested in, and obtain its value. On the basis of this new information, we eliminate
candidate solutions that are inconsistent with this information from the candidate set. We
repeat this process until we have reduced the candidate set to one single element.
The specification of this method is shown in Figure 6.2. The first while loop generates
the set of candidate solutions. The second while loop prunes this set by obtaining new
information. The method finishes if one of the following three conditions is true (see the
condition of the second while loop):
1. A single candidate remains. This class becomes the solution.
2. The candidate set is empty. No solution is found.
3. No more attributes remain for which a value can be obtained. A partial solution is
found in the form of the remaining set of candidates.
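The pruning loop just described can be sketched in Python. The apple classes and attribute values below are invented for illustration, and, following the match variation discussed later in this section, an “unknown” value is treated as consistent with every candidate.

```python
# Sketch of the solution-driven pruning method for classification.
# The apple classes and attribute values are invented for illustration.

KNOWLEDGE_BASE = {
    "granny-smith":  {"color": "green", "taste": "sour"},
    "james-grieve":  {"color": "green", "taste": "sweet"},
    "red-delicious": {"color": "red",   "taste": "sweet"},
}

def classify(obtain):
    """obtain(attribute) plays the transfer function: value or 'unknown'."""
    candidates = set(KNOWLEDGE_BASE)            # generate candidate set
    attributes = ["color", "taste"]
    while attributes and len(candidates) > 1:
        attribute = attributes.pop(0)           # specify attribute
        value = obtain(attribute)               # obtain feature
        if value == "unknown":
            continue                            # consistent with all
        candidates = {c for c in candidates     # match, then prune
                      if KNOWLEDGE_BASE[c][attribute] == value}
    # 1 candidate: solution; 0: no solution; >1: partial solution
    return candidates

print(classify(lambda attr: {"color": "green", "taste": "sweet"}[attr]))
```

The three termination conditions of the method correspond to the three possible sizes of the returned candidate set.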
Figure 6.3 shows the corresponding inference structure. Three inferences are used in
this method plus a transfer function for obtaining the attribute value:
Generate candidate In the simplest case, this step is just a look-up in the knowledge
base of the potential candidate solutions.
Specify attribute There are several ways of realizing this inference. The simplest way
is to just do a random selection. This can work well, especially if the “cost” of
TASK classification;
ROLES:
INPUT: object: "Object that needs to be classified";
OUTPUT: candidate-classes: "Classes consistent with the object";
END TASK classification;
TASK-METHOD prune-candidate-set;
REALIZES: classification;
DECOMPOSITION:
INFERENCES: generate, specify, match;
TRANSFER-FUNCTIONS: obtain;
ROLES:
INTERMEDIATE:
class: "object class";
attribute: "a descriptor for the object";
new-feature: "a newly obtained attribute-value pair" ;
current-feature-set: "the collection of features obtained";
truth-value: "indicates whether the class is consistent with
object features obtained during the reasoning process";
CONTROL-STRUCTURE:
WHILE NEW-SOLUTION generate(object -> class) DO
candidate-classes := class ADD candidate-classes;
END WHILE
WHILE NEW-SOLUTION specify(candidate-classes -> attribute)
AND SIZE candidate-classes > 1 DO
obtain(attribute -> new-feature);
current-feature-set := new-feature ADD current-feature-set;
FOR-EACH class IN candidate-classes DO
match(class + current-feature-set -> truth-value);
IF truth-value == false
THEN
candidate-classes := candidate-classes SUBTRACT class;
END IF
END FOR-EACH
END WHILE
END TASK-METHOD prune-candidate-set;
Figure 6.2
Pruning method for classification.
Figure 6.3
Inference structure for the pruning classification method.
Obtain feature Usually, one should allow the user to enter an “unknown” value. Also,
sometimes there is domain knowledge that suggests that certain attributes should
always be obtained as one group.
Match This inference should be able to handle an “unknown” value for certain at-
tributes. The default approach is that every candidate is consistent with an “un-
known” value for a certain attribute.
Limited candidate generation If the full set of candidate solutions is too large, one
adds a small data-driven element into the method by giving a small set of features
as input to the generate step, with the idea that only those candidates are considered
that are consistent with these initial data. In most cases the choice of this initial set
will be quite straightforward. This set can either contain a fixed set of attributes or
be dependent on the context (e.g., the location where the object is found).
User control over attribute selection In applications we have seen, users often want
to control the order in which new information is provided. In this case, the attribute
produced by the specify inference is used as a suggestion to the user, who has the
final control over the information to be provided. This can be achieved by changing
the control flow slightly. Replace the obtain function in the second while loop of the
control structure (see Figure 6.2) with the following two transfer function invoca-
tions:
1. A present function that shows the user the suggested new information item (=
attribute).
2. A receive function that “reads in” the new feature(s). These could be different
from the one suggested.
Hierarchical search through class structure In some domains, natural subtype hier-
archies of classes exist. Such a hierarchy can be exploited in two ways:
1. The hierarchy is used for attribute selection, because the supertypes often sug-
gest attributes that discriminate between disjunct sets of candidates. The su-
pertypes themselves are not used in the candidate set.
2. The hierarchy is used to guide the pruning process. Supertypes are incorpo-
rated in the candidate set. If a supertype is ruled out, all its subtypes are also
ruled out.
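The second way of exploiting a class hierarchy can be sketched as follows; the two-level hierarchy of classes is invented for illustration.

```python
# Sketch of hierarchy-guided pruning: when a supertype is ruled out,
# all its subtypes are ruled out as well. The two-level class
# hierarchy is invented for illustration.

SUBTYPES = {
    "apple":  ["granny-smith", "james-grieve"],
    "citrus": ["lemon", "orange"],
}

def prune(candidates, ruled_out):
    """Remove a ruled-out class and, if it is a supertype, its subtypes."""
    removed = {ruled_out} | set(SUBTYPES.get(ruled_out, []))
    return [c for c in candidates if c not in removed]

candidates = ["apple", "granny-smith", "james-grieve",
              "citrus", "lemon", "orange"]
print(prune(candidates, "citrus"))
```

Ruling out the supertype "citrus" here removes "lemon" and "orange" in one step, without ever obtaining features that discriminate between them.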
Typical domain schema Figure 6.4 shows a sort of minimal domain schema for classifi-
cation. The object-type is the overall category to which the objects to be classified belong,
e.g., apple or rock. The object type is linked to multiple object-classes that represent the
categories that will act as output of the classification task, e.g., a James Grieves apple or
a granite rock. An object type can be characterized by a number of attributes, such as
color, shape, composition, and so on. The main knowledge category used in classification
is specified in the class-constraint rule type, which allows us to define dependencies be-
tween object classes and attribute values, e.g., the object class James Grieves restricts the
value of the color attribute to green or yellow-green.
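One possible encoding of the class-constraint rule type is as a mapping from object classes to admissible attribute values, against which obtained features can be matched. The classes and value sets below are invented for illustration.

```python
# A possible encoding of the class-constraint rule type: each object
# class restricts some attributes to a set of admissible values. The
# classes and value sets are invented for illustration.

CLASS_CONSTRAINTS = {
    "james-grieve":  {"color": {"green", "yellow-green"}},
    "red-delicious": {"color": {"red", "dark-red"}},
}

def consistent(object_class, features):
    """True if every obtained feature satisfies the class constraints."""
    constraints = CLASS_CONSTRAINTS[object_class]
    return all(value in constraints[attr]
               for attr, value in features.items()
               if attr in constraints)

print(consistent("james-grieve", {"color": "green"}))    # True
print(consistent("red-delicious", {"color": "green"}))   # False
```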
6.4 Assessment
General Characterization
Goal Find a decision category for a case based on a set of domain-
specific norms.
Typical example Decide whether a person gets a loan she applied for.
Terminology case: data about the lender and the requested loan.
decision category: eligible-for-loan yes or no.
Figure 6.4
Typical domain schema for classification tasks.
Default method The method shown in Figure 6.5 combines the two task methods described in Chapter 10. It consists of the following
functions:
Abstract case Almost always, some of the case data need to be abstracted. For exam-
ple, in the housing application (see Chapter 10) the age and household type of the
applicant needed to be abstracted. The abstractions required are determined by the
data used in the norms (see further). Abstraction is modelled here as an inference
that is repeated until no more abstractions can be made. The abstracted features are
added to the case.
Specify norms After abstraction, the first step that needs to be taken is to find the norms
or criteria that can be used for this case. In most assessment tasks the norms used
are at least partially dependent on the case; the (abstracted) case is therefore the input to this
function.
An example of a norm in a loan assessment application would be “loan-amount
matches income.”
Select norm From the set of norms generated by the previous inference, one norm needs
to be selected for evaluation. In the simplest case, this selection is done at random.
Often however, there is domain knowledge available that indicates an ordering for
norm evaluation. This knowledge can be used to guide the selection. It is not
necessary for the selection knowledge to be complete: the system can always fall
back on random selection as the default method.
Evaluate norm Evaluate the selected norm with respect to the case data. This function
produces a truth value for the norm, e.g., “loan-amount matches income” is false.
This function is usually a quite straightforward computation.
Match to see whether a solution can be found Sometimes, the truth value of one
norm can be sufficient to arrive at a decision. For example, if in the housing appli-
cation one of the four norms turns out to be false for a certain case, the decision
found by the match function is “not eligible for this house.”
The functions described are shown graphically in the inference structure of Figure 6.6.
TASK assessment;
ROLES:
INPUT: case-description: "The case to be assessed";
OUTPUT: decision: "the result of assessing the case";
END TASK assessment;
TASK-METHOD assessment-with-abstraction;
REALIZES: assessment;
DECOMPOSITION:
INFERENCES: abstract, specify, select, evaluate, match;
ROLES:
INTERMEDIATE:
abstracted-case: "The raw data plus the abstractions";
norms: "The full set of assessment norms";
norm: "A single assessment norm";
norm-value: "Truth value of a norm for this case";
evaluation-results: "List of evaluated norms";
CONTROL-STRUCTURE:
WHILE HAS-SOLUTION abstract(case-description -> abstracted-case)
DO
case-description := abstracted-case;
END WHILE
specify(abstracted-case -> norms);
REPEAT
select(norms -> norm);
evaluate(abstracted-case + norm -> norm-value);
evaluation-results := norm-value ADD evaluation-results;
UNTIL
HAS-SOLUTION match(evaluation-results -> decision);
END REPEAT
END TASK-METHOD assessment-with-abstraction;
Figure 6.5
Method for assessment.
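The control structure of this method can be sketched in executable form. The abstraction rule, the norms, and all thresholds in the loan example below are invented for illustration.

```python
# Sketch of the assessment-with-abstraction method, following the loan
# example in the text. The abstraction rule, the norms, and all
# thresholds are invented for illustration.

def abstract(case):
    """Apply abstraction rules until no more abstractions can be made."""
    if "income" in case and "income-class" not in case:
        case = dict(case, **{
            "income-class": "high" if case["income"] > 50000 else "low"})
    return case

NORMS = {   # norm name -> predicate over the abstracted case
    "loan-amount-matches-income":
        lambda c: c["loan-amount"] <= 5 * c["income"],
    "applicant-is-adult":
        lambda c: c["age"] >= 18,
}

def assess(case):
    case = abstract(case)
    for name, norm in NORMS.items():     # select and evaluate norms
        if not norm(case):               # one false norm can decide
            return ("rejected", name)
    return ("accepted", None)

print(assess({"income": 30000, "age": 25, "loan-amount": 200000}))
```

As in the method specification, a single false norm can be sufficient for the match step to produce a decision.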
Figure 6.6
Inference structure of the assessment method.
2. Case abstraction knowledge: specifies dependencies between case data (see the has-
abstraction rule type in Figure 6.7).
3. Norm-evaluation knowledge: specifies logical dependencies between case data and
norms. As discussed, norm-ordering knowledge can be added as well.
4. Decision knowledge: specifies the decision options that can act as task output, as
well as logical dependencies between norm values and a particular decision.
This schema abstracts from the housing domain, and is phrased in domain-neutral
terms. For a particular application these types have to be mapped onto domain-specific types.
For example, the case data have to be linked to domain types that represent the case, e.g. a
loan-applicant and a loan.
Figure 6.7
Domain schema for the assessment method.
6.5 Diagnosis
General Characterization
Goal Find the fault that causes a system to malfunction.
Typical example Diagnosis of a technical device, such as a copier.
Terminology complaint/symptom: the data that initiate a diagnostic process.
hypothesis: a potential solution (thus a fault).
differential: the set of active hypotheses.
finding(s)/evidence: additional data about the system being diagnosed.
fault: the solution found by the diagnostic reasoning process. The
nature of the fault representation varies, e.g., an internal system
state, a causal chain, or a heuristic label.
Input Symptoms and/or complaints
Output Fault(s) plus the evidence gathered for the fault(s)
Features In principle, a diagnosis task should always have some model of
the behavior of the system being diagnosed. Often however, a
diagnosis task is reduced to a classification task by replacing the
behavioral model with direct associations between symptoms and
faults. In the default method we assume that the underlying do-
main knowledge contains an (albeit quite simple) causal model of
system behavior.
Default method The default method is somewhat different from the method used in
the car-diagnosis application of Chapter 5. The method assumes a simple causal
model in which symptoms and potential faults are placed in a causal network, and in which
internal system states act as intermediate nodes. The network also contains causal links that
indicate typical findings for some state (see the domain schema further on).
Figure 6.8 shows the method specification. The corresponding inference structure
is shown in Figure 6.9. The method follows a generate-and-test strategy. The method
decomposes the diagnosis task into five subfunctions: four inferences and one transfer
function, which are briefly discussed below.
Cover This inference searches backward through a causal network to find potential
causes of the complaint. This inference is executed until no more hypotheses can
be found. The set of hypotheses is placed in the differential.
Select The select inference selects one hypothesis from the differential. We assume
that a simple form of preference knowledge is used in this selection process, e.g.,
knowledge about the a priori probability of the fault.
Specify This inference specifies some observable entity, the value of which can be used
to limit the number of candidate faults. The observable may not only tell us some-
thing about the presence of the hypothesis that acts as input for this step but can also
be used to rule out other hypotheses.
Obtain This is a simple transfer function to obtain the actual value of the observable
used for testing the candidates.
Verify This inference is used to check a candidate fault (a hypothesis). The result is a
boolean, indicating whether the candidate should be kept in the differential.
TASK diagnosis;
ROLES:
INPUT:
complaint: "Finding that initiates the diagnostic process";
OUTPUT:
faults: "the faults that could have caused the complaint";
evidence: "the evidence gathered during diagnosis";
END TASK diagnosis;
TASK-METHOD causal-covering;
REALIZES: diagnosis;
DECOMPOSITION:
INFERENCES: cover, select, specify, verify;
TRANSFER-FUNCTIONS: obtain;
ROLES:
INTERMEDIATE:
differential: "active candidate solutions";
hypothesis: "candidate solution";
result: "boolean indicating result of the test";
expected-finding: "data one would normally expect to find";
actual-finding: "the data actually observed in practice";
observable: "system feature whose value can be obtained";
finding: "the value obtained for the observable";
CONTROL-STRUCTURE:
WHILE NEW-SOLUTION cover(complaint -> hypothesis) DO
differential := hypothesis ADD differential;
END WHILE
REPEAT
select(differential -> hypothesis);
specify(hypothesis -> observable);
obtain(observable -> finding);
evidence := finding ADD evidence;
FOR-EACH hypothesis IN differential DO
verify(hypothesis + evidence -> result);
IF result == false
THEN differential := differential SUBTRACT hypothesis;
END IF
END FOR-EACH
UNTIL
SIZE differential <= 1 OR "no more observables left";
END REPEAT
faults := differential;
END TASK-METHOD causal-covering;
Figure 6.8
Default causal-covering method for the diagnosis task.
Figure 6.9
Inference structure for the default diagnostic method.
The verify step can be modelled as a single inference in the case of a simple ver-
ification method such as the one used in the car-diagnosis application, where the
domain knowledge is assumed to contain direct associations between hypotheses and
expected values of observables. However, in many applications the verify step will
need to be modelled in more detail. Some frequently occurring variations are dis-
cussed further on.
The last four functions are executed in a loop in which the candidates are tested in
the order dictated by the select inference. The loop terminates either when the differential
contains at most one hypothesis or when no more observables can be specified. Thus, the
method can lead to three situations:
1. The differential is empty: no fault is found. Apparently, the evidence found con-
flicted with the knowledge about faults.
2. Precisely one solution is found. This is usually the ideal outcome.
3. A set of faults remains. The system cannot differentiate between the remaining fault
candidates.
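The generate-and-test strategy of the causal-covering method can be sketched as follows. The causal network, the expected findings, and the complaint are invented for illustration, and hypothesis testing is simplified to matching expected against observed values.

```python
# Sketch of the causal-covering method for diagnosis. The causal
# network, expected findings, and complaint are invented.

CAUSES = {      # complaint/effect -> possible internal causes (faults)
    "engine-wont-start": ["empty-battery", "empty-fuel-tank"],
}
EXPECTED = {    # fault -> expected values of observables
    "empty-battery":   {"dashboard-lights": "off"},
    "empty-fuel-tank": {"fuel-gauge": "empty"},
}

def diagnose(complaint, obtain):
    # cover: search backward for potential causes of the complaint
    differential = set(CAUSES.get(complaint, []))
    evidence = {}
    observables = sorted({o for f in differential for o in EXPECTED[f]})
    while len(differential) > 1 and observables:
        observable = observables.pop(0)             # specify
        evidence[observable] = obtain(observable)   # obtain
        differential = {f for f in differential     # verify
                        if EXPECTED[f].get(observable, evidence[observable])
                        == evidence[observable]}
    return differential, evidence

findings = {"dashboard-lights": "on", "fuel-gauge": "empty"}
faults, evidence = diagnose("engine-wont-start", findings.get)
print(faults)
```

Note that, as in the default method, a fault with no expectation for an observable is simply kept in the differential; only conflicting evidence rules a candidate out.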
Typical variations The method sketched is in fact a simple form of what is called
“model-based diagnosis” in the literature. There is a complete research field connected
to diagnosis, and knowledge engineers interested in complex diagnostic applications should
familiarize themselves with this literature. A good starting point is the library
of diagnostic methods described by Benjamins (1993). The default method described here
is a variation of one of the methods described by Benjamins. Here, we limit the discussion
to a few common and relatively simple extensions and variations of the diagnostic method
without any claim of completeness.
Abstraction of findings Often it is useful to add an inference in which one tries to find
an abstraction of the findings obtained. Knowledge about faults is often expressed
in abstract terminology, which does not relate directly to the raw findings.
Multiple faults The default method assumes that there is only one fault that causes the
complaint. If this assumption cannot be made, the method has to be refined. This can
be done by inserting an inference after the cover step that transforms the differential
into a set of potential fault sets. A common way of realizing this inference is through
set covering.
Fault selection At the end of the method we can introduce an inference that, if neces-
sary, selects the most promising ones from the remaining fault candidates. Several
preference techniques exist for this. The introduction of this step is particularly use-
ful if the method is extended to cope with multiple faults, because in that case the
number of hypotheses usually increases considerably and the verification step may
not be able to rule out a sufficient number of candidates.
Add simulation methods In the verification step one can use simulation methods to
derive expected values for findings. This requires two major extensions:
1. Additional domain knowledge about system behavior, e.g., a model that can be
used for qualitative or quantitative prediction of behavior.
2. A separate prediction step within verification. Note that prediction can be a
complex task in its own right.
Figure 6.10
Typical domain schema for diagnosis.
Typical domain schema Figure 6.10 shows a typical domain schema for simple diagno-
sis. It assumes that each system being diagnosed can be characterized in terms of a number
of system features. There are two types of system features, namely those that can be ob-
served (e.g., a certain color) and those that represent an internal state of the system (e.g.,
some disease process). Faults are defined as subtypes of internal states, meaning that not
every internal system state may act as a fault. For example, often only the start-
ing points in the causal networks are allowed as faults, as is the case in the car-diagnosis
example.
The structure of the causal system model used by the cover and the specify inference
(see the static roles in Figure 6.9) is represented as a rule type causal-dependency. This
rule type describes rules in which the antecedent (some expression about an internal system
state) can-cause the consequent (some expression about a system feature, which could be
either another state or an observable value). The connection symbol can-cause is chosen
deliberately to make clear that the causal transition is not certain, but depends on unknown
other factors.
6.6 Monitoring
General Characterization
Default method Figure 6.11 shows the default method that is applicable to most simple
monitoring tasks. The method is event-driven: the method becomes active every time new
data come in. This is modelled with the use of the transfer function receive in which
an external agent (a human user or another system) has the initiative (see Chapter 5 and
Chapter 9 for more details on transfer functions).
Once a new finding has come in, four inferences are defined for processing the data:
1. A system parameter is selected that can tell us something about the new data.
2. A norm value is specified for the parameter. Typically, a monitoring system will
have as domain knowledge a system model, consisting of a number of parameters.
For each parameter knowledge needs to be provided about the normal parameter
values in different system contexts. For example, if we have a system for monitoring
intensive care for premature infants, heart rate could be a parameter, and the normal
value would typically be a value above 100 beats/minute.
3. A comparison is made of the new finding with the norm, leading to a difference
description (e.g., 5 beats/minute below the norm).
TASK monitoring;
ROLES:
INPUT:
historical-data: "data from previous monitoring cycles";
OUTPUT:
discrepancy: "indication of deviant system behavior";
END TASK monitoring;
TASK-METHOD data-driven-monitoring;
REALIZES: monitoring;
DECOMPOSITION:
INFERENCES:
select, specify, compare, classify;
TRANSFER-FUNCTIONS:
receive;
ROLES:
INTERMEDIATE:
new-finding: "some observed data about the system";
parameter: "variable to check for deviant behavior";
norm: "expected normal value of the parameter";
difference: "an indication of the observed norm deviation";
CONTROL-STRUCTURE:
receive(new-finding);
select(new-finding -> parameter);
specify(parameter -> norm);
compare(norm + new-finding -> difference);
classify(difference + historical-data -> discrepancy);
historical-data := new-finding ADD historical-data;
END TASK-METHOD data-driven-monitoring;
Figure 6.11
Method specification for the data-driven method for monitoring. A data-driven method typically starts with a
“receive” transfer function, meaning that system control is dependent on the reception of external data. For this
reason, the role “new-finding” is not listed as a task input: it is an input during the task.
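One cycle of this data-driven method can be sketched in Python. The heart-rate norm follows the example in the text; the classification rule (two prior low readings turn a deviation into a discrepancy) is invented for illustration.

```python
# Sketch of one data-driven monitoring cycle, using the heart-rate
# example from the text. The norm value and the classification rule
# are invented for illustration.

NORMS = {"heart-rate": 100}   # normal value: at least 100 beats/minute

def monitor(new_finding, historical_data):
    parameter, value = new_finding                    # select
    norm = NORMS[parameter]                           # specify
    difference = value - norm                         # compare
    # classify: use the history to separate noise from real deviation
    low_before = sum(1 for p, v in historical_data
                     if p == parameter and v < norm)
    if difference < 0 and low_before >= 2:
        discrepancy = f"{parameter} {-difference} below norm"
    else:
        discrepancy = None
    historical_data.append(new_finding)
    return discrepancy

history = [("heart-rate", 97), ("heart-rate", 96)]
print(monitor(("heart-rate", 95), history))
```

The historical data play the same role as in the method specification: the classification of a single difference depends on the findings of previous cycles.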
Typical variations In some domains the method model-driven monitoring is more ap-
propriate. Model-driven monitoring describes a monitoring approach where the system has
the initiative. This type of monitoring is typically executed at regular points in time, e.g.,
each month the progress of a software project is measured. In this case, the input data need
to be actively acquired. The system actively acquires new data for some selected set of
parameters and then checks whether the observed values differ from the expected ones.

Figure 6.12
Inference structure of the task template for monitoring.
In some cases classification is quite complex, and is best treated as a subtask with
internal structure. The classification method discussed earlier might be of help then.
6.7 Synthesis
Although “synthesis” is in essence just a common denominator for a group of task types,
some of which are described further on, we found it useful to include a general synthesis
model, because it turns out that in many synthetic tasks a similar reasoning pattern appears.

Figure 6.13
Terminology in synthetic tasks. The examples are taken from a PC configuration domain.
Also it is useful to define some common terminology. The model sketched here should be
viewed as an “ideal” model, which often cannot be used in precisely this form, or should
be extended in various ways. Also, the terminology used is by definition very abstract.
General Characterization
Goal Given a set of requirements construct a system structure that ful-
fills these requirements
Terminology system structure: the system being synthesized, e.g., a physical
artifact, a plan, a schedule, or a set of assignments.
constraint, preference, requirement These three terms appear
in most synthesis domains. Requirements are often divided up
into “hard” and “soft” requirements. In the literature, different
definitions are given of these terms. We propose to use the ter-
minology depicted in Figure 6.13. The main property of require-
ments in general is that they are external to the system. When
the requirements are “operationalized” for use in an application it
usually turns out that there are two types of requirements: “hard”
requirements and “soft” requirements. Typically, hard require-
ments have the same role and representation as the “constraints”
Default method Figure 6.14 shows the specification of this idealized method for syn-
thetic tasks. It consists of four steps:
TASK synthesis;
ROLES:
INPUT: requirements: "the requirements that need to be
fulfilled by the artefact";
OUTPUT: system-structure-list: "partially ordered list
of preferred system structures";
END TASK synthesis;
TASK-METHOD idealized-synthesis-method;
REALIZES: synthesis;
DECOMPOSITION:
INFERENCES: operationalize, generate, select-subset, sort;
ROLES:
INTERMEDIATE:
hard-requirements: "requirements that need to be met";
soft-requirements: "requirements that act as a preference";
possible-system-structures: "all possible system structures";
valid-system-structures: "all system structures that are
consistent with the constraints";
CONTROL-STRUCTURE:
operationalize(requirements -> hard-requirements
+ soft-requirements);
generate(requirements -> possible-system-structures);
select-subset(possible-system-structures + hard-requirements
-> valid-system-structures);
sort(valid-system-structures + soft-requirements
-> system-structure-list);
END TASK-METHOD idealized-synthesis-method;
Figure 6.14
Specification of the idealized method for synthetic tasks.
4. Sort systems in preference order Often, the space of valid designs is still very large.
To reduce this set we need to apply preference criteria. Typically, we have two types
of preference-related knowledge:
a. The actual preferences, e.g., that the system should be as cheap as possible. In
our society this is often an important, if not the only, preference criterion.
b. The relative importance of the preferences. Preferences are often rated accord-
ing to some preference scale.
If there is a strict ordering in the set of preferences, this step is not very difficult.
Often, however, preferences are tangled and no clear ordering exists. In
this case, some balancing function needs to be introduced to decide about the order-
ing.
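The four-step idealized method can be sketched as a generate-and-test procedure: generate every possible structure, keep those meeting the hard requirements, and sort the remainder by weighted preferences. The PC component data and the weighting scheme below are invented for illustration.

```python
# Sketch of the idealized synthesis method: generate all possible
# structures, select those that meet the hard requirements, and sort
# by weighted preferences. The PC component data are invented.

from itertools import product

COMPONENTS = {   # design element -> (choice, price) alternatives
    "cpu":  [("slow-cpu", 100), ("fast-cpu", 300)],
    "disk": [("small-disk", 50), ("big-disk", 150)],
}

def synthesize(max_price, speed_weight):
    # generate: all possible system structures
    possible = [dict(zip(COMPONENTS, combo))
                for combo in product(*COMPONENTS.values())]
    def price(s):
        return sum(cost for _, cost in s.values())
    # select subset: hard requirement on the total price
    valid = [s for s in possible if price(s) <= max_price]
    # sort: prefer cheap systems; a weight balances in a speed preference
    def score(s):
        bonus = speed_weight if s["cpu"][0] == "fast-cpu" else 0
        return price(s) - bonus
    return sorted(valid, key=score)

best = synthesize(max_price=400, speed_weight=250)[0]
print(best["cpu"][0], best["disk"][0])
```

The weight in the score function is one simple form of the balancing function mentioned above: it trades the cheapness preference against the speed preference when no strict ordering exists.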
Figure 6.15 shows the corresponding inference structure. We do not describe this
method in more detail. As you will see, the methods described in the rest of this chapter
Figure 6.15
Inference structure of the “ideal” method for synthesis tasks.
2. Verify the current design; if the extended design is OK, then continue with step 1,
else go to step 3.
3. Critique the current design and generate an ordered list of actions to revise the cur-
rent design.
4. Select an action and modify the design accordingly until the verify function suc-
ceeds.
5. Return to step 1. If no further extensions are available, report the configuration
found.
Operationalize requirements The needs and desires of the user have to be translated
into operational constraints and preferences that the method can work on. For exam-
ple, the “soft” requirement fast system is translated into the preference “maximize
the parameter SPEED of the component PROCESSOR.” This operationalization is
by no means always trivial, and extensive knowledge elicitation may be required.
Specify skeletal design A skeletal design is a predefined format for the design: which
type of components should the solution contain? In many simple configuration-
design problems there is just one fixed basic artifact structure with some optional
components. Configuration of a personal computer system is an example of this.
In this case, this function is simply a look-up of the default skeletal design. In
more complex applications, several skeletal designs exist, one of which needs to be
selected.
Propose design extension This function is typically a task by itself with at least two
inferences:
1. compute a design extension, given the component choices in the current design.
Parameter values are usually logically dependent on the selected component
type. For example, a certain processor has a certain price and a certain speed.
For computed values, it is useful to keep a record of the values this computation
depends on. This can be used in the modification function later on.
2. prefer a design extension by using preferences in the knowledge base and user
preferences to select a component or parameter value for a design element in
the skeletal design that is not yet instantiated. If an ordinal scale of preferences exists, use it in the selection. Again, it is useful to keep a record of the preferences used to select a value for a component.
The straightforward approach is to try to find a computed extension first, before the
preference inference is invoked.
Template Knowledge Models 153
Verify current configuration Check with the help of the internal constraints and those
supplied by the user whether the current configuration is internally consistent. If the
verification fails, produce the violated constraint as an additional output.
Critique the current design A simple but effective form of critiquing is to include do-
main knowledge that associates a constraint with “fixes”: actions that can be un-
dertaken to modify the design such that the violation disappears. For example, a
violation of the constraint “minimum storage capacity” can be fixed with the action
“upgrade hard disk.” Such fixes typically suggest an ordered list of possible actions.
In more complex cases, the fix can involve updates of more than one design value.
As a general rule, only design elements for which a value has been “preferred” can
be subject to fixing.
Select an action This is usually a simple selection of the first untried element of the
action list generated by the CRITIQUE function.
Modify the configuration This function actually applies the fix action to the design.
The function also removes all components for which the value depended on the
changed element, and invokes compute value to recompute new values.
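The knowledge used by the operationalize and critique functions above can be pictured as simple lookup structures. A minimal Python sketch; all constraint, fix, and component names here are hypothetical illustrations, not part of the CommonKADS template:

```python
# Hypothetical knowledge structures for the propose-and-revise
# functions described above; all names are illustrative.

# Operationalize: a "soft" user requirement mapped onto an operational
# preference over a design parameter.
SOFT_REQUIREMENT_MAP = {
    "fast system": ("maximize", ("processor", "speed")),
    "cheap system": ("minimize", ("system", "price")),
}

# Critique: each constraint is associated with an ordered list of fix
# actions that may remove a violation of that constraint.
FIX_TABLE = {
    "minimum-storage-capacity": [("upgrade", "hard-disk"),
                                 ("add", "second-hard-disk")],
    "maximum-price": [("downgrade", "processor"),
                      ("downgrade", "hard-disk")],
}

def critique(violation):
    """Return the ordered list of candidate fix actions for a violation."""
    return list(FIX_TABLE.get(violation, []))
```

The select function would then simply walk down the returned list, trying the first untried action.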
Figure 6.17 shows the inference structure for the default configuration-design method.
This is a good example of what is called a “provisional” inference structure in Section 6.2.
Some functions are likely to turn out to be complex tasks in an actual application.
Two major variations have to be considered:
1. Perform verification plus revision only when a value has been proposed for all design
elements. This change in fact requires only a simple adjustment of the control struc-
ture of the method, but can have a large impact on the competence of the method.
Consult Motta et al. (1996) for a detailed discussion of this issue.
2. Avoid the use of fix knowledge. Fixes can be viewed as search heuristics that guide
the search through the potentially extensive space of alternative designs once a
constraint is violated. However, it could turn out that fixes are unavailable, or only
fragmentarily available, in the application. In that case, the method will have to fall
back on a technique such as chronological backtracking to realize the revision
process. This solution is usually much more demanding.
Typical domain schema Figure 6.18 shows the main domain knowledge types involved
in configuration design using the default method. The central concept is design element.
This is a supertype of component and parameter. Parameters are linked to a certain
component. Components themselves also act as a kind as a kind of parameter: their “value”
is the “model” selected for the component. For example, for a hard-disk component we
can select several models, each with its own parameter values (for capacity, access type,
TASK configuration-design;
ROLES:
INPUT: requirements: "requirements for the design";
OUTPUT: design: "the resulting design";
END TASK configuration-design;
TASK-METHOD propose-and-revise;
REALIZES: configuration-design;
DECOMPOSITION:
INFERENCES: operationalize, specify, propose, verify, critique,
select, modify;
ROLES:
INTERMEDIATE:
skeletal-design: "set of design elements";
extension: "a single new value for a design element";
violation: "constraint violated by the current design";
truth-value: "boolean indicating result of the verification";
action-list: "ordered list of possible repair (fix) actions";
action: "a single repair action";
CONTROL-STRUCTURE:
operationalize(requirements -> hard-requirements
+ soft-requirements);
specify(requirements -> skeletal-design);
WHILE NEW-SOLUTION propose(skeletal-design + design
+ soft-requirements -> extension) DO
design := extension ADD design;
verify(design + hard-requirements
-> truth-value + violation);
IF truth-value == false
THEN
critique(violation + design -> action-list);
REPEAT
select(action-list -> action);
modify(design + action -> design);
verify(design + hard-requirements
-> truth-value + violation);
UNTIL truth-value == true;
END REPEAT
END IF
END WHILE
END TASK-METHOD propose-and-revise;
Figure 6.16
Version of the propose-and-revise method for configuration design.
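The CML control structure of Figure 6.16 can be transliterated into executable form. The sketch below passes the seven inferences in as plain callables; this rendering and the toy processor domain are illustrative assumptions, not part of the template itself:

```python
def propose_and_revise(requirements, inf):
    """Executable transliteration of the control structure in Figure 6.16.
    `inf` bundles the seven application-specific inference functions."""
    hard, soft = inf["operationalize"](requirements)
    skeletal = inf["specify"](requirements)
    design = {}
    # WHILE NEW-SOLUTION propose(...) DO
    while (ext := inf["propose"](skeletal, design, soft)) is not None:
        design[ext[0]] = ext[1]          # design := extension ADD design
        ok, violation = inf["verify"](design, hard)
        if not ok:
            actions = inf["critique"](violation, design)
            while True:                  # REPEAT ... UNTIL truth-value == true
                design = inf["modify"](design, inf["select"](actions))
                ok, violation = inf["verify"](design, hard)
                if ok:
                    break
    return design

# Toy domain (hypothetical): choose a processor model; the fast model
# violates the price constraint and is fixed by downgrading.
toy = {
    "operationalize": lambda req: ({"max-price": 900}, {"prefer": "fast"}),
    "specify": lambda req: ["processor"],
    "propose": lambda sk, design, soft:
        ("processor", "fast-cpu") if "processor" not in design else None,
    "verify": lambda design, hard:
        (design["processor"] != "fast-cpu",
         "max-price" if design["processor"] == "fast-cpu" else None),
    "critique": lambda violation, design: [("downgrade", "processor")],
    "select": lambda actions: actions.pop(0),   # first untried action
    "modify": lambda design, action: {**design, "processor": "budget-cpu"},
}
result = propose_and_revise({"wish": "fast but affordable"}, toy)
```

Running this yields the revised design with the budget processor, mirroring one pass through the propose, verify, critique, select, and modify functions.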
[Figure body: requirements feed the operationalize step (producing hard and soft requirements) and the specify step (producing the skeletal design); the propose step produces an extension; the critique step produces a violation; the modify step revises the design.]
Figure 6.17
Inference structure for the propose-and-backtrack method.
price, etc.). The propose-and-revise method in fact treats components in a similar manner
to parameters. Components can be organized in an aggregate component structure through
the subcomponent-of relation (see the lower left of Figure 6.18).
The domain schema contains three rule types. The rule type calculation-expression
describes knowledge pieces that represent computational dependencies between design
elements. An example is the weight parameter of an aggregate component, which can be derived from the combined weights of its subcomponents. The rule type constraint-
expression describes constraints on components. The antecedent consists of one or more
logical expressions about design elements. If the antecedent evaluates to true, the conclusion is assumed to hold. This conclusion is an expression about some constraint
label, e.g., that the constraint “minimum-storage-capacity” has been exceeded. Finally,
the rule type preference-expression defines a dependency between a design element and
a preference. An example would be the preference “Intel inside” which requires as an
antecedent that the parameter maker of the component processor holds the value “Intel.”
Preferences are associated with a preference rating, indicating the relative importance of
the preferences. The exact representation of the preference rating is application-specific.
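The schema just described can be rendered as data types. The sketch below is one possible encoding using Python dataclasses; the type names follow Figure 6.18, while the rating scale and the textual form of the expressions are application-specific assumptions:

```python
from dataclasses import dataclass

@dataclass
class DesignElement:
    """Supertype of component and parameter (cf. Figure 6.18)."""
    name: str
    value: object = None        # for a component, the selected model

@dataclass
class Parameter(DesignElement):
    component: str = ""         # each parameter is linked to a component

@dataclass
class Component(DesignElement):
    subcomponent_of: str = ""   # aggregate component structure

@dataclass
class ConstraintExpression:
    """If the antecedent holds, the named constraint is violated."""
    antecedent: str             # e.g. "storage < 500"
    constraint: str             # constraint label, e.g. "minimum-storage-capacity"

@dataclass
class PreferenceExpression:
    """Dependency between a design element and a preference."""
    antecedent: str             # e.g. "processor.maker == 'Intel'"
    preference: str             # e.g. "Intel inside"
    rating: int = 0             # relative importance; scale is application-specific

# Example: the speed parameter of the processor component.
speed = Parameter(name="speed", value=450, component="processor")
```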
[Figure body: a design element (supertype of component and parameter) is linked to calculation expressions (computes), constraint expressions (defines a named constraint), and preference expressions (defines a preference with a preference-rating); actions have a type.]
Figure 6.18
Typical domain knowledge types in configuration design through propose-and-revise.
A fix is modelled in the schema as a complex relation. It links a constraint to a set of fix-actions that can be applied to the design in case the constraint is violated. A fix-action is a relation class (see Chapter 5) and thus itself also a relation, namely between an action (e.g., upgrade, downgrade, increase) and a design element. An instance of a fix in a computer-configuration domain could look like this:
6.9 Assignment
General Characterization
Goal Create a relation between two groups of objects, subjects and re-
sources, that meets the requirements and obeys the constraints.
Typical example Assignment of offices to employees. Assignment of airplanes to
gates.
Terminology subject: An object (employee, airplane) that needs to get a certain
resource.
resource: An object (office, gate) that can be used for a certain purpose by a subject.
subject-group: A group of subject objects, usually constructed
for the purpose of joint subject assignment to a resource.
allocation: A relation between a subject and a resource.
Input Two object sets, one set consisting of subjects and the other set
consisting of resources available for assignment. Possible addi-
tional inputs: existing assignments, component-specific require-
ments.
Output A set of subject-resource allocations.
Features Assignment is a relatively simple synthetic task. One can see it as
a variation of configuration design, the main difference being the
underlying system structure which in assignment is not a physical
artifact.
Default method The template defined in this section covers only a simple method for
assignment which has proved useful. If this method is not appropriate, e.g., because exten-
sive backtracking is required, it is best to use a configuration design method instead.
The method specification is shown in Figure 6.19. The method contains three infer-
ences:
1. Select subset of subjects This inference selects a subset of the subjects to be assigned
based on domain-specific priority criteria. For example, in an office-assignment
domain the management staff could be assigned first. At Schiphol airport, KLM
airplanes may have priority. The knowledge that is used here can range from formal
regulations to heuristics used to constrain the search.
2. Group subjects This inference generates a group of subjects that can be assigned
jointly to a single resource. In many assignment domains the resources are not all
for single-subject use. For example, offices may host more than one employee.
Grouping typically brings a special kind of domain knowledge into play related to
constraints and preferences regarding subject-subject interaction. For example, plac-
ing a smoker with a nonsmoking person is nowadays not considered acceptable. If
TASK assignment;
ROLES:
INPUT:
subjects: "The subjects that need to get a resource";
resources: "The resources that can be assigned";
OUTPUT:
allocations: "Set of subject-resource assignments";
END TASK assignment;
TASK-METHOD assignment-method;
REALIZES: assignment;
DECOMPOSITION:
INFERENCES: select-sub-set, group, assign;
ROLES:
INTERMEDIATE:
subject-set: "Subset of subjects with the same
assignment priority";
subject-group: "Set of subjects that can jointly be assigned
to the same resource. It may consist of a single subject.";
resource: "A resource that gets assigned";
current-allocations: "Current subject-resource assignments";
CONTROL-STRUCTURE:
WHILE NOT EMPTY subjects DO
select-subset(subjects -> subject-set);
WHILE NOT EMPTY subject-set DO
group(subject-set -> subject-group);
assign(subject-group + resources
+ current-allocations -> resource);
current-allocations :=
< subject-group, resource > ADD current-allocations;
subject-set := subject-set DELETE subject-group;
resources := resources DELETE resource;
END WHILE
subjects := subjects DELETE subject-set;
END WHILE
END TASK-METHOD assignment-method;
Figure 6.19
Default method for assignment without backtracking.
no grouping is needed, this step degenerates to a trivial selection: a single subject
is (randomly) selected and becomes the “subject-group,” which in this case consists
of just a single element. Otherwise, this inference may require complex reasoning.
It may then be useful to view grouping as a task and decompose it further to
describe the internal process in more detail. An effective method is the following:
a. First, generate all the possible groupings.
b. Then apply successive “select-subset” steps in which constraints and preferences are used in a specific order to filter out unwanted or less-preferred groupings.
[Figure body: subjects feed the select-subset step, producing a subject set; the group step produces a subject group; current allocations feed the assignment.]
Figure 6.20
Inference structure for the assignment method.
This is not a method you will see an expert apply, but a computer can handle it without any problem! The advantage is that you are sure of getting the optimal solution, whereas this is not guaranteed when you use heuristic expert knowledge for generating groupings. This grouping method is in fact an instantiation of the idealized method
for synthetic tasks in general (presented earlier in this chapter).
3. Assign In the assign step a resource is selected that fits best with the constraints and
preferences connected to the subjects involved. The current allocations are often
an important input, because some assignments may actually depend on where some
subject is placed (e.g., a secretary needs to be placed close to the person she works
for).
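The exhaustive grouping method sketched in step 2 above (generate all possible groupings, then filter with constraints and preferences in order) can be written down directly. The constraint and preference functions below are hypothetical illustrations for an office domain:

```python
from itertools import combinations

def candidate_groups(subject_set, max_size):
    """Step (a): exhaustively generate all possible groups up to max_size."""
    for k in range(1, max_size + 1):
        for combo in combinations(sorted(subject_set), k):
            yield set(combo)

def filter_groups(groups, constraints, preferences):
    """Step (b): successive select-subset steps. Hard constraints are
    applied first; then each preference, in order, keeps only the
    best-scoring groups."""
    valid = [g for g in groups if all(c(g) for c in constraints)]
    for pref in preferences:
        best = max((pref(g) for g in valid), default=None)
        valid = [g for g in valid if pref(g) == best]
    return valid

# Hypothetical knowledge: never mix smokers with nonsmokers;
# prefer groups of two.
smokers = {"ann"}
no_mix = lambda g: not (g & smokers and g - smokers)
prefer_pairs = lambda g: len(g) == 2
groups = filter_groups(candidate_groups({"ann", "bob", "carol"}, 2),
                       [no_mix], [prefer_pairs])
```

Because the generation is exhaustive, the surviving groups are guaranteed to be the best ones under the stated constraints and preferences.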
Typical method variations As we noted earlier, the method sketched above cannot han-
dle backtracking over allocations already made. If this is required for the application task,
then you should probably use the configuration-design method described in this chapter.
The following variations of the assignment method occur relatively frequently:
Existing allocations In some applications, there may be existing allocations at the point
where the task starts (e.g., assignment of airplanes to gates). In that case you are
likely to need this as an additional input for all three subfunctions.
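The control structure of Figure 6.19 can be transliterated into executable form. The three inferences are passed in as callables; this rendering and the toy office domain below are illustrative assumptions. One small deviation is noted in the comments: the selected subset is removed from the subject pool up front, because the inner loop consumes it:

```python
def assignment(subjects, resources, select_subset, group, assign):
    """Executable transliteration of the assignment method (Figure 6.19)."""
    allocations = []
    subjects, resources = set(subjects), set(resources)
    while subjects:
        subject_set = select_subset(subjects)   # subjects of equal priority
        # The CML deletes subject-set from subjects after the inner loop;
        # done up front here since the inner loop empties subject-set.
        subjects -= subject_set
        while subject_set:
            subject_group = group(subject_set)  # may be a single subject
            resource = assign(subject_group, resources, allocations)
            allocations.append((frozenset(subject_group), resource))
            subject_set -= subject_group        # subject-set DELETE subject-group
            resources.discard(resource)         # resources DELETE resource
    return allocations

# Toy office-assignment domain (hypothetical): the boss is assigned
# first; one person per office; offices are picked in sorted order.
allocs = assignment(
    subjects={"boss", "ann", "bob"},
    resources={"room1", "room2", "room3"},
    select_subset=lambda subs: {"boss"} if "boss" in subs else set(subs),
    group=lambda subject_set: {sorted(subject_set)[0]},
    assign=lambda grp, res, cur: sorted(res)[0],
)
```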
6.10 Planning
General Characterization
Goal Given some goal description, generate a plan consisting of a (par-
tially) ordered set of actions or activities that meet the goal.
Typical example Planning of therapeutic actions for treating a disease.
Terminology goal: the goal that one intends to achieve through carrying out the
plan, e.g., acute bacterial infection.
action: a basic plan element, e.g., “give antibiotic type A three
times per day for a period of one week in a dosage of 500 mg.”
plan: a partially ordered collection of actions, e.g., the adminis-
tration of two antibiotics in parallel.
Input The goal to be achieved by the plan plus additional requirements.
Output The action plan that reaches the goal.
Features Be aware that in many domains the term “planning” is used in a
different sense. The term “planning” may map to “scheduling”
in the terminology of this chapter: allocation of activities to time
slots. In other domains the term “planning” has a wider mean-
ing and covers both the task types “planning” and “scheduling.”
Method We have not included a separate template for planning. Instead, you can use
two previous templates to model a planning task, namely the synthesis template or the
configuration-design template. We advise the following modelling strategy:
If the space of possible plans is not too large, use the synthesis template. The design
space is determined by the set of basic plan actions plus the ways in which these
elements can be combined. In some therapy-planning domains both the action set
and the combinations are limited, so that the synthesis template can be used. The
advantage of the synthesis template is that it will always find the “best” plan. To apply the template to planning, you have to add a few simple refinements. First, separate the goal role from the other requirements, and use it as input for the generate step. Also, the role terminology can be made specific
for planning. Figure 6.21 shows the inference structure instantiated for the planning
task.
If the space of possible plans is large, we advise you to use the method described for
configuration design. This propose-and-revise method requires using several types
of additional knowledge to prune the search space. The method is easy to adapt to
planning.
Both methods assume, as with configuration design, that the set of basic plan compo-
nents (the actions) is fixed. If this assumption is not true, automation of the task is likely
to be infeasible with current techniques. Also, if the grain size of the plan action is small,
the methods described may work poorly.
6.11 Scheduling
General Characterization
Goal Given a set of predefined jobs, each of which consists of tempo-
rally sequenced activities called units, assign all the units to re-
sources, while satisfying constraints.
Typical example Production scheduling on plant floors.
Terminology job: a temporal sequence of units.
unit: an activity to be performed at a resource.
resource: an agent that may satisfy a demand of a unit.
constraint: a restrictive condition on the mapping of units on
resources.
Input A set of jobs consisting of units.
Output Mapping of units on resources, in which all the start and end times
of units are determined.
[Figure body: requirements are operationalized into hard constraints and soft preferences; the generate step uses composition knowledge to produce possible plans; the select step uses the hard constraints to retain valid plans; the sort step uses preference-ordering knowledge to produce a list of preferred plans.]
Figure 6.21
Inference structure for planning based on the synthesis template.
Default method A default scheduling method assigns every unit to a resource, fixing the
start and end times of each unit. The method specification is given in Figure 6.22. This
method iteratively assigns a unit to a resource while a candidate unit is available. After the
creation of an initial schedule, several inferences are called iteratively, in order to select a
candidate unit, select a target resource, assign the unit to the resource, evaluate a current
schedule, and modify the schedule. This default method is illustrated in Figure 6.23 as an
inference structure. Each of the above-mentioned inferences is described in some detail
below.
Specify an initial schedule A schedule is a placeholder for the input entities and also a skeletal structure for the output. An initial schedule usually has no assignments between units and resources.
Select a candidate unit to be assigned This inference picks a single unit as a candidate for assignment. A unit can be selected with reference to its temporal relation
to other units. For example, it is possible to select a unit with the latest end time in
order to complete jobs as closely as possible to due dates, or a unit with the earliest
start time to release available jobs as early as possible.
Select a target resource for the candidate unit This inference picks a target resource for the selected unit. A typical condition to be considered here is a resource-type constraint, which excludes any resource whose type is not the one designated by the unit. Since the load of each resource can be calculated by accumulating the loads of all the units assigned to it, it is also possible to balance the load over alternative resources by selecting a resource with minimum load.
Assign the unit to the target resource This function establishes an assignment of a
candidate unit to a target resource. Two types of constraints are considered here:
TASK scheduling;
ROLES:
INPUT: jobs: "The activities that need to be scheduled";
OUTPUT: schedule: "activities assigned to time slots";
END TASK scheduling;
TASK-METHOD temporal-dispatching;
REALIZES: scheduling;
DECOMPOSITION:
INFERENCES: specify, select, assign, modify, evaluate;
ROLES:
INTERMEDIATE:
candidate-unit: "unit selected as candidate for assignment";
target-resource: "resource selected for the candidate unit";
truth-value: "boolean indicating result of the evaluation";
CONTROL-STRUCTURE:
specify(jobs -> schedule);
WHILE HAS-SOLUTION select(schedule -> candidate-unit) DO
select(candidate-unit + schedule -> target-resource);
assign(candidate-unit + target-resource -> schedule);
evaluate(schedule -> truth-value);
IF truth-value == false
THEN modify(schedule -> schedule);
END IF
END WHILE
END TASK-METHOD temporal-dispatching;
Figure 6.22
Default method for scheduling.
a resource capacity constraint and a unit precedence constraint. The capacity con-
straint is to prevent a resource from being allocated to more units than it can process
at one time. The precedence constraint restricts the process routing to follow a pre-
defined temporal sequence.
Evaluate a current schedule This function checks whether the current schedule satisfies the given constraints or evaluation criteria. Typical criteria in schedul-
ing problems are the number of jobs processed within a certain time interval (i.e.,
throughput), and the fraction of time in which a resource is active (i.e., resource
utilization).
Modify the current schedule This function adjusts the position of a unit in order to improve the evaluation of the current schedule. This modification may require further
adjustment for either units assigned to the same resource, or units that are temporally
sequenced under a job.
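The unit- and resource-selection heuristics described above can be combined into a small constructive sketch. The data shapes (dicts with name, start, end, and resource-type fields) are illustrative assumptions, and the evaluate/modify repair loop is omitted for brevity:

```python
def dispatch(units, resources):
    """Constructive sketch of temporal dispatching: pick unassigned
    units in order of earliest start time and place each on a
    type-compatible resource with the minimum accumulated load."""
    schedule = {}                                  # unit name -> resource name
    load = {r["name"]: 0 for r in resources}
    for unit in sorted(units, key=lambda u: u["start"]):   # earliest start first
        # Resource-type constraint: keep only resources of the required type.
        candidates = [r for r in resources
                      if r["type"] == unit["resource-type"]]
        if not candidates:
            continue                               # unschedulable by this sketch
        # Load balancing: choose the least-loaded compatible resource.
        target = min(candidates, key=lambda r: load[r["name"]])
        schedule[unit["name"]] = target["name"]
        load[target["name"]] += unit["end"] - unit["start"]
    return schedule

# Toy example: two milling units spread over two milling machines.
units = [
    {"name": "u1", "start": 0, "end": 2, "resource-type": "mill"},
    {"name": "u2", "start": 1, "end": 3, "resource-type": "mill"},
]
resources = [{"name": "m1", "type": "mill"}, {"name": "m2", "type": "mill"}]
plan = dispatch(units, resources)
```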
[Figure body: jobs feed the specify step, producing the schedule; the select steps produce a candidate unit and a target resource; assign and modify update the schedule; evaluate produces a truth value.]
Figure 6.23
Inference structure for the default scheduling method.
Typical variations Two types of scheduling methods are well known in the literature: constructive methods and repair methods (Zweben et al. 1993), and both types are often employed in a complementary fashion in practical situations. The constructive scheduling methods
incrementally extend valid, partial schedules until a complete schedule is created or until
backtracking is required. The repair methods begin with a complete, but possibly flawed,
set of assignments and then iteratively modify the whole or a part of the assignments.
In the specification of Figure 6.22, the method primarily realizes a constructive
method, and a repair method is interleaved at the end of the main iteration loop. However,
it is possible to put the repair method outside the main loop. Scheduling under unreliable
environments often requires dynamic repairs in response to variable conditions. In such
cases, repair methods play a more important role, and are devoted to local repairs rather
than (re)scheduling from scratch.
It must be noted here that the inference structure in Figure 6.23 captures the high-level inference flow regardless of how the constructive and repair methods are composed. Some
other variations in scheduling methods are found in Hori et al. (1995) with examples of
components elicited from existing scheduling systems. A broader collection of scheduling
[Figure body: a schedule holds jobs and resources; a job (release-date: TIME, due-date: TIME) includes a temporally ordered set of units; a unit (start: TIME, end: TIME, resource-type: STRING) is performed at a resource (type: STRING, start-time: TIME, end-time: TIME), a dynamically linked relation; preferences and constraints are attached to units.]
Figure 6.24
Typical domain schema for scheduling problems.
Typical domain schema Figure 6.24 shows a typical domain schema for scheduling
problems. An essential feature of scheduling problems lies in the one-to-many association between a resource and units, which is established dynamically by a scheduling method.
The resource capacity constraint mentioned earlier is a condition imposed on this relation.
On the other hand, a job aggregates several units that are sequenced temporally. This
aggregate relation is fixed in advance in a problem specification, and must be maintained
as a unit precedence constraint. Resources and jobs are held by a single entity called
a schedule. The schedule entity is exploited by inference functions such as select and
modify in the default scheduling method.
The schema in Figure 6.24 captures an essential core of domain knowledge for
scheduling problems. The schema can be further elaborated to take account of structural regularities in a concrete application domain. The domain schema given in Hori and Yoshida (1998) can be regarded as an elaboration of this schema, exploited for scheduling problems on plant floors.
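The schema of Figure 6.24 can likewise be rendered as data types, with the unit precedence constraint expressed as a simple check. TIME is rendered as int for simplicity; this encoding is an illustrative assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Unit:
    """An activity to be performed at a resource (cf. Figure 6.24)."""
    start: int                           # TIME rendered as int for simplicity
    end: int
    resource_type: str
    performed_at: Optional[str] = None   # dynamically linked to a resource

@dataclass
class Job:
    """Aggregates temporally ordered units; fixed in the problem spec."""
    release_date: int
    due_date: int
    units: List[Unit] = field(default_factory=list)

@dataclass
class Resource:
    type: str
    start_time: int = 0
    end_time: int = 0

def respects_precedence(job: Job) -> bool:
    """Unit precedence constraint: each unit must end no later than
    its successor starts."""
    return all(a.end <= b.start for a, b in zip(job.units, job.units[1:]))

# Example: a two-unit job whose units are properly sequenced.
job = Job(release_date=0, due_date=10,
          units=[Unit(0, 2, "mill"), Unit(2, 4, "drill")])
```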
are seen frequently together. For example, monitoring and diagnosis are often seen in combination. The output of monitoring is used as input for the diagnosis task. Table 6.3 lists a number of typical task-type combinations. Of course, other combinations are possible as well. The table lists the basic combinations. Combinations of combinations are also possible, e.g., monitoring, diagnosis, and planning.
Table 6.3
Typical combinations of task types in application tasks.
chapter. The knowledge model is thus in fact a continuation of the task decomposition in
the context models.
7.1 Introduction
So far, we have mainly concentrated on the contents of the knowledge model. As in any
modelling enterprise, inexperienced knowledge modelers want to know how to undertake the process of model construction. This is a difficult area, because the modelling process itself is a constructive problem-solving activity for which no single “good” solution exists. The best any modelling methodology can do is provide a number of guidelines that have proved to work well in practice.
This chapter presents such a set of guidelines for knowledge-model construction. The
guidelines are organized in a process model that distinguishes a number of stages and
prescribes a set of ordered activities that need to be carried out. Each activity is carried
out with the help of one or more techniques and can be supported through a number of
guidelines. In describing the process model we have tried to be as prescriptive as possible.
Where appropriate, we indicate sensible alternatives. However, the reader should bear in
mind that the modelling process for a particular application may well require deviations
from the recipe provided. Our goal is a “90%-90%” approach: it should work in 90% of
the applications for 90% of the knowledge-modelling work.
As pointed out in previous chapters, we consider knowledge modelling to be a special-
ized form of requirements specification. Partly, knowledge modelling requires specialized
tools and guidelines, but one should not forget that more general software-engineering
principles apply here as well. At obvious points we refer to those, but these references will
not be extensive.
This chapter does not cover the elicitation techniques often used in the knowledge
analysis and modelling process. An overview of useful elicitation techniques can be found
in Chapter 8.
1. Knowledge identification. Information sources that are useful for knowledge mod-
elling are identified. This is really a preparation phase for the actual knowledge
model specification. A lexicon or glossary of domain terms is constructed. Existing
model components such as generic task models and domain schemas are surveyed,
and components that could be reused are made available to the project. Based on an
elaborate characterization of the application task and domain at hand, a decision is
made about the components that will actually be reused.
Typically, the description of knowledge items in the organization model and the
characterization of the application task in the task model form the starting point
for knowledge identification. In fact, if the organization-model and task-model de-
scriptions are complete and accurate, the identification stage can be done in a short
period.
2. Knowledge specification. In the second stage the knowledge engineer starts to con-
struct a specification of the knowledge model. In the standard case, the specification
language is the semiformal language presented in the previous chapters. In some
cases (for example, for safety-critical systems) this might be followed by a specifi-
cation in a fully formal language.
The reusable model components selected in the identification stage provide part of
the specification. The knowledge engineer will have to “fill in the holes” between
these predefined parts. As we will see, there are two approaches to knowledge-
model specification, namely starting with the inference knowledge and then moving
to related domain and task knowledge, or starting with domain and task knowledge
and linking these through inferences. The choice of approach depends on the quality and level of detail of the chosen generic task model (if any).
In terms of the domain knowledge, the emphasis in this stage is on the domain
schema, and not so much on the knowledge base(s). In particular, one should not
write down the full set of knowledge instances that belong to a certain knowledge
base. This can be left for the next stage.
3. Knowledge refinement. In the final stage, attempts are made to validate the knowl-
edge model as much as possible and to complete the knowledge base by inserting
a more or less complete set of knowledge instances (e.g., instances of rule types).
An important technique for validating the initial specification that comes out of the
previous stage is to do a simulation based on some externally provided scenarios.
This simulation can be paper-based or include the construction of a small, dedicated
prototype. The simulation should give an indication whether the model constructed
can generate the problem-solving behavior required. Only after such an initial eval-
uation is completed is it useful to spend time on “completing” the knowledge base
(i.e., adding knowledge-base contents).
These three stages can be intertwined. Sometimes, feedback loops are required. For
example, the simulation in the third stage may lead to changes in the knowledge-model
specification. Also, completion of the knowledge bases may require looking for additional
knowledge sources. The general rule is: feedback loops occur less frequently if the appli-
cation problem is well understood and similar problems have been tackled successfully in
prior projects.
We now look at the three stages in more detail. For each stage we indicate typical
activities, techniques, and guidelines. Within the scope of this book, we cannot give full
accounts of all the techniques. Where appropriate we indicate useful references for study-
ing a particular technique.
[Figure body, identification stage activities: domain familiarization (listing of sources, glossary, knowledge summaries, scenarios); list potential model components for reuse (task-related components, domain-related components).]
Figure 7.1
Overview of the three main stages in knowledge-model construction. The arrows indicate typical but not absolute
time dependencies. For each stage some activities are listed on the right.
Explore and structure the information sources for the task, as identified in the knowl-
edge item listings. During this process, create a lexicon or glossary of terms for the
domain.
Study the nature of the task in more detail, and check or revise the generic task
type. List all the potential reusable elements for this application task for all three
knowledge categories.
The starting point for this activity is the list of knowledge items described in worksheet
TM-2. One should study this material in some detail. Two factors are of prime importance
when surveying the material:
1. Nature of the sources. The nature of the information sources determines the type
of approach that needs to be taken in knowledge modelling. Domains with well-
developed domain theories are usually easier than ill-specified domains with many
informal and/or diffuse sources.
2. Diversity of the sources. If the information sources are very diverse in nature, with
no single information source (e.g., a textbook or a manual) playing a central role,
knowledge modelling requires more time. Sources are often conflicting, even if they
are of the same type. For example, having multiple experts is a considerable risk
factor.
In the context of this book we cannot go into details about the multiexpert situation,
but the references at the end of this chapter include a number of useful texts.
Techniques used in this activity are often of a simple nature: text marking in key
information sources such as a manual or a textbook, one or two structured interviews to
clarify perceived holes in the domain theory. The goal of this activity is to gain good insight, but still at a global level. More detailed explorations may be carried out in less
understood areas, because of their potential risks.
The main problem the knowledge engineer is confronted with is to learn enough about the domain without becoming a full domain expert. For example, a technical domain in the processing industry concerning the diagnosis of a specific piece of equipment may require a large amount of background knowledge to understand, and therefore the danger exists that the exploration activity will take too long. This is in fact
the traditional problem with all knowledge-engineering exercises. One cannot avoid (nor
should one want to) becoming a “layman expert” in the field. The following guidelines
may be helpful in deciding upon the amount of detail required for exploring the domain
material:
Guideline 7-1: TALK TO PEOPLE IN THE ORGANIZATION WHO HAVE TO TALK TO
EXPERTS BUT ARE NOT EXPERTS THEMSELVES
Rationale: These “outsiders” have often undergone the same process you are now
undertaking: trying to understand the problem without being able to become a full
expert. They can often tell you what the key features of the problem-solving process
are on which you have to focus.
Guideline 7-2: AVOID DIVING INTO DETAILED, COMPLICATED THEORIES UNLESS THEIR USEFULNESS IS PROVEN
Rationale: Usually, detailed theories can safely be omitted in the early phases of
knowledge modelling. For example, in an elevator configuration domain the expert
can tell you about detailed mathematical theories concerning cable traction forces,
but the knowledge engineer typically only needs to know that these formulas exist,
and that they act as a constraint on the choice of the cable type.
Guideline 7-3: CONSTRUCT A FEW TYPICAL SCENARIOS WHICH YOU UNDERSTAND AT A GLOBAL LEVEL
Rationale: It is often useful to construct a number of typical scenarios: a trace
of a typical problem-solving process. Spend some time with a domain expert to
construct them, and ask nonexperts involved whether they agree with the selection.
Try to understand the domain knowledge such that you can explain the reasoning of
the scenario in superficial terms.
Scenarios are a useful thing to construct and/or collect for other reasons as well. For
example, validation activities often make use of predefined scenarios.
Never spend too much time on this activity. Two person-weeks should be the maxi-
mum, except for some very rare difficult cases. If you are doing more than that, you are
probably overdoing it.
The results achieved at the end of the activity can only partly be measured. The tangible results should be:
a listing of domain knowledge sources, including a short characterization;
summaries of selected key texts;
a description of the scenarios developed.
However, the main intangible result, namely your own understanding of the domain, remains the most important one.
The goal of this activity is to pave the way for reusing model components that have already
been developed and used elsewhere. Reuse is an important vehicle for quality assurance.
This activity studies potential reuse from two angles:
1. Task dimension. A characterization is established of the task type. Typically, such
a type has already been tentatively assigned in the task model. The aim here is to
check whether this is still valid, using the domain information found in the previous step and the definitions of task types described in Chapter 6. Based on the selected
task type, one starts to build a list of task methods, and/or inference structures that
are appropriate for the task.
2. Domain dimension. Establish the type of the domain: Is it a technical domain? Is the knowledge mainly heuristic? And so on. Then, look for standardized descriptions of this domain or of similar domains. These descriptions can take many forms:
field-specific thesauri such as the Art and Architecture Thesaurus (AAT) for art ob-
jects or the Medical Subject Headings (MeSH) for medical terminology, “ontology”
libraries, reference models (e.g., for hospitals), product model libraries (such as the
ones using the ISO STEP standard). Over the last few years there have been an
increasing number of research efforts constructing such knowledge bases.
Guidelines for task-type selection Selecting the right task type is important. The guide-
lines below may help you in making the right choice.
Rationale: Be aware of the point made in Chapter 6, that there is hardly ever a one-to-one match between an application task and a task type. Ideally, these distinctions will
already have been disclosed in the task model, but it may happen that you “discover”
this during knowledge modelling.
Rationale: This guideline refers to the frequently occurring situation in which the application task already has a name that also occurs in the task-type list, e.g., "travel planning." These application task labels do not necessarily match the definition
of the task type used in this book. The meaning of a term like “planning” varies
and our task-type definitions are in a sense arbitrary decisions about where to put
borderlines between tasks. Consult carefully the “remarks” paragraph in the first
section of each task-template section to learn about typical confusions with other
task types. For example, “diagnosis” performed by nonexperts (or by experts who
have little data available) is often actually a task of the “assessment” type.
The goal of this stage is to get a complete specification of the knowledge, except the contents of the knowledge bases: these may contain only some example knowledge instances.
The following activities need to be carried out to build such a specification:
Choose a task template.
Construct an initial domain conceptualization.
Specify the three knowledge categories in either a “middle-out” (starting with in-
ference knowledge) or a “middle-in” (starting with task and domain knowledge in
parallel) way.
Chapter 6 contains a small set of task decompositions for a number of task types such as
diagnosis and assessment. The chapter also gives pointers to other repositories where one
can find potentially useful task templates. We strongly prefer an approach in which the
knowledge model is based on an existing application. This is both efficient and gives some
assurance about the model quality, depending on the quality of the task template used and
the match with the application task at hand.
Several features of the application task can be important in choosing an appropriate
task template:
the nature of the output (the “solution”): e.g., a fault category, a decision category, a
plan;
the nature of the inputs: what kind of data are available for solving the problem?;
the nature of the system the task is analyzing, modifying, or constructing: e.g., a
human-engineered artifact such as a photocopier, a biological system such as a hu-
man being, or a physical process such as a nuclear power plant;
constraints posed by the task environment: e.g., the required certainty of the solution,
the costs of observations.
The following guidelines can help the selection of a particular template:
Guideline 7-6: PREFER TEMPLATES THAT HAVE BEEN USED MORE THAN ONCE
Rationale: Empirical evidence is still the best measure of the quality of a task template: a model that has proved its (multiple) use in practice is a good model.
Guideline 7-7: IF YOU THINK YOU HAVE FOUND A SUITABLE TEMPLATE, CONSTRUCT AN "ANNOTATED" INFERENCE STRUCTURE
Rationale: In an annotated inference structure one adds domain examples to the generic figure. This is a good way to get an impression of the "fit" between the template and the application domain. An example of an annotated inference structure was shown in Figure 5.20.
The goal of this activity is to construct an initial data model of the domain independent of
the application problem being solved or the task methods chosen. Typically, the domain
schema of a knowledge-intensive application contains at least two parts:
Rationale: Even if the information needs for your application are much higher (as
they often are in knowledge-intensive applications), it is still useful to use at least
the same terminology and/or a shared set of basic constructs. This will make future
cooperation, both in terms of exchange between software systems and information
exchange between developers and/or users, easier.
Rationale: The domain-specific part of the domain schema can usually be handled
by the “standard” part of the CommonKADS language. The notions of concepts,
subtypes and relations have their counterparts in almost every modern software-
engineering approach, small variations permitting. The description often has a more
Constructing the initial domain conceptualization can typically be done in parallel with
the choice of the task template. In fact, if there needs to be a sequence between the two
activities, it is still best to proceed as if they are carried out in parallel. This is to ensure
that the domain-specific part of the domain schema is specified without a particular task
method in mind.
There are basically two routes for completing the knowledge model once a task template
has been chosen and an initial domain conceptualization has been constructed:
Route 1: Middle-out Start with the inference knowledge, and complete the task knowl-
edge and the domain knowledge, including the inference-domain role mappings.
This approach is the preferred one, but requires that the task template chosen pro-
vide a task decomposition that is detailed enough to act as a good approximation of
the inference structure.
Route 2: Middle-in Start in parallel with decomposing the task through consecutive
applications of methods, while at the same time refining the domain knowledge to
cope with the domain-knowledge assumptions posed by the methods. The two ends
(i.e., task and domain knowledge) meet through the inference-domain mappings.
This means we have found the inferences (i.e., the lowest level of the functional
decomposition). This approach takes more time, but is needed if the task template is
still too coarse-grained to act as an inference structure.
Figure 7.2 summarizes the two approaches. The middle-out approach can only be
used if the inference structure of the task template is already at the required level of detail.
If decomposition is necessary, the process essentially becomes “middle-in.” Deciding on
the suitability of the inference structure is therefore an important decision criterion. The
following guidelines can help in making this decision:
[Figure 7.2 diagram: boxes labeled "tasks and methods," "inference structure," and "role mapping," with arrows marked "middle in" and "middle out."]
Figure 7.2
Middle-in and middle-out approaches to knowledge-model specification. The middle-out approach is preferred,
but can only be used if the inference structure of the task template is already at the required level of detail. If
decomposition is necessary, the process essentially becomes “middle-in”.
Rationale: A key point underlying the inference structure is that it provides us with
an abstraction mechanism over the details of the reasoning process. An inference is a
black box, as far as the specification in the knowledge model is concerned. The idea
is that one should be able to understand and predict the results of inference execution
by just looking at its inputs (both dynamic and static) and outputs.
Rationale: This is not a hard rule, but it often works in practice. The underlying
rationale is simple: if there are more than two static roles (types of static domain
knowledge in the knowledge base) involved, then it is often required to specify con-
trol over the reasoning process. By definition, no internal control can be represented
for an inference; we need to consider this function as a task that is being decom-
posed.
Although in the final model, we “know” what are tasks and what are inferences, this is
not true at every stage of the specification process. We use the term “function” to denote
anything that can turn out to be either a task or an inference. We can sketch what we call "provisional inference structures," in which functions appear that could turn out to be tasks.
In such provisional figures we use a rounded-box notation to indicate functions. Figure 7.3
shows an example of such a provisional inference structure. In this figure GENERATE and
TEST are functions. These functions will either be viewed as tasks (and thus decomposed
through a task method) or be turned into direct inferences in the domain knowledge.
An important technique at this stage is the self report. This technique, which is dis-
cussed in more detail in the next chapter, usually gives excellent data about the structure
of the reasoning process: tasks, task control, and inferences. The adequateness of a task
template can be assessed by using it as an “overlay” of the transcript of a self report. The
idea is that one should be able to interpret all the reasoning steps made by the expert in
the protocol in terms of a task or an inference in the template. Because of this usage, task
templates have also been called “interpretation models.”
If the task template is too coarse-grained and requires further decomposition, a self-
report protocol usually gives clues as to what kind of decompositions are appropriate.
Because we require of the knowledge model that it can explain its reasoning in expert
terms, the self-report protocol (in which an expert tries to explain his own reasoning) is
the prime technique for deciding whether the inference structure is detailed enough. Also,
such protocols can provide you with scenarios for testing the model (see the knowledge
refinement activities further on).
Guidelines for specifying task knowledge The following guidelines apply to the speci-
fication of tasks and task methods:
[Figure 7.3 diagram: roles "complaint," "initial differential," "finding," "established hypothesis," and "focus hypothesis," connected by the functions "generate," "select," and "test."]
Figure 7.3
Example of a provisional inference structure. “Generate” and “test” are functions. These functions will either be
viewed as tasks (and thus decomposed through a task method) or be turned into direct inferences in the domain
knowledge. The knowledge engineer still has to make this decision.
Guideline 7-15: WHEN STARTING TO SPECIFY A TASK METHOD, BEGIN WITH THE CONTROL STRUCTURE
Rationale: The control structure is the “heart” of the method: it contains both the
decomposition (in terms of the tasks, inferences, and/or transfer functions mentioned
in it), as well as the execution control over the decomposition. Once you have the
control structure right, the rest can more or less be derived from it.
Guideline 7-16: WHEN WRITING DOWN THE CONTROL STRUCTURE, DO NOT CONCERN YOURSELF TOO MUCH WITH DETAILS OF WORKING MEMORY REPRESENTATION
Rationale: The main point of writing down control structures is to characterize the
reasoning strategy at a fairly high level: e.g., “first this task, then this task” or “do this
inference until it produces no more solutions.” Details of the control representation
can safely be left to the design phase. If one spends much time on the control details
in this stage, it might well happen that the work turns out to be useless when a
decision is made to change the method for a task.
Guideline 7-17: CHOOSE ROLE NAMES THAT CLEARLY INDICATE HOW THIS DATA ITEM IS USED WITHIN THE TASK
Rationale: Knowledge modelling (as with modelling in general) is very much about introducing an adequate vocabulary for describing the application problem, such that future users and/or maintainers of the system understand the way you perceived the system. The task roles are an important part of this naming process, as they appear in all simulations or actual traces of system behavior. It makes sense to choose these
names with care.
Guidelines for specifying inference knowledge The following guidelines may help you
in developing a specification of inferences and their corresponding knowledge roles:
Rationale: There are two ways to classify an inference: according to the role the
inference plays in the overall reasoning process (e.g., “rule-out hypothesis”) and the
type of operation it performs to achieve its goal (“select from a set”). Document the
inference with both names.
Guideline 7-22: USE A STANDARD SET OF INFERENCE TYPES AS MUCH AS POSSIBLE
Rationale: Earlier versions of KADS prescribed a fixed set of inference types, many
of which are also used in this book. Experience has taught that prescribing a fixed set
of inference types is too rigid an approach. Nevertheless, we recommend adherence
to a standard, well-documented set as much as possible. This enhances understandability, reusability, and maintenance. In Chapter 13 we have included a catalog of
inferences used in this book, each with a number of typical characteristics. Also,
Aben (1995) and Benjamins (1993) contain descriptions of sets of inference types
that have been widely used and are well documented. It is also useful to maintain
your own catalog of inferences.
Guideline 7-23: BE CLEAR ABOUT SINGLE OBJECT ROLES OR SETS
Rationale: A well-known confusion in inference structures is caused by the lack of
clarity whether a role represents one single object or a set. For example, a select
inference takes a set as input. In an inference structure a special notation can be used
to indicate sets of objects (see the appendix).
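The cardinality of a role can also be made explicit in any typed sketch. A minimal illustration using Python type hints (the names and the alphabetical preference criterion are invented; this is not CommonKADS notation):

```python
from typing import Set

def select(differential: Set[str]) -> str:
    """Input role: differential -- a SET of candidate hypotheses.
    Output role: focus hypothesis -- a SINGLE object."""
    # Invented criterion: alphabetical order stands in for a real preference.
    return sorted(differential)[0]

def rule_out(differential: Set[str], hypothesis: str) -> Set[str]:
    """Input and output roles are both sets: the differential shrinks."""
    return differential - {hypothesis}
```

Writing the set/single distinction into the signature removes exactly the ambiguity this guideline warns about: a reader can no longer mistake the differential for one hypothesis.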
Guideline 7-24: INFERENCES THAT HAVE NO INPUT OR THAT HAVE MANY OUTPUTS ARE SUSPECT
Rationale: Although CommonKADS has no strict rules about the cardinality of the
input and output roles of inferences, inferences without an input are considered un-
usual and inferences with many outputs (more than two) are also unusual in most
models. Often these phenomena are indications of incomplete models or of over-
loading inferences (in the case of many outputs).
Guideline 7-25: CHOOSE REUSABLE ROLE NAMES
Rationale: It is tempting to use role names that have a domain-specific flavor. How-
ever, it is recommended to use domain-independent role names as much as possible.
This enhances reusability. Anyway, you can still add the domain-specific terms as
annotations to the roles.
Guideline 7-26: STANDARDIZE ON LAYOUT
Rationale: Like data-flow diagrams, inference diagrams are often read from left to
right. Structure the layout in such a way that it is easy to detect what the order of the
reasoning steps is. The well-known “horseshoe” form of heuristic classification is a
good example of a layout that has become standardized.
Guideline 7-27: DO NOT BOTHER TOO MUCH ABOUT THE DYNAMICS OF ROLE OBJECTS IN THE INFERENCE STRUCTURE
Rationale: Inference structures are essentially static representations of a reasoning
process. They are not very well suited to represent dynamic aspects, such as a data
structure, which is continuously updated during reasoning. A typical example is
the “differential,” an ordered list of hypotheses under consideration. During every
reasoning step the current differential is considered and hypotheses are removed,
added, or reordered. In the inference structure this would result in an inference
that has the differential as input and as output. Some creative solutions have been
proposed (e.g., double arrows with labels), but no satisfactory solution currently
exists. We recommend being flexible and not bothering too much about this problem.
Chapter 13), but will almost always give rise to domain-knowledge types that are relevant to the final method(s) chosen for achieving the task. Also, the communication model may require additional domain knowledge, e.g., for explanation purposes.
During the knowledge-specification stage we are mainly concerned with structural descrip-
tions of the domain knowledge: the domain schema. This schema contains two kinds of
types:
1. Domain-knowledge types that have instances that are part of a certain case. One
can view these as "data types"; their instances are similar to instances ("rows") in a database.
2. Domain-knowledge types that have instances that are part of a knowledge base.
These can be seen as “knowledge types”: their instances make up the contents of
the knowledge base(s).
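The two kinds of types can be illustrated with a small sketch. This is a hypothetical Python rendering, not CommonKADS notation; the car-diagnosis flavor and all names are invented for illustration:

```python
from dataclasses import dataclass

# 1. A "data type": its instances belong to one particular CASE and are
#    therefore never part of the knowledge model itself.
@dataclass
class Finding:
    parameter: str   # e.g., "battery-voltage"
    value: float     # observed in one specific case

# 2. A "knowledge type": its instances make up the contents of a
#    knowledge base and ARE considered during model construction.
@dataclass
class ManifestationRule:
    fault: str       # e.g., "empty-battery"
    parameter: str
    abnormal_below: float

# Knowledge-base contents: instances of the knowledge type.
knowledge_base = [ManifestationRule("empty-battery", "battery-voltage", 11.0)]

# Case data: instances of the data type, used only when a scenario is run.
case = [Finding("battery-voltage", 10.2)]

def explained_faults(case, kb):
    """Which faults in the knowledge base explain the case findings?"""
    return [r.fault for r in kb
            for f in case
            if f.parameter == r.parameter and f.value < r.abnormal_below]
```

Filling `knowledge_base` with real instances is precisely the test, described above, of whether the chosen domain-knowledge types are expressive enough for the application.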
Instances of the “data types” are never part of a knowledge model. Typically, data
instances (case data) will only be considered when a case needs to be formulated for a
scenario. However, the instances of the “knowledge types” need to be considered during
knowledge-model construction. In the knowledge-specification stage a hypothesis is for-
mulated about how the various domain-knowledge types can be represented. When one
fills the contents, one is in fact testing whether these domain-knowledge types deliver a
representation that is sufficiently expressive to represent the knowledge we need for the
application.
Usually, it will not be possible to define a full, correct knowledge base at this stage
of development. Knowledge bases need to be maintained throughout their lifetime. Apart
from the fact that it is difficult to be complete before the system is tested in real-life prac-
tice, such knowledge instances also tend to change over time. For example, in a medical
domain knowledge about resistance to certain antibiotics is subject to constant change.
In most cases, this problem is handled by incorporating editing facilities for updating
the knowledge base into the system. These knowledge editors should not use the internal
system representations, but communicate with the knowledge maintainer in the terminol-
ogy of the knowledge model.
Various techniques exist for arriving at a first, fairly complete version of a knowledge
base. One can check the already available transcripts of interviews and protocols, but this
typically delivers only a partial set of instances. One can organize a focused interview,
in which the expert is systematically taken through the various knowledge types. Still,
omissions are likely to persist. A relatively new technique is to use automated techniques
to learn instances of a certain knowledge type, but this is still in an experimental phase (see
the references at the end of this chapter).
Guideline 7-32: LOOK ALSO FOR EXISTING KNOWLEDGE BASES IN THE SAME DOMAIN
Rationale: Reusing part of an existing knowledge base is one of the most powerful
forms of reuse. This really makes a difference! There is always some work to be
done with respect to mapping the representation in the other system to the one you
use, but it is often worth the effort. The quality is usually better and it costs less time
in the end.
Validation can be done both internally and externally. Some people use the term “verifica-
tion” for internal validation (“is the model right?”) and reserve “validation” for validation
against user requirements (“is it the right model?”).
Checking internal model consistency can be done through various techniques. Stan-
dard structured walk-throughs can be appropriate. Software tools exist for checking the
syntax. Some of these tools also point at potentially missing parts of the model, e.g., an
inference that is not used in any task method.
External validation is usually more difficult and more comprehensive. The need for
validation at this stage varies from application to application. Several factors influence this
need. For example, if a large part of the model is being reused from existing models that
were developed for very similar tasks, the need for validation is likely to be low. Models for tasks that are less well understood are more prone to errors or omissions.
The main method of checking whether the model captures the required problem-
solving behavior is to simulate this behavior in some way. This simulation can be done
in two ways:
Table 7.1
Paper simulation of the reasoning process in knowledge-model terms (see the middle column). The scenarios used
here should have been predefined (i.e., in the identification phase). The first column indicates what happens in
domain-specific terms; the second column describes the corresponding knowledge-model action; the final column
gives a short explanation.
and use the knowledge model to generate a paper trace of the scenario in terms
of the knowledge model constructs. This can best be done in a table with three
columns. The left column describes the steps in the scenario in application-domain
terminology. The middle column indicates how each step maps onto a knowledge-
model element, e.g., an inference is executed with certain roles as input and output.
The right column can be used for comments:
How well does the model fit?
Are possible differences between the model and the scenario on purpose?
Where should the model be adapted?
Table 7.1 shows an example of a paper simulation for a scenario of the car-diagnosis
application.
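Such a three-column trace needs no special tooling; a plain script or spreadsheet suffices. A sketch of the table layout, with scenario content invented along the lines of the car-diagnosis application:

```python
# Each row of the paper-simulation table:
# (scenario step in domain terms, knowledge-model element, comment).
trace = [
    ("Customer reports: car does not start",
     "transfer function: receive complaint",
     "matches the model"),
    ("Mechanic thinks of empty battery and broken fuse",
     "inference 'generate', output role 'differential'",
     "fits; differential holds two hypotheses"),
    ("Mechanic checks the fuse first",
     "inference 'select', output role 'focus hypothesis'",
     "selection criterion missing from the model: adapt?"),
]

def print_trace(rows, width=48):
    """Print the three columns side by side for a walk-through session."""
    for domain_step, model_element, comment in rows:
        print(domain_step.ljust(width), "|",
              model_element.ljust(width), "|", comment)

print_trace(trace)
```

Rows whose middle column stays empty, or whose comment column flags a mismatch, point directly at the places where the model must be adapted.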
implementation-specific pieces of code, such that the simulation can be done within
a short time period (hours or days instead of weeks)
following elements:
A diagram of the full domain schema
An inference-structure diagram
A list of knowledge roles (both dynamic and static) with their domain mappings
Textual specifications of the tasks and task methods
This set of specifications, although it lacks some of the textual detail, is in practice suffi-
cient to be understood without problems by the other project members.
It will be clear that in building this specification a large amount of other material is gathered that is useful as background documentation. It is therefore worthwhile to
produce a “domain documentation document” containing at least the full knowledge model
plus the following additional information:
A list of all information sources used
A listing of domain terms with explanations (= glossary)
A list of model components that were considered for reuse plus the corresponding
decisions and rationale
A set of scenarios for solving the application problem
Results of the simulations undertaken during validation
All the transcripts of interviews and protocols as appendices
Worksheet KM-1 (see Table 7.2) provides a checklist for generating this document.
Table 7.2
Worksheet KM-1: Checklist for the “knowledge-model documentation document”.
8
Knowledge-Elicitation Techniques
8.1 Introduction
This chapter discusses the problem of knowledge elicitation. Knowledge elicitation comprises a set of techniques and methods that attempt to elicit the knowledge of a domain specialist through some form of direct interaction with that expert. The domain specialist, usually called the "expert," is a person who possesses knowledge about solving the application task we are interested in (cf. the "knowledge provider" role in Figure 2.6).
We begin by reviewing the nature and characteristics of the elicitation activity. Next,
we consider the different types of expert who may be encountered. We then look at a range
of methods and techniques for elicitation. We illustrate the use of these techniques with
an example of an elicitation scenario. In this example it will become clear how elicitation
techniques can be used to support the knowledge-modelling activities described in Chapter 7. The example concerns an application in which offices are assigned to employees.
In the example we make use of a knowledge-elicitation tool set named PC-PACK, which
supports the use of the techniques. This scenario and a demo version of the PC-PACK tools
can be downloaded from the CommonKADS website.
become common practice and conform to clear standards. This will help ensure that the
results are robust, that they can be used on various experts in a wide range of contexts by
any competent knowledge engineer. We also hope to make our techniques reliable. This
will mean that they can be applied with the same expected utility by different knowledge
engineers. But however systematic we want to be, our analysis must of necessity begin
with the expert.
8.3 On Experts
Experts come in all shapes and sizes. Ignoring the nature of your expert is a potential pitfall
in knowledge elicitation. A coarse guide to a typology of experts might make the issues
clearer. Let us take three categories we shall refer to as “academics,” “practitioners,” and
“samurai”. In practice experts may embody elements of all three types. Each of these types
of expert differs along a number of dimensions. These include the outcome of their expert
deliberations, the problem-solving environment they work in, the state of the knowledge
they possess (both its internal structure and its external manifestation), their status and
responsibilities, their source of information, and the nature of their training.
On the basis of these dimensions we can distinguish three different types of expert:
1. The academic type regards his domain as having a logically organized structure.
Generalizations over the laws and behavior of the domain are important to the aca-
demic type. Theoretical understanding is prized. Part of the function of such experts
may be to explicate, clarify, and teach others. Thus they talk a lot about their do-
mains. They may feel an obligation to present a consistent story both for pedagogic
and professional reasons. Their knowledge is likely to be well structured and acces-
sible. These experts may suppose that the outcome of their deliberations should be
the correct solution of a problem. They believe that the problem can be solved by
the appropriate application of theory. They may, however, be remote from everyday
problem-solving.
2. The practitioner class, on the other hand, is engaged in constant day-to-day problem-solving in the domain. For them specific problems and events are the reality. Their
practice may often be implicit and what they desire as an outcome is a decision that
works within the constraints and resource limitations in which they are working.
It may be that the generalized theory of the academic is poorly articulated in the
practitioner. For the practitioner heuristics may dominate and theory is sometimes
thin on the ground.
3. The samurai is a pure performance expert – the only reality is the performance of
action to secure an optimal performance. Practice is often the only training and
responses are often automatic. Samurai rarely explicate their knowledge verbally.
One can see this sort of division in any complex domain. Consider, for example, medical domains where we have professors of the subject, busy house staff working the wards, and medical ancillary staff performing many important but repetitive clinical activities.
The knowledge engineer must be alert to these differences because the various types
of expert will perform very differently in knowledge-elicitation situations. The academics
will be concerned about demonstrating mastery of the theory. They will devote much ef-
fort to characterizing the scope and limitations of the domain theory. Practitioners, on the
other hand, are driven by the cases they are solving from day to day. They have often com-
piled or routinized any declarative descriptions of the theory that supposedly underlie their
problem-solving. The performance samurai will more often than not turn any knowledge-
elicitation interaction into a concrete performance of the task – simply exhibiting their
skill.
But there is more to say about the nature of experts and this is rooted in general principles
of human information processing. Psychology has demonstrated the limitations, biases,
and prejudices that pervade all human decision-making – expert or novice. To illustrate,
consider the following facts, all potentially crucial to the enterprise of knowledge elicita-
tion.
It has been shown repeatedly that the context in which one encodes information is the
best one for recall. It is possible, then, that experts may not have access to the same information when in a knowledge-elicitation interview as they do when actually performing the task. So there are good psychological reasons to use techniques which involve observing the expert actually solving problems in the normal setting. In short, protocol analysis
techniques may be necessary, but will not be sufficient for effective knowledge elicitation.
Consider also the issue of biases in human cognition. One well-known problem is that
humans are poor at manipulating uncertain or probabilistic evidence. This may be impor-
tant in knowledge elicitation for those domains that require a representation of uncertainty.
Consider the rule:
This seems like a reasonable rule, but what is the value of X? Should it be 0.9, 0.95, or 0.79? The value that is finally decided upon will have important consequences for the
working of the system, but it is very difficult to decide upon it in the first place. Medical
diagnosis is a domain full of such probabilistic rules, but even expert physicians cannot
accurately assess the probability values.
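The sensitivity to the exact value of X can be made concrete with a small sketch. This is our own illustration, using MYCIN-style certainty-factor multiplication (not a technique this chapter prescribes): a modest difference in the value an expert assigns to a rule compounds sharply over a multi-step inference chain.

```python
# Hypothetical sketch: certainty propagates through a rule by
# multiplication, so small differences in the elicited value compound.

def chain(cf_rule, cf_evidence):
    """Certainty of a conclusion: rule certainty times evidence certainty."""
    return cf_rule * cf_evidence

def compound(cf_rule, depth):
    """Apply the same rule repeatedly, as in a chain of inferences."""
    cf = 1.0
    for _ in range(depth):
        cf = chain(cf_rule, cf)
    return cf

# Two plausible expert estimates for the same rule diverge sharply
# after four inference steps (roughly 0.39 versus 0.81):
low, high = compound(0.79, 4), compound(0.95, 4)
```

Either estimate looks reasonable in isolation; only downstream behavior of the system reveals how consequential the choice was.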
Knowledge-Elicitation Techniques 195
In fact there are a number of documented biases in human cognition which lie at the
heart of this problem. People are known to undervalue prior probabilities, to use the ends
and middle of the probability scale rather than the full range, and to anchor their responses
around an initial guess. Cleaves (1987) lists a number of cognitive biases likely to be found
in knowledge elicitation, and makes suggestions about how to avoid them. However, many
knowledge engineers prefer to avoid the use of uncertainty wherever possible.
Cognitive bias is not limited to the manipulation of probability. A series of experiments
has shown that systematic patterns of error occur across a number of apparently simple
logical operations. For example, modus tollens states that if “A implies B” is true, and “not
B” is true, then “not A” must be true. However, people, whether expert in a domain or not,
make errors on this rule. This is in part due to an inability to reason with contrapositive
statements. Also in part it depends on what A and B actually represent. In other words,
they are affected by the content. This means that one cannot rely on the veracity of experts’
(or indeed anyone’s) reasoning.
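The validity of modus tollens – and the invalidity of the pattern people often accept in its place, affirming the consequent – can be checked mechanically by enumerating truth assignments. A minimal sketch:

```python
from itertools import product

def implies(a, b):
    """Material implication: A -> B is false only when A is true and B false."""
    return (not a) or b

# Modus tollens: from (A -> B) and (not B), infer (not A).
# Valid: the conclusion holds in every assignment satisfying the premises.
modus_tollens_valid = all(
    not a
    for a, b in product([False, True], repeat=2)
    if implies(a, b) and not b
)

# Affirming the consequent: from (A -> B) and B, infer A.
# Invalid: it fails for A = False, B = True.
affirming_consequent_valid = all(
    a
    for a, b in product([False, True], repeat=2)
    if implies(a, b) and b
)
```

The point of the psychological findings is precisely that people do not apply these patterns this reliably, and that their performance varies with the content of A and B.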
All this evidence suggests that human reasoning, memory, and knowledge represen-
tation is rather more subtle than might be thought at first sight. The knowledge engineer
should be alert to some of the basic findings emanating from cognitive psychology. While
no text is perfect as a review of bias in problem-solving, the book by Meyer and Booker
(1991) is reasonably comprehensive.
In this section we consider the following five families of elicitation techniques:
1. Interviewing
2. Protocol analysis
3. Laddering
4. Concept sorting
5. Repertory grids
The first two elicitation methods are both natural under the definition above. The other
techniques – such as laddering, concept sorting, and repertory grids – are more contrived.
In the rest of this section we discuss the individual techniques. In the following section the
use of these techniques is demonstrated in a practical example.
8.4.1 Interviewing
Almost everyone starts knowledge elicitation with one or more interviews. The interview
is the most commonly used knowledge-elicitation technique and takes many forms, from
the completely unstructured interview to the formally planned, structured interview.
Table 8.1
Probes to elicit further information in structured interviews.
1. Ask the expert to give a brief (10-minute) outline of the target task, including the
following information:
a. An outline of the task, including a description of the possible solutions or out-
comes of the task;
b. A description of the variables which affect the choice of solutions or outcomes;
c. A list of major rules which connect the variables to the solutions or outcomes.
2. Take each rule elicited in stage 1; ask when it is appropriate and when it is not.
The aim is to reveal the scope (generality and specificity) of each existing rule, and
hopefully generate some new rules.
3. Repeat stage 2 until it is clear that the expert will not produce any additional infor-
mation.
The task selection is important. The scope of the task should be relatively small and
should typically be guided by an initial model selection (e.g., a task template).
It is also important in this technique to be specific about how to perform stage 2. We
have found that it is helpful to constrain the elicitor’s interventions to a specific set of
probes, each with a specific function. Table 8.1 contains a list of probes which will help in
stage 2.
The idea here is that the elicitor engages in a type of slot/filler dialogue. Listening
out for relevant concepts and relations imposes a large cognitive load on the elicitor. The
provision of fixed linguistic forms within which to ask questions about concepts, relations,
attributes, and values makes the elicitor’s job very much easier. It also provides sharply
focused transcripts which facilitate the process of extracting usable knowledge. Of course,
there will be instances when none of the above probes are appropriate (such as the case
when the elicitor wants the expert to clarify something). However, you should try to keep
these interjections to a minimum. The point of specifying such a fixed set of linguistic
probes is to constrain the expert into giving you all, and only, the information you want.
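The fixed linguistic forms can be thought of as slot/filler templates: the elicitor picks a probe type and fills its slots with terms heard so far. The probe wordings below are our own invention for illustration (the actual probes are those listed in Table 8.1):

```python
# Hypothetical probe templates for a slot/filler dialogue.

PROBES = {
    "concept":   "Could you tell me more about {x}?",
    "relation":  "How is {x} related to {y}?",
    "attribute": "What properties does {x} have?",
    "value":     "What values can the {a} of {x} take?",
}

def probe(kind, **slots):
    """Render a probe of the given kind with the supplied slot fillers."""
    return PROBES[kind].format(**slots)

question = probe("relation", x="room size", y="type of work")
```

Restricting interventions to a fixed repertoire like this is what keeps the transcript sharply focused on concepts, relations, attributes, and values.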
The sample of dialogue below is taken from a real interview of this kind. It is the
transcript of an interview by a knowledge engineer (KE) with an expert (EX) on VDU
fault diagnosis. Also, the type of probe by the knowledge engineer is indicated. In the
transcripts we use the symbol + to represent a pause in the dialogue.
This is quite a rich piece of dialogue. From this section of the interview alone we can
extract the following rules.
Of course, these rules may need refining in later elicitation sessions, but the text of the
dialogue shows how the use of the specific probes has revealed a well-structured response
from the expert. A possible second-phase elicitation technique would be to present these
rules back to the expert and ask about their truthfulness, scope, and so forth. One can also
apply the teach-back technique of Johnson & Johnson (1987). This involves creating an
intermediate representation of the knowledge acquired, which is then “taught back” to the
expert, who can then check or, if necessary, amend the information.
Potential pitfalls In all interview techniques (and in some of the other generic techniques
as well) there exist a number of dangers that have become familiar to knowledge engineers.
One problem is that experts will only produce what they can verbalize. If there are
nonverbalizable aspects to the domain, the interview will not recover them. This can arise
from two causes. It may be that the knowledge was never explicitly represented or artic-
ulated in terms of language (consider, for example, pattern recognition expertise). Then
there is the situation where the knowledge was originally learned explicitly in a proposi-
tional or language-like form. However, in the course of experience it has become routinized
or automatized. We often use a computing analogy to refer to this situation and speak of the
expert as having compiled the knowledge.
This can happen to such an extent that experts may regard the complex decisions they
make as based on hunches or intuitions. Nevertheless, these decisions are based upon large
amounts of remembered data and experience, and the continual application of strategies.
In this situation they tend to give black-box replies: “I don’t know how I do that....”, “It is
obviously the right thing to do....”
Another problem arises from the observation that people (and experts in particular)
often seek to justify their decisions in any way they can. It is a common experience of the
knowledge engineer to get a perfectly valid decision from an expert, and then to be given
a spurious justification.
For these and other reasons we have to supplement interviews with additional methods
of elicitation. Elicitation should always consist of a program of techniques and methods.
We discuss a set of techniques in the remainder of this section.
When to use Unstructured interviews are usually only carried out in the early stages of
the modelling process, e.g., during organizational analysis or at the start of the knowledge
identification phase.
The structured interview is particularly useful in the knowledge refinement stage, in
which the knowledge bases need to be “filled.” The probes direct the search for missing
knowledge pieces. The structured interview also provides useful information in the later
phases of knowledge identification and during initial knowledge specification, e.g., to get
information about key concepts and relations.
A good guideline is to tape every structured interview and to create a transcript from
it. During unstructured interviews one can just take notes, although a transcript can have
an added value, e.g., for creating a glossary. The transcript can be used in knowledge-
analysis tools such as PC PACK to create markups in order to identify potential concepts,
properties, and relations.
8.4.2 Protocol Analysis
Protocol analysis (PA) is a generic term for a number of different ways of performing some
form of analysis of the expert(s) actually solving problems in the domain. In all cases the
engineer takes a record of what the expert does – preferably by video- or audiotape – or
at least by written notes. Protocols are then made from these records and the knowledge
engineer tries to extract meaningful structure and rules from the protocols.
Getting data for protocol analysis We can distinguish two general types of PA: online
and offline. In online PA the expert is being recorded solving a problem, and concurrently
a commentary is made. The nature of this commentary specifies two subtypes of the online
method. The expert performing the task may be describing what he or she is doing as
problem-solving proceeds. This is called self-report (or “thinking aloud”). A variant on
this is to have another expert provide a running commentary on what the expert performing
the task is doing. This is called shadowing.
Offline PA allows the expert(s) to comment retrospectively on the problem-solving
session – usually by being shown an audiovisual record of it. This may take the form
of retrospective self-report by the expert who actually solved the problem, it could be a
critical retrospective report by other experts, or there could be group discussion of the
protocol by a number of experts, including its originator. In the case in which only a
behavioral protocol is obtained, then obviously some form of retrospective verbalization
of the problem-solving episode is required.
Analyzing the transcript Where a verbal or behavioral transcript has been obtained we
next have to contemplate its analysis. Analysis might include the encoding of the transcript
into “chunks” of knowledge (which might be actions, assertions, propositions, key words,
etc.), and should result in a rich domain representation with many elicited domain features
together with a number of specified links between those features.
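As a rough sketch of this encoding step (the cue words and the crude tagging rule are our own, though the `+` pause marker follows the transcript convention used earlier in this chapter):

```python
# Segment a self-report transcript at pause markers, then crudely tag
# each chunk as a problem-solving action or a domain assertion.

ACTION_CUES = ("check", "assign", "select", "try", "put")

def chunk_protocol(transcript, pause_marker="+"):
    """Split the transcript at pause markers into trimmed chunks."""
    return [c.strip() for c in transcript.split(pause_marker) if c.strip()]

def tag_chunk(chunk):
    """Verbs of doing signal actions; everything else counts as an assertion."""
    return ("action"
            if any(w.startswith(ACTION_CUES) for w in chunk.lower().split())
            else "assertion")

transcript = "the secretaries need a central room + I assign Monika to C5-120"
tags = [tag_chunk(c) for c in chunk_protocol(transcript)]
```

A real analysis is, of course, done by hand against much richer linguistic cues; the sketch only shows the shape of the chunk-and-tag step.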
There are a number of principles that can guide the protocol analysis. For example,
analysis of the verbalization resulting in the protocol can distinguish between information
that is attended to during problem-solving, and that which is used implicitly. A distinction
can be made between information brought out of memory (such as a recollection of a sim-
ilar problem solved in the past), and information that is produced on the spot by inference.
The knowledge chunks referred to above can be analyzed by using the expert’s syntax, or
the pauses he takes, or other linguistic cues. Syntactical categories (e.g., use of nouns,
verbs, etc.) can help distinguish between domain features and problem-solving actions and
so on.
In trying to decide when it is appropriate to use PA, bear in mind that it is alleged
that different KE techniques differentially elicit certain kinds of information. With PA it
is claimed that the sorts of knowledge elicited include the “when” and “how” of using
specific knowledge. It can reveal the problem-solving and reasoning strategies, evaluation
procedures, and evaluation criteria used by the expert, and procedural knowledge about
how tasks and subtasks are decomposed. A PA gives you a complete episode of problem
solving. It can be useful as a verification method to check that what people say is what they
do. It can take you deep into a particular problem. However, it is intrinsically a narrow
method since usually one can only run a relatively small number of problems from the
domain.
Finally, when performing PA it is useful to have a set of conventions for the actual
interpretation and analysis of the resultant data. Ericsson & Simon (1993) provide the
classic exposition of protocol analysis although it is oriented toward cognitive psychology.
Coding scheme Traditionally, psychologists analyze think-aloud protocols with the use
of a coding scheme. The coding scheme consists of a predefined set of actions and/or
concepts that one should use to classify text fragments of the protocol. In knowledge
modelling, the selected task template can fulfill the role of a coding scheme. The analyst
marks where a certain inference is made, a certain task is started, or a knowledge role
is used. Because task templates are useful as a coding scheme for expertise data, these
templates have also been called “interpretation models”.
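A minimal sketch of such a coding pass, using the three inferences of the assignment template (select-subset, group, assign) as the predefined codes; the cue phrases attached to each code are invented for illustration, and a real coding pass is done by hand:

```python
# Tag protocol fragments with inference codes from a task template.

CODING_SCHEME = {
    "select-subset": ("first consider", "start with", "restrict to"),
    "group":         ("together", "share"),
    "assign":        ("gets room", "goes to", "place in"),
}

def code_fragment(fragment, scheme=CODING_SCHEME):
    """Return every code whose cue phrase occurs in the fragment."""
    text = fragment.lower()
    return [code for code, cues in scheme.items()
            if any(cue in text for cue in cues)]

codes = code_fragment("The two secretaries should share an office")
```

Fragments that attract no code at all are themselves informative: they may indicate knowledge the template does not cover.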
Guidelines for PA sessions When actually eliciting data for protocol analysis through
a self-report or other means, the following are a useful set of tips to help enhance its
effectiveness:
Guideline 8-1: PRESENT PROBLEMS AND DATA IN A REALISTIC WAY
Rationale: The way problems and data are presented should be as close as possible
to a real situation.
Guideline 8-2: TRANSCRIBE THE PROTOCOLS AS SOON AS POSSIBLE
Rationale: The meaning of many expressions is soon lost, particularly if the proto-
cols are not recorded. In almost all cases an audio recording is sufficient, but video
Potential pitfalls Protocol analyses share with the unstructured interview the problem
that they may deliver unstructured transcripts which are hard to analyze. Moreover, they
focus on particular problem cases and so the scope of the knowledge produced may be
very restricted. It is difficult to derive general domain principles from a limited number of
protocols. These are practical disadvantages of protocol analysis, but there are more subtle
problems.
Two actions, which look exactly the same to the knowledge engineer, may be the result
of two quite different sets of considerations. This is a problem of impoverished interpreta-
tion by the knowledge engineer. The KE simply does not know enough to discriminate the
actions. The obverse to this problem can arise in shadowing and the retrospective analyses
of protocols by experts. Here the expert(s) may simply wrongly attribute a set of consider-
ations to an action after the event. This is analogous to the problems of mis-attribution in
interviewing.
A particular problem with self-report, apart from being tiring, is the possibility that
verbalization may interfere with performance. The classic demonstration of this is for a
driver to attend to all the actions involved in driving a car. If one consciously monitors
such parameters as engine revs, current gear, speed, visibility, steering wheel position and
so forth, the driving invariably gets worse. Such skill is shown to its best effect when
performed automatically. This is also the case with certain types of expertise. By asking
the expert to verbalize, one is in some sense destroying the point of doing protocol analysis
- to access procedural, real-world knowledge.
Having pointed to these disadvantages, it is also worth remembering that context
is sometimes important for memory - and hence for problem solving. For most non-
verbalizable knowledge, and even for some verbalizable knowledge, it may be essential
to observe the expert performing the task. For it may be that this is the only situation in
which the expert is actually able to perform it.
8.4.3 Laddering
Laddering is a somewhat contrived technique, and you will need to explain it fully to
the expert before starting. The expert and the knowledge engineer construct a graphical
representation of the domain in terms of the relations between domain and problem-solving
elements. The result is a qualitative, two-dimensional graph where nodes are connected by
labeled arcs. The graph takes the form of a hierarchy of trees. No extra elicitation method
is used here, but expert and elicitor construct the graph together by negotiation.
The key point is that, having acquired some of the key terms in the domain, organizing
them into some sort of structure is a natural thing to do. Laddering is a very straightforward
means of doing so.
The laddering technique is typically used to construct some initial, informal hierar-
chies. One can see a laddering tool as a scruffy tool for hierarchical ordering without
imposing too many semantic restrictions. The objects in the ladders can be of many differ-
ent types. The terms “concept” and “attribute” should be interpreted loosely in the context
of laddering. For example, no strict distinction needs to be made yet between “concepts”
and “instances” (something that is difficult in the early phases of knowledge modelling).
We will see that the tool used in the scenario supports laddering of any type of object.
Object types can be defined by the user and are available as text markers.
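As a data-structure sketch (node names taken from the office-assignment scenario later in this chapter; the arc label is our own choice), a ladder is simply a tree whose arcs carry labels:

```python
# A ladder as a labeled tree: each node maps to (arc label, child) pairs.

ladder = {
    "person": [("subclass-of", "secretary"), ("subclass-of", "researcher")],
    "office": [("subclass-of", "C5-117"), ("subclass-of", "C5-118")],
}

def children(node, label=None):
    """Children of a node, optionally restricted to one arc label."""
    return [c for (l, c) in ladder.get(node, []) if label is None or l == label]
```

Note that nothing in the structure forces a semantic distinction between concepts and instances – C5-117 sits under "office" just as "secretary" sits under "person" – which is exactly the looseness the technique intends.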
When to use Laddering is used mostly in the early phases of domain exploration. It is
the groundwork for the more formal representation in the knowledge model.
8.4.4 Concept Sorting
Concept sorting is a technique that is useful when we wish to uncover the different ways an
expert sees relationships between a fixed set of concepts. In the simplest version an expert
is presented with a number of cards on each of which a concept word is printed. The cards
are shuffled and the expert is asked to sort the cards into either a fixed number of piles or
into any number of piles the expert finds appropriate. This process is repeated many times.
Using this task one attempts to get multiple views of the structural organization of
knowledge by asking the expert to do the same task over and over again. Each time the
expert sorts the cards he should create at least one pile that differs in some way from
previous sorts. The expert should also provide a name or category label for each pile of
each different sort.
Variants of the simple sort are different forms of hierarchical sort. One such version
is to ask the expert to proceed by producing first two piles; on the second sort, three; then
four, and so on. Finally we ask if any two piles have anything in common. If so, you have
isolated a higher-order concept that can be used as a basis for future elicitation.
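One simple way to analyze repeated sorts is to count how often two concepts land in the same pile; pairs that co-occur in many sorts suggest a shared higher-order concept. A sketch with invented sorts of four office-domain terms:

```python
from itertools import combinations
from collections import Counter

# Each sort is a mapping from the expert's pile label to the cards in it.
sorts = [
    {"staff": ["secretary", "researcher"], "rooms": ["C5-117", "C5-118"]},
    {"people": ["secretary", "researcher"], "large": ["C5-117"], "small": ["C5-118"]},
]

def cooccurrence(sorts):
    """Count, over all sorts, how often each pair of cards shares a pile."""
    counts = Counter()
    for piles in sorts:
        for pile in piles.values():
            for pair in combinations(sorted(pile), 2):
                counts[pair] += 1
    return counts

sim = cooccurrence(sorts)
```

The pile names the expert supplies ("staff," "people," "large") are as informative as the counts: they are candidate category labels for the domain schema.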
The advantages of concept sorting can be characterized as follows. It is fast to apply
and easy to analyze. It forces into an explicit format the constructs which underlie an
expert’s understanding. In fact it is often instructive to the expert. A sort can lead the
expert to see structure in his view of the domain which he himself has not consciously
articulated before. Finally, in domains where the concepts are perceptual in nature (e.g.,
X-rays, layouts, and pictures of various kinds), the cards can be used as a means of
presenting these images and attempting to elicit names for the categories and relationships
that might link them.
There are, of course, features to be wary of with this sort of technique. Experts can
often confound dimensions by not consistently applying the same semantic distinctions
throughout an elicitation session. Alternatively, they may oversimplify the categorization
of elements, missing out on important caveats.
An important tip with all of the techniques we are reviewing is always to audiotape
these sessions. An expert makes many asides, comments, and qualifications in the case of
sorting, ranking, and so on. In fact one may choose to use the contrived methods as a means
to carry out auxiliary structured interviews. The structure this time is centered around the
activity of the technique.
When to use Concept sorting can discover new concepts and attributes, and is therefore
particularly helpful in constructing a domain schema in unfamiliar domains. The technique
is able to uncover many different viewpoints from which one can look at an application
domain. Concept sorting requires some prestructuring of the data, e.g., through markups
of interview transcripts. The technique is complementary to repertory grids.
8.4.5 Repertory Grids
The final technique we will consider is the repertory grid. This technique has its roots in the
psychology of personality (Kelly 1955) and is designed to reveal a conceptual map of a do-
main in a fashion similar to the card sort, as discussed above (see Shaw and Gaines (1987)
for a full discussion). The technique as developed in the 1950s was very time-consuming
to administer and analyze by hand. This naturally suggested that an implemented version
would be useful.
Briefly, subjects are presented with a range of domain elements and asked to choose
three, such that two are similar to each other and different from the third. Suppose we were trying to
uncover an astronomer’s understanding of the planets. We might present him with a set of
planets, and he might choose Mercury and Venus as the two similar elements, and Jupiter
as different from the other two. The subject is then asked the reason for differentiating
these elements, and this dimension is known as a construct. In our example “size” would
be a suitable construct. The remaining domain elements are then rated on this construct.
This process continues with different triads of elements until the expert can think of
no further discriminating constructs. The result is a matrix of similarity ratings, relating
elements, and constructs. This is analyzed using a statistical technique called cluster anal-
ysis. In knowledge engineering, as in clinical psychology, the technique can reveal clusters
of concepts and elements which the expert may not have articulated in an interview. The
repertory grid is built up interactively, and the expert is shown the resultant knowledge.
Experts have the opportunity to refine this knowledge during the elicitation process.
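The analysis step can be sketched as follows, reusing the planet example with invented ratings: each element gets a rating vector over the constructs, and a simple distance on those vectors already shows which elements would cluster together (a full tool would run hierarchical cluster analysis and draw a dendrogram):

```python
from itertools import combinations

# Hypothetical ratings (1..5) of three planets on two constructs.
grid = {
    "size":    {"Mercury": 1, "Venus": 1, "Jupiter": 5},
    "density": {"Mercury": 5, "Venus": 5, "Jupiter": 1},
}
elements = ["Mercury", "Venus", "Jupiter"]

def distance(e1, e2):
    """City-block distance between two elements' rating vectors."""
    return sum(abs(r[e1] - r[e2]) for r in grid.values())

# The pair with the smallest distance would join first in a dendrogram.
closest = min(combinations(elements, 2), key=lambda p: distance(*p))
```

The same distance can be computed between constructs (over elements), which is how the technique exposes constructs the expert treats as near-synonyms.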
When to use This technique can be seen as the statistical counterpart of concept sorting.
Like the latter, the repertory grid is particularly useful when trying to uncover the structure
of an unfamiliar domain. It is used mainly to support the specification of the domain
schema, both in its initial and in its more advanced stages.
Table 8.2 summarizes the main features of the techniques discussed above. The five cat-
egories of techniques are just a selection from the available elicitation techniques. For
example, rule induction techniques can be used to derive domain rules automatically. Such
a tool can also be used to discover rule patterns, leading to specification of rule types in the
knowledge model.
Table 8.2
Summary of the elicitation techniques discussed.
that an initial knowledge-identification phase has been conducted. The results of this phase
are described in the next subsection. In the scenario we go through the initial activities in
the knowledge-specification phase (see Figure 7.1).
Available information Within the group, there are a number of different types of work-
ers. Thomas is head of the research group. Eva does the administrative management of the
group. Monika and Ulrike are the secretaries. Hans, Katharina, and Joachim are heads of
research projects. The other people are employed as researchers.
The floor of the building to which the group is moving is depicted in Figure 8.1. The
shaded rooms are not available as office space. C5-117 and C5-119 to C5-123 are large
rooms which can hold two people. The others are rooms for single-person use.
Figure 8.1
Floor plan of the sample problem.
In Tables 8.3 and 8.4 we have listed part of a transcript in which the expert solves
the allocation problem in a self-report setting. This protocol constitutes the main “raw
material” we use in this elicitation scenario.
Several elicitation techniques are useful when building an initial domain schema, particu-
larly when the knowledge engineer does not have some data model or domain schema she
can reuse. In this subsection we show the use of the following techniques:
Protocol analysis A simple mark-up tool can be used to find the relevant domain terms
in a transcript and in other information sources.
Concept sorting and repertory grid Both these tools can be used to discover domain
features that were not directly apparent from the domain material, such as a new
concept or attribute.
Figure 8.2
Marking up domain terms in the office-assignment material.
Table 8.3
Stylized transcript of a self-report protocol: part I.
Marking up the protocol In using PC-PACK on the example, we need to use a so-called
protocol editor to mark up appropriate words. There are various color-coded markers avail-
able. In Figure 8.2 a snapshot of the protocol editor is shown. For the moment, only the
concept, attribute, and value markers are used.
In the introductory material we can mark up terms such as “office” and the terms re-
lated to the type of work: head-of-group, head-of-project, manager, secretary, and re-
searcher. These terms are marked as concepts. In addition, individual people and offices
are marked as concepts. At this stage we are not yet making a distinction between concepts
and instances. The term “concept” is used here in a sloppy way.
Other terms can be marked as potential “attributes,” such as the project a person is
working on, whether someone is a smoker, and the size and location of a room. Within
this tool, the attributes are just identified and not connected to concepts yet. In fact, this
should really be seen as a first structuring of raw material. An attribute could easily become
a relation in the final domain schema (e.g., the “project” attribute).
Finally, some terms can be marked as attribute values. In this scenario example values
are large, single, and does-not-smoke. Again, these values might well become full concepts
in the final schema.
Table 8.4
Stylized transcript of a self-report protocol: part II.
Sometimes when marking up a protocol there is a need to add small annotations, per-
haps to clarify why a particular marking-up choice was made or to elaborate on a topic.
PC-PACK’s protocol editor supports this by providing Post-it-type annotations for terms.
Laddering tool PC-PACK contains a tool for performing laddering. The first task in the
example will be to create a concept tree to form a useful knowledge structure from the
concepts identified. The resulting diagram is shown in Figure 8.3. Firstly, a hierarchy of
people, with the management structure and people’s roles, has been created by dragging
and dropping the appropriate lower-level concepts marked up in the protocol. A similar
method is followed to place all of the room names under the superclass “Offices.”
The next task is to use the laddering tool to structure the attributes elicited from the
protocol. For this purpose the laddering tool has an “attribute” mode (see Figure 8.4). The
attributes are represented through a parent node, with possible attribute values as children.
Not all values may be present as markups in the domain material, but the knowledge analyst
is free to add additional attributes and values.
In Figure 8.4 we see five attributes that have been identified: role, smoker, gender,
Figure 8.3
A ladder of objects involved in the office-assignment problem.
Figure 8.4
A ladder of objects involved in the office-assignment problem.
size, and location. “Gender” was entered directly by the knowledge engineer. The same
holds for the attribute “role,” which was added to model information also represented in
the management hierarchy.
The other attributes stem from markups in the material. For some attributes only one
value is mentioned in the transcript, e.g., central for the “location” attribute. It is usually
easy to come up with alternative values through common sense (e.g., non-central). This
can also be noted as a specific question for the next structured interview (“can you tell me
what kinds of locations you distinguish for rooms?”). This is typical of elicitation: the
(provisional) knowledge structures the analyst builds are subsequently used to focus the
elicitation of expertise data.
The newly defined attributes can now be used in the previously defined concept ladder.
Three attributes (smoker, gender, and role) can be attached to the person concept; the other
two attributes (size, location) contain information related to offices.
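The resulting structure can be sketched as a small table of attributes, each with its value set and the concept it is attached to (values as elicited in the scenario; “non-central” is the common-sense addition mentioned above):

```python
# Attributes from the office-assignment scenario, attached to concepts.

attributes = {
    "role":     {"concept": "person",
                 "values": ["secretary", "manager", "head-of-group",
                            "head-of-project", "researcher"]},
    "smoker":   {"concept": "person", "values": ["smokes", "does-not-smoke"]},
    "gender":   {"concept": "person", "values": ["male", "female"]},
    "size":     {"concept": "office", "values": ["large", "single"]},
    "location": {"concept": "office", "values": ["central", "non-central"]},
}

def attrs_of(concept):
    """All attribute names attached to a given concept."""
    return sorted(a for a, d in attributes.items() if d["concept"] == concept)
```

Keeping the attachment explicit makes it easy to revisit later, for example when an attribute such as “project” is promoted to a relation in the final domain schema.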
Card-sort tool The card sort tool supports the concept sorting technique. A snapshot of
this tool is shown in Figure 8.5.
The card sort tool is most effective when sorting an entire set of concepts along a new
dimension. As an example, we can add a new dimension to the knowledge base: hacker.
This dimension is suggested by protocol fragment 8a, in which the expert considers per-
sonal features related to implementing systems. New sorting piles must now be elicited for
the values of “hacker.” These values are does-hack and does-not-hack. The result is shown
in Figure 8.5. The new dimension is added as an attribute to the existing set of attributes.
Repertory-grid tool In Figure 8.6 we see a sample grid for the office-assignment task.
Along the horizontal axis we see a set of concepts within which we want to find some
new distinguishing features. Along the vertical axis we see a selection of attributes that are
thought to be relevant to this group of concepts. Not all attributes identified in the laddering
tool will necessarily be relevant to the grid being constructed. For this example we can use
smoker, hacker, gender, and role.
With all the constructs added, you are now ready to begin rating the elements. The pro-
cess of rating each construct is as simple as clicking in the box at the desired point along
the scale. Using the information from the protocol text, and the matrix tool, go through the
constructs rating the values. For binary constructs, such as smoker/nonsmoker, it is usual to
give a score at one of the poles. However, it can be seen that under certain circumstances,
for example, if more knowledge were available, binary constructs like smoking can be-
come more continuous, for instance, a rating of number of cigarettes smoked per day. For
the ”role” construct a way to rate secretary, manager, head of group, head of project, and
researcher must be found. In this case the construct is chosen to rate the amount of in-
volvement in research, from secretary (no research) through to researcher (only research).
When all elements have been rated for all four constructs, the repertory grid can be
displayed (see Figure 8.6). The constructs are plotted against the elements and the score
Figure 8.5
Using the card-sort tool. The piles suggest a new attribute to be added to the ladder, namely “hacker”.
Figure 8.6
Focused repertory grid.
given is displayed in the grid. The real power of the repertory grid, however, comes from
the dendrogram. This part of the diagram, so named because of its resemblance to a tree-
like structure, shows at a glance the similarity hierarchy of both elements and constructs.
Figure 8.6, for example, shows that Katharina and Uwe are very similar (both females,
smokers, and hackers), as are Ulrike and Monika (the two secretaries). Eva is also very
similar to this group, which is not surprising since she is the manager and has a lot in com-
mon with the secretaries. In a dendrogram, the closer to the elements the branches
join, the more similar the elements (or constructs). Thus we can see that broader groupings
also exist; for example, the male smokers Andy and Hans form a close subgroup, yet still
ultimately join up with the largest group of nonsmoking, male, hacker researchers.
The repertory grid tool appears to elicit knowledge similar to that of the other tools, the den-
drogram resembling the structure of the personnel hierarchy. However, the grid is only as
good as the constructs and entities that are chosen. The grid updates automatically as these are added
or removed from the analysis, and this provides a very powerful way to see the effect of
different attributes on the knowledge model.
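The grouping behind such a dendrogram can be sketched in a few lines of code. The following is a minimal illustration of single-linkage agglomerative clustering on rating vectors, not PC-PACK's actual algorithm; the names echo the example above, while the four constructs and the 1-5 rating scale are assumptions.

```python
# A minimal sketch (not PC-PACK's algorithm) of how a dendrogram groups
# repertory-grid elements: single-linkage agglomerative clustering on
# the ratings. The constructs and the 1-5 scale are illustrative.

grid = {
    # element: ratings for (sex, smoker, role, hacker), scaled 1-5
    "Katharina": (1, 5, 5, 5),
    "Uwe":       (1, 5, 5, 5),
    "Ulrike":    (1, 1, 1, 1),
    "Monika":    (1, 1, 1, 1),
    "Andy":      (5, 5, 5, 3),
    "Hans":      (5, 5, 5, 3),
}

def distance(a, b):
    """City-block distance between two rating vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cluster(grid):
    """Repeatedly merge the two closest clusters (single linkage)."""
    clusters = [[name] for name in grid]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(distance(grid[a], grid[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = clusters[i] + clusters[j]
        merges.append((d, merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

for d, group in cluster(grid):
    print(d, sorted(group))
```

The earlier a group appears in the merge list, the more similar its members, which is exactly what the branching heights in the dendrogram display.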
We have seen that the most appropriate technique for eliciting knowledge about the reason-
ing process is protocol analysis of a transcript resulting from a think-aloud session, such as
a self-report. In this scenario we show how we can analyze the self-report of the allocation
expert to find an appropriate task template. We have classified the task as an assignment
task. The fact that the task is called “office assignment” already suggests this, but in prac-
tice this is not a guarantee. However, if we look at the definition of assignment (two groups
of objects, etc.), it is clear that it matches our current application task. Therefore, we can
propose the assignment template described in Chapter 6 (cf. Figure 6.20) as a candidate
specification of the task and inference knowledge.
We can now use this template as a coding scheme for the protocol, to find out whether
the model indeed fits with this application. For this purpose, we have defined the three
inferences as PC-PACK “processes,” which we subsequently can use to mark up the tran-
script. The three processes are shown as a process ladder in Figure 8.7.
In Figure 8.8 we see again a snapshot of the PC-PACK protocol editor. At the right-
hand side of the figure you can see that we now have three “process” markers, respectively,
for the inferences select-subset, group, and assign. The text fragments marked in this
figure are all related to the select-subset inference. We can look at a listing of the “select”
fragments at the bottom. The text fragments marked all concern the order in which the
assignment process is performed.
Figure 8.9 shows some additional markups, in this case for the group inference. We
can see that the text fragments are all concerned with the way researchers are grouped
together in double rooms.
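The use of inferences as a coding scheme can be sketched as follows; the transcript fragments below are hypothetical paraphrases, not quotes from the actual protocol.

```python
# A sketch of using inferences as a coding scheme for a transcript:
# each protocol fragment is tagged with the template inference it
# realizes. The fragments are hypothetical paraphrases.

fragments = [
    ("first I take the people who smoke", "select-subset"),
    ("the two PhD students can share a double room", "group"),
    ("I put the secretaries next to the manager", "assign"),
    ("then I look at the heads of the groups", "select-subset"),
]

# Collect fragments per inference, as the protocol editor's listing does.
coding = {}
for text, inference in fragments:
    coding.setdefault(inference, []).append(text)

for inference, texts in sorted(coding.items()):
    print(inference, len(texts))
```

Listing the fragments per inference, as at the bottom of the protocol editor, makes it easy to check whether each inference of the template is actually supported by the protocol.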
Figure 8.7
Process ladder. The three inferences of the assignment template are defined as PC-PACK processes.
Figure 8.8
Markups in the transcript, indicating inference steps of the assignment template. The inferences act as a coding
scheme for the transcript. The markups in this figure are related to the “select” process.
Figure 8.9
Markups in the transcript for the “group” process.
It is clear that the protocol provides a good “fit” with the task template. This is sufficient
to incorporate the task template for assignment with some confidence into the knowledge
model for the office-assignment application. In Figure 8.10 we have included an
annotated inference structure that can be constructed on the basis of the results of protocol
analysis. We now have an already quite detailed initial knowledge model and can safely
continue with completing the knowledge-model specification.
Several techniques are useful for detailed knowledge specification. One group of
techniques can be used to derive rules automatically or manually from domain data.
These techniques are known as rule induction. Rule-induction techniques are useful for gen-
erating sample rules for the knowledge bases, and are often applied in an exploratory
fashion to discover patterns that can be represented with rule types. In addition, these rule-
discovery techniques can be applied in the knowledge-refinement phase, when the contents
of the domain models (= knowledge bases) need to be completed.
The subject of rule induction and discovery lies outside the scope of the present work.
The reader is referred to other sources, such as the PC-PACK documentation, for more
details on these issues. An alternative method of acquiring rules is to mark them up with a
tool such as the PC-PACK protocol editor. The process of marking up rules is identical
to marking up concepts, attributes, and values. In fact, a user can define her own set of
custom markers.
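To give a flavour of rule induction, here is a minimal sketch of Holte's 1R algorithm; the office-assignment cases are invented for illustration and are not taken from the actual application.

```python
# A minimal sketch of rule induction (Holte's 1R): for each attribute,
# build rules mapping each attribute value to its most frequent class,
# and keep the attribute whose rules make the fewest errors.
# The office-assignment data below is hypothetical.

from collections import Counter, defaultdict

cases = [
    # (role, smoker) -> room type assigned
    ({"role": "secretary",  "smoker": "no"},  "double"),
    ({"role": "secretary",  "smoker": "no"},  "double"),
    ({"role": "manager",    "smoker": "no"},  "single"),
    ({"role": "researcher", "smoker": "yes"}, "double"),
    ({"role": "researcher", "smoker": "no"},  "double"),
    ({"role": "head",       "smoker": "no"},  "single"),
]

def one_r(cases):
    best_attr, best_rules, best_errors = None, None, None
    attributes = cases[0][0].keys()
    for attr in attributes:
        counts = defaultdict(Counter)
        for features, label in cases:
            counts[features[attr]][label] += 1
        rules = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(label != rules[features[attr]]
                     for features, label in cases)
        if best_errors is None or errors < best_errors:
            best_attr, best_rules, best_errors = attr, rules, errors
    return best_attr, best_rules, best_errors

attr, rules, errors = one_r(cases)
print(attr, rules, errors)
```

On this toy data, the induced rules pick out the role attribute; in practice such induced rules are starting points to be reviewed with the expert, not finished knowledge-base content.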
Figure 8.10
Inference structure for assignment together with domain-specific annotations for the office-assignment problem.
9 Modelling Communication Aspects
Figure 9.1
Overview of the communication model and how it relates to the other CommonKADS models.
to the user, or, alternatively, the user provides input data to the knowledge system. The
description of the agents involved, together with their capabilities, stems from the agent
model. The tasks, as well as their (input/output) information objects and their assignment
to the various agents, originate from the task model. If tasks are knowledge intensive, they
are usually refined in the knowledge model. The latter has a special leaf subtask type called
a transfer function, indicating that input or output reasoning information has to be obtained
from or delivered to another agent.
The key communication-model component describing such communicative acts is
called a transaction. A transaction tells what information objects are exchanged between
what agents and what tasks. It is, so to speak, the go-between of two tasks carried out by
different agents. Transactions are the building blocks for the full dialogue between two
agents, which is described in the communication plan. Transactions themselves may con-
sist of several messages, which are then detailed in the information exchange specification.
This specification is based on predefined communication types and patterns, which make
it easy to build up message protocols in a structured way.
Accordingly, the process of constructing the CommonKADS communication model
proceeds in three successive layers, from global to detailed specifications, as follows
(see also Figure 9.1):
1. the overall communication plan, which governs the full dialogue between the agents;
2. the individual transactions that link two (leaf) tasks carried out by two different
agents;
3. the information exchange specification that details the internal message structure of
a transaction.
This chapter explains this stepwise construction process, offers a number of specific
techniques, and discusses how to verify and validate the communication model. We illus-
trate the main points through an application coming from the energy distribution industry.
Since the entry point of the analysis is a top task distributed over more than one agent,
it is evident that constructing the communication model crucially depends on information
from other CommonKADS models. In order to start with communication modelling, the
following information needs to be available:
From the task model, the list of tasks carried out by the considered agent. For the
communication model, we are interested in the leaf tasks, i.e., those that are not
decomposed further, together with their input/output information objects.
From the knowledge model, the set of so-called transfer functions, that is, leaf nodes
in the task/inference structure that depend on data from or deliver reasoning results to
the outside world. (Recall that a task/inference structure in the knowledge model is
a refinement of a nonleaf, knowledge-intensive task stemming from the task model).
From the agent model, the description of relevant agents, with their knowledge (or
more generally, capabilities), responsibilities and constraints. The communication
model must be constructed such that it satisfies the ensuing agent requirements, but
in its turn it may also add requirements for communicative capabilities of an agent.
This is depicted in Figure 9.1. Thus, normally the knowledge engineer will have already
done a significant part of task, agent, and knowledge analysis before starting with
communication modelling. This also follows by looking at the main steps in constructing
the communication plan.
1. Go through all task-model leaf tasks and knowledge model transfer functions. Make
a list of all tasks that have input or output information objects that must be exchanged
with another agent. Do this for each agent.
2. From this list, identify the set of associated agent-agent transactions. Here, a trans-
action is simply defined as the communication link needed between two leaf tasks
(including transfer functions) carried out by two different agents. Transactions are
the basic building blocks of the communication plan. Give each transaction an un-
derstandable name. Typically, this is a verb-noun combination indicating the com-
municative act performed with the information object (e.g., present diagnostic con-
clusions to the user).
3. The results of the previous two steps can be conveniently combined in a so-called di-
alogue diagram, where in a single picture we see an overview of all transactions and
the tasks they are linking for every agent. The general form of a dialogue diagram is
shown in Figure 9.2. The dialogue diagram presents the complete information-flow
part of the communication plan.
4. Finally, the communication plan is completed by adding control over the transac-
tions. This may be done in pseudocode or state-transition diagram form. In basic
practical cases it is often a simple sequence that follows straightforwardly from the
information flow. But when, for example, exception or outside event handling is
involved, a control specification is usually needed.
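Steps 1 and 2 can be sketched as a simple computation over the leaf tasks: collect the input/output objects and link producer and consumer tasks carried out by different agents. The task, agent, and information-object names below are invented for illustration.

```python
# A sketch of steps 1-2: collect leaf tasks whose I/O objects cross an
# agent boundary, and derive one transaction per such producer/consumer
# task pair. Task and agent names are illustrative, not from a real
# application.

# Each leaf task: (agent, task name, inputs, outputs)
leaf_tasks = [
    ("utility",  "announce action", [],          ["trigger"]),
    ("customer", "prepare bid",     ["trigger"], ["bid"]),
    ("utility",  "assess bids",     ["bid"],     ["award"]),
    ("customer", "implement award", ["award"],   []),
]

def derive_transactions(tasks):
    """Link a producing task to a consuming task on another agent."""
    transactions = []
    for p_agent, p_task, _, outputs in tasks:
        for obj in outputs:
            for c_agent, c_task, inputs, _ in tasks:
                if obj in inputs and c_agent != p_agent:
                    transactions.append(
                        (f"transfer {obj}", p_task, c_task))
    return transactions

for name, src, dst in derive_transactions(leaf_tasks):
    print(f"{name}: {src} -> {dst}")
```

The resulting list of transactions, one per cross-agent task link, is exactly what the dialogue diagram then displays in a single picture.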
Figure 9.2
The general layout of a dialogue diagram. It forms the central part of the communication plan, as it shows the
overall information flow related to agent communication.
The dialogue diagram shown in Figure 9.2 is useful for displaying the flow of discourse
between two agents. But it does not show control. In strongly reasoning-oriented tasks,
control over transactions is often a quite simple sequence that follows the flow of informa-
tion objects. However, this is not good enough in situations where, for example, external
events occur that conditionally trigger tasks or transactions. In such cases, we need some
way to describe control over transactions. In the CommonKADS communication model we
do this in a conventional way, either by means of some kind of structured control language
or pseudocode, or by means of state diagrams.
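As a sketch, control over transactions in state-diagram style can be captured as a transition table; the states, events, and transaction names below are hypothetical.

```python
# A minimal state-machine sketch for control over transactions, in the
# spirit of the state diagrams mentioned above. States, events, and
# transaction names are hypothetical.

TRANSITIONS = {
    # (state, event) -> (next state, transaction to execute)
    ("idle",      "reduction needed"): ("running",   "announce"),
    ("running",   "bids received"):    ("assessing", "assess"),
    ("assessing", "no convergence"):   ("running",   "next round"),
    ("assessing", "convergence"):      ("completed", "award"),
}

def step(state, event):
    """Fire one transition; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), (state, None))

state = "idle"
for event in ["reduction needed", "bids received", "convergence"]:
    state, transaction = step(state, event)
    print(event, "->", state, "executing", transaction)
```

Guards and external events that conditionally trigger transactions map directly onto the event column of such a table.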
As these are well-known techniques from software and information engineering, there
(transaction-1 | transaction-2)   CHOICE operator, denoting exclusive either/or operation (similarly for messages)
(transaction-1 ∨ transaction-2)   OR operator, denoting nonexclusive either/or operation (similarly for messages)
, .                               Syntactic separators for the control specification
Table 9.1
Specifying control over transactions and messages in the communication model by means of basic communication
operators and control constructs in pseudocode form.
is no need to give a long-winded elaboration of them. For state diagrams in the commu-
nication model we can employ the object-oriented Unified Modelling Language (UML)
notation. Likewise, Table 9.1 contains the constructs, i.e., basic communication opera-
tors and control constructs, specialized to the communication model. We note that this
approach to control will be used both for specifying the control over transactions and for
specifying control within transactions, the internal structure of which may contain differ-
ent messages of different types (as we will see later on in discussing the third layer of the
communication model). Below we discuss a practical industrial application.
Figure 9.3
Paradigm shift in energy utilities due to the new information society: from a pure product delivery concept to
two-way customer-oriented services.
Due to the deregulation of the European energy market, the electric utility industry is in a
transition from being a regulated and rather monopolistic power generation industry, to a
business operating in a dynamic and competitive free market environment. For the utility
industry a new business paradigm is therefore emerging. The usual business of generating,
distributing, and billing customers for kilowatt-hours —essentially a pure product-oriented
delivery concept— is being transformed into offering different kinds of new value-added
customer services (Figure 9.3). These vary from automated metering and billing-at-a-
distance, advice on optimizing energy use, tailored rates and contracts, to home automa-
tion, home energy management, and demand-side management at the customer’s premises.
This paradigm shift will open up new opportunities, but will also necessitate new ways of
thinking for most utilities, as it requires two-way communication between the utility and
the customer. Here, utilities are facing the fact that proper utilization of information and
knowledge is a key component in a competitive market. The traditional power distribution
net must be supplemented with an information network allowing for extensive two-way
communication between customers and the utility, in order to provide the new services
mentioned above. Information and communication technologies (ICTs) are crucial en-
ablers here.
Recent advances in ICTs have made it technologically and financially possible to equip
many different types of nodes in the electrical network (including 230V and other sub-
stations, industrial loads and even household equipment) with significant communication
(230V power grid, radio, cable TV, conventional copper lines, etc.) as well as computing
capabilities of their own. In this way, nodes in the electrical network will obtain the capa-
bilities to act as intelligent and interacting agents on behalf of the customer and the utility.
There are quite a number of different advanced information technologies that jointly act as
enablers here, such as: (a) cheap programmable chips that can be built into many types
Existing forms of energy load management are limited to a low number of large facilities
since manual control still plays an important part. The benefits of multiagent systems for
load management are a higher level of automation, a much larger scale, and more flexibility
and distributedness.
An innovative approach is to achieve dynamic and automatic load balancing by means
of software agent technology. Devices can now be equipped with communication and
information-processing capabilities, by supplying them with networked, communicating
microprocessors together with smart software running on top of them, as depicted in Fig-
ure 9.4. In everyday language, it is now technologically possible for software-equipped
communicating devices to “talk to,” “negotiate,” “make decisions,” and “cooperate” with
each other over the low-voltage grid and other media. This enables radically new ap-
proaches to utility applications. We use this concept to achieve distributed load manage-
ment in a novel fashion: by a cooperating “society of intelligent devices”. Knowledge plus
communication are the ingredients of intelligence in systems.
Every device or load, such as a heater, radiator, or boiler in a household, is represented
by a software agent responsible for efficient and optimal use of energy, while taking
the customer preferences into account. We call these agents Homebots. A key idea is that
the communication and cooperation between devices for the purpose of load management
takes the form of a computational market where they can buy and sell power demand. Indi-
vidual equipment agents communicate and negotiate, in a free-market bidding-like manner,
to achieve energy and cost savings for both the utility and the customer. The market models
adapted from business, such as auctions, offer promising concepts to automatically manage
large distributed technical systems. This is a decentralized way to reduce unwanted peak
loads.
Figure 9.4
Devices and loads are equipped with smart small software programs. These software agents communicate, act,
and cooperate as representatives assisting the customer, to achieve given goals such as power load management.
Informally, the task distribution over agents is as follows. To begin with, a software
agent representing the utility (say, at the level of a transformer station) announces the
start of a load management action to the customer agents (which may represent a smart
electricity meter in a household or plant, or equipment beyond that such as radiators). For
example, if its goal is to reduce current energy consumption, it may offer a price or tariff
different from the usual one. The customer agents then determine to what extent they are
interested in participating in the load management action. This is based on the customer’s
preferences and is, of course, also changeable and programmable by the customer. On this
basis, the customer agents prepare bids to sell some power (that is: to postpone or reduce
energy use) in return for a financial rebate as offered by the utility, cf. Figure 9.5.
Figure 9.5
Distributed load management is implemented in terms of an auction, whereby software agents representing the
utility and the customers bid and negotiate to buy and sell power demand.
The totality of bids is then assessed in an auction as in a free competitive market. The
auction results in a reallocation of the available power. In our system, power is treated as
any resource or commodity that is traded on a market. In a load management action there is
a certain (limited) supply of it. Both the utility and the customer agents also have a certain
demand for it, for which they are willing to pay a certain price. How much everyone gets,
and at what price, is determined automatically in the auction.
In realizing an auction on the computer, we employ long-established formal theory on
the functioning of competitive markets, which is available from economic science (espe-
cially from the field known as micro-economic theory). Customer preferences are in this
framework expressed in terms of so-called utility functions. They represent, in a numerical
way, the value to the customer of getting a certain amount of power: the higher the number,
the higher the demand. Due to its rigorous mathematical form, this theory is readily
adaptable for implementation on a computer. The corresponding algorithms that calculate
the market equilibrium have been adapted from numerical analysis and optimization (since
market mechanisms can be reformulated as a kind of optimum-search problem).
Market negotiation and computation continues until a market equilibrium is estab-
lished. This is the case when supply becomes equal to demand in the auction process.
Then, each participating agent achieves the best possible deal in terms of obtaining power
use versus spending financial budget. Economic market equilibrium can be shown to corre-
spond to the optimum allocation of available power over all involved equipment agents. No
agent will then gain any further by buying or selling power, and so the load management
action as a market process is completed.
After the auction has been completed, its outcomes — that is, the allocation of power
corresponding to the market equilibrium — are awarded and communicated to all agents
involved. Next, the loads are scheduled in accordance with the awarded power over some
agreed period (say, the next hour). This is implemented through appropriate on/off switch-
ing of the involved loads, whereby telecommunication over the power line will play a role.
Finally, agreed results as well as implemented outcomes are monitored by all parties, pro-
viding a database of the facts needed in the contracts between utility and customer. This
whole process is carried out automatically.
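The equilibrium computation can be sketched as a simple price-adjustment (tatonnement) loop: the auctioneer adjusts the price until aggregate demand equals supply. The demand curves and supply figure below are invented; real Homebots agents derive their demand from customer utility functions.

```python
# A sketch of finding the market equilibrium by price adjustment
# (tatonnement). The demand functions and the supply figure are
# hypothetical, not taken from the Homebots system.

def make_demand(budget, need):
    """Simple demand curve: willingness to buy falls as price rises."""
    return lambda price: max(0.0, need - price * need / budget)

agents = [make_demand(budget=1.0, need=3.0),
          make_demand(budget=2.0, need=2.0),
          make_demand(budget=1.5, need=4.0)]
supply = 4.0  # kWh available in this load-management action

lo, hi = 0.0, 10.0
for _ in range(60):  # bisection on the price
    price = (lo + hi) / 2
    demand = sum(d(price) for d in agents)
    if demand > supply:
        lo = price  # excess demand: raise the price
    else:
        hi = price  # excess supply: lower the price

print(round(price, 3), round(demand, 3))
```

At the price where demand equals supply, no agent can improve its position by further trading, which is the equilibrium condition described above.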
This informal task description leads us to a top view of the communication plan in the
Homebots system in a straightforward way, as seen in the dialogue diagram of Figure 9.6.
The important transactions, with their input/output information objects, in this announce-
bid-award computational market scheme are the following:
1. kickoff the auction: sends a trigger signal to the customer agents to commence a load
management action;
2. submit the bids: transmits the bids from the customer agents to the auctioneer for
further processing;
3. present the awarded power allocation: informs the customer agents about the results
of the auction;
4. present the associated real-time schedule: provides the customer agents with the
calculated schedule that implements the awarded allocation;
5. receive the resulting real-time implementation data: transmits the actual metering
data. This is needed for billing as well as for assessment of the need for further load
management actions.
For simplicity, we have given the simplest possible task distribution and agent architecture.
Other architectures and scenarios are certainly possible. For example, it is probably
preferable to separate the utility agent (representing the interests of the utility) from the
actual auctioneer agent supervising the bidding process.
Figure 9.6
Dialogue diagram of the Homebots system: tasks in the power auction for electricity load management, with their
communication links.
In large-scale applications, customer
agents will be hierarchically ordered. The initiative of various tasks can also be different.
In so-called direct load management, the initiative to an auction lies with the utility agent,
but in indirect load management the customer may take the initiative, though within a pre-
set contractual framework. Also, the scheduling task can be allocated to agents in different
ways. The computational market approach is very flexible in this respect. Figure 9.6 thus
only intends to show the basics of a power load management scenario.
In this basic scenario, control within the communication plan is straightforward, as
it follows the information flow from the subtasks. The top-level control is shown in Fig-
ure 9.7 with the help of a state diagram. Information about the UML state diagrams can
be found in Section 14.3. As a notational extension, agent task-transaction pairs are
indicated by an ampersand (Announce & Kick-off, Bid & Submit, Award & Present). Only
the auction part of the load management action has been given in the figure. Generally, a
state-based representation is convenient, as the formal semantics of agent communication
languages such as KQML and FIPA-ACL is based upon agent states.
Figure 9.7
Communication plan control in the auction process of the Homebots system in state diagram form. The UML
state-diagram notation has been used here.
The elements needed to specify an individual transaction are shown in Figure 9.8. For the
specification itself we can use the transaction description (worksheet CM-1) displayed in
Table 9.2. Most of it is rather self-explanatory. Collecting all this information within a
single worksheet helps to make the transaction description self-contained, and thus more
Figure 9.8
The components that together specify a transaction in the CommonKADS communication model.
Table 9.2
Worksheet CM-1: Specifying the transactions that make up the dialogue between two agents in the communica-
tion model.
agent capability (e.g., sensory capabilities related to sound or vision) when relevant in-
formation comes in a certain form or medium, or the occurrence of an outside triggering
event as is often the case in real-time embedded systems. Sometimes, it is also useful to
state postconditions that are assumed to be valid after the transaction. In state adminis-
tration and legal matters, it is usually assumed that “every citizen knows the law” in all
its often intricate detail, whether this is actually true or not. Likewise, a transaction may
simply suppose that a transmitted information object is correctly received and processed
by the receiving agent, without actually asking for an acknowledgment. That this is a non-
trivial postcondition, and therefore worth reflecting about in communication modelling, is
something we all are familiar with from lost letters in both regular and electronic mail.
The final component of a transaction description is called the information exchange
specification. Basically, it gives the type, content and form of the message that “packages”
the information object that is transmitted. In very simple cases, e.g., when only data strings
are exchanged between two systems, it can be sufficient to give the content of the message
as a proposition or predicate statement here. However, it is quite possible that a single
transaction contains more than one message. For example, this occurs in a buying/selling
negotiation task running between two parties. Then there is one transaction linking the buy
and sell tasks, but this transaction has an internal structure consisting of a bid-react-rebid
pattern of messages. Moreover, it is sometimes necessary to be able to state in what form
or through which medium information in a transaction is conveyed. For all these reasons,
the information exchange specification is usually not given directly as a basic component
in worksheet CM-1, but a reference is given instead to a separate (worksheet) description.
How to describe this detailed information exchange specification is discussed in the next
section.
Table 9.3
Worksheet CM-2: Specifying the messages and information items that make up an individual transaction within
the communication model.
that transaction sentences are often composite and convey, in one shot, different types of
information. As an everyday example, let us consider the following sentence, viewed as a
transaction between two agents: “I’m getting cold, so could you please shut the window?”
If we think about it, it becomes clear that this sentence actually consists of several messages
that, moreover, have a different intent. Specifically:
1. The first part, “I’m getting cold,” is, strictly speaking, no more than a bare informa-
tion or notification message, stating that the speaking agent apparently does not find
the current temperature comfortable anymore. Note that this notification message
does not necessarily imply any action on the part of either agent.
2. In contrast, the second part, “Could you please shut the window?,” is directly aimed
at eliciting activity by the other agent, here in the form of a request for action.
So, within one transaction sentence, we have two messages here, differing in content
as well as intent. That this is a rather general situation will be clear upon reflecting about
variations of the sketched transaction. Take, for example, the alternative sentence: “I’m
getting cold, why were you so stupid to open the window?” Or consider alternative answers
to the original question such as “Of course, dear” vs. “I’m watching this world champion
football match on TV, so why don’t you do it yourself?” Note also that in all these cases the
connective “so” has nothing to do with any form of logical deduction. This is why such
statements have to be broken down into more than one message. To do otherwise even
feels very artificial. It is easy to come up with other illustrations that are quite interesting
from a communication model viewpoint. As an example, we leave it as an exercise to the
reader to make a communication analysis of the following message: “It’s the economy,
stupid!”
It goes without saying that the pragmatics of human communication is often quite del-
icate. We believe, however, that considerations like those above are also relevant in the
modelling of communication where information and knowledge systems are concerned.
Nowadays, systems that involve multiagent communication, such as information systems
based on the Internet or the World Wide Web, have to confront such issues. In such sys-
tems, agent communication is often inspired by the so-called speech act theory, which
distinguishes between the actual content (“locutionary nature”) of a speech act or message
— what is actually being said — and its intended effect (“illocutionary force”) upon the
other agent.
This distinction is employed in many agent communication models and languages, and
also in CommonKADS. This can be practically done by associating each message with a
set of predefined communication types, which must be filled in as indicated in worksheet
CM-2, cf. Table 9.3.
A possible set of communication types is presented in Table 9.4. We do not pretend any
originality here: this set is a basic and simplified version of communication types found in
various agent communication languages, a currently very active and still changing research
area. In organizing the communication types, we use two dimensions: the first dimension is
the purpose of a message — task delegation, task adoption, or pure information exchange
—, whereas the second dimension indicates the degree of commitment or strength with
which one exerts this purpose. So, in a nutshell, one may say that for typing the intention of
a message, we have that intention = purpose × commitment. In the table, this leads to
3 × 4 = 12 predefined basic communication types.
The semantics of the communication types in Table 9.4 is as follows:
Request/Propose: refer to a message sent by an agent that sees a potential for coop-
eration, but wishes to negotiate on the terms of this cooperation. Loosely: “I have
an interest, but not yet a commitment.”
Require/Offer: refer to a message indicating that the sending agent has already made
a precommitment, and intends to prompt the receiving agent for its commitment.
This type thus denotes a conditional commitment.
Table 9.4
Predefined communication types, used in specifying the intention (intended effect on the receiving agent) with
which a message is sent.
Order/Agree: the message types indicating that the agent has made a commitment,
and thus will act accordingly in carrying out its tasks.
Reject-td/ta: denote that the agent does not want to commit or cooperate in task
delegation (td) or adoption (ta).
Ask/Reply: evidently refer to messages that have as intent a query for information
from another agent, and delivery of such information in return.
Report: types a message sent after an agent has acted toward a (previously) agreed-
upon task goal, with the intention of letting the other agent know the status of
achievement (e.g., success, failure, outcome of the action).
Inform: refers to a message type that just delivers, provides or presents information
objects to another agent. It indicates an independent informative action, where no
previous request or agreement is involved (in contrast to reply or report messages).
We now have at our disposal a rather rich vocabulary to specify the intention of
messages. (Other, richer proposals exist in the agent software literature, but the present
one covers a wide range of practical knowledge systems.) It is also clear that giving only
the (propositional) content of messages is very limited, and that additional, explicit
specification of intention greatly improves the understanding of communicative acts. This
is what the communication model aims at. The effect is magnified by realizing that the
above communication types are not only suitable for characterizing separate messages:
they also lend themselves very well to constructing typed patterns or chains of messages
that naturally belong together. A possible library of such patterns, adapted from software
agent work done at Daimler-Chrysler, is presented in Figure 9.9. These communication
patterns also constitute a currently very active and open agent research area; they are
sometimes known as conversation policies.
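The chaining of communication types into conversation policies can be sketched as a small transition check. The allowed transitions below are illustrative, not the actual pattern library of Figure 9.9:

```python
# Minimal conversation-policy checker: for each communication type, the
# policy lists which types may follow it. These transitions are
# illustrative only, not the library of Figure 9.9.
POLICY = {
    "ask": {"reply"},
    "request": {"offer", "reject-ta"},
    "offer": {"agree", "reject-td"},
    "agree": {"report"},
    "reply": set(),
    "report": set(),
    "reject-ta": set(),
    "reject-td": set(),
}

def valid_chain(msgs):
    """Check that each message type may follow its predecessor."""
    return all(b in POLICY.get(a, set()) for a, b in zip(msgs, msgs[1:]))

print(valid_chain(["request", "offer", "agree", "report"]))  # True
print(valid_chain(["ask", "report"]))                        # False
```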
Question/answer patterns are a straightforward example occurring in many knowl-
edge systems. Negotiation tasks and associated bidding protocols provide another, more
complex, pattern. In the following section we show an example of this, relating to our
market-based program for energy management.
Figure 9.9
Library of message patterns, built from the predefined communication types. Branching arrows indicate (exclusive)
alternatives.
The Homebots system contains several transactions, as discussed above. Some are rather
simple, especially the transaction linked to the announce task, which serves to kick off the
auction (cf. Figs. 9.6 and 9.7). The second transaction, whereby the bids are calculated
(the bid task carried out by the customer agents) and subsequently submitted to the
auctioneer, is much more interesting. For reasons of space, we treat only this transaction
in some detail; see Table 9.5.
We thus see that the submit-the-bid transaction is composite. It is a single transaction
because it is an exchange link between two tasks, but it handles more than one core infor-
mation object. Both types of agents are problem-solving and reasoning agents, both are
acting as sender and receiver in this transaction, and both hold part of the overall initiative.
This is a typical multiagent situation that contrasts with the one usually encountered in con-
ventional knowledge systems. A more detailed specification of the information exchange
is given in Table 9.6.
For the control specification of the submit-the-bid transaction in worksheet CM-2, we
now use the pseudocode format as presented in Table 9.1. The specification is shown in
Figure 9.10.

Table 9.5
Worksheet CM-1: The submit-the-bid transaction in the Homebots system.
In multiagent systems, transactions related to task adoption and delegation come into
play. In contrast, conventional knowledge systems that have an advisory function (e.g.,
assessment of housing applications, or diagnosis of technical systems) are characterized by
the fact that most communication refers to basic information exchange. Core information
objects are either simply provided or delivered (INFORM message type), or exchanged
through a question-and-answer pattern (ASK/REPLY communication types). Supporting
information items such as help and explanation texts belong to a different type, however.
Since such items are only presented as an open option to the user, the related support
item messages are typically of the OFFER communication type. Hence, for intelligent
multiagent systems we indeed need a richer repertoire in communication modelling.
Table 9.6
Worksheet CM-2: The submit-the-bid messages and their communication types in the Homebots system.
REPEAT
    WHILE <market convergence condition not satisfied>
        IF <interest in load management>
            THEN PROCESS(bid-task);
                 SEND(BID-MESSAGE)
            ELSE SEND(OPT-OUT-MESSAGE)
        END-IF
        IF <bids received>
            THEN PROCESS(assess-task)
            ELSE PROCESS(decision sub-procedure [e.g. WAIT...])
        END-IF
        SEND(AUCTION-DATA-MESSAGE)
        &
        SEND(NEXT-ROUND-MESSAGE)
END-REPEAT
PROCESS(award-task)
(et cetera).
Figure 9.10
Control specification of the submit-the-bid transaction.
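The pseudocode of Figure 9.10 can be transliterated into an executable sketch. The convergence test, task bodies, and message sends below are invented stubs, not the actual Homebots implementation:

```python
# Executable transliteration of Figure 9.10. All predicates and task
# bodies are illustrative stubs; in the real Homebots system they are
# the bid, assess, and award tasks of the agents involved.
def run_auction(rounds, interested, bids_arrive):
    log = []
    for _ in range(rounds):                # WHILE market not converged
        if interested:                     # IF <interest in load management>
            log.append("PROCESS(bid-task)")
            log.append("SEND(BID-MESSAGE)")
        else:
            log.append("SEND(OPT-OUT-MESSAGE)")
        if bids_arrive:                    # IF <bids received>
            log.append("PROCESS(assess-task)")
        else:
            log.append("WAIT")
        # The next two sends happen together (the '&' in the figure).
        log.append("SEND(AUCTION-DATA-MESSAGE)")
        log.append("SEND(NEXT-ROUND-MESSAGE)")
    log.append("PROCESS(award-task)")      # after convergence
    return log

trace = run_auction(rounds=1, interested=True, bids_arrive=True)
```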
and in many guises. A walk-through is a form of peer group review: colleagues of the
responsible knowledge engineer or developer undertake to evaluate the communication
plan and give their comments back to the knowledge engineer.
A walk-through is a suitable technique to validate the communication model at an
early stage, as it is quite possible to "mentally simulate" the flow of transactions in a com-
munication plan, and it is helpful that this is done by (relative) outsiders. This procedure
is useful in order to:
check the adequacy of the transaction structure;
identify whether the list of information objects is complete;
detect the need for additional help or explanation items.
Different, more or less formal, setups can be used for a walk-through session. One
possibility is to do it in the form of a prepared meeting, with a starting presentation by the
knowledge engineer, and a round of commentary and discussion, finalized by short formal
minutes with recommendations from the reviewers.
Present a simple and natural dialogue.
Speak the user’s language.
Minimize the user’s memory load.
Maintain consistency in terminology.
Give feedback about what is going on.
Show clearly marked exits from unwanted states.
Offer shortcuts for the experienced user.
Give help, explanations, and documentation.
Provide good error messages.
Even better: design to prevent errors.
Table 9.7
Nielsen’s heuristic evaluation guidelines for usability.
Clearly, a significant part of the communication model will often be tied in with user-
interface issues. As this constitutes a whole computer science field in itself, we will not
treat these issues here but refer instead to the vast literature on user interface design. Never-
theless, we would like to list a number of heuristics and guidelines presented by Nielsen in
his book Usability Engineering (1993). His approach is called “heuristic evaluation” of us-
ability. The associated guidelines are shown in Table 9.7. They may, for example, serve as
a set of evaluation criteria to be used in inspection sessions, like the communication plan
walk-through discussed previously. Empirical studies have shown that one should have
several evaluators; about five seems to give the best cost-benefit ratio.
9.7.4 Guidelines for balancing the communication model against other CommonKADS models
The relation of the communication model to the other models has been explained in the
introduction to this chapter, and is also shown in Figure 9.1. On this basis, we have
defined a number of rules and guidelines for what the boundaries and connections of the
communication model should be vis-à-vis the other models. These rules and guidelines are:
Leaf tasks from the task model as well as the knowledge model are key inputs to
the communication model, insofar as they handle information objects that must be
exchanged between agents. (Such leaf tasks in the knowledge model are called trans-
fer functions.) The foremost rule for communication modelling says that a separate
transaction must be defined for each information object exchanged, and for each
distinct pair of leaf tasks.
The agent model describes the agent capabilities (knowledge), responsibilities, and
constraints. Check whether these are compatible with the constraints for the transac-
tions in the communication model. It may be that communication requires additional
capabilities from an agent. If so, add these by revising the agent model.
As a double-check, verify whether the communication plan is compatible with struc-
ture, power/culture, process, and resources in the organization model.
The rule of structure-preserving design in the design model also holds for the com-
munication model constructs, in the same way as it holds for the knowledge-model
structure.
A borderline case between the design and communication models is formed by the
syntactic form and media aspects of information items in the detailed information-
exchange specification. They might belong to either one. The demarcation criterion
is this: if there is an intrinsic conceptual reason that information items take on a
certain form or are carried by a certain medium, this is to be modelled in the
communication model. Otherwise, it is a matter of implementation choice, and it
therefore belongs to the design model. For example, some information objects
"must" come with a certain form or medium: a signature authorizing a purchase or
expense might not be considered legally valid if it is given in electronic form. Such
a constraint is part of the communication model. The general rule is to model form
and media aspects in the design model, unless there is a good conceptual reason not
to.
Decisions as to what supporting information items to introduce belong to the com-
munication model, and not to the design model, because they are a matter of user
task support, not system implementation.
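The foremost rule above (one transaction per information object exchanged, per distinct pair of leaf tasks) can be mechanized as a simple derivation. The task and object names below are invented placeholders, not taken from any CommonKADS model:

```python
# One transaction per (pair of leaf tasks, information object): a sketch
# of the foremost communication-modelling rule. Task and object names
# are invented placeholders.
def derive_transactions(exchanges):
    """exchanges: tuples (sending leaf task, receiving leaf task, object)."""
    transactions = {}
    for sender_task, receiver_task, obj in exchanges:
        key = (sender_task, receiver_task, obj)
        transactions.setdefault(
            key, f"transfer {obj} ({sender_task} -> {receiver_task})")
    return transactions

txs = derive_transactions([
    ("bid", "assess", "bid-object"),
    ("bid", "assess", "opt-out-notice"),   # same task pair, second object
    ("assess", "award", "auction-data"),
])
# Three distinct transactions result: a second object between the same
# task pair still yields its own transaction.
```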
1. Identify the core information objects to be exchanged between agents. Do this by check-
ing, for each agent, the list of leaf tasks from the task model and the knowledge model
(the transfer functions).
2. Identify the associated list of transactions, as exchange links between two tasks, and give
each transaction a suitable, i.e., user-understandable, name.
3. Now, construct the dialogue diagram so that you have a pictorial overview of the overall
communication plan. If needed, add a specification of the control over the transactions.
This yields a complete communication plan.
4. Describe all individual transactions, following the format given in Figure 9.8 and work-
sheet CM-1.
5. Describe the internal structure of each transaction where necessary, by filling in the in-
formation exchange specification according to worksheet CM-2.
6. Validate and balance the communication model according to the techniques and guide-
lines given.
Table 9.8
Steps in communication-model construction.
tions, chain management, information and knowledge sharing, distributed intelligence, in-
telligent agents, and multiagent technology. These developments try to come to grips with
the fact that knowledge processes are becoming more and more inherently distributed.
In the software world, this leads from large and relatively monolithic information and
knowledge systems to relatively independent interacting software agents. A recent collec-
tion on software agents can be found in Bradshaw (1997). Modelling of communication
in intelligent agent systems is of prime importance, but significantly more complex than in
conventional knowledge systems. Bradshaw’s book contains a chapter on the KQML agent
communication language referred to in this chapter. The Daimler-Chrysler work on
communication in distributed intelligent systems mentioned above is found in Haddadi
(1995). The original version of the CommonKADS communication model was developed by Waern et al.
(1993) for conventional knowledge systems; the report contains a good case study of a
conventional single-system/single-user expert system for diagnosis of telecommunication
equipment in the field. The version of the CommonKADS communication model expanded
to intelligent multiagent systems presented in this chapter was developed by Akkermans
et al. (1998). More information related to the Homebots case study is found elsewhere
(Akkermans et al. 1996, Ygge 1998).
10
Case Study: The Housing Application
10.1 Introduction
In this chapter we describe a small case study to illustrate the analysis models and methods
discussed so far. The case study concerns a domain in which rental houses are assigned to
applicants. A short description of this domain can be found in the next section.
Table 10.1
Part of the table that indicates the relation between rent and income.
be used by applicants to adapt their application strategy (e.g., by applying next time for a
house in a less popular area).
To be eligible for a residence, applicants have to satisfy a number of criteria. There
are four types of eligibility criteria. Firstly, people have to apply for the right residence
category. Secondly, the size of the household of the applicant needs to be consistent with
requirements on minimum and maximum habitation of a certain residence. The third cri-
terion is that there should be a match between the rent of the residence and the income of
the applicant. Table 10.1 shows some sample rent-income criteria. Finally, there can be
specific conditions that hold for one particular residence.
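The four eligibility checks described above can be sketched in code. All field names and the rent-income bracket below are invented for illustration, since Table 10.1 is only partially reproduced here:

```python
# Sketch of the four eligibility criteria. All field names and the
# rent-income bracket are invented for illustration; the real criteria
# come from Table 10.1 and the residence-specific regulations.
def eligible(applicant, residence):
    # 1. Right residence category.
    if applicant["category"] != residence["category"]:
        return False
    # 2. Household size within min/max habitation of the residence.
    if not (residence["min_inhabitants"] <= applicant["household_size"]
            <= residence["max_inhabitants"]):
        return False
    # 3. Rent must fit income (illustrative bracket: monthly rent at most
    #    a quarter of gross yearly income divided by twelve).
    if residence["rent"] > applicant["gross_yearly_income"] / 12 * 0.25:
        return False
    # 4. Residence-specific conditions, if any.
    return all(cond(applicant) for cond in residence.get("conditions", []))

ok = eligible(
    {"category": "starter", "household_size": 2, "gross_yearly_income": 48000},
    {"category": "starter", "min_inhabitants": 1, "max_inhabitants": 3,
     "rent": 900, "conditions": []},
)
```

The system output described below (eligible or not) then corresponds to the boolean returned by such a check.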
Currently, assessing whether applicants satisfy these criteria is done manually by civil
servants of the local government. This manual checking takes a lot of time, and you are
asked to develop a system for automatic assessment of residence applications. The input
to the system is the data about a particular application: data about an applicant and a
residence. The system output should be a decision on whether or not the application is in
line with the criteria.
The system has to communicate with a database system containing data about resi-
dences and applicants, and with another program that computes a priority list of applicants
for each residence.
Table 10.2 shows the first worksheet OM-1, which lists the perceived organizational prob-
lems, characterizes the organization context (which for the purpose of the current analysis
is assumed to be invariant), and provides a list of possible solutions.
Two problems are listed: (1) the fact that assessment of individual residence applica-
tions takes too much time, and (2) the fact that there is not enough time to handle urgent
Table 10.2
Worksheet OM-1: Problems, organizational context, and possible solutions.
cases, for which specialized rules and regulations apply. It is tempting to think that
there is some causal connection between these two problems, and that solving one will
thus also solve the other. In practice this is often a dangerous assumption. For example, if the first
problem is solved through automation, it might be the case that the human resources that
become available do not have the skill to carry out the other task. The solution listed on the
worksheet is typical in the sense that it does not consist of a single item (building a soft-
ware system), but combines software development with organizational measures (training
personnel, creating new organizational roles, reorganizing the business process).
Although often a project is initiated with a particular target system already in mind
(in this case a system for automatic assessment of residence applications), it is useful to
make an explicit note of the problems the system is supposed to solve, and also to look at
possible alternative solutions and other measures. As we see in this worksheet, the solution
proposed is in fact a “package” of which the software system is just one element. This is
typical of many projects.
The second row of worksheet OM-1 describes the organizational context. These ele-
ments are assumed to stay the same during the project at hand. This means that we assume
that the mission and goals of the organization are fixed as far as the project is concerned. It
might well be that the project comes to conclusions which could affect the organizational
goals, but this process lies outside our current scope. The mission and goals in this case
study reflect the fact that this organization is a recently privatized department of the local
Table 10.3
Worksheet OM-2: Variant aspects of the housing organization.
administration and is moving in the direction of a “real” business. The people working in
the organization used to be civil servants.
OM-2: Description Of Focus Area in the Organization The second worksheet de-
scribes the part of the organization on which the project focuses. The worksheet contains
six slots. The first two slots, “structure” and “process,” are usually best shown in a graphi-
cal way.
Figure 10.1 shows the current organization structure. This figure combines the “struc-
ture” slot with the “people” slot, indicating the roles of people in the organization. In many
organizations the roles of people are tightly connected to their physical position: in such
cases this kind of combination makes sense. Here we see that the organization is struc-
tured in a hierarchical fashion. The directorate forms the top level of the hierarchy. There
are four departments. The “public service” department is responsible for producing the
bi-weekly magazine, as well as answering questions of the public. The “residence assign-
ment” department carries out the actual assignment work. This assignment work can be
split up into two parts: standard cases and urgent cases. The computer department main-
tains the databases and other software. The policy department assists the directorate in the
formulation of the long-term policy of the organization.
Figure 10.2 shows the main business processes in the organization. As in the social
security domain (cf. Chapter 3) we distinguish a primary process and a secondary process.
The primary process is responsible for delivering the “product” (in this case an assignment
of a residence); the secondary process describes support activities for the primary pro-
cess. Such a division into primary and secondary is probably useful in many application
domains.

Figure 10.1
Structure and people in the current situation. (The chart shows the directorate, with a director and deputy
director, above four departments: public service, residence assignment, computer support, and policy; the policy
department includes a statistical analyst and a staff member.)
The primary process in this case study consists of four steps. The process is carried
out in bi-weekly cycles. First, a magazine is produced which is distributed to the public
and which contains the available residences in a particular cycle as well as the results of
the previous cycle.

Figure 10.2
Primary and secondary business processes in the current situation. The notation used is that of a UML activity
diagram. (The primary process is residence assignment; the secondary process is policy information.)

Secondly, incoming applications (e.g., through posted paper forms) are
entered into the database. This data-entry task performs a check on whether the registration
number of the applicant and the number of the residence are indeed valid numbers. The
third task looks at each individual application and checks whether the applicant is applying
for a residence she is entitled to (see the criteria described at the beginning of this chapter).
This assessment task is responsible for the first problem mentioned in worksheet OM-1.
Finally, the available residences are assigned to one of the correct applications for this
house.
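The four-step bi-weekly cycle can be sketched as a small pipeline. All function bodies, the sets of valid numbers, and the first-come assignment rule are invented stubs for illustration:

```python
# The four-step bi-weekly primary process as a pipeline sketch. All
# bodies are simplified stubs; data entry's validity check is reduced
# to set membership, and assignment to a first-come rule.
VALID_APPLICANTS = {"A001", "A002"}
VALID_RESIDENCES = {"R100"}

def data_entry(raw_applications):
    """Step 2: enter applications, dropping invalid numbers."""
    return [a for a in raw_applications
            if a["applicant"] in VALID_APPLICANTS
            and a["residence"] in VALID_RESIDENCES]

def assess(applications):
    """Step 3: keep only applications meeting the criteria (stubbed)."""
    return [a for a in applications if a.get("meets_criteria", True)]

def assign(applications):
    """Step 4: assign each residence to one of its correct applications."""
    by_residence = {}
    for a in applications:
        by_residence.setdefault(a["residence"], a["applicant"])
    return by_residence

entered = data_entry([
    {"applicant": "A001", "residence": "R100"},
    {"applicant": "BAD", "residence": "R100"},   # fails the number check
])
result = assign(assess(entered))
```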
CommonKADS does not prescribe a fixed graphical notation for the figures con-
structed in the course of organizational analysis. In Figure 10.2 the UML notation for
activity diagrams is used. Unless you have your own favorite, this is probably a good
choice.

Table 10.4
Worksheet OM-3: Process breakdown.
In this worksheet we describe the main tasks that appeared in the "process" slot in OM-2.
We have limited the tasks in this worksheet to those in the primary process of Fig-
ure 10.2. As we can see, two of the four tasks are knowledge-intensive, namely the assessment
and the assignment task. For each task we also indicate its significance. It is worth
noting that "significance" is an elusive concept and hard to quantify. A typical quantitative
yardstick is the workload of a task, as can be seen in the social-security case in Chapter 3.
Most of the time, however, we have to be content with a qualitative estimate, such as a
figure on a five-point scale. Here, we see that the two knowledge-intensive tasks also score
high in terms of significance. This is an indication that it is useful to consider automation
of these tasks.
Knowledge asset      Possessed by                               Used in                    Right form?                             Right place?  Right time?  Right quality?
Assessment criteria                                             3. Application assessment  No: paper form, electronic form needed  Yes           Yes          Yes
Assignment rules     Priority calculator / Residence assigner   4. Residence assignment    Yes                                     Yes           Yes          Yes
Urgency rules        Assigner                                   4. Residence assignment    Yes                                     Yes           Yes          No: often incomplete, ambiguous, inconsistent

Table 10.5
Worksheet OM-4: Knowledge assets.
The fourth worksheet gives a short description of the main knowledge assets in the part of
the organization we are focusing on. In the housing case three knowledge assets are listed
(see also OM-2). The first knowledge asset concerns knowledge about assessment criteria
for applications. The main issue with this asset is its form: we would like
to have it in electronic form in order to make it available for automation. The knowledge
concerning assignment rules has no associated problems. This is not a surprise, because it
is possessed by a program that uses this knowledge.
The knowledge concerning urgent cases is apparently the most difficult to get a grip
on. The reason for this is that it consists of rules and regulations from different sources
at different periods in time. Also, the criteria and definitions used are open to interpreta-
tion. The project we describe here has refrained from tackling a task in which this type
of knowledge features. However, in a future project it might be useful to develop this type
of knowledge, e.g., to increase its quality. Worksheet OM-4 is a typical focus point for
knowledge-management activities, in which we are interested in describing knowledge at
a coarse-grained level and defining strategies for knowledge development and distribution
in the organization (see also Chapter 4).
Table 10.6
Worksheet OM-5: Feasibility of the solution “automation of the application-assessment task in combination with
retraining staff for urgency handling”.
In the final worksheet OM-5 we indicate the feasibility of potential solutions for perceived
organizational problems. The worksheet for the housing application describes the feasibil-
ity of the solution we proposed in worksheet OM-1, namely automating the application-
assessment task plus retraining staff. When discussing business feasibility it is often
dangerous to expect large paybacks in terms of cost reduction. Even if a system saves
labor and effort, it may well be that, from a social perspective, it is impossible to fire
people. It is usually more realistic to use the knowledge system for quality improvement.
In the housing case the technical feasibility appears to be high, mainly because as-
sessment problems are well understood and the knowledge is already present in an explicit
(paper) form. For the project feasibility you have to carefully consider the availability of
the required software-development expertise. In particular, the availability of the expert is
often an important bottleneck to consider. In the housing case this appears (luckily) not to
be a problem.
All in all, the proposed solution is judged to be feasible provided it is backed by people
in the organization. Therefore, the first action proposed is to inform the staff involved of
the plans and to elicit their explicit support. Such actions depend of course on local
traditions, and in this case might be typical of the "consensus" culture in the Netherlands.
Worksheet TM-1 in Table 10.7 contains a description of application assessment. The de-
scription is at a more detailed level than in the organization model. In the task model we are
“zooming in” on a task. We describe both the internals of a task (I/O, control information,
data manipulated), as well as external information such as the goal of the task, performance
requirements, quality criteria, and constraints. The worksheet lists some typical examples
of task information for the assessment task.
Often, we can link in relevant analysis descriptions made for other applications. For
example, Figure 10.3 and Figure 10.4 show respectively a data-flow diagram and a state-
transition diagram that have been taken from prior system-development work in this do-
main. Also, existing database schemata are often useful to link in here. The database
schema for the database of applicants and residences is shown in the section on knowledge
modelling (see Figure 10.6).
In the task model we also take a closer look at the knowledge assets involved in the task.
Worksheet TM-2 is used for this purpose. In this worksheet we characterize the nature of a
knowledge asset in terms of a number of attributes related to nature, form, and availability
of the knowledge.
Table 10.8 is an instance of this worksheet for the knowledge asset “assessment cri-
teria.” We see that the nature of this type of knowledge is formal and/or rigorous, highly
specialized, and quickly changing. The form of the knowledge is paper. This is in itself not
a problem (the paper description is in fact quite precise and unambiguous), but if we look
at availability we see that there is a problem connected with the form. We would like to
have the knowledge in electronic form so that it can be made available to a computer pro-
gram. Finding bottlenecks is a central issue in knowledge analysis at this coarse-grained
level.
Figure 10.3
Data-flow diagram for the main processes, data flows, and data stores of the application-assessment task, as well
as directly related tasks. (Processing functions include data entry, application checking, and assignment; data
stores include the application data and the assignment database; the applicant and the rental agency appear as
external actors.)
TIMING AND CONTROL        Carried out for every application delivered by the data-entry task. Each time a
                          new application is received from data entry, this task can be carried out. The
                          residence-assignment task for a certain residence can only be carried out if the
                          assessment task has validated all applications for this residence. Applications
                          that fail the validation test can be thrown away without notification of the
                          applicant. It would be good to keep a log of all task activations plus a summary
                          of the results.
AGENTS                    In the new situation: knowledge system
KNOWLEDGE AND COMPETENCE  Assessment criteria
RESOURCES                 –
QUALITY AND PERFORMANCE   The task is not time-critical, but it is expected that assessment will be quick (at
                          most a few seconds). System availability should be at least 95%. In case the
                          system is not available, the applications that need to be validated should be
                          placed in a queue.

Table 10.7
Worksheet TM-1: First analysis of the application-assessment task.
Figure 10.4
State diagram for the main flow of control during a single execution of the application-assessment task and the
preceding task "data entry".
In Table 10.9 we see an instance of worksheet AM-1 for the “assigner” agent. This
is the human role in the organization most affected by the proposed solution. Her work is
likely to change dramatically. Information added to this worksheet relates mainly to the
skills and competencies required for the agent. In this case we see that social skills are
required, in particular for handling the urgent cases. Given the proposed organizational
changes, the need for these skills will be larger in the future.
We complete the context analysis by filling in worksheet OTA-1, which summarizes the
proposed organization changes, improvements, and actions. The worksheet for the housing
case is shown in Table 10.10.
Table 10.8
Worksheet TM-2: Knowledge asset characterization plus identification of bottlenecks.
A number of information sources were scanned. In this case, the written information turned
out to be extremely helpful. The reason for this was that one of the goals of the new business
process for residence assignment was to make it as transparent as possible to the public.
Table 10.9
Worksheet AM-1: The “assigner” agent.
ATTITUDES AND COMMITMENTS  Management thinks the changes will be received positively by the
                           agents whose work changes. This has to be verified through interviews
                           and/or other means.
PROPOSED ACTIONS           1. Propose a preliminary plan for full development.
                           2. Conduct interviews with agents affected by the new situation and
                              define accompanying measures in case of negative attitudes.
                              Reconsider the project if there is a negative attitude among these
                              agents.
                           3. Select staff for retraining as "urgency handler".
                           4. Plan the training program.
Table 10.10
Worksheet OTA-1: Summary of organizational changes, improvements, and actions.
People should be able to see which criteria were applied. The bi-weekly magazine with the
available houses also contained an explicit description of the assessment criteria and of the
full procedure. Additional sources of information were a few transcripts of interviews with
assigners and a document describing information about urgent cases.
From the task point of view we have to look at templates for the assessment task. The
template described in this book (see Chapter 6) is of course a candidate, but there are also
others. For example, the CommonKADS library book (Breuker and Van de Velde 1994)
contains a chapter on assessment models.
From the domain point of view we can get information from the existing databases of
residences and applicants. The data models of those databases are candidates for (partial)
reuse. This also simplifies the realization of the connection between the assessment system
and the database(s).
For the housing application we chose the task template for assessment described in
Chapter 6. There are two reasons for this choice:
1. The inference structure appears to fit well with the application. A good technique for
establishing such a fit is to construct an “annotated inference structure.” An example
is shown in Figure 10.5. The dynamic roles have been annotated with application-
specific examples. We see that the role “norm” can be played by a “rent-fits-income”
criterion. The knowledge needed for the evaluation of this norm can be found in the
decision table (cf. Table 10.1).
If it is easy to find examples that cover the domain well, the chances are high that
the template is useful. One can see it as a hypothesis that needs to be verified in the
remainder of the knowledge-modelling process, e.g., by filling the static knowledge
roles and simulating the knowledge model.
It can be the case that the inference structure needs some adjustment. For example,
in the housing application we need an additional input for the evaluate inference,
because it turns out that there are sometimes special rules connected to a particular
residence. This addition is shown as a shaded area in Figure 10.5. In most domains
some tuning of the template is necessary.
2. A second reason for choosing the task template of Chapter 6 is that it already
contains a typical domain schema. This schema gives us a head start in domain
modelling.
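The annotated inference structure can be sketched in code. Each inference body below is an invented placeholder, and iterating over the norms stands in for the select inference; the extra residence-rules input to evaluate mirrors the shaded addition in Figure 10.5:

```python
# Sketch of the assessment inference structure of Figure 10.5. Each
# inference is an invented placeholder; the extra 'residence_rules'
# input to evaluate is the application-specific addition (the shaded
# area in the figure). Iterating over norms stands in for 'select'.
def abstract(case):
    """Add abstractions such as age category to the case."""
    case = dict(case)
    case["age_category"] = "young" if case["age"] < 35 else "older"
    return case

def specify(case):
    """Specify the case-specific norms, e.g. 'rent fits income'."""
    return ["rent-fits-income", "correct-household-size"]

def evaluate(case, norm, residence_rules):
    """Evaluate one norm; residence_rules is the added input."""
    if norm in residence_rules:          # a special rule overrides
        return residence_rules[norm]
    return True                          # placeholder evaluation

def match(norm_values):
    """Combine the norm values into a decision."""
    return "eligible" if all(norm_values.values()) else "not-eligible"

def assess(case, residence_rules):
    case = abstract(case)
    norms = specify(case)
    values = {n: evaluate(case, n, residence_rules) for n in norms}
    return match(values)

decision = assess({"age": 28}, residence_rules={})
```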
In general, it can be said that there are still considerable differences between the available
reusable components. These concern scope, level of detail, and formality. Although
Figure 10.5
Annotated inference structure for the residence-assessment problem. The shaded area is an addition needed for
this application. The rest is taken directly from the task template for assessment. (Role annotations: the norms
role carries "criteria such as 'rent fits income,' 'correct household size'"; the abstracted case carries "abstractions
such as age category are added to the case".)
As suggested by a guideline in Chapter 7, this activity should be carried out in parallel with the previous one to ensure that the “task” view does not bias the “domain” view too much,
[Figure 10.6 content, reconstructed:
CONCEPT residence: number: natural; category: {starter-residence, followup-residence}; build-type: {house, apartment}; street-address: string; city: string; num-rooms: natural; rent: number; min-num-inhabitants: natural; max-num-inhabitants: natural; subsidy-type: subsidy-type-value; surface-in-square-meters: natural; floor: natural; lift-available: boolean.
CONCEPT applicant: registration-number: string; applicant-type: {starter, existing-resident}; name: string; street-address: string; city: string; birth-date: date; age: natural; age-category: age-category-value; gross-yearly-income: natural; household-size: natural; household-type: household-type-value.
The two concepts are linked by the residence-application relation, which carries the attribute application-date: string.]
Figure 10.6
Representation of the two central domain concepts in residence assessment: “residence” and “applicant”.
and vice versa. For the initial domain conceptualization the data models of the existing
databases turned out to be the major source of information.
Residences and applicants In the housing domain, we find two central object types
which can be modelled with standard data-modelling techniques, namely residence and
applicant. Both can be specified through a CONCEPT with a collection of attributes.
Figure 10.6 shows these two concepts graphically. The two concepts are related through the
residence application relation. An instance of this relation indicates that a certain person
has applied for a certain residence. Applicants can apply for at most two residences. There
is no limit on the number of requests for one residence. The application is represented as a
reified relation, meaning that the relation can also be viewed as a (complex) concept with
its own attributes. The attribute application-date is a prototypical example of a relation
attribute, as its value depends on both objects participating in the relation.
The value-types for attributes can be chosen from the predefined list (see the appendix),
but one can also define a customized value-type. The definition below is an example of a
value-type specification for the attribute household-type of the concept applicant (see
Figure 10.6).
VALUE-TYPE household-type-value;
TYPE: NOMINAL;
VALUE-LIST: {single-person, multiple-persons};
END VALUE-TYPE household-type-value;
The TYPE slot indicates whether an ordering is assumed on the values in the list.
The type nominal says that no ordering is assumed. For another value-type, namely age-
category, an ordering exists and thus the type will get the value ordinal.
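The nominal/ordinal distinction can be illustrated with a small Python sketch. This is not the book's notation; the enum names, and the two age categories other than up-to-22, are illustrative assumptions:

```python
from enum import Enum

class HouseholdType(Enum):
    """Nominal value type: the values have no ordering (CML type NOMINAL)."""
    SINGLE_PERSON = "single-person"
    MULTIPLE_PERSONS = "multiple-persons"

class AgeCategory(Enum):
    """Ordinal value type: declaration order encodes the ordering (CML type ORDINAL)."""
    UP_TO_22 = "up-to-22"          # only this category name appears in the text
    MIDDLE = "middle"              # assumed name
    SENIOR = "senior"              # assumed name

    def __lt__(self, other):
        # An ordinal type supports comparison; a nominal type deliberately does not.
        members = list(type(self))
        return members.index(self) < members.index(other)
```

The point of the sketch is that only the ordinal type defines a comparison operator, mirroring the CML distinction between NOMINAL and ORDINAL.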
Housing criteria In addition to the information about residences and applicants, the no-
tion of criterion stands out as an important concept in this domain. Assessment is all about
criteria. We saw in the domain description in Section 10.2 that for this system we need to
distinguish four types of criteria: correct residence category, correct household type, rent fits income, and (optionally) additional residence-specific requirements.
These four criteria can be true or false for a particular case. We represented this by defining four subtypes of a concept residence-criterion (see Figure 10.7). The criteria all have an attribute truth-value, which can be used to indicate whether a criterion is true or false.
For the moment we limit the domain-schema description to the residence, applicant, and criteria definitions. In the next activity we add further domain-knowledge types, in particular the rule types needed to model the domain knowledge for assessment.
In Chapter 6 we learned that this task is an instance of the task type assessment. Because the task template provided such a good covering of the knowledge components for this application (see the activity concerning the choice of a task template), the construction process can take the form of the “middle-out” approach described in Chapter 7. We can
assume that the inferences in Figure 10.5 are at the right level of detail and start modelling
from there.
The full knowledge-model specification in textual format can be found in the appendix.
[Figure 10.7: the concept residence-criterion, with attribute truth-value: BOOLEAN, and its four criterion subtypes.]
Figure 10.7
Subtype hierarchy representing the four types of criteria.
Task knowledge The task and task-method specifications can almost directly be copied from the default method for assessment described in Chapter 6. The main difference is that we decided here to structure the method as a composite task with two subtasks. This is mainly a stylistic decision, and is typical of the small variations a knowledge engineer introduces in the default models of Chapter 6 for a particular application. The resulting
task-decomposition diagram is shown in Figure 10.8. The figure shows in a graphical form
all tasks plus their methods and the inferences they are ultimately linked to.
The top-level task is named ASSESS-CASE. The task definition in Figure 10.9 describes the I/O of this task. It is common to give tasks a domain-independent name. However, in the textual specification we can (optionally) add a domain-specific name (ASSESS-RESIDENCE-APPLICATION in Figure 10.9). Note that the input and output are also described in a domain-independent vocabulary. We use a term such as case-description instead of residence-application.
The task method for the top-level assessment task structures the reasoning process into
two steps:
[Figure 10.8: the task assess-case is realized by the task method assess-through-abstract-and-match, which decomposes into the subtasks abstract-case and match-case; their methods refer to the inferences abstract and match.]
Figure 10.8
Tasks and task methods in the residence-assessment domain. The task methods at the lowest level of decomposition refer to inferences (the ovals).
1. Abstracting the input case data. The abstractions in our housing application are of a very simple nature: they concern
only the age-category and the household type. In general, abstractions are an integral
part of many knowledge-intensive applications. The power of abstraction seems to
be an integral element of (human) expertise. It is a technique that helps us to cope
with the intrinsic complexity of reality.
2. Matching the (abstracted) case against the decision knowledge. Once the case is
in the right (abstracted) form, we can see how it matches with the assessment criteria.
The result of this match is a decision.
The task control within the ASSESS-CASE task is a simple sequence of the two sub-
tasks. The task method introduces one additional role, namely abstracted-case. This is an
example of an intermediate reasoning result, introduced by the decomposition.
Domain knowledge We can now look at the domain schema provided with the assess-
ment template (see Figure 6.7). We can see the following relationships between this
schema and the housing domain:
TASK assess-case;
DOMAIN-NAME: assess-residence-application;
GOAL: "
Assess whether an application for a residence by a certain
applicant satisfies the criteria.";
ROLES:
INPUT:
case-description: "Data about the applicant and the residence";
case-specific-requirements: "Residence-specific criteria";
OUTPUT:
decision: "eligible or not-eligible for a residence";
END TASK assess-case;
TASK-METHOD assess-through-abstract-and-match;
REALIZES:
assess-case;
DECOMPOSITION:
TASKS: abstract-case, match-case;
ROLES:
INTERMEDIATE:
abstracted-case: "Original case plus abstractions";
CONTROL-STRUCTURE:
abstract-case(case-description -> abstracted-case);
match-case(abstracted-case + case-specific-requirements
-> decision);
END TASK-METHOD assess-through-abstract-and-match;
Figure 10.9
Specification of the top-level task “assess-case.” For the housing application we structured the task knowledge
into one overall task and two subtasks.
A case corresponds in this domain to data about a residence plus data about an applicant; the case as a whole is in fact one instance of the residence-application relation.
A norm corresponds in this domain to one of the four criteria shown in Figure 10.7.
For the three rule types (i.e. abstraction rules, criteria requirements, and decision
rules), we discuss in the following paragraphs whether and in which form these exist
in the housing domain.
Abstractions The abstractions that are required for this particular assessment model
are simple. Basically, we need to abstract the age of applicants into a value indicating one
of three possible age-category values, and we also need to abstract the number of family
members into a value for household-type (single or not). Both abstracted values are used
later on in the evaluation of the norm rent-fits-income (see also the rent-income table in Section 10.2).
The abstraction knowledge can be represented using a RULE-TYPE as shown in Figure 10.10. This rule type is in fact a domain-specific version of the rule type defined in
RULE-TYPE applicant-abstraction;
ANTECEDENT: applicant;
CARDINALITY: 1+;
CONSEQUENT: applicant;
CARDINALITY: 1;
CONNECTION-SYMBOL:
has-abstraction;
END RULE-TYPE applicant-abstraction;
Figure 10.10
The rule type for applicant abstractions. The arrows go from antecedent to consequent. Note that the intended
meaning is that the antecedent and the consequent consist of expressions about feature values of the concepts
indicated (in this case “applicant”).
Figure 6.7. As explained earlier, rule types are a sort of relation in which the arguments
are not object instances but expressions about features of an object. An example of an
abstraction rule (i.e., an instance of the rule type) would look like this:
applicant.household-size > 1
HAS-ABSTRACTION
applicant.household-type = multiple-persons;
The antecedent and the consequent are separated through the connection symbol has-
abstraction. The idea is that this connection symbol is chosen in such a way that it pro-
vides a meaningful name for the dependency between the antecedent and the consequent.
One can see that both antecedent and consequent are expressions about attribute values of
applicants. This is typical of rules, and distinguishes them from relation instances.
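The behaviour of such rule instances can be sketched in Python. The encoding below (rules as antecedent-predicate/consequent pairs over a case dictionary) is an illustrative rendering of the applicant-abstraction rule type, not the book's notation:

```python
# Each abstraction rule pairs an antecedent (a predicate over the case data)
# with a consequent (an attribute value to add to the case). The two rules
# correspond to the household-type abstraction instances quoted in the text.
ABSTRACTION_RULES = [
    (lambda c: c["household-size"] == 1,
     ("household-type", "single-person")),
    (lambda c: c["household-size"] > 1,
     ("household-type", "multiple-persons")),
]

def abstract(case):
    """Return the case extended with every abstraction whose antecedent holds."""
    abstracted = dict(case)
    for antecedent, (attr, value) in ABSTRACTION_RULES:
        if antecedent(case):
            abstracted[attr] = value
    return abstracted
```

Note that the rules inspect and conclude on attribute values of the same object, just as the text describes for the has-abstraction connection.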
Note that in a rule type we define the expressions that can be part of a rule somewhat
implicitly. The statement
ANTECEDENT: applicant;
CARDINALITY: 1+;
in the abstraction rule type means that the antecedent of a rule instance of this type consists
of at least one expression (but possibly more) about a feature of the concept applicant. We
use the term “feature” to refer to both concept attributes and relations in which the concept
is involved. For both, the “dot” notation (concept.feature) is used.
For the rule expressions we assume that a standard set of expression operators is available, depending on the value-type of the operand. For example, if the expression concerns a numeric attribute, the standard operator set is =, ≠, >, <.
Criteria requirements The largest part of the domain knowledge is concerned with
logical rules that specify when a certain criterion is true or false. These rules specify the
requirements that need to be met for a criterion. The rule type residence-requirement
(see Figure 10.11) defines this requirement knowledge in a schematic way. Again, this rule
type is a domain-specific version of the rule type defined in Figure 6.7. An instance of this
type is a rule in which the antecedent consists of expressions about a residence application
(could concern both the residence and the applicant). The consequent is an expression
about the truth value of a criterion.
All cells in the first three columns of the rent-income table in Section 10.2 correspond
to an instance of this rule type. For example, the first cell in the first column corresponds
to:
residence-application.applicant.household-type = single-person
residence-application.applicant.age-category = up-to-22
residence-application.applicant.income < 28000
residence-application.residence.rent < 545
INDICATES
rent-fits-income.truth-value = true;
Requirements for other criteria can be expressed in similar ways. The example within
the definition in Figure 10.7 shows one rule for the criterion correct-household-size.
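As an illustration, the requirement rule quoted above can be rendered as a small Python predicate over a flattened case dictionary. The key names and the flattening are assumptions made for this sketch; the thresholds come from the rule instance in the text:

```python
def rent_fits_income(case):
    """Requirement rule for the norm rent-fits-income: True when every
    antecedent expression of the quoted rule instance holds for the case."""
    return (case["household-type"] == "single-person"
            and case["age-category"] == "up-to-22"
            and case["income"] < 28000
            and case["rent"] < 545)
```

Each cell of the rent-income table would yield one such conjunction, which is why the table translates so directly into instances of the residence-requirement rule type.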
Domain schema overview Figure 10.11 shows the domain schema for the housing
application. The domain schema resembles in many aspects the default assessment domain
schema shown in Figure 6.7. The main difference is that the domain types in this chapter
have domain-specific names.
[Figure 10.11: the concepts residence and applicant are connected through the residence-application relation. Three rule types are defined: residence-abstraction (connection symbol has-abstraction, over residence applications), residence-requirement (connection symbol indicates, cardinality 1+, concluding on a residence-criterion), and residence-decision-rule (connection symbol implies, concluding on a residence-decision).]
Figure 10.11
Domain schema for the housing application. The attributes and subtypes defined in previous figures have been
left out.
KNOWLEDGE-BASE system-description;
USES:
applicant-abstraction FROM assessment-schema;
EXPRESSIONS:
applicant.household-size = 1
HAS-ABSTRACTION
applicant.household-type = single-person;
applicant.household-size > 1
HAS-ABSTRACTION
applicant.household-type = multiple-persons;
END KNOWLEDGE-BASE system-description;
KNOWLEDGE-BASE measurement-system;
USES:
residence-requirement FROM assessment-schema,
residence-decision-rule FROM assessment-schema;
EXPRESSIONS:
/* sample decision rule for the norm ‘‘rent fits income’’ */
rent-fits-income.truth-value = false
IMPLIES
decision.value = not-eligible;
END KNOWLEDGE-BASE measurement-system;
Figure 10.12
Knowledge bases for the residence-assessment application. The first knowledge base “system-description” con-
tains the abstraction rules. The second knowledge base “measurement-system” contains both the static residence
requirements, as well as the final decision rules. Only some sample rules are listed.
1. The knowledge base system-description contains instances of the rule type applicant-abstraction defined in the domain schemata.
2. The knowledge base measurement-system can contain instances of two different
domain-knowledge types: norm requirements and decision rules.
The EXPRESSIONS slot of the knowledge base contains knowledge instances. These
instances should belong to the types listed in the USES slot. In Figure 10.12 only a few
sample rule instances are listed. During knowledge-model specification we typically do not
yet try to list all the instances in the knowledge bases, but are satisfied with a few typical
examples. In the knowledge refinement phase, the knowledge bases can be completed (see
further).
Having defined the tasks and their methods, as well as full domain schemas plus partial
knowledge bases, we can now connect these two by completing the specification of the
inferences of Figure 10.5.
Inference knowledge We identified five inferences that are needed to realize the assess-
ment tasks:
1. abstract This inference takes some case data (in the housing application, data about an applicant and a residence) as input and produces a new abstracted data item as a result. In the housing case only a few simple abstractions were found.
2. specify This inference generates a list of criteria that could be evaluated for a cer-
tain case. In the housing domain, there are four criteria: correct residence cate-
gory, correct household type, rent consistent with income, and (optionally) addi-
tional residence-specific requirements.
3. select This inference selects one norm from the list. The selection can be done ran-
domly, or be based on heuristics like “first the most likely one to fail.”
4. evaluate This inference evaluates a particular norm for the case at hand, and returns
a truth value, indicating whether the norm holds for this case. An example output of
this inference would be that for a particular case A the norm rent-fits-income is true.
5. match The match inference takes as input all results of criteria evaluation, and suc-
ceeds if a decision can be reached. In the housing domain, the decision not-eligible
can be reached as soon as one of the four criteria turns out to be false. The decision
eligible can only be reached after all criteria have been evaluated and are true.
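Taken together, the specify/select/evaluate/match inferences amount to a fail-fast evaluation loop. A minimal Python sketch, assuming abstraction has already been applied to the case and representing norms as named boolean functions (an illustrative encoding, not the CommonKADS notation):

```python
def assess_case(case, norms, case_specific_requirements):
    """Fail-fast assessment: specify, select, evaluate, match.

    `norms` and `case_specific_requirements` both map a criterion name to a
    boolean evaluation function over the (already abstracted) case.
    """
    pending = {**norms, **case_specific_requirements}  # specify: norms to evaluate
    while pending:
        name, evaluate = pending.popitem()             # select one norm
        if not evaluate(case):                         # evaluate its truth value
            return "not-eligible"                      # match: one failure decides
    return "eligible"                                  # all norms evaluated and true
```

The early return mirrors the match behaviour described above: not-eligible as soon as one criterion is false, eligible only after all criteria have been evaluated and found true.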
The inferences provide the link between the tasks and their methods on the one hand
and the domain schema on the other hand. The main distinction with a task is that an
inference does not have a “method” associated with it. The inference is assumed to be
completely specified through its input, output, and static knowledge (the dynamic and
static role definitions). No internal control is specified for the inference.
For each role used in the inference, a mapping is defined from the role to the domain
objects that can play this role. In this way the functional names (the roles) provide an
indirect link between the “functions” themselves (tasks and inferences) and the “data” (the
domain schema).
Figure 10.13 shows the textual specification of two of the inferences, namely abstract
and evaluate. In Figure 10.14 the corresponding knowledge roles are specified. The textual
specification of knowledge roles is richer than the graphical one (which in its basic form
only shows the dynamic roles). The main additional piece of information concerns the
domain-mappings for each knowledge role.
INFERENCE abstract;
ROLES:
INPUT:
case-description;
OUTPUT:
abstracted-case;
STATIC:
abstraction-knowledge;
SPECIFICATION: "
Input is a set of case data. Output is the same set of data
extended with an abstracted feature
that can be derived from the data using the corpus of
abstraction knowledge.";
END INFERENCE abstract;
INFERENCE evaluate;
ROLES:
INPUT:
norm,
abstracted-case,
case-specific-requirements;
OUTPUT:
norm-value;
STATIC:
requirements;
SPECIFICATION: "
Establish the truth value of the input norm for the given
case description. The underlying domain knowledge is formed by both
the requirements in the knowledge base as well as additional
case-specific requirements, that are part of the input.";
END INFERENCE evaluate;
Figure 10.13
Textual specification of two of the inferences.
Take, for example, the specification of abstract. In this inference three knowledge
roles are used. The knowledge-role specifications show the domain mappings of infer-
ence roles to domain-knowledge constructs. The input role case-description is mapped
onto the domain-knowledge relation residence-application. This means that all objects of
residence-application can play the role of case description. Remember that residence-
application is in fact a relation between an applicant and a residence. One can see a
residence-application object as a conglomerate of attribute values about a certain resi-
dence and a certain applicant. This is indeed precisely what constitutes a case descrip-
tion in this domain. The static role abstraction-knowledge is mapped onto the domain-
knowledge type applicant-abstraction. For static roles, it is common to indicate also
the knowledge base in which the knowledge is stored, in this case the knowledge base
KNOWLEDGE-ROLE case-description;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-application;
END KNOWLEDGE-ROLE case-description;
KNOWLEDGE-ROLE case-specific-requirements;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
SET-OF residence-requirement;
END KNOWLEDGE-ROLE case-specific-requirements;
KNOWLEDGE-ROLE abstracted-case;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-application;
END KNOWLEDGE-ROLE abstracted-case;
KNOWLEDGE-ROLE norm;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-criterion;
END KNOWLEDGE-ROLE norm;
KNOWLEDGE-ROLE norm-value;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-criterion;
END KNOWLEDGE-ROLE norm-value;
KNOWLEDGE-ROLE abstraction-knowledge;
TYPE: STATIC;
DOMAIN-MAPPING:
applicant-abstraction FROM system-description;
END KNOWLEDGE-ROLE abstraction-knowledge;
KNOWLEDGE-ROLE requirements;
TYPE: STATIC;
DOMAIN-MAPPING:
residence-requirement FROM measurement-system;
END KNOWLEDGE-ROLE requirements;
Figure 10.14
Textual specification of knowledge roles used in the inferences “abstract” and “evaluate”.
measurement-system.
The second inference, evaluate, is defined in a similar manner. Note the use of SET-OF to indicate that the role consists of a set of domain objects. SET-OF is not necessary for static roles, where by default we assume a set. Note also that the roles norm and norm-value both map onto the same domain type residence-criterion. The difference between the two roles is that in norm-value the truth-value attribute has been filled in; this difference is something we cannot express in our notation, but the difference in meaning should be intuitively clear.
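The indirection that the domain mappings provide can be made concrete with a small lookup table in Python. The entries mirror Figure 10.14 (with the abstraction rule type taken as applicant-abstraction); the encoding itself is an illustrative assumption:

```python
# (domain type, knowledge base) per role; the knowledge base is None for
# dynamic roles and named only for static roles, as in the CML listings.
DOMAIN_MAPPING = {
    "case-description":           ("residence-application", None),
    "abstracted-case":            ("residence-application", None),
    "case-specific-requirements": ("residence-requirement", None),
    "norm":                       ("residence-criterion", None),
    "norm-value":                 ("residence-criterion", None),
    "abstraction-knowledge":      ("applicant-abstraction", "system-description"),
    "requirements":               ("residence-requirement", "measurement-system"),
}

def resolve(role):
    """Resolve a functional role name to its domain construct (and knowledge
    base, for static roles). Inferences never name domain types directly."""
    return DOMAIN_MAPPING[role]
```

Because inferences refer only to role names, replacing the table rebinds the whole reasoning part to a different domain, which is exactly the reuse argument behind the role/domain separation.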
Filling the knowledge bases for the housing case was not difficult. All information about abstractions, requirements, and decision rules was included in the bi-weekly housing publication. A full listing
of the domain knowledge for the knowledge bases can be found in the appendix. These
rules will typically need regular updating. For example, the rent-income table is likely to
be changed every year or so.
In the housing case study, knowledge-model validation was done by building a prototype system containing only the reasoning functions. This prototype is shown as an example implementation in Chapter 12 (see the system traces in Figures 12.5–12.8). Based on the traces provided by such a running system one can detect faults, inconsistencies, and possible improvements. In this activity the scenarios drawn up earlier are useful as sample material for the prototype.
[Figure 10.15: states "application assessment" and "waiting for case data"; transitions: application received / order assessment; data needed / ask; data received / reply; assessment finished / report decision.]
Figure 10.15
State diagram representing the communication plan for the assessment task.
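The communication plan of Figure 10.15 can be sketched as a plain transition table. The event and action names are transcribed from the diagram; the encoding, and the name of the assumed initial state, are choices of this sketch:

```python
# Transition table: (state, event) -> (action, next state).
# "idle" is an assumed start/end state for the assessment agent.
TRANSITIONS = {
    ("idle", "application received"):           ("order assessment", "assessment"),
    ("assessment", "data needed"):              ("ask", "waiting-for-case-data"),
    ("waiting-for-case-data", "data received"): ("reply", "assessment"),
    ("assessment", "assessment finished"):      ("report decision", "idle"),
}

def step(state, event):
    """Fire the transition for (state, event); returns (action, next state)."""
    return TRANSITIONS[(state, event)]
```

Walking the table reproduces the diagram: an incoming application triggers the assessment, which may loop through a data-request cycle before reporting the decision.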
Table 10.11
Worksheet CM-1: Transaction “order application assessment”.
These transactions can be described in more detail with the help of worksheet CM-1. Table 10.11 and Table 10.12 show the worksheets for the first two transactions.
Table 10.12
Worksheet CM-1: Transaction “obtain application data”.
10.9 Summary
In this chapter we have seen how a simple knowledge-intensive application can be analyzed
and modelled using the CommonKADS modelling framework. One important point to note
is that the knowledge model for assessment described in this chapter was not developed
from scratch, but is in fact a small variation on an existing model that we were able to
reuse.
11
Designing Knowledge Systems
11.1 Introduction
In this chapter we look at the problem of turning requirements specified in the analysis
models into a software system. The major input for the design process in CommonKADS
is the knowledge model, which can be viewed as a specification of the problem-solving
requirements. Other inputs are the external interaction requirements (defined in the com-
munication model), and also a set of “nonfunctional” requirements (defined in the organization model) typically related to budget, software, and hardware constraints. Based
on these requirements, the CommonKADS design model describes the structure of the
software system that needs to be constructed in terms of the subsystems, software mod-
ules, computational mechanisms, and representational constructs required to implement
the knowledge and communication models.
[Figure 11.1: on the left, the application-domain side, the knowledge sources (domain experts, textbooks, protocols, cases, reasoning strategies, problems and opportunities) feed the analysis models: the organization, task, agent, knowledge, and communication models. On the right, the software-system side, sit the design concerns: software architecture, algorithm design, data-structure design, communication design, hardware platform, required response time, and implementation language.]
Figure 11.1
The design model, contrary to the other five CommonKADS models, is part of the “software world”.
In system design, a radically different viewpoint and vocabulary are used when com-
pared to the other models. System design is concerned with software and its internal or-
ganization. It is as if we turn our head from the application domain, and start looking at
the other side: the resulting system. The other models, in particular the knowledge and
communication models, can be seen as setting the requirements for this design process.
This change of viewpoint is shown somewhat intuitively in Figure 11.1.
Design of knowledge-intensive systems is essentially not much different from design
of any complex information system. We assume that you have background knowledge of
design methods in general software engineering. A good overview of the software design
process can be found in the textbook by Sommerville (1995). Here, we mainly focus on design issues that are specific to knowledge-intensive systems.
Central to the design process is the software architecture. A software architecture de-
scribes the structure of the software in terms of subsystems and modules, as well as the
control regimen(s) through which these subsystems interact. In this chapter we present
a reference architecture that can be used for CommonKADS-based knowledge-intensive systems. A reference architecture is a skeletal architecture that can be instantiated for a class of systems; because it predefines a number of architectural design decisions, it is a powerful way of supporting the design process.
The reference architecture makes use of an important modern design principle, namely
the principle of structure-preserving design. This principle dictates that both the content
and the structure of the information contained in the analysis models (in particular the
knowledge model and the communication model) are preserved during design. As we shall
see, this principle facilitates transparency and maintainability of the design, and therefore
ensures a high design quality.
As in earlier chapters, we document the design model through a number of worksheets that act as a checklist for the design decisions that need to be taken. This chapter starts off with a more detailed discussion of the principle of structure-preserving design, because it is considered central to design in CommonKADS. We then give an overview of the design
process in the form of four typical design steps that one needs to take. Subsequently, these
four steps are described in greater detail. Each step has an associated worksheet. We finally
look at two special cases of design, namely design of prototypes and design of distributed
systems.
11.2 Structure-Preserving Design
Structure preservation means that we should be able to trace in the final system both the domain-knowledge structures specified during analysis and their relations to knowledge roles. In other words, design should be a process of adding implementation detail to the analysis models.
Thus, the basic principle behind this approach is that distinctions made in the analysis
models are maintained in the design and the implemented artifact, while design decisions
that add information to the knowledge and communication models are explicitly docu-
mented. Design decisions specify computational aspects that are left open during analy-
sis, such as the representational formats, computational methods used to compute infer-
ences, dynamic data storage, and the communication media. The advantage of a structure-
preserving design is that the knowledge and communication models act as a high-level
documentation of the implementation and thus provide pointers to elements of the code
that must be changed if the model specifications change.
Preservation of information is the key notion. Structure-preserving design ensures that
the design process meets quality criteria. These quality criteria are:
Reusability of code Structure-preserving design prepares the route for reusability of
code fragments of a KBS, because the purpose and role of code fragments are made
explicit. Reusable code fragments can be of various types and grain size, ranging
from implementations of inferences to implementations of an aggregation of infer-
ences plus control knowledge. The layered structure of CommonKADS knowledge
models facilitates this type of reusability.
Maintainability and adaptability The preservation of the structure of the analysis
model makes it possible to trace an omission or inconsistency in the implemented
artifact back to a particular part of the model. This considerably simplifies main-
tenance of the final system. It also facilitates future extensions of functionality. Experience with systems designed in a structure-preserving fashion indicates that they are indeed much easier to maintain than conventional systems.
Explanation The need to explain the rationale behind the reasoning process is a typical
feature of knowledge-intensive systems. A structure-preserving approach facilitates
the development of explanation facilities that explain the reasoning process in the
vocabulary of the knowledge model. For example, for some piece of domain knowl-
edge it should be possible to ask:
in which elementary problem-solving steps it is used and which role it plays in
this inference;
when and why it is used to solve a particular problem (task and inference
knowledge).
As the knowledge model is phrased in a vocabulary understandable to a human ob-
server, a structure-preserving design can provide the building blocks for “sensible”
explanations.
One can build knowledge-elicitation tools as part of the system which interact with the user in the vocabulary of the model.
One can build debugging and refinement tools which spot errors and/or gaps in
particular parts of a domain-knowledge base by examining its intended usage
during problem-solving.
It is possible to focus the use of machine-learning techniques to generate a
particular type of knowledge, e.g., abstraction and specification knowledge.
Step 1: Design the system architecture In the first step we specify the general archi-
tecture of the system. Typically, this step is largely predefined by the reference
architecture provided by CommonKADS (see the next section), and can therefore be
carried out quickly.
Step 2: Identify the target implementation platform In this step we choose the hard-
ware and software that should be used for system implementation. This choice is
made early on in the design process, because choices in this area can seriously af-
fect the design decisions in steps 3 and 4. Often, the choice is largely or completely
predefined by the customer, so there is in reality not much to choose from.
Step 3: Specify the architectural components In this step the subsystems identified in
the architecture are designed in detail. Their interfaces are specified and detailed
design choices with respect to representation and control are made. CommonKADS
provides a checklist for the design decisions that need to be made here.
[Figure 11.2: the four steps in sequence: design architecture, specify hw/sw platform, detailed architecture specification, detailed application design.]
Figure 11.2
The four steps in system design. The lower part of the figure shows the support knowledge provided by CommonKADS to help in constructing the design model.
Step 4: Specify the application within the architecture In the final step we take the
ingredients from the analysis models (e.g., tasks, inferences, knowledge bases, trans-
actions) and map those onto the architecture. As we will see, the strength of the
CommonKADS reference architecture is that it already predefines to a large extent
how this mapping should be performed.
The design process is graphically summarized in Figure 11.2. The next four sections
describe the four steps in more detail. Each section defines a worksheet that acts as a
documentation of the design step. Together, the filled-in worksheets constitute the design
model of an application.
At this point it might be useful to make a note about the ordering of steps in the design
process. As we all know, design is a creative process. When humans perform a design
activity, they hardly ever do this in a purely rational top-down fashion. Designers design
in an ad hoc fashion, mixing bottom-up with top-down design at will. This is all normal
and should not be regarded as “bad.” However, in documenting the design it is wise to
Designing Knowledge Systems 289
write it down as if the design had been done in a rational way. This makes the design much
more understandable to outsiders. The reader is referred to the paper entitled “A Rational
Design Process: How and Why to Fake It” by Parnas and Clements (1986) for a convincing
argument in favor of this approach.
For CommonKADS we have defined a typical architecture that can be used in most
applications. This architecture is described at two levels of granularity.
We first describe the architecture of the system as a whole. The architecture is based on
the Model-View-Controller (MVC) metaphor (Goldberg 1990). The MVC architecture
was developed as a paradigm for designing programs in the object-oriented programming
language Smalltalk-80. In this architecture three major subsystems are distinguished:
1. Application model This subsystem specifies the functions and data that together de-
liver the functionality of the application. In the case of a CommonKADS-based
system, the application model contains the reasoning functions. The “data” in the
application model are the respective knowledge bases and the dynamic data manip-
ulated during the reasoning process.
2. Views The “views” subsystem specifies external views on the application functions
and data. Typically, these views are visualizations of application objects on a user-
interface screen, but it could also be a view of a data request in terms of a SQL query.
Views make static and dynamic information of the application available to external
agents, such as users and other software systems.
The separation of application objects from their visualizations is one of the important
strengths of an MVC-type architecture. Application objects are decoupled from their
visualizations, and built-in update mechanisms are used to ensure the integrity of the
visualizations. Typically, there can be multiple visualizations of the same object.
The original MVC architecture
focused mainly on views as user-interface objects, but they can equally well be used
to interface with other software systems.
3. Controller The controller subsystem is the central “command & control” unit. Typi-
cally, it implements an event-driven control regimen. The controller contains handlers
for both external and internal events, and may also have a clock and start a system
process in a demon-like fashion. The controller activates application functions, and
decides what to do when the results come back. The controller defines its own view
objects to provide information (e.g., on a user interface) about the system-control
process.
The controller implements the communication model, in particular the control infor-
mation specified in the communication plan and within the transactions.
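As a deliberately simplified illustration, the three subsystems might be sketched as follows. All class and method names are illustrative choices, not part of CommonKADS, and Python is used here only for readability (the sample implementations in Chapter 12 use Prolog and Aion).

```python
# Minimal sketch of the MVC split: the application model does the
# reasoning, views render results, the controller dispatches events.

class ApplicationModel:
    """Reasoning functions plus the knowledge/data they manipulate."""
    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base
        self.dynamic_data = {}

    def assess(self, case):
        # Stand-in for the reasoning functions of a real application.
        return "accept" if case in self.knowledge_base else "reject"

class View:
    """External view on application data, e.g., a screen widget or a query."""
    def __init__(self):
        self.rendered = []
    def update(self, event):
        self.rendered.append(event)

class Controller:
    """Central command-and-control unit: receives events, invokes the
    model, and pushes results to the views."""
    def __init__(self, model, views):
        self.model, self.views = model, views
    def handle_event(self, case):
        result = self.model.assess(case)
        for view in self.views:
            view.update((case, result))
        return result

screen = View()
system = Controller(ApplicationModel({"case-1"}), [screen])
```

Note how the model never talks to a view directly; the controller (or, in a full MVC implementation, a built-in update mechanism) keeps the views consistent.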
The application model contains the software components that should realize the functions
and data specified during analysis. In CommonKADS terms, the application model con-
tains the reasoning functions (the tasks and inferences) and the information and knowledge
structures (the domain knowledge). The reference architecture of this subsystem is shown
in Figure 11.4. The architecture is based on the following principles:
Figure 11.3
Reference architecture for a CommonKADS system. The architecture is essentially an instantiation of the MVC (Model-View-Controller) architecture. The controller handles input from external agents such as users, sensors, and databases (user requests, incoming data); the views provide output to external agents (user interface, database queries); and the application model contains the reasoning functions, the domain-knowledge schema(s), and the data/knowledge bases.
Figure 11.4
Architecture of the “application-model” subsystem: tasks with task methods, inferences with inference methods, dynamic roles (with a datatype and current binding), static roles, domain mappings, and knowledge bases with their access, update, and inferencing functions. The subsystem is decomposed in an object-oriented way and follows the structure-preserving design principle. The dotted lines indicate method-invocation paths.
Table 11.1
Worksheet DM-1: System architecture description. The structure of this worksheet is based on the description of
system architecture in Sommerville (1995, Chapter 13).
Table 11.2
Worksheet DM-1 instantiated for the reference CommonKADS architecture. This sheet can be used as a template
in which deviations from the reference architecture should be clearly reported.
design we have to specify a method (algorithm) for implementing the inference, using the
roles specified for this inference. Other objects in the architecture come directly from anal-
ysis, but contain additional design-specific details. For example, the dynamic roles have
an associated data type as well as a number of access/modify operations, which enable the
use of the dynamic roles as the “working memory” of the reasoning system. In the section
on step 3 of the design process, we go through the full list of design-specific extensions,
and outline the options from which the designer has to choose.
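For instance, the design-specific extension of a dynamic role (an associated datatype plus access/modify operations) could look like the following sketch; the names and the API are hypothetical:

```python
# Sketch: a dynamic role extended with a datatype and access/modify
# operations, so it can serve as the "working memory" of the reasoner.

class DynamicRole:
    def __init__(self, name, datatype=set):
        self.name = name
        self.binding = datatype()      # current binding, typed at design time

    def add(self, instance):           # modify operation
        self.binding.add(instance)

    def remove(self, instance):        # modify operation
        self.binding.discard(instance)

    def current(self):                 # access operation
        return set(self.binding)

    def is_empty(self):                # access operation
        return not self.binding

hypothesis = DynamicRole("hypothesis")
hypothesis.add("battery-low")
```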
Worksheet DM-1 summarizes the outcomes of this first step in the design process.
Table 11.1 shows the general structure of the worksheet. In Table 11.2 one can find an
instantiated version of this worksheet based on the decisions taken for the reference Com-
monKADS architecture described in this section. This sample sheet can be used as the
point of departure for system design. Deviations from the reference architecture should be
clearly reported in this worksheet.
Guideline 11-1: IF THERE ARE NO EXTERNAL CONSTRAINTS FOR THIS STEP, DEFER
IT TO AFTER STEP 3
Rationale: Steps 2 and 3 influence each other. If you are free in selecting a platform
(which you usually are not), you can select the one that best suits your architectural
decisions.
Standard interfaces to other software Often, access to databases is needed. You then
need to use a protocol for interacting with the database, e.g., ODBC. For a distributed
system, a standard CORBA interface is desirable.
Language typing Given the O-O nature of the architecture design, an object-oriented
typing of software objects simplifies the mapping of analysis and design onto code.
In a language like Prolog with hardly any typing, the designer has to build her own
typing environment.
In the next chapter we look at three sample software environments, each of which could
be a reasonable choice. It is not meant to be an exhaustive list; rather, these are typical
examples of a class of software environments. The three environments are:
In step 3 of the design process we define the architecture components in more detail. In
particular we define the interfaces between the subsystems and/or system modules. In
this section we describe for each component which generic architectural facilities can be
provided, and what kind of options the designer has in making these design decisions.
As remarked previously, some implementation platforms may actually provide you
with a CommonKADS architecture in which the decisions have been predefined. That has
advantages (step 3 hardly takes any time), but destroys your potential for creativity (if there
was a need for that anyway).
For the components involved in the architectural design decisions we refer back to
Figs. 11.3 and 11.4.
Table 11.3
Worksheet DM-2: Specification of the facilities offered by a software environment in which the target system
will be implemented.
11.6.1 Controller
The controller realizes an event-driven control approach with one central control component.
The following is a list of typical design decisions that need to be taken in connection with
the controller:
Decide on an interface of an event handler, both for external events (incoming data
or requests) and internal events (return values of application model functions).
Decide whether the controller should be able to perform demon-like control, in
which case an internal clock and an agenda mechanism need to be designed for
the controller.
Should interrupts be possible, e.g., of execution of application model components?
Is there a need for concurrent processing?
Guideline 11-2: BE CAREFUL WITH ALLOWING INTERRUPTS AND/OR CONCURRENCY
Rationale: Specification of control in either case is both difficult and error-prone.
Make yourself familiar with the specialized literature on this subject.
The need for complex architectural facilities for the controller depends heavily on the
complexity of the communication model. A system with a highly interactive style, such as
the Homebots system used as an example in Chapter 9, will require elaborate facilities. On
the other hand, a system performing the assessment of residence applications as discussed in
Chapter 10 (which is mainly a batch-processing system) has only very few demands.
The following facilities are typically needed:
Event handlers for external events, in particular events handled by a
receive transfer function in the application model.
Event handlers for internal events, in particular events generated by transfer func-
tions of the type obtain. This may also require suspend and resume operations on
application-model functions.
Event handlers for providing information about the reasoning process: tracing infor-
mation, what-if scenarios, printing reports, and so on.
Event handlers for aborting execution of a function.
Typically, each transaction will be implemented as an event handler or as a combi-
nation of event handlers. The controller defines its own view objects to represent “meta”
information about the system-control process.
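A minimal sketch of such an event-driven controller might look as follows; the event names and the API are illustrative only, and each transaction of the communication model would become one registered handler:

```python
# Sketch of an event-driven controller: transactions from the
# communication model are implemented as registered event handlers.

class EventController:
    def __init__(self):
        self.handlers = {}        # event name -> handler function
        self.trace = []           # "meta" information for the controller's views

    def register(self, event, handler):
        self.handlers[event] = handler

    def dispatch(self, event, payload=None):
        self.trace.append(event)  # record the system-control process
        if event not in self.handlers:
            raise KeyError(f"no handler registered for {event!r}")
        return self.handlers[event](payload)

ctrl = EventController()
ctrl.register("receive-application",
              lambda data: {"status": "queued", "data": data})
```

The `trace` list stands in for the controller's own view objects: it records the control process so that it can be presented, e.g., on a user interface.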
For the task object we need to define two operations: (1) an initialize method that can
be used to set values for the input values of the task, and (2) an execute operation, which
should invoke the corresponding task method. For the latter method one has to decide
whether it has a boolean return value, indicating success or failure of the task. This decision
depends on the type of control used in the operationalization of the control structure (see
below).
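The two operations might be sketched as follows, here with the boolean-return option just discussed; all names are illustrative:

```python
# Sketch of the task object: initialize sets the task's input values,
# execute invokes the task method and reports success/failure.

class Task:
    def __init__(self, name, task_method):
        self.name = name
        self.task_method = task_method   # implements the control structure
        self.inputs = {}

    def initialize(self, **input_values):
        self.inputs.update(input_values)

    def execute(self):
        # Boolean return value: success or failure of the task.
        return bool(self.task_method(self.inputs))

diagnose = Task("diagnose", lambda inputs: "complaint" in inputs)
diagnose.initialize(complaint="engine does not start")
```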
The two main decisions for the design of a task method are concerned with the oper-
ationalization of the control structure. The first decision concerns the control language
used. Control in knowledge models is usually specified in an imperative form, but is still
defined in informal pseudocode. The designer has to decide on a set of control constructs
for implementing the control structures. You need at least sequencing, selection (if . . . then
. . . else) and iteration (repeat . . . until, while . . . do, for . . . do). You may also want to con-
sider control constructs for concurrent processing and synchronization. Part of the control
language is provided by invocation of operations on other architecture objects, such as dy-
namic roles (working memory operations), tasks, inferences, and transfer functions (all
subfunction calls).
The second decision is concerned with the place where the control structure is defined.
In an O-O approach it seems natural to view this as the implementation of an execute
operation, but this destroys the declarative nature. For example, it would then not be easy
to define a view that shows the flow of control. However, the alternative is to “objectify”
the whole control structure, which implies a significant amount of work. This decision is
typically strongly influenced by the target implementation platform. We will see example
solutions in Chapter 12.
Like a task, the design of the inference object is largely based on the information contained
in the knowledge model. In design we usually assume that an inference has an “internal
memory” for the solutions found. This memory is “reset” each time a task in which the
inference is involved terminates. An inference execution fails if no new solution is found.
The design decisions with respect to inferences are related to the definition of operations
that enable inference execution. We usually need three operations, namely:
The execute operation retrieves the static and dynamic inference inputs and invokes
the inference method. If a new solution is found, it should be stored in some internal
state variable.
The has-solution? and more-solutions? operations are tests that can be used in the task-method
control language to “try” a method without actually changing the state of working
memory. Implementation of these operations might actually benefit from a truth-
maintenance mechanism for the operations on dynamic roles (see below).
In the implementation there is ample room for efficiency improvement, in particular
by storing intermediate results of inference-method invocations. The design of inferences
is usually easy within a logic-programming environment (e.g., Prolog), because inference
execution behaves very much like backtracking in this type of language.
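The operations above might be sketched as follows; the class, the method signature, and the trivial inference method at the end are all illustrative assumptions:

```python
# Sketch of an inference object with an internal memory of solutions:
# execute fails when the method yields no new solution, and reset is
# called when the enclosing task terminates.

class Inference:
    def __init__(self, name, method, static_role):
        self.name = name
        self.method = method            # the inference method (see below)
        self.static_role = static_role  # e.g., a set of rules
        self.solutions = []             # internal memory

    def execute(self, dynamic_input=None):
        new = self.method(self.static_role, dynamic_input, self.solutions)
        if new is None:
            return False                # fails: no new solution found
        self.solutions.append(new)
        return True

    def has_solution(self):
        return bool(self.solutions)

    def more_solutions(self, dynamic_input=None):
        # "try" the method without changing the state of working memory
        return self.method(self.static_role, dynamic_input,
                           self.solutions) is not None

    def reset(self):
        self.solutions = []

# A trivial inference method: pick the next unused element of the static role.
pick_next = lambda static, dyn, seen: next(
    (s for s in static if s not in seen), None)
cover = Inference("cover", pick_next, ["hypothesis-1", "hypothesis-2"])
```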
Inferences do not specify how the inference will be achieved. This how description is
typically something that has to be added during design. During analysis, the knowledge
engineer often takes what one could call an automated-deduction view of a particular in-
ference: the knowledge engineer specifies an inference in such a way that she knows that it
is possible to derive a conclusion, given the available knowledge, no matter how complex
such a derivation might be in practice. In analysis, the emphasis lies on a competence-
oriented description: can I make this inference in principle, and what is the information I
need for making it happen? An inference method specifies a computational technique that
actually does the job. Some example inference methods are constraint satisfaction, forward
and backward chaining, matching algorithms, and generalization.
One can take the view that inference methods are part of inferences (i.e., included in
the implementation of the execute operation) and thus should not have the status of separate
architectural components. However, the relation between inferences and inference methods
is typically not one-to-one. Several inferences may apply the same inference method, but
for different purposes. The reverse can also be true, namely that one inference is realized
through multiple methods. Thus, incorporating inference methods into inferences prevents
making full use of the reusability concept.
Inference methods should not have direct access to the dynamic and static roles, but
rather receive these as input arguments when the method is invoked by an inference. This
enables the designer to keep a catalog of inference methods available that can be used
for many tasks. In practice it turns out that many applications just require some simple
rule-chaining methods plus some set-selection methods.
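For example, a simple one-step forward-chaining method can be kept separate from the inferences that use it by receiving the static role (rules) and dynamic role (facts) as arguments; the rule format below is an illustrative assumption:

```python
# Sketch of a reusable inference method: a single forward-chaining step
# that takes the roles as arguments, so several inferences can share it.

def forward_chain_step(rules, facts):
    """Return the conclusions derivable in one step, or None if none."""
    derived = {conclusion
               for condition, conclusion in rules
               if condition in facts and conclusion not in facts}
    return derived or None

car_rules = [("battery-low", "no-spark"), ("no-spark", "engine-dead")]
```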
For the static role object, the architecture needs to provide access functions. They can
typically be of three kinds:
1. Give all instances of a knowledge role.
2. Give a single knowledge instance of the role.
3. Does a certain knowledge instance exist?
The first request is by far the most common access function and is sufficient for most
applications. The access functions typically delegate the request to the access functions
defined for the corresponding knowledge base.
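The three kinds of access functions, with delegation to the knowledge base, might be sketched as follows; the class and method names are illustrative:

```python
# Sketch of the three access functions of a static role, delegating to
# the access functions of the underlying knowledge base.

class KnowledgeBase:
    def __init__(self, instances):
        self.instances = list(instances)
    def select(self, predicate=lambda _: True):
        return [i for i in self.instances if predicate(i)]

class StaticRole:
    def __init__(self, knowledge_base):
        self.kb = knowledge_base
    def all_instances(self):            # 1. all instances of the role
        return self.kb.select()
    def one_instance(self):             # 2. a single knowledge instance
        instances = self.kb.select()
        return instances[0] if instances else None
    def exists(self, instance):         # 3. does this instance exist?
        return instance in self.kb.select()

rules_role = StaticRole(KnowledgeBase(["rule-1", "rule-2"]))
```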
2. We have to define some access and modify functions. These access functions typi-
cally match the needs of the access functions of the static role object (see above).
3. We are likely to need knowledge-base modification and/or analyze functions. These
functions are related to the editor functions that allow a knowledge maintainer to
update, debug, and/or refine the system knowledge bases.
The domain constructs such as concepts, relations, and rule types are usually only included
for documentation purposes, and do not require any additional architectural facilities.
11.6.10 Views
Views realize the presentation of the system to external agents. In the architecture we have
to provide two types of facilities for realizing views:
1. a number of view types, such as windows, menu types, browsers, figures, and so on;
2. architectural facilities for linking an application-model object to one or more views,
and ensuring integrity of the views by sending update messages to the relevant views
at the moment an application-model object changes.
With respect to view types for user interfaces, the current state of the art is to use
a number of predefined graphical user-interface methods. Most implementation environ-
ments (see the next chapter) provide a standard set of those facilities. Also, views to
present information to other software systems have been standardized. For example, most
implementation environments can handle SQL output to other systems.
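The second facility, linking an application-model object to its views via update messages, amounts to the classic observer mechanism; the sketch below uses illustrative names:

```python
# Sketch: linking an application-model object to one or more views and
# broadcasting an update message whenever the object changes, so that
# every view stays consistent.

class ApplicationObject:
    def __init__(self, value):
        self._value = value
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_value(self, value):
        self._value = value
        for view in self._views:   # integrity: notify every linked view
            view.update(self)

    @property
    def value(self):
        return self._value

class TextView:
    def __init__(self):
        self.shown = None
    def update(self, obj):
        self.shown = str(obj.value)

score, label = ApplicationObject(0), TextView()
score.attach(label)
score.set_value(42)
```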
Two types of user interfaces are typically required for knowledge-intensive systems,
namely (1) the interface with the end user(s) and (2) the interface with the expert(s). We
briefly discuss the architectural facilities for both interfaces.
End-user interface Typically, the end user is not the same person as the domain expert
that helped to develop the system. In most cases the system goal is to make the expert’s
knowledge available to (relatively speaking) laypersons.
For this end-user interface the main architectural design decision is whether the target
environment provides you with sufficient “view” power. Does it provide the facilities
required? The state of the art is a graphical direct-manipulation interface. However, if
you want to have speech recognition and/or generation, then special facilities may need
to be designed. In the standard case, and given one of the three implementation platforms
described in Chapter 12, there should be sufficient “view” power.
Expert interface Assuming the domain experts are not the end users of the system, we
usually need an additional interface to allow the experts to interact with the system. This
expert interface typically consists of two components:
1. An architectural facility that allows the expert to trace the reasoning process of the
system in the terminology of the knowledge model. This allows the expert to see the
reasoning subsystem “in action” and to identify errors and/or gaps in the underlying
knowledge.
2. Facilities to edit, refine, and extend the knowledge base. An example would be a
dedicated rule editor, which in the ideal case would be specialized for specific rule
types.
In Figure 11.5 an archetype of a tracer interface for the reasoning component is shown.
The window contains four areas:
1. In the upper-left part the control structure of the task method that is currently being
executed is shown. The current locus of control is shown in highlighted form.
2. The upper-right box shows the inference structure. Inferences blink when these are
executed.
3. In the lower-left quadrant the current bindings of the active dynamic roles are shown.
Each role is shown as a “place” with the name of the role and a listing of the domain
instance(s) that currently play(s) this role.
4. Finally, the lower-right part shows the static role for the (last) inference being exe-
cuted. The instances listed are typically rules of a certain knowledge base.
Such a tracer facility can be of great help in the knowledge refinement stage, and is
particularly useful in the prototyping stages.
From the architecture, it will be clear that the analysis information, in particular the knowl-
edge model, can be mapped easily onto architecture components, thus creating a number
of architecture component instances (tasks, inferences, etc.). This mapping process can
be done manually or through some automatic means. The latter approach is preferred,
because it reduces the chance of errors. One criterion for implementation environment
selection is therefore whether the target environment has some mapping support tools. In-
formation about available mapping tools can be found on the CommonKADS website (see
the preface).
Figure 11.5
Archetype of a tracer interface for the reasoning components. The window contains four areas. In the upper-left
the control structure of the task method that is currently being executed is shown. The current locus of control
is shown in highlighted form. The upper-right box shows the inference structure. Inferences blink when these
are executed. In the lower-left quadrant the current bindings of the active dynamic roles are shown. Each role is
shown as a “place” with the name of the role and a listing of the domain instance(s) that currently play(s) this
role. Finally, the lower-right part shows the static role for the (last) inference being executed. The instances listed
are typically rules of a certain knowledge base.
The scope of the mapping tools may vary. In particular, the following ingredients
might or might not be present:
Automatic creation of the control structure specification
Automatic creation of the rule instances
Automatic creation of controller objects
For controller objects CommonKADS does not provide much in terms of standardized
object descriptions, which means that extensive mapping support is unlikely to be present.
Table 11.4
Worksheet DM-3: Checklist of decisions with respect to the architecture specification.
In this section we list the additional design decisions that need to be made for a certain
application. The decisions are discussed in connection with the component involved. Three
components are not mentioned, because they do not require any further application design
after their mapping in step 4a: tasks, static roles, and domain constructs.
Controller As we saw in step 4a, in most cases the designer will have to do some handwork
to map the communication model onto the controller. The amount of work needed
depends heavily on the facilities required here. As a minimum, a bootstrapping procedure
is needed for starting the system. Event handlers for obtaining user information are almost
always necessary as well.
Some complicating factors for controller design are:
Complex external interaction (cf. Homebots system).
Strong user control over the reasoning process. An example of such a system is the
system developed by Post et al. (1996), which supports the dispatching of ambu-
lances by emergency call operators. Due to potential time constraints, the system
control needed to be extremely flexible and adaptive.
Need for “what-if” scenarios.
Need for concurrent reasoning processes.
In case of a real-time system, the designer should become familiar with the specialized
literature on this subject.
Task method Formalize the method control structure in the control language provided
by the architecture.
Inference Write a specification of the invocation of an inference method. This method in-
vocation should show how the dynamic and static roles map onto arguments of the method.
Often, some “massaging” of the inputs is necessary, as the representation of (static)
roles is purposely not optimized for reasoning purposes.
Inference method For each inference the designer needs to specify or select an inference
method. These methods can be typical reasoning methods described in the AI literature, or
simple standard algorithms (sorting, subset selection, etc.).
Dynamic role Choose a datatype for each role. This choice is constrained by the datatypes
provided by the architecture. Use “real” role sets whenever possible, as this leads to more natural
dynamic behavior (random selection of set elements and therefore more variation).
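The role-set guideline might be sketched as follows; the class and the select operation are illustrative assumptions:

```python
# Sketch of the "real role set" guideline: a dynamic role represented
# as a set, with random selection of elements for more varied behavior.

import random

class RoleSet:
    def __init__(self, elements=()):
        self.elements = set(elements)

    def select(self):
        """Randomly pick (and remove) one element of the role set."""
        if not self.elements:
            return None
        # sorted() only fixes the candidate order; the pick is random
        choice = random.choice(sorted(self.elements))
        self.elements.remove(choice)
        return choice

hypotheses = RoleSet({"fuse-blown", "battery-low", "no-fuel"})
picked = hypotheses.select()
```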
Views The choice of the type of view (e.g., a browser or a menu) is guided by general
user-interface design principles, which are already described adequately in other works.
Chapter 17 of Sommerville (1995) is a good starting point and provides a set of useful
guidelines.
In the case of the end-user interface the choice of the interface should be strongly
guided by available application-domain representations.
Guideline 11-3: CHOOSE VIEWS THE END USER IS ALREADY FAMILIAR WITH
Rationale: Still too often computer scientists try to impose on end users represen-
tations that they like themselves. Typically, each application domain has its own
“views” of information, which have developed over the years and have proved their
worth in practice. It is usually best to try to base your views as much as possible
on these existing representations. It considerably raises your chances of user accep-
tance.
Table 11.5
Worksheet DM-4: Checklist for application-design decisions.
out a “risky” or poorly understood part of the prospective system. We briefly discuss two
types of prototype design.
There are two standard situations in which it may be useful to develop a prototype of the
reasoning part (without an elaborate controller and/or views):
The knowledge model is not based on an existing task template, but is constructed
“by hand.” In this case there is usually a risk that the reasoning behavior will be
different from what the analyst expects.
There seem to be gaps in the domain knowledge, but it is not clear what these gaps
precisely are.
Prototypes of the reasoning engine should allow us to trace the reasoning process in
knowledge-model terms and therefore need an interface such as sketched in Figure 11.5.
Such a reasoning trace can give the expert or the knowledge engineer a deeper insight into
the “model dynamics” and reveal problems or errors that are not apparent from the static
knowledge-model description.
To carry out this type of prototyping without spending a large amount of time (and
resources) on it, it is important to have some tools available for supporting the prototype
generation. In particular, the knowledge-model mapping tools are important in addition to
an implementation environment with a “CommonKADS package.” Designing and imple-
menting the prototype should be achievable within a matter of days.
Often, it is also useful to build early on a prototype of the user interface, containing both
“controller” and “view” objects. In particular, if either the views or the controller require
complex representations, such prototypes are useful. In case of complex user-system re-
quirements, such a prototype plus some associated user experiments might be the only
way of getting the information necessary for a proper communication model. Again, see
the work of Post et al. (1996) for an example of this type of prototyping.
1. Reasoning Service The most straightforward way of distributed usage is to make the
bare reasoning engine available as a “service” without any real user interface and
without event handlers. For example, the house-assessment application could be
made available such that only the “knowledge-model” elements are implemented.
This application could be accessed by potential applicants to test the houses they
want to apply for. It is fair to say that this approach is only in a limited way “distributed.”
implement a broker that delivers on request the domain constructs, such as a stan-
dardized description of a Chinese vase of a certain period.
The first generation of these systems is currently being built. A good example of
the possibilities is the terminology server for art objects built in the context of the
European research project GRASP (Wielinga and Schreiber 1997). The server is
part of a distributed system that assists in finding the rightful owners of stolen art
objects.
Method Service Combinations of tasks, task methods, and inferences can well be used
as subsystems that provide a service in a distributed system. One can imagine that
implementation of the various task templates can be used as services in a distributed
system. The domain knowledge would in that case be provided by the client, who
may actually be able to get this domain knowledge from another server (see the
previous point). The practical use of this type of service is, however, still a research
issue. Some projects are now tackling this issue.
12.1 Introduction
You might have wondered while reading about all this model stuff in previous chapters:
does this paperwork ever lead to something that works? The answer is yes! In fact, imple-
mentation of CommonKADS models is usually relatively straightforward. It may still be
a reasonable amount of work, particularly if there are specific user-interface requirements,
but there should not be any major obstacles left.
In this chapter we show how you can implement a CommonKADS analysis and de-
sign in two sample implementation environments. One is a public domain Prolog envi-
ronment and is particularly targeted to the academic community. The other environment
is Platinum’s AionDS version 8.0 (nicknamed Aion), an environment which integrates an
object-oriented and a rule-based approach and is used in business practice. However, we
310 Chapter 12
"views"
application
realization "controller" "model"
Figure 12.1
Software architecture of the Prolog implementation. The software on top of Prolog can be conceived of as con-
sisting of three layers. The first layer implements object-oriented concepts on top of Prolog and implements the
underlying view-update facilities required for the MVC architecture. The second layer implements in a generic
way the mapping of CommonKADS objects on the MVC architecture. The third layer is the implementation
of the actual application. The inference-method library is an additional architectural facility that provides algo-
rithm implementations frequently used for implementing inferences, e.g. rule-based reasoning. This library is
implemented directly on top of Prolog..
should stress that our choice is in a sense arbitrary and is based on convenience (in
particular, availability). The sample application is the housing application, the analysis of which
is described in Chapter 10 and the design in Chapter 11. The source code of both
implementations can be found at the CommonKADS website (see the preface). For the Prolog
implementation a link is added to the download site of the Prolog system used.
The implementation described here uses the public domain SWI-Prolog system (Wielemaker
1994), which runs on Windows 95 and UNIX platforms (see the CommonKADS
website for more information). The implementation is intended purely for educational
purposes. No efforts have been made to add gadgets such as syntax checking, editors, and
so on. Detailed analysis of the code will undoubtedly reveal places where the implemen-
tation contains bugs or can be improved. Still, we hope it serves its role as an insightful
example of an implementation.
Knowledge-System Implementation 311
Figure 12.1 shows the main elements of the software architecture of the Prolog im-
plementation. The software can be conceived of as consisting of three layers, i.e. two
architectural layers and one application layer:
1. The first layer implements object-oriented concepts on top of Prolog and provides
the underlying view-update facilities required for the MVC architecture (the O-O
kernel in Figure 12.1).
2. The second layer contains class, attribute, and operation definitions for generic Com-
monKADS objects (the CommonKADS kernel in Figure 12.1). These definitions
provide the building blocks for realizing a CommonKADS application in an MVC-
like architecture.
3. The third layer is the implementation of the actual application: it constitutes a spe-
cialization of the generic objects of the CommonKADS kernel. It implements the
“model,” “view,” and “controller” objects of the actual application.
The next subsections discuss these three layers in more detail. The inference-method
library is an additional architectural facility that provides algorithm implementations fre-
quently used for implementing inferences, e.g. rule-based reasoning. This library is imple-
mented directly on top of Prolog and is used in the realization of the housing application.
We first implemented a small set of O-O primitives on top of Prolog. The purpose of
these primitives is to simplify the mapping from the CommonKADS design as shown in
Figure 11.4 onto the Prolog code. Thus, it is meant to ensure a transparent structure-
preserving implementation.
The object-oriented primitives fall into three classes: definitions, actions, and queries.
Figure 12.2 gives an overview of the main O-O primitives. For more details the reader
is referred to the source code. The O-O layer also provides architectural facilities for the
separation of application objects from their visualization. This is done by including a
mechanism that broadcasts any change in an object state. Such state changes can be object
creation or changes of an attribute value. As we will see, the view subsystem can catch
such messages to update object visualizers.
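The idea can be sketched in Python (the actual kernel is written in Prolog; all names below, such as Kernel and notify, are illustrative only): every state change is broadcast to the registered viewers.

```python
# Minimal sketch of the O-O kernel's change-broadcast mechanism.
# All names are illustrative; they do not mirror the actual Prolog API.

class Kernel:
    def __init__(self):
        self.objects = {}      # object name -> {attribute: value}
        self.viewers = []      # registered visualizers

    def register_viewer(self, viewer):
        self.viewers.append(viewer)

    def broadcast(self, event):
        # every state change is sent to all registered viewers
        for viewer in self.viewers:
            viewer.notify(event)

    def create_object(self, name):
        self.objects[name] = {}
        self.broadcast(("create", name))

    def set_attribute(self, name, attribute, value):
        self.objects[name][attribute] = value
        self.broadcast(("set", name, attribute, value))

class TraceViewer:
    def __init__(self):
        self.log = []
    def notify(self, event):
        self.log.append(event)

kernel = Kernel()
view = TraceViewer()
kernel.register_viewer(view)
kernel.create_object("case-1")
kernel.set_attribute("case-1", "age-category", "up-to-22")
print(view.log)
```

This is the same view-update pattern the text describes: the views never poll the model, they simply react to the broadcast messages.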
312 Chapter 12
% DEFINITIONS
def_class(+Class, +ListOfSuperClasses)
def_attribute(+Class, +Attribute, +ValueType, +Cardinality).
def_operation(+Class, +Operation, +ListOfArgumentTypes, +ReturnType).
def_value_type(+ValueType, +NominalOrOrdinal, +ListOfValues).
% ACTIONS
% QUERIES
Figure 12.2
Informal specification of the main predicates available in the O-O layer on top of Prolog. The plus indicates
that the argument needs to be bound when the predicate is called; the minus indicates that the variable will be
bound by execution of the predicate; arguments preceded by ? can either be bound or unbound. For example, the
predicate “is-a” can be used both to find an immediate superclass of an object (second argument is unbound) or to
check whether this is the case for a particular superclass (second argument is bound). Operations may have a void
return type: in that case the “invoke” predicate will return an empty list [].
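To convey the flavor of these primitives, the following rough Python emulation mirrors the definition and query predicates (the names def_class and is_a follow Figure 12.2; the dictionary-based realization is our own sketch, not the actual Prolog code, and the class names are invented):

```python
# Rough emulation of the def_class / is-a primitives of the O-O layer.
# In the real system these are Prolog predicates; the names mirror them.

superclasses = {}   # class -> list of immediate superclasses

def def_class(cls, list_of_superclasses):
    superclasses[cls] = list_of_superclasses

def is_a(cls, sup):
    """True if sup is an immediate or transitive superclass of cls."""
    direct = superclasses.get(cls, [])
    return sup in direct or any(is_a(d, sup) for d in direct)

def_class("task", ["frame-agent"])
def_class("assess-case", ["task"])

print(is_a("assess-case", "frame-agent"))   # transitive superclass query
```

In the Prolog version the same query can run in both modes: with the second argument bound it checks a superclass, with it unbound it enumerates superclasses.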
Using the O-O primitives discussed above, we implemented the CommonKADS architec-
ture described in the previous chapter. This architecture should allow an easy mapping of
the communication model onto the “controller” sub-system and of the knowledge model
onto the “model” sub-system. The architecture of the latter sub-system is in fact a more
or less direct implementation of the sub-system decomposition shown in Figure 11.4. The
classes, attributes, and operations defined in the CommonKADS Prolog architecture are
shown in Figure 12.3.
The classes shown capture the information contained in the knowledge and communication model. This is in accordance with the structure-preserving principle. In addition,
these class definitions contain implementation-specific extensions. We distinguish two
types of extensions:
1. architecture-specific extensions, which concern code that can be written generically
(the same for each application).
[Figure 12.3 content: the controller class “transaction,” with the operations execute() and handler().]
Figure 12.3
Classes for the “model” and the “controller” sub-system of the Prolog architecture. Only one controller class is
included: “transaction.” The signature of operations is defined in a sketchy manner. The keyword “universal”
stands for any object.
2. application-specific extensions, which point to the code that needs to be added for
each individual application.
Our goal is to keep these application-specific extensions as minimal as possible. This
enables fast construction of prototype systems, e.g. for knowledge-model validation.
In the Prolog architecture we have included only simple facilities for handling the com-
munication model. It only supports (like Prolog itself) a single thread of control. Therefore,
the transactions need to be implemented as standard procedure invocations. This implies
that the Prolog environment has limited use in highly interactive systems and is particularly
intended for prototyping of the reasoning part (the knowledge model).
At least the following extensions need to be defined for implementing an application:
For each transaction in the communication model: define an implementation of the
handler operation. This handler should implement the information transfer in and
out of the system. For example, the “get-case” transaction contains the code for
retrieving data about a case that needs to be assessed from a database of cases.
For each task method: define an implementation of the execute operation. This
code should implement the control structure of the task method using the control
primitives provided by the architecture.
For each inference: define an implementation of the method-call operation. This op-
eration is responsible for specifying how the inference is realized using existing or
newly coded inference methods. The Prolog architecture has a small catalog of pre-
defined inference methods, including a rule interpreter which can handle backward
and forward reasoning.
The implementation of the method-call operation can contain any “hack” you want,
because the internals of an inference are assumed to be of no interest to the user.
For each dynamic role: specify the value of the datatype attribute. In the Prolog
architecture this value can be an element, set (unordered collection, no duplicates),
or list (ordered collection, duplicates allowed).
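In object-oriented terms, these extensions amount to overriding a few hooks on the generic classes. The following Python sketch is only illustrative (the actual architecture is in Prolog, and the class and data names are invented):

```python
# Illustrative sketch: application-specific extensions as method overrides.
# The real architecture defines these as Prolog operations; names are made up.

class Transaction:
    def handler(self):            # information transfer in/out of the system
        raise NotImplementedError

class TaskMethod:
    def execute(self):            # control structure of the task method
        raise NotImplementedError

class Inference:
    def method_call(self):        # realize the inference with some method
        raise NotImplementedError

class GetCase(Transaction):
    """Application-specific handler: fetch the case to be assessed."""
    def __init__(self, case_base):
        self.case_base = case_base
    def handler(self):
        # e.g. retrieve data about a case from a database of cases
        return self.case_base["case-1"]

case_base = {"case-1": {"income": 25000, "rent": 500}}
print(GetCase(case_base).handler())
```

The generic classes supply everything else; the application writer only fills in these few hooks, which is what keeps prototype construction fast.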
In the next subsection we see how these four types of application-specific extension
are defined for the housing application.
Mapping the analysis objects This step is carried out by specializing the
CommonKADS-specific classes defined in the architecture. Thus, the task ASSESS-CASE
is defined as a specialization of the class task. The reader is referred to the Prolog code at
the CommonKADS website for examples. This mapping process from knowledge model
to implementation should typically be supported by an appropriate CASE tool that handles
the code generation from the knowledge-model specification. Manual transformations are
tiresome and error-prone, and should be avoided as much as possible.
Coding the implementation-specific extensions In this second step one has to write
additional code for the four items identified earlier: (1) defining data types for dynamic
roles, (2) writing an implementation for each method-call operation for each inference,
(3) writing an implementation of the execute method of each task method, and (4) writing
handlers for each transaction. Examples of this code for the housing application are shown
in Figure 12.4. The implementation of the method call for the inference evaluate is a
typical example of what one has to do when implementing an inference.
The implementation consists of the following steps (see Figure 12.4):
1. The various inputs for the inference are retrieved using operation calls on the respec-
tive dynamic and static roles. This particular inference is a bit more complex than usual, because part of its dynamic input is in fact a rule set (the case-specific requirements). As we see, the implementation joins the static and the dynamic rule
sets into one rule set (see the append clause). The first two retrievals concern the two
other dynamic input roles.
2. The rule interpreter is invoked. This is a predefined method which can do forward as
well as backward reasoning. For the evaluate inference, backward reasoning is used
(the first argument of the predicate). The second and third arguments are, respec-
tively, the rule set and the data set used for backward reasoning. Finally, the fourth
/*
Step 4b: Adding implementation-specific decisions
*/
Figure 12.4
Application-specific code for the housing system.
argument of the predicate is the goal that needs to be proved by backward reasoning.
In this case the goal is to find a truth value for a norm. We can see that for this
inference the operational interpretation is apparently that successful execution of the
“requirement” rules means the norm is true. Only in case no rule is successful does
the norm get a “false” value.
3. Finally, the result is placed in the output role norm-value.
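The flavor of these steps can be shown with a toy backward-reasoning interpreter in Python. This is a deliberately simplified stand-in for the predefined Prolog method; the rule and case data are invented:

```python
# Toy stand-in for the predefined rule interpreter, used backward here:
# a norm is true if some rule with that norm as its conclusion has all of
# its conditions satisfied by the case data; otherwise it is false.

def backward(rules, data, goal):
    for conditions, conclusion in rules:
        if conclusion == goal and all(data.get(a) == v for a, v in conditions):
            return True
    return False

# Static requirements joined with the (here empty) case-specific
# requirements, mirroring the append of the two rule sets in the code.
static_rules = [
    ([("household-type", "single-person"), ("rent-category", "low")],
     "rent-fits-income"),
]
case_specific_rules = []
rules = static_rules + case_specific_rules

case = {"household-type": "single-person", "rent-category": "low"}
print(backward(rules, case, "rent-fits-income"))
```

The operational interpretation described above is visible in the last line of the interpreter: if no rule succeeds, the norm simply gets a "false" value.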
The implementation of the execute operation for the task methods should be more or
less a direct implementation of the control structure of the task method. For example,
compare the code at the bottom of Figure 12.4 with the control structure of the MATCH
method in the knowledge model. The main difference is that I/O is done “behind the scenes.”
Figure 12.5
Starting the assessment of a residence application. The data of the case to be assessed are shown at the right. For
this case, there are no case-specific requirements.
Coding the views Finally, we need to write the required view implementations. In our
sample system we only included some simple views to allow tracing the system execution.
Most of the views currently in the Prolog code are generic: they can be used for other
applications as well. Views often have a compositional structure. For example, in the
Prolog implementation there is one large application window frame, consisting of three
sub-views. These sub-views are represented as text windows which show information
about transaction, task, and inference activity, as well as changes in dynamic role values.
The views are shown in the next section (Figures 12.5-12.8).
Figure 12.6
The task “abstract-case” has finished. It has produced two new case attributes: the age category and the household
type (see the last two lines of the role “abstracted-case-description”).
Included are a number of screen dumps that give the flavor of the Prolog housing appli-
cation in action. In Figure 12.5 we see the start of the assessment process for a particular
residence application. The data of the case to be assessed are shown at the right. For this
case, there are no case-specific requirements. The interface used here is a simple generic
tracer that is typically used for validation prototypes. The application window consists of three areas. In the upper-left text area, state changes of transactions and tasks are written. In the lower-left area, the success and failure of inference execution is reported. The text area at
the right is used to report changes in the contents of dynamic knowledge roles. The system
halts each time some new piece of information is written in one of the text areas, and waits
for the user to press the continue button (see bottom of the window).
Figure 12.7
The match case task has been activated. One norm has been (randomly) selected, namely “correct-household-
size.” This norm is evaluated and turns out to be true for this case. The match inference does not deliver a
decision, so the process needs to be repeated for other norms.
In the second figure (Figure 12.6) the task ABSTRACT-CASE is being carried out.
As we see in the lower-left box, the abstraction inference has succeeded two times and
produced two new case attributes: the age category and the household type (see the last two lines of the role abstracted-case-description). Once the inference fails, the abstraction task will be terminated.
In Figure 12.7 we see that the MATCH-CASE task has been activated. Four norms
have been specified as being relevant to this case. One norm has been (randomly) selected,
namely correct-household-size. This norm is evaluated and turns out to be true for this case.
The match inference does not deliver a decision. This means that the select-evaluate-match
loop needs to be repeated for other norms.
Finally, in Figure 12.8 we see the termination of the assessment process. All norms
Figure 12.8
The termination of the assessment process. All norms have been found to be true for this case (see the role
“evaluation-results”), so the decision is that the applicant is eligible for the house in question.
have been found to be true for this case (see the role evaluation-results). This leads to the
decision that the applicant is eligible for the house in question.
The second CommonKADS implementation of the housing application uses the imple-
mentation environment AionDS. This environment is used in business practice. The 8.0
version (also called Aion) has an object-oriented basis (classes, attributes, methods) with
rule-based extensions, such as rule definition formats and rule-execution methods. Al-
though the environment is a different one, the implementation is in essence quite simi-
[Figure 12.9 content: four layers, from top to bottom: an application layer (the housing application, plus possible other applications), a template layer (the assessment task template, plus possible other templates), the CommonKADS library layer, and the framework library layer.]
Figure 12.9
Architecture of the Aion implementation. The solid boxes have been implemented. The dashed boxes are possible
additions.
lar to the Prolog implementation described in the previous section. Figure 12.9 gives an
overview of the architecture. The code of the implementation can be downloaded from the CommonKADS website. You will require the Aion environment to be able to run it. The architecture consists of four layers:
1. Framework layer. This layer provides the basic architectural facilities underlying
the implementation. The application was built in such a way that it constitutes a
framework for realizing task-oriented applications. CommonKADS-based applica-
tions fall into this category. The design principles of this framework are discussed in
the next subsection. This layer is implemented as a single Aion library (the framelib).
2. CommonKADS layer. This layer defines classes and interfaces for generic CommonKADS objects such as tasks, inferences, and dynamic roles (the CommonKADS library).
3. Task-template layer. This layer offers default implementations of task types; at the moment it contains a single library, for the assessment template.
4. Application layer. The uppermost layer contains the actual application-specific code for the system. Each application is implemented as a separate Aion library. Because the need for class specialization is avoided, task templates can be connected to empty applications. The source code at the website contains the library file for the housing application (the housinglib).
Compared with the Prolog implementation, the Aion implementation has one addi-
tional layer: the “task-template” layer. The latter is in this respect more sophisticated, as
it supports more refined forms of reuse. The Aion implementation splits the “model” sub-
system of the MVC architecture into two sub-parts: a domain-independent, task-specific
part (i.e. the implemented task template) and a part containing the model elements that
represent knowledge specific to the application domain. This makes it easier to use part of
the “model” for other applications that incorporate the same template.
In the following each of the respective layers is described in more detail. The final
subsection shows traces of the running system.
Figure 12.10
Aion screen showing classes and interfaces of the framework library.
Framework for the housing application The Aion housing application is implemented
as an agent-oriented framework. The framework layer provides the basic facilities for re-
alizing an application framework. It defines a general notion of frame-agent representing
an active actor-like object class. The frame agent is subsequently specialized within the
class task. A notion of task does not exist in Aion, and therefore had to be added. To the
class TaskObject methods are attached for starting and terminating tasks.
The implementation of the framework makes heavy use of the concept of “interface.”
Similar to the Prolog implementation, the CommonKADS layer of the Aion system defines
a number of classes for CommonKADS objects. Inferences and tasks are defined as sub-
classes of the frame-agent class hierarchy in the framework library. Inferences and tasks
are thus the “active” objects that can have a status such as “activated” or “terminated” (cf.
the status attribute for the same objects in the Prolog system). Dynamic roles are introduced
as interfaces, i.e. IDynamicRole (the “I” character at the beginning is the convention used
here to denote interfaces), as these require a binding to an application(-domain) object.
Figure 12.11 shows a part of the Aion classes defined in the CommonKADS layer.
The interface mechanism described in the previous section is used to provide for each
class definition a specialization hook such that the application can refine or override the
generic code provided by this layer. For example, for the class inference an interface is
defined, in this case IInference. The purpose of this interface definition is to allow the
application developer to write an object class definition that implements the role defined
by the interface, and in this way extend or overwrite the generic implementation of an
inference.
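The role of such an interface as a specialization hook can be approximated in Python with an abstract base class (the class names below are merely suggestive of the Aion ones, and the return values are invented):

```python
# Sketch of the interface-as-specialization-hook idea using Python ABCs.
# IInference is suggestive of the Aion interface; everything else is made up.
from abc import ABC, abstractmethod

class IInference(ABC):
    @abstractmethod
    def invoke(self, inputs):
        ...

class DefaultInference(IInference):
    """Generic implementation supplied by the CommonKADS layer."""
    def invoke(self, inputs):
        return "generic-result"

class ApplicationInference(IInference):
    """Application-level override that refines the generic behavior."""
    def invoke(self, inputs):
        return f"assessed({inputs})"

print(ApplicationInference().invoke("case-1"))
```

The point of the pattern is that the lower layer programs against IInference only, so the application can substitute its own class without touching the generic code.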
The CommonKADS layer also contains a number of predefined inferences. So far,
only the inferences that are used in the assessment template are included, but more infer-
ences could easily be added. Typically, one would want to supply default implementations
for all inferences that occur in the inference catalog of this book (cf. Chapter 13). These
default inference implementations are a good example of the principle of maximizing both support and flexibility. One could argue whether these generic inference implementations should be part of the task-template layer, but the developers quite understandably argued that most, if not all, inferences are applicable in a wider context than just a single task. The interfaces of type IInferenceable specify the roles for the generic inference
implementations.
In addition to the CommonKADS model classes (as said before, the communication
model was outside the scope of this system) the CommonKADS layer also provides the
Figure 12.11
Aion classes and interfaces defined for the CommonKADS layer.
generic facilities for tracing the behavior of the active objects (i.e. task, inferences, and
dynamic roles) in the implementation. This CommonKADS tracer (the CK-viewer library)
uses a generic implementation of the observer pattern (defined in the observer library; see
above). The CommonKADS tracer displays in a similar manner to the Prolog tracer an
account of the reasoning process in terms of the behavior of tasks and inferences, as well
as the state changes of dynamic roles. In the examples of the running Aion system you will
see this CommonKADS tracer in action.
Figure 12.12
A number of sample classes in the assessment library. This library is part of the task-template layer of the
Aion framework for implementing CommonKADS systems. Each library in this layer should provide a default
implementation of a certain task type.
This layer, which is not present as a separate entity in the Prolog system, makes use of
the notion of template knowledge model in CommonKADS. Here one can see clearly the
parallel with design patterns in O-O. The task templates are precisely what one would
expect from a pattern for a knowledge-intensive task.
The task-template layer offers an implementation for a certain task type. At the mo-
ment, only a library has been built for the assessment template, but it would be only a
matter of putting in more work to add other templates. Figure 12.12 shows a number of
classes defined in the assessment library. This layer defines concrete tasks, such as the
Figure 12.13
Aion implementation of the control structure in the MatchCase task.
top-level task ASSESSMENT, as sub-classes of the general task class. The same is true
for knowledge roles and inferences. Again, interface definitions are used to provide appli-
cation developers with a hook to provide different behavior of the assessment library (for
inferences as well as tasks).
The application layer is the focal point for an application developer. For each application
the developer should define a separate library, and indicate how classes and interfaces
from the lower layers are used in the implementation. Each application typically includes
the framework library as well as the CommonKADS library. In addition, an application
may include several task templates. In that case, the application has to indicate how these
Figure 12.14
Classes for the application library of the housing system.
templates are used together to reach the overall goal. In other words, the application should
have strategic knowledge (cf. Chapter. 13) about how to combine tasks to reach a certain
goal. The application developer will have to code the following parts:
The extensions mainly concern mappings of domain classes to roles in the template.
In addition, the implementation of the inference methods may need to be coded.
For the latter, the framework provides a number of standard method implementa-
tions (typically rule-based), as well as default implementation of concrete inferences
(abstract, select, etc.).
A number of sample classes coded for the housing application are shown in Fig-
ure 12.14. The extensions that need to be coded are similar to those required for the Prolog
system. As in the latter system, ideally a large part of this coding should be supported by a
CASE tool that performs (semiautomatically) the mapping from a knowledge-model spec-
ification to Aion code. Also, one could think of a wizard guiding an application developer
through the process of attaching a task template to an application.
The CommonKADS tracer, specified in the CK-viewer library (connected to the Com-
monKADS layer) can be used to generate a trace of the reasoning process. This trace
is similar to the tracer developed for the Prolog system; only the window organization is
slightly different. Such a tracer turns the running system into a white box, providing the
developer with a debugging tool. The tracer is also useful if one wants to demonstrate the
reasoning part to a domain expert.
The Aion tracer puts its information on three sheets: one for status information about
tasks, one for status information about inferences, and one for changes in dynamic roles
(the “blackboard”). Figures 12.15 and 12.16 show the housing application in action. The
state of the reasoning process in Figure 12.15 corresponds to the state shown in Figure 12.6.
Similarly, Figure 12.16 corresponds to the system state in Figure 12.7.
Figure 12.15
Trace of the running housing application in Aion. The abstraction task has just finished. The generic Com-
monKADS tracer puts its information on three sheets: one for status information about tasks, one for status
information about inferences, and one for changes in dynamic roles (the “blackboard”). The figure shows the
trace information on the task and the dynamic roles sheets.
Figure 12.16
This trace of the running housing application shows the situation where a norm has been selected and is now
being evaluated. In this figure the inference and the dynamic roles sheets are shown.
The MIKE approach (Angele et al. 1998) is one of the few exceptions. Fensel and van Harmelen (1994) give an overview of languages for implementing CommonKADS-like knowledge
models.
The implementation described in Section 12.3 was provided courtesy of Leo
Hermans and Rob Proper of Everest, ’s-Hertogenbosch, the Netherlands (e-mail:
[email protected]).
13
Advanced Knowledge Modelling
13.1 Introduction
In Chapter 5 we introduced a basic set of knowledge-modelling techniques. However, in
complex applications the knowledge engineer will be in need of a larger set of modelling
tools. Here, we provide some of those techniques.
We dive deeper into the modelling of domain, inference, and task knowledge. In the
first section we discuss some advanced domain-modelling constructs that you might find
useful. Most of these constructs are not unique to knowledge modelling: they are also used
in advanced data modelling. Examples of advanced constructs are:
1. specification of the precise meaning of subtype relations;
2. the possibility of defining for a single concept multiple subtype hierarchies along
several dimensions, which are termed “viewpoints”;
3. a built-in part-of construct that can be used to model “natural” part-whole relations;
CONCEPT employee;
SUPER-TYPE-OF:
system-analyst,
system-designer,
project-manager;
SEMANTICS:
DISJOINTNESS: NO;
COMPLETENESS: NO;
......
END CONCEPT employee;
Figure 13.1
Disjointness and completeness are specified at the level of the supertype. This requires an explicit supertype-
of declaration in the textual specification, followed by the definition of the semantics properties (see above).
No graphical format is provided. The example shows the supertype “employee.” The intended meaning of the
definition is that instances of this concept can optionally also be instances of one or more of the subtypes.
[Figure 13.2 content: two subtype trees for infection. In the left tree, infection branches into meningitis and pneumonia; pneumonia into viral and bacterial pneumonia; and these, one level lower, into acute and chronic variants. In the right tree, the last two levels are inverted: pneumonia branches first into acute and chronic pneumonia, and then into viral and bacterial variants.]
Figure 13.2
A fragment of two subtype hierarchies.
The figure shows a small fragment of a hierarchy of infections organized in two different
ways. In the left tree, pneumonia has viral pneumonia and bacterial pneumonia as its immediate
subtypes. One level lower, the distinction between acute and chronic is made. In the tree
fragment at the right, these two levels are inverted. The problem is often that neither tree
is satisfactory. For example, in the tree on the left we cannot talk about acute pneumonia;
the same is true for viral pneumonia in the tree on the right.
The problem lies in the fact that levels in a subtype tree are often not really subordinate
to each other. For example, the distinction between “acute” and “chronic” infections has
an equal standing when compared to “viral” and “bacterial” infections. Both can be seen
as dimensions along which subtypes are defined. In fact, each level in Figure 13.2 can be
viewed as such a dimension:
The first distinction between the diseases is made on the basis of localization of the
infection: the meninges of the brain (meningitis) or the lung (pneumonia).
The second and third dimensions are the time factor (acute vs. chronic) and the
causal agent (viral vs. bacterial, also called “etiology” in medicine).
If there is not a clear ordering of the dimensions in a particular domain, it is usually
better to define these dimensions at the same level of abstraction. CommonKADS provides
the notion of viewpoint for this purpose. The term “viewpoint” is used here because
a dimension is a way to “view” a certain object. A viewpoint definition is similar to
a supertype-of definition. The main difference is that the dimension along which the
CONCEPT infection;
SUPER-TYPE-OF:
meningitis, pneumonia;
VIEWPOINTS:
time-factor:
acute-infection, chronic-infection;
causal-agent:
viral-infection, bacterial-infection;
END CONCEPT infection;
CONCEPT acute-viral-meningitis;
SUB-TYPE-OF:
meningitis, acute-infection, viral-infection;
END CONCEPT acute-viral-meningitis;
Figure 13.3
Defining viewpoints in CommonKADS. Each viewpoint is defined through a dimension such as “causal agent”
(or “etiology”) along which subtypes are organized. These subtypes can subsequently be used to define concepts
through multiple inheritance (see “acute-viral-meningitis”).
subtypes are defined is explicitly stated. Figure 13.3 shows an alternative definition of
infection and its subtypes. The convention is that we use supertype-of to define the main
or dominating subtype dimension. In the case of infection we decided to make localization
the main dimension. Alternatively, we could have defined this dimension also as a third
viewpoint, thus giving all dimensions the same status. The graphical representation of this
definition is shown in Figure 13.4. Using these three dimensions we can now define an
infection subtype such as acute-viral-meningitis as a subtype of three concepts.
Note that in Figure 13.4 we have introduced a number of new concepts that we were
not able to represent in the first hierarchy in Figure 13.2: acute-infection, chronic-infection,
viral-infection, and bacterial-infection.
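The multiple-inheritance reading of Figure 13.3 can be mimicked directly in a language with multiple inheritance; here is a Python sketch with class names following the example:

```python
# Viewpoint subtypes combined through multiple inheritance,
# mirroring the acute-viral-meningitis definition of Figure 13.3.

class Infection: pass

# main dimension: localization
class Meningitis(Infection): pass
class Pneumonia(Infection): pass

# viewpoint: time factor
class AcuteInfection(Infection): pass
class ChronicInfection(Infection): pass

# viewpoint: causal agent
class ViralInfection(Infection): pass
class BacterialInfection(Infection): pass

class AcuteViralMeningitis(Meningitis, AcuteInfection, ViralInfection):
    pass

case = AcuteViralMeningitis()
print(isinstance(case, ViralInfection), isinstance(case, ChronicInfection))
```

Each viewpoint contributes one superclass, so a concept such as acute-viral-meningitis is simply the intersection of the three dimensions.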
Subtype dimensions occur in almost any domain. For example, in the house assignment domain described in Chapter 10 one encounters two subtype trees of houses (see Figure 13.5). A residence can be characterized both in terms of its building type (house or apartment) and the type of “ownership” (rental house etc.).
13.2.3 Aggregates
[Figure 13.4 content: “infection” with the viewpoint dimensions “time factor” and “causal agent,” and the concept “acute viral meningitis” defined through multiple inheritance of the corresponding subtypes.]
Figure 13.4
Graphical representation of the viewpoints defined on “infection” and their use in multiple inheritance.
Figure 13.5
Two sets of subtypes of “residence.” The left part of the tree is defined along the dimension “owner status”.
CONCEPT patient-visit-record;
PROPERTIES:
date-of-visit: DATE;
attending-physician: NAME;
HAS-PARTS:
patient-data;
anamnesis;
physical-examination;
test;
CARDINALITY: 0+;
ROLE: test-done;
END CONCEPT patient-visit-record;
Figure 13.6
Specification of a part-whole relation through the has-parts construct. The default cardinality is precisely one, meaning that each instance of the aggregate has exactly one instance of the part. Thus, a patient-visit-record consists of precisely one instance of patient-data, anamnesis, and physical-examination, and of a random
number of tests. Each part can be given a role name, similar to the general relation construct. The default role
names are “part” and “whole”.
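Read as a data structure, the definition says that a visit record owns exactly one of each fixed part and any number of tests. A minimal Python rendering of this reading (the example values are invented):

```python
# Minimal rendering of the has-parts construct for patient-visit-record.
# Fixed parts have cardinality one; tests have cardinality 0+.
from dataclasses import dataclass, field

@dataclass
class PatientVisitRecord:
    date_of_visit: str
    attending_physician: str
    patient_data: dict
    anamnesis: dict
    physical_examination: dict
    tests_done: list = field(default_factory=list)   # role name: test-done

record = PatientVisitRecord(
    date_of_visit="1999-03-01",
    attending_physician="Dr. Jansen",
    patient_data={"name": "A. Smith", "sex": "F"},
    anamnesis={"main-complaint": "cough"},
    physical_examination={},
)
record.tests_done.append({"type": "blood test"})
print(len(record.tests_done))
```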
[Figure 13.7 content: the aggregate patient-visit-record (attributes date-of-visit and physician) with the parts patient-data (name, date-of-birth, sex), anamnesis (main complaint, additional symptoms, history), physical-examination, and test (cardinality 0+; e.g. faeces test, blood test).]
Figure 13.7
An example of a part-whole relation in a medical domain. The part-whole relation is visualized by adding a
diamond symbol to the “whole” side of the relation. On the “part” side the cardinality of the part-whole relation
can be indicated in the usual manner.
In many domains there is a need to express some mathematical domain theory. The stan-
dard data-modelling primitives do not support the specification of such theories very well.
Therefore, we imported into the CommonKADS knowledge-modelling language a format
for writing down equations, expressions, and complete mathematical models. The format
we use is the neutral model format (NMF).
NMF is a language for expressing mathematical models. The main objectives of NMF
are (1) to make a distinction between a model and the simulation environment in which
the model can be executed; and (2) that models should be easy to understand and express
for nonexperts. Standardization of NMF is in progress. NMF is currently used by the
American Society of Heating, Refrigerating, and Air-Conditioning Engineers.
In NMF one can express mathematical formulas and mathematical models (with pa-
rameterization). One of the ideas is that the models are stored in reusable libraries and can
thus be shared.
Most of the equation syntax of NMF will be familiar to programmers. Consult the
appendix for details on how to use the NMF syntax and how it is embedded into the Com-
monKADS knowledge-modelling language.
Rule types The notion of rule type is an important modelling technique in Com-
monKADS knowledge models. It is there that most of the real differences with traditional data models reside. It is worth taking the following guidelines into account when
using rule types:
Guideline 13-1: SPEND TIME TO FIND APPROPRIATE NAMES FOR THE RULE TYPE ITSELF AND FOR THE CONNECTION SYMBOL
Rationale: Choosing the right names can greatly enhance the understandability of the domain-knowledge specification. It is hard to overestimate the importance of this name-giving enterprise. The rule-type name (the label of the ellipse in the graphical notation) should be a name applicable to the rule as a whole. The connection symbol should support the readability of the rule: it should make a logical connection between the antecedent and the consequent.
Figure 13.8
Graphical and textual notation for constraints: rule types which model expressions about one type of concept.
The rule-type definition also allows you to specify constraints. The textual and graphical notations
are simple and straightforward. An example of a constraint definition is shown in Fig-
ure 13.8. This rule type models a set of logical formulas concerning a component. The
notation is simpler, because a connection symbol is not required.
Rule instances In this book we use an intuitive notation for writing down instances of rule types. As an example, let’s take the “requirement” rule from Chapter 10:
residence-application.applicant.household-type = single-person
residence-application.applicant.age-category = up-to-22
residence-application.applicant.income < 28000
residence-application.residence.rent < 545
INDICATES
rent-fits-income.truth-value = true;
If one takes a close look at this rule from a formal point of view, it turns out that it
contains at least two tacit assumptions:
VAR x, y: employee;
The first rule does not really convey the meaning we want to attach to it. Here we need to introduce variables to capture the intended meaning of the rule.
344 Chapter 13
Rule types and knowledge bases In most cases knowledge bases will contain instances
of rule types. You can follow the guideline that there should be a separate knowledge base
for each rule type. In general, however, there can be many-to-many relationships between
rules and knowledge bases. Rules of similar types or roles can be placed together, particularly if the rule sets are small. One should take care that rule types in the same knowledge
base have different connection symbols. Otherwise, the rule types cannot be distinguished.
Also, rules of the same type may be spread over several knowledge bases if they play dif-
ferent roles in the reasoning process. This means that the structure of the knowledge bases is much more closely bound to the structure of the reasoning process than to that of the rule types.
If a domain schema gets too large, it is good engineering practice to split it up into parts or
modules. A guideline one can use here is the size of the schema:
Guideline 13-4: IF THE SCHEMA DIAGRAM DOES NOT FIT ON A PAGE, THEN CONSIDER SPLITTING THE SCHEMA INTO MODULES
Rationale: This is an extremely pragmatic guideline. People like figures, and we
should be able to tell the full story of a schema in one figure. If this cannot be done, it
makes sense to break down the element (in this case, the domain schema) into parts.
Be careful not to cheat, e.g., by decreasing font sizes. This only has the undesirable
effect of making the figure unreadable, and makes matters worse.
Schema modularization can be achieved with the USES construct. For each domain schema we can define which other schemas are being used. There are two options:
1. A full domain schema is imported into the current schema. This has the effect of
making all definitions of the used schema also part of the using schema.
The syntax is:
DOMAIN-SCHEMA system-connections;
USES: system-components; /* The imported schema */
.....
2. Individual definitions are imported from another schema. The syntax is:
DOMAIN-SCHEMA liver-disease;
USES:
disease FROM general-medical-concepts;
finding FROM general-medical-concepts;
organ FROM general-medical-concepts;
.....
The USES construct is a first step in schema organization. Later in this chapter we will see more ways of working with multiple schemata.
One can describe a domain schema at several levels of abstraction. For example, if we
model a computer system we can describe it in terms of specific elements such as “CPU,”
“memory,” and “screen.” We can also generalize from this description, and introduce an
abstract notion such as “component.” In knowledge analysis one is often interested in
making schema generalizations, because these make the schema more general and thus
more widely applicable.
There are different ways in which one can generalize a domain schema, based on
different types of generalizations. We discuss here four types of domain schemata:
too large. However, this view might well change in the near future, as there are
numerous research projects active in this area.
In the short term, the most promising generic domain schemata are probably those
which generalize over particular artifacts, and describe general features of artifacts.
Such generic domain schemata usually define a viewpoint related to some physical
process type: flow, heat, energy, power, electricity. Such processes reappear in many
different technical domains. For example, flow processes occur in many technical
systems as well as biological systems.
3. Method-specific domain schema. The method-specific domain schema contains the
conceptualizations required by a certain method for realizing a task. It is the most
specific domain schema from the use perspective. This perspective is important,
because the way we look at knowledge is often dependent on its use. The domain
schemata in Chapter 6 are examples of such method-specific domain schemata. As you may have noticed, these schemata do not contain any domain-specific terms. For example, the assessment schema is in many respects similar to the schema for the housing application (see Chapter 10), but all domain-specific concepts have been replaced by domain-neutral terms. This makes it easier to reuse the domain-knowledge
schema in combination with the task template in a new assessment domain.
4. Task-specific domain schema. In Chapter 6 we have given only one method per task type, sometimes with slight variations. If one were to compare different method-specific domain schemata for the same task type, one would note that there is a general core of knowledge types. This “intersection” of all method-specific domain
schemata is called the task-specific domain schema. This domain schema contains
the minimal conceptualizations required to carry out a certain type of task. For ex-
ample, a study in configuration design showed that part of the domain-knowledge schema in Figure 13.9 (see Chapter 6) is actually such a task-specific domain
schema for configuration. This domain schema is shown in Figure 13.9 as a gray
area. All knowledge types outside this area are the method-specific extensions re-
quired by the propose-and-revise method.
Now, you might be wondering, what kind of domain schemata are the car-diagnosis
schema and the house-assessment schema in earlier chapters? These contain both domain-
specific terms (e.g., house, applicant) and method or task-specific terms (e.g., decision,
requirement). Such domain schemata are in fact amalgams of different types of domain
schemata. Such an amalgamation is not a bad thing in itself. The application actually re-
quires this tight coupling of domain schemata in order to be able to make a system work.
For this amalgamation of domain- and use-specific knowledge types, we use the term ap-
plication domain schema.
Using multiple schemata for knowledge sharing and reuse One program of work in
current knowledge-engineering research is to study the nature of types of schema gener-
alizations. Generalized domain schemata are called “ontologies,” a term borrowed from philosophy.
[Figure 13.9 diagram omitted. Its nodes include design element, constraint, expression, action type, calculation expression, component, and parameter; the gray area marks the task-specific ontology for the configuration task type.]
Figure 13.9
The domain-knowledge schema of the propose-and-revise method revisited. The gray part can be seen as the
“representation core” needed by every configuration task, and thus constitutes the task-specific domain schema.
The non-gray areas represent the method-specific extension required for the domain schema of the propose-and-
revise method.
Inferences are important components of knowledge models. Inferences act as the building
blocks of the reasoning process. From the start, people have been interested in getting
a standard set of these building blocks. There is a parallel here with design tasks. As
we pointed out in Chapter 6, design tasks can only be automated if the design is made up of predefined building blocks of a sufficiently large grain size. The same holds more or less for knowledge engineering itself. If we had a standard set of inferences, the knowledge-modelling problem would be a much easier task.
Unfortunately, no such standard set exists to date. Several proposals have been put
forward. A good overview and discussion of the issues involved can be found in the work
of Aben (1995). Still, when we choose names for inferences we do so carefully, trying to be
as precise as possible. In this book we have tried to keep the same intuitive interpretation
whenever an inference was used at more than one place. However, there is no formal theory
behind it.
For the moment, the best we can do is offer you a small catalog of the eight inferences
used in this book. This catalog is a structured textual description. For each inference we
briefly indicate the following characteristics:
Operation A description of the sort of input and output the inference operates on.
Example An example of the inference in some application.
Static knowledge A characterization of the domain knowledge used to make the infer-
ence.
Typical task types The types of tasks the inference typically occurs in.
Used in template The task templates in Chapter 6 in which this inference occurs.
Control behavior What is the computational behavior of the inference? This behavior can be described through the following two characteristics:
1. Does the inference always produce a solution? If the inference can fail, we can use the control primitive HAS-SOLUTION (see Chapter 5) when invoking the inference in the control structure of a task method.
2. Can the inference produce multiple solutions, given the same input? If the
answer is yes, the inference can be used in a loop, using the control primitive
NEW-SOLUTION.
Computational methods What computational methods are likely to be used when re-
alizing this inference during design and implementation?
Remarks Remarks about the inference, which could not be made under any of the pre-
vious headings.
We make no claim that the catalog is complete. Some people may also want to attach
a different meaning to an inference. The catalog is meant as a rough guideline for a novice
CommonKADS user.
Inference Abstract
Operation: Input is some data set; output is either a new given abstracted from the data set, or the input set plus an abstracted given (i.e., the updated input set). The choice between these two options is mainly a stylistic one.
Example: Data abstraction in medical diagnosis: any body temperature higher than 38.5°C is abstracted to “fever.”
Static knowledge: Abstraction rules, subtype hierarchy.
Typical task types: Abstraction occurs mainly in analytic tasks. In this book
the inference is found in the assessment task template and
is mentioned as a typical extension in diagnosis.
Used in templates: Assessment.
Control behavior: This inference typically may succeed more than once.
Make sure to add any abstraction found to the data set to
allow for chained abstraction.
Computational methods: Forward reasoning with abstraction rules, generalization in
a subtype hierarchy.
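The chained forward reasoning described above can be sketched in a few lines of Python. This is an illustrative sketch, not from the book; the rule format (a condition function plus a new key/value) and the medical thresholds are hypothetical.

```python
def abstract(data, rules):
    """Apply abstraction rules until no new givens are produced.

    data:  dict of findings
    rules: list of (condition, key, value) abstraction rules
    """
    data = dict(data)
    changed = True
    while changed:
        changed = False
        for condition, key, value in rules:
            if condition(data) and data.get(key) != value:
                # Add the abstraction to the data set so that it
                # can itself trigger further (chained) abstractions.
                data[key] = value
                changed = True
    return data

# Hypothetical medical example: temperature -> fever -> infection sign
rules = [
    (lambda d: d.get("temperature", 0) > 38.5, "fever", True),
    (lambda d: d.get("fever", False), "infection-sign", True),  # chained
]

print(abstract({"temperature": 39.2}, rules))
# {'temperature': 39.2, 'fever': True, 'infection-sign': True}
```

Note how the loop keeps running as long as new givens appear, which is exactly the "add any abstraction found to the data set" behavior the entry asks for.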
Inference Assign
Operation: This inference is concerned with a resource that is assigned
to an actor, a unit, or similar “active” objects.
Example: Assign a room to an employee.
Static knowledge: This inference uses a mix of constraints and preferences.
Typical task types: The inference is rather specific for synthetic tasks; it is
hard to think of an example of assignment in an analytic
task.
Used in templates: Assignment, scheduling.
Control behavior: This inference may fail. It also can produce more than one
solution from the same input (e.g., room assignments with
the same preferences).
Computational methods: In simple cases a rule-based approach can be chosen.
More complex problems may require the use of constraint-
satisfaction algorithms.
Remarks: In some cases the “assign” inference comprises simply the
computation of a formula. In such a case we advise you to
use the inference compute instead. The inference assign is
different from the task ASSIGNMENT. The latter is a task
that comprises a number of different inferences, including
the actual assignment (cf. the “house-assignment” exam-
ple, in which the assignment turned out to be algorithmic
and not knowledge intensive).
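As a rough illustration of the constraint-satisfaction approach mentioned under "Computational methods," the following Python sketch (not from the book; all names are invented) searches the assignment space exhaustively, rules out candidates that violate hard constraints, and keeps the most preferred surviving candidate:

```python
from itertools import permutations

def assign(resources, actors, constraints, preference):
    """Return the most preferred constraint-satisfying assignment.

    May fail (return None) if no assignment satisfies the constraints,
    matching the control behavior described in the catalog entry.
    """
    best = None
    for choice in permutations(resources, len(actors)):
        candidate = dict(zip(actors, choice))
        if not all(c(candidate) for c in constraints):
            continue  # hard constraint violated
        if best is None or preference(candidate) > preference(best):
            best = candidate
    return best

# Hypothetical office example: two employees, two rooms
rooms = ["R1", "R2"]
employees = ["ann", "bob"]
constraints = [lambda a: a["ann"] != "R2"]            # hard requirement
preference = lambda a: 1 if a["bob"] == "R2" else 0   # soft preference

print(assign(rooms, employees, constraints, preference))
# {'ann': 'R1', 'bob': 'R2'}
```

For realistic problem sizes one would of course replace the exhaustive search by a proper constraint-satisfaction algorithm, as the entry suggests.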
Inference Classify
Operation: Associate an object description with a class it belongs to.
Example: Classify a discrepancy as being “minor” or “major.”
Static knowledge: Class definitions, consisting of necessary and sufficient
features. For example, a Grey Reinet apple should have
a rusty surface (a necessary condition).
Typical task types: Although this inference is most common in analytic tasks,
it can also be used in synthetic tasks, e.g., classifying a
design to be of a certain type.
Used in templates: Monitoring.
Control behavior: classify typically produces precisely one solution (cf. its
use in the monitoring template).
Computational methods: Mostly simple pattern matching.
Remarks: When a classify inference comes up in an application, one
should always ask, is this a full knowledge-intensive task
in its own right or is it just a simple inference? One can
take this decision by looking at the domain knowledge. If
the classification is a simple deduction from class defini-
tions, then one can view it safely as an inference. This
requires typically the presence of sufficient conditions for
the classes. If this is not the case, the inference process is
more complex, and one should consider modelling this as
a separate task. Also, if the output can be a set of possible
classes, it is likely this function needs to be specified as a
task in its own right.
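The "simple deduction from class definitions" case can be sketched as follows. This is an illustrative Python fragment, not from the book; the feature sets and the apple example data are hypothetical.

```python
def classify(features, class_defs):
    """Deduce a class from necessary and sufficient feature sets.

    features:   set of observed features of the object
    class_defs: list of (class name, necessary features, sufficient features)
    """
    for name, necessary, sufficient in class_defs:
        if not necessary <= features:
            continue  # a necessary feature is missing
        if sufficient <= features:
            return name  # sufficient conditions are present
    return None  # no class can be deduced

# Hypothetical class definition: a Grey Reinet apple must have a rusty
# surface (necessary); rusty surface plus firmness is taken as sufficient.
apple_classes = [
    ("grey-reinet", {"rusty-surface"}, {"rusty-surface", "firm"}),
]

print(classify({"rusty-surface", "firm", "green"}, apple_classes))
# grey-reinet
```

If the deduction needed here were any more involved than this, the remarks above apply: model it as a task in its own right rather than as a single inference.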
Inference Compare
Operation: Input is two objects. The inference returns equal if the two
objects are the same. If this is not the case, the inference
returns either not-equal (in case of two symbols) or some
numerical value, indicating the difference.
Example: Comparison of two findings: the one predicted by the sys-
tem and the one actually observed.
Static knowledge: In simple cases, no domain knowledge is required, be-
cause the comparison is purely syntactic. In other cases,
domain knowledge may need to come into play to make
the comparison. For example, if the objects are charac-
terized by numerical values, the domain knowledge could
provide knowledge about intervals within which two val-
ues are assumed to be equal.
Typical task types: Mainly analytic tasks.
Used in templates: Diagnosis (Chapter 5), monitoring.
Control behavior: Produces precisely one solution.
Computational methods: Often requires only simple techniques.
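The operation described above, including the use of domain knowledge about equality intervals, can be sketched as follows. This is an illustrative Python fragment, not from the book; the tolerance parameter stands in for the "interval within which two values are assumed to be equal."

```python
def compare(predicted, observed, tolerance=None):
    """Return 'equal', 'not-equal', or a numerical difference."""
    if predicted == observed:
        return "equal"
    if isinstance(predicted, (int, float)) and isinstance(observed, (int, float)):
        # Domain knowledge: values within the tolerance interval
        # are considered equal.
        if tolerance is not None and abs(predicted - observed) <= tolerance:
            return "equal"
        return predicted - observed  # numerical difference
    return "not-equal"  # two differing symbols

print(compare("high", "high"))         # equal
print(compare(120, 118, tolerance=5))  # equal (within interval)
print(compare(120, 100))               # 20
```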
Inference Cover
Operation: Given some effect, derive a system state that could have
caused it.
Example: Cover complaints about a car to derive potential faults.
Static knowledge: This inference uses some sort of behavioral model of the
system being diagnosed. A causal network is the most
common candidate.
Typical task types: This inference is specific for diagnosis.
Used in templates: Diagnosis.
Control behavior: This inference produces multiple solutions for the same
input.
Computational methods: Abductive methods are used here. These can range from
simple to complex, depending on the nature of the diag-
nostic method employed.
Remarks: Cover is an example of a task-specific inference. Its use is
much more restricted than, for example, the select infer-
ence.
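A very simple abductive method over a causal network can be sketched as follows. This is an illustrative Python fragment, not from the book; the causal links for the car example are invented, and a realistic diagnostic method would be considerably more selective.

```python
def cover(effect, causal_links):
    """Given an observed effect, derive system states that could have caused it.

    causal_links: list of (cause, effect) pairs forming a causal network.
    Traverses the network backwards, following causal chains, and returns
    all candidate causes (multiple solutions for the same input).
    """
    causes = set()
    frontier = [effect]
    while frontier:
        state = frontier.pop()
        for cause, caused in causal_links:
            if caused == state and cause not in causes:
                causes.add(cause)
                frontier.append(cause)  # follow the causal chain further back
    return causes

# Hypothetical car example
links = [
    ("battery-low", "no-power"),
    ("no-fuel", "engine-does-not-start"),
    ("no-power", "engine-does-not-start"),
]

print(sorted(cover("engine-does-not-start", links)))
# ['battery-low', 'no-fuel', 'no-power']
```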
Inference Critique
Operation: Given a proposed solution, generate one or more problems
with it. The purpose is usually to find ways of improving
the solution.
Example: Critique the design of an elevator.
Static knowledge: The knowledge used by this inference is usually domain-
specific; there is hardly any general critiquing knowledge
for design. Its character tends to be heuristic and context-
dependent.
Typical task types: This inference is found in synthetic tasks.
Used in templates: Configuration design.
Control behavior: In the configuration-design template, it succeeds precisely
once. However, one can think of situations where multiple
outputs can be generated from the same design.
Computational methods: The computational methods tend to be domain-specific.
Remarks: The critique inference is an important step in the propose-
critique-modify methods for design described by Chan-
drasekaran (1990).
Inference Evaluate
Operation: Input is a set of data and a norm. Output is a truth value
indicating whether or not the data set complies with the
norm. If the evaluation always concerns the same norm,
the norm can be omitted as a dynamic input role (cf. the
scheduling template).
Inference Generate
Operation: Given some input about the system (system features, re-
quirements), provide a possible solution.
Example: Generating possible classes to which a rock sample may
belong.
Static knowledge: When used for analytic tasks: knowledge about all the
possible solutions (the solutions are enumerable for an-
alytic tasks). When used for synthetic tasks: system-
composition knowledge, e.g., plan elements and possible
ways of connecting these (in sequence, in parallel).
Typical task types: This inference can be used in all kinds of tasks. The out-
put therefore varies depending on the type of task in which
it occurs. If used in diagnosis, generate produces a fault
category that could explain the faulty behavior. If the in-
ference is used in planning, it would produce a possible
plan.
Used in templates: Classification, synthesis.
Control behavior: Can produce multiple solutions for the same input. Sometimes, this inference is defined as producing a set. In that case, the inference succeeds precisely one time, namely with the set of all possible solutions. This is a stylistic matter.
Computational methods: In analytic tasks: simple look-up. In synthetic tasks: algo-
rithm for computing all possible combinations.
Remarks: This is a generic inference that occurs in many domains.
The inference is typically associated with a “generate &
test” approach, in which there is some “blind” generation
of possible solutions to the problem. The inference cover
can be seen as a specific form of generate.
Inference Group
Operation: Input is a set; output is an aggregate object containing two or more elements of the input set.
Example: Grouping of employees for joint assignment to an office.
Static knowledge: Domain-specific knowledge about positive and negative
preferences for grouping. For example, in the office-
assignment application, criteria may act as positive or neg-
ative preferences. A combination of a smoker and a non-
smoker is a typical example of a high-priority conflict.
Such strong conflicts are usually considered to be enough
to rule out certain solutions. The preferences are often or-
dered, but the ordering scale varies.
Typical task types: Mainly synthetic tasks.
Used in templates: Assignment.
Control behavior: Can provide multiple solutions.
Computational methods: Constraint satisfaction; generate full combination space
and then use negative and positive preferences for repeated
subset filtering (cf. Schreiber (1994)).
Remarks: In earlier descriptions of inference typologies, the name
“assemble” was used for a similar inference.
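The "generate full combination space and then filter" method can be sketched as follows. This is an illustrative Python fragment, not from the book; the people and the preference sets are invented, and only pair-wise grouping is shown.

```python
from itertools import combinations

def group(elements, negative, positive, size=2):
    """Generate all candidate groups, rule out hard conflicts,
    and order the rest by positive preference.

    negative: list of sets that must NOT occur together (e.g., a
              smoker/nonsmoker pair — strong conflicts rule out solutions)
    positive: list of sets that are preferred to occur together
    """
    candidates = [set(c) for c in combinations(elements, size)]
    # Strong negative preferences rule out certain solutions entirely.
    candidates = [g for g in candidates if not any(n <= g for n in negative)]
    # Remaining candidates are ordered by the number of positive
    # preferences they satisfy (most preferred first).
    candidates.sort(key=lambda g: -sum(1 for p in positive if p <= g))
    return candidates

people = ["ann", "bob", "carol"]
negative = [{"ann", "bob"}]    # e.g., smoker + nonsmoker conflict
positive = [{"bob", "carol"}]  # e.g., members of the same project

print([sorted(g) for g in group(people, negative, positive)])
# [['bob', 'carol'], ['ann', 'carol']]
```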
Inference Match
Operation: Given a set of inputs, see whether these together lead to a
combined result.
Example: Match the norms for which values have been established
to see whether it leads to a decision.
Static knowledge: Rules that indicate whether a combination of findings or
results leads to some joint conclusion.
Typical task types: Mainly confined to analytic tasks.
Used in templates: Assessment.
Control behavior: Inference fails or succeeds a single time.
Computational methods: Forward reasoning.
Remarks: This is a difficult one. The name “match” has several dif-
ferent meanings. We have opted here for a quite specific
definition, without actually committing to one particular
task.
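Under the specific definition chosen here, match amounts to forward reasoning with rules that combine established values into a joint conclusion. The following Python sketch is illustrative only (not from the book); the norm names echo the housing example but the rule format is invented.

```python
def match(norm_values, decision_rules):
    """Combine established norm values into a joint conclusion, if any.

    decision_rules: list of (conditions dict, conclusion). The inference
    fails (returns None) or succeeds a single time, as described above.
    """
    for conditions, conclusion in decision_rules:
        if all(norm_values.get(k) == v for k, v in conditions.items()):
            return conclusion
    return None

rules = [
    ({"rent-fits-income": True, "correct-data": True}, "eligible"),
]

print(match({"rent-fits-income": True, "correct-data": True}, rules))
# eligible
```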
Inference Modify
Operation: This inference takes a system description as both input and
output. An optional input is the actual modification action
that needs to be carried out.
Example: Modifying the design of an elevator by upgrading the ma-
chine model.
Static knowledge: Knowledge about the action: one-time action or repeat-
able action, e.g., upgrading a component or increasing a
parameter value.
Typical task types: Mostly synthetic tasks.
Used in templates: Configuration design, scheduling.
Control behavior: Delivers one output.
Computational methods: Simple update.
Remarks: It is possible to use this inference in diagnosis, e.g., in case
of a reconfiguration test.
Inference Operationalize
Operation: Given some requirements for a system, transform these re-
quirements into a format which can be used in a reasoning
process.
Example: Transform requirements like “fast computer” into parameter values such as “at least a 266-MHz Pentium.”
Static knowledge: This is a tricky step in a design process. Knowledge tends
to be heuristic. The choices made here may need to be
revisited during the design.
Typical task types: Synthetic tasks.
Used in templates: Configuration design, synthesis.
Control behavior: It is preferable that this inference propose several alterna-
tive operationalizations.
Computational methods: Forward reasoning.
Remarks: Most methods for synthesis leave this step out, because it is difficult to automate. Yet, it is a crucial, if not the most crucial, step in artifact design.
Inference Propose
Operation: Generate a new element to be added to the design.
Example: Propose a hard disk model for a PC configuration.
Inference Predict
Operation: Given a description of a system, generate a prediction of
the system at some point in the future.
Example: Predict the blood pressure of a patient.
Static knowledge: Requires a model of the system behavior. This model will
be either quantitative or qualitative.
Typical task types: At the moment, mainly analytic tasks. The inference is
often used in model-based diagnosis.
Used in templates: Diagnosis.
Control behavior: Succeeds precisely once.
Computational methods: Qualitative reasoning, mathematical algorithm.
Remarks: Prediction can be a knowledge-intensive task in its own
right, meaning that it should not be viewed as an inference.
Inference Select
Operation: Select an element or a subset/list from a set/list.
Example: Select a diagnostic hypothesis from the disease differen-
tial.
Static knowledge: Often, the domain knowledge provides selection criteria.
Typical task types: Found in all tasks. Some synthetic tasks can be formu-
lated almost completely as consisting of subset selection,
namely when the design space is relatively small.
Used in templates: All templates.
Control behavior: Multiple solutions; can thus be used in a loop condition:
(while new-solution select())
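One natural way to realize the multiple-solution control behavior is to make select deliver its solutions one at a time, so that each request for a new solution plays the role of NEW-SOLUTION in the loop. The following Python sketch is illustrative, not from the book; the disease differential and the selection criterion are invented.

```python
def select(candidates, criterion):
    """Yield elements satisfying a selection criterion, one at a time.

    Each successive value plays the role of NEW-SOLUTION in a
    'while new-solution select()' loop.
    """
    for candidate in candidates:
        if criterion(candidate):
            yield candidate

# Hypothetical disease differential
differential = ["flu", "broken-leg", "cold"]

# while new-solution select() ...
for hypothesis in select(differential, lambda h: h != "broken-leg"):
    print(hypothesis)
```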
Inference Sort
Operation: Input is a set of elements. Output is a sorted list containing
the same elements.
Example: Sorting a set of valid designs based on the preferences
(e.g., cheapest first).
Static knowledge: Comparison function that decides on the relative order of
two elements.
Typical task types: This inference is most frequently encountered in synthetic
tasks, where it is used to apply preferences to a set of pos-
sible designs. It can also be used in analytic tasks, e.g.,
for ordering a set of hypotheses (e.g., by using knowledge
about the a priori likelihood of the hypothesis).
Used in templates: Synthesis.
Computational methods: Use standard sorting methods.
Control behavior: This inference succeeds precisely one time with one par-
ticular input.
Remarks: The name of this inference has a very computational fla-
vor, but it is in fact an effective way of describing certain
expert reasoning patterns. Sorting is used by experts to
structure the search space. The sorting knowledge is often
some form of search-control knowledge. Sometimes, sort-
ing is modelled as a repeated invocation of a knowledge-
intensive select inference.
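The entry above boils down to a standard sort parameterized by a domain-specific comparison criterion. The following Python sketch is illustrative, not from the book; the design data and the price criterion are invented.

```python
def sort(elements, order_key):
    """Order elements using a domain-specific comparison criterion.

    The knowledge-intensive part is the order_key function, which
    encodes the preferences (here: cheapest first). The inference
    succeeds precisely once for a particular input.
    """
    return sorted(elements, key=order_key)

# Hypothetical set of valid designs
designs = [
    {"name": "B", "price": 900},
    {"name": "A", "price": 700},
]

cheapest_first = sort(designs, lambda d: d["price"])
print([d["name"] for d in cheapest_first])
# ['A', 'B']
```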
Inference Specify
Operation: This inference takes as input an object and produces as
output a new object that in some way is associated with
the input object.
Example: Specify an observable for a hypothesis.
Inference Verify
Operation: Input is a description of a system which is being tested.
Output is a truth value, indicating whether the system has
passed the test. An optional output is the name of a viola-
tion (only if the verification failed).
Example: Verify the design of a computer.
Static knowledge: For analytic tasks: knowledge indicating consistency of a
hypothesis with a set of data. For synthetic tasks: internal
constraints and external constraints (“hard requirements”).
Typical task types: This inference can occur in any type of task. It is most
often found in methods that apply a “generate & test” ap-
proach.
Used in templates: Diagnosis, configuration design.
Control behavior: The inference succeeds precisely once.
Computational methods: Forward reasoning.
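The operation above, with its truth value and optional violation name, can be sketched as follows. This Python fragment is illustrative, not from the book; the PC example and the named constraints are invented.

```python
def verify(system, constraints):
    """Check a system description against hard constraints.

    constraints: list of (violation name, predicate) pairs.
    Returns (truth value, violation name or None); the violation
    name is the optional output mentioned in the catalog entry.
    """
    for violation_name, holds in constraints:
        if not holds(system):
            return False, violation_name
    return True, None

# Hypothetical PC configuration example
pc = {"disk-units": 4, "power-use": 320}
constraints = [
    ("enough-power", lambda s: s["power-use"] <= 400),   # internal constraint
    ("disk-limit", lambda s: s["disk-units"] <= 3),      # hard requirement
]

print(verify(pc, constraints))
# (False, 'disk-limit')
```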
the domain, and therefore the resulting templates are easier for other people in the company
or for newcomers to understand.
Initiating, developing, and maintaining such a catalog are typically activities which
can fall under the responsibility of the knowledge manager. In addition, there will often be
a need to harmonize the domain schemata used by these applications.
The default methods for task types described in Chapter 6 are, in the research literature,
often called problem-solving methods (PSMs). The main distinction between a TASK-
METHOD in the knowledge model and a PSM is the fact that a PSM description is not yet
directly linked to an application task. A task method is thus best seen as an instantiation
of a PSM for a task. In current knowledge-engineering research PSMs are an important
object of study, but in this book we do not discuss their full implications in depth (for more
information see the bibliographical notes below).
PSMs offer both advantages and disadvantages when compared with task methods.
The major advantage is that one can exploit the fact that in many task-specific methods
the same patterns reoccur. For example, in many methods an “empirical-validation” pat-
tern can be identified: some hypothesis is posed about the state of affairs in the world,
this hypothesis is subsequently tested through some data-gathering method, and then the
hypothesis is rejected or not based on the comparison between the hypothesis and actual
observations. Most of the analytic task templates in Chapter 6 contain such a pattern. A
PSM allows one to capture this pattern, without committing to task-specific jargon. A PSM
has therefore in principle a higher reusability potential.
As usual, in the advantage also lies the disadvantage. Because a PSM is so general
and does not commit to a particular task, its description tends to be abstract and difficult to
understand. Therefore, PSMs may pose a usage problem in daily knowledge-engineering
practice because people do not understand what a PSM may do. Another disadvantage is
that the grain size of a PSM is usually smaller than that of a task template. A task template
is in fact a package of methods tailored to a task type. This method-configuration process
needs to be done by the knowledge engineer, if one decides to construct the task-knowledge
specification from PSMs.
In Chapter 5 we assumed that the knowledge engineer chooses one particular method for realizing a certain task. This choice of method for a task is fixed in the specification. For some applications this approach is too rigid. This is particularly true of systems developed in a changing or varying environment. One can think of a diagnostic system for which the sort of data about the malfunctioning system are of varying quantity or grain size. In one environment the system gets detailed system data, and a model-based diagnosis
method can be used. In another environment the data are just some global indicators,
ruling out any detailed behavioral analysis. In the latter case a classification method might
be called for.
One way of modelling this situation is to allow the definition of multiple task methods
for a single task. In addition, one would need to specify a “decision” function for handling
the method choice.
However, we recommend using a less complicated solution, one which avoids the introduction of new modelling constructs.
Figure 13.10 shows a task-decomposition diagram for handling the problem mentioned
above with this work-around. The example concerns the introduction of multiple “hypoth-
esis generation” methods. The method GENERATION - STRATEGY has two subtasks and
one transfer function of the OBTAIN type. The idea is to ask an external agent (who that is, is defined in the communication model) what the grain size is of the data. Based on this
information, the system will opt for one of the two generation methods.
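The decision step of this work-around can be sketched as a simple dispatch function. The Python fragment below is illustrative, not from the book; the method names and the grain-size values are hypothetical stand-ins for the two generation methods of the example.

```python
def generation_strategy(obtain_grain_size, causal_covering, heuristic_match, data):
    """Work-around for multiple methods per task: a separate decision
    task selects one of two hypothesis-generation methods.

    obtain_grain_size stands for the transfer function of the OBTAIN
    type: it asks an external agent for the grain size of the data.
    """
    grain_size = obtain_grain_size()
    if grain_size == "detailed":
        return causal_covering(data)   # detailed system data available
    return heuristic_match(data)       # only global indicators available

# Hypothetical invocation: the external agent reports coarse data,
# so the heuristic method is chosen.
result = generation_strategy(
    lambda: "global",
    lambda d: ["model-based-hypotheses"],
    lambda d: ["heuristic-hypotheses"],
    data={},
)
print(result)
# ['heuristic-hypotheses']
```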
In Chapter 6 we saw that application tasks often consist of a number of task types. Table 6.3 lists typical task combinations. But in Chapter 5 we suggested that a knowledge
model concerns one particular knowledge-intensive task. The question therefore arises:
how can we combine several task types to define how these together solve the application
task?
We can use a similar work-around as mentioned for the problem of multiple methods:
defining the application task as a supertask of the tasks, each corresponding to one task type.
The task method of this supertask then defines how the tasks need to be combined to solve
the overall problem.
This approach works. However, you will sometimes find it is not optimal. You will dis-
cover that the reasoning process about how to combine tasks is a full knowledge-intensive
task in its own right. It is a task with a metalevel flavor. Reasoning about combining tasks
to achieve a goal is called “strategic reasoning.” The reader is referred to the literature for
references to work on strategic reasoning. Dynamic method configuration can also be seen
as strategic reasoning. This is certainly an area for further development in the near future.
[Figure 13.10 diagram omitted. It shows the task “generation strategy” decomposed into two “generate hypothesis” subtasks, realized by the methods “heuristic match” and “causal covering.”]
Figure 13.10
Example of the work-around for handling multiple methods for a task. A separate decision task is used as an “in-between.”
1. Activity diagram
2. State diagram
3. Class diagram
4. Use-case diagram
For each diagram we describe the basic elements, their notation, and for what purposes the
diagram can be used within CommonKADS. We have not striven for a full coverage of the
UML diagrams. In particular, some detailed notations for class diagrams have been left
out.
This chapter has been written as a reference text for the UML notations. For this reason
we have made the text more or less self-contained, and therefore the description of the class
diagram overlaps at some points with the domain-schema description in Chapter 5.
14.2 Activity Diagram
An activity diagram models the control flow and information flow of a procedure or process. Activity diagrams are best used if the procedure is not, or only to a limited extent, influenced by external events. This means that the control flow should be largely synchronous. If external events dominate and create asynchronous control, a state diagram is a more appropriate modelling technique.
Activity diagrams can be used at various levels of abstraction. For example, one can
use an activity diagram to model the main tasks or activities in a business process. Alter-
natively, they can be used to model an algorithm.
Activity diagrams are a useful diagramming technique in the context of CommonKADS. Two model components are most likely to benefit from this notation:
Figure 14.1
Notation for activity states and activity state transitions.
Figure 14.2
Notation for a decision.
14.2.3 Decision
State transitions are not always deterministic. It may be the case that control is transferred
to one state or another depending on the outcome of the previous state. We can model this
selection process with a DECISION. Figure 14.2 shows an example decision.
A decision has an incoming state transition and two or more outgoing state transitions. A condition (or “guard”) on an outgoing arrow defines the situations in which that branch is taken. The condition should refer to some piece of internal information.
14.2.4 Concurrency
In some cases activity states can be active in parallel. This type of concurrent activity can
be specified with the split/join notation for control. An example is shown in Figure 14.3.
The horizontal bar is used for splitting and joining control threads. In the example of
Figure 14.3 we see four activity states related to having a dinner. We see that the activities
of cooking dinner and choosing and opening an appropriate bottle of wine can be carried
out in parallel. Only when both activities have been completed successfully can we enjoy
our dinner.
Figure 14.3
Notation for concurrent activity states.
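Although the split/join bar is purely a notation, its semantics can be illustrated with a small program. The following Python sketch (our own illustration, not part of UML or CommonKADS) mimics the dinner example of Figure 14.3: two concurrent activities must both complete before the next activity state becomes active.

```python
import threading

log = []

def cook_dinner():
    log.append("dinner cooked")

def open_wine():
    log.append("wine opened")

# Split: the horizontal bar forks control into two concurrent threads.
activities = [threading.Thread(target=cook_dinner),
              threading.Thread(target=open_wine)]
for activity in activities:
    activity.start()

# Join: control proceeds only when both activities have completed.
for activity in activities:
    activity.join()

log.append("dinner eaten")  # "have dinner" can only start now
```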
Figure 14.4
Notation for swim lanes in an activity diagram. The example concerns part of a business process of a company
selling elevators.
UML Notations Used in CommonKADS 369
Figure 14.5
Notation for object flow. Object flows are attached to transitions between activity states. The flow itself is shown
as a dashed line extending from or leading to a state transition.
Figure 14.5 shows the notation for object flows. It was already included in Chapter 3, but is repeated here for convenience. This figure is a refined representation of the process modelled in Figure 14.4. We added an additional swim lane for the customer and included the major information objects involved in the process.
As we see, object flows are attached to transitions between activity states. The flow
itself is shown as a dashed line extending from or leading to a state transition. The notation
:class-name stands for an anonymous object of the specified class (for details see the
section on class diagrams). It is common in activity diagrams to add an additional status
label to an object (such as “entered” or “placed”) if the object occurs more than once in
an activity diagram. An object flow starting from a transition indicates that the object is
created as a result of the activity state. An example is the object :tender, which is created
as a result of the write tender activity. If one attaches an object input flow to a state
transition (e.g., :customer-information in Figure 14.5), this means that the transition is dependent on the existence of this object.
Figure 14.6
Sending and receiving signals in an activity diagram. Signals typically introduce the notion of external control into an activity diagram.
If the transition from one activity state to another is completely determined by the
production of a certain object, the state transition can be replaced by introducing the object
as an intermediate input-output flow between two activity states. An example of this is the
placement of the :elevator-design object between two activities in Figure 14.5.
14.2.7 Signals
Activity diagrams typically show control within a certain process or procedure, without
any limitations posed by the “outside world” (which is effectively everything outside the
scope of the activity diagram). Interaction with the external world can be included in an
activity diagram through the use of signals.
UML distinguishes two kinds of signals: sending signals and receiving signals. A sending signal is shown as a side effect of an activity-state transition. The notation used for a sending signal is a convex hexagon; for a receiving signal it is a concave hexagon (see Figure 14.6). Signals come in pairs: sending signals should have receiving counterparts. If
in the transition from one activity state to another a signal is sent or received, this is shown
through two sequential state-transition lines. The intended meaning is that there is in fact
a direct transition between the two states with the signal as an explicit side effect.
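The pairing of sending and receiving signals can be mimicked in code with a message queue. The sketch below is purely illustrative; the activity and signal names follow the archive example of Figure 14.6, but nothing here is prescribed by UML or CommonKADS.

```python
from queue import Queue

channel = Queue()  # carries signals from sender to receiver

def receive_request(request):
    # Transition with a sending signal as a side effect: the "archive"
    # signal is emitted while control continues within this diagram.
    channel.put(("archive", request))
    return "request received"

def archive_process():
    # Receiving signal: this activity proceeds only once the
    # corresponding sending signal has arrived.
    signal, request = channel.get()
    return f"{signal}d {request}"

receive_request("request-42")
result = archive_process()
```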
14.3 State Diagram
A state diagram is a technique that helps model the dynamic behavior of a system influenced by external events. The UML state-diagram notation is used in the CommonKADS communication model to specify the communication plan control. In addition, state diagrams can be used in the task model to describe task control, in particular for asynchronous tasks.
14.3.2 State
A STATE models the state a system is in over a period of time. In object-oriented analysis
one usually assumes that a state is always a state of some object class. Not all objects have
“interesting” states. A good guideline is to develop state diagrams for all object classes
with significant dynamic behavior.
During a state ACTIVITIES and ACTIONS can be performed. Activities take time,
and can be interrupted by events that cause a state transition. Actions, on the other hand,
are assumed to be instantaneous from our modelling point of view. Actions, in contrast to
activities, cannot be interrupted.
Within states we can define three types of actions:
1. entry actions, which are always carried out when a state is entered;
2. exit actions, which are done whenever the state is terminated;
3. event-based actions: some event occurs which does not trigger a state transition, but
only the execution of an action. An example is the insert(coin) event in Figure 14.9.
Figure 14.7 shows the UML notation for states: a rectangle with rounded corners. The
first compartment contains the state name. This name should typically be a verbal form
that indicates a time duration (usually ending with “-ing” such as “waiting” or “checking”).
The second compartment contains any state variables you may want to introduce; a timer is a frequently encountered state variable in state diagrams. The third compartment contains the actions and activities connected to the state. The following syntax is used:
entry / <action>
<event> / <action>
do / <activity>
exit / <action>
Figure 14.7
Notation for a state.
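As an illustration only (not a CommonKADS construct), the entry/do/exit behaviour of the TV-watching state of Figure 14.7 could be simulated as follows; the class and function names are our own:

```python
log = []

class State:
    """A state with an optional entry action, do activity, and exit action."""
    def __init__(self, name, entry=None, do=None, exit_action=None):
        self.name = name                  # verbal form, e.g. "watching"
        self.entry = entry                # instantaneous, on entering
        self.do = do                      # takes time, interruptible
        self.exit_action = exit_action    # instantaneous, on leaving

    def enter(self):
        if self.entry:
            self.entry()
        if self.do:
            self.do()

    def leave(self):
        if self.exit_action:
            self.exit_action()

watching = State("watching",
                 entry=lambda: log.append("switch on TV"),
                 do=lambda: log.append("watch"),
                 exit_action=lambda: log.append("turn off TV"))
watching.enter()
watching.leave()
```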
Figure 14.8
Notation for a state transition. The state diagram describes an airplane departing from an airport.
Over time, system objects can go from one state to another. This is modelled with a STATE TRANSITION link between states. The nature of the state transition can be described with a textual annotation of the transition. The syntax of this text string is:
event [guard] / action ^send-clause
An EVENT causes a state transition to occur. If no event is specified, the state transi-
tion occurs once the activities carried out within the state are completed. A state without
outgoing events is thus the same as an activity state in an activity diagram. Events come
from “outside the diagram,” e.g., from other objects or from external agents.
A GUARD is a condition on the transition. Only if this condition is true does the transition take place. Guards typically refer to state variables and represent different outcomes of processing performed within a state.
An ACTION is some process that always occurs when the state transition takes place. If all transitions going out of a state have the same associated action, this action can also be placed as an EXIT action within this state (see earlier). Vice versa, if all transitions coming into a state have the same associated action, the action can be placed as an ENTRY action in this state.
Finally, the SEND clause sends a message to some other object. Such a message will
be received by the other objects as an event. A send-event pair in a state diagram is the
same as a sending-receiving signal pair in an activity diagram.
An example of a state transition with a label containing all four ingredients is shown in
Figure 14.8. The transition from the state ready-for-takeoff to the state airborne occurs
when permission is received from the control tower (an event), under the condition that the
final instrument check is OK (a guard). The action takeoff is executed when the transi-
tion occurs. The transition has the side effect of sending a message to the control tower
confirming the takeoff.
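To make the four ingredients concrete, here is a hypothetical interpreter for one such transition label. The event, guard, action, and send clause correspond to the airplane example of Figure 14.8, but the function itself is our own sketch, not a CommonKADS facility.

```python
def fire(current_state, event, guard, action, send, outbox):
    """Interpret an 'event [guard] / action ^send' transition label."""
    if current_state == "ready-for-takeoff" and event == "permission-from-control-tower":
        if not guard():               # guard: final instrument check
            return current_state      # condition false: no transition
        action()                      # action executed during the transition
        outbox.append(send)           # side effect: message to the control tower
        return "airborne"
    return current_state

check, log, outbox = "OK", [], []
new_state = fire("ready-for-takeoff",
                 "permission-from-control-tower",
                 guard=lambda: check == "OK",
                 action=lambda: log.append("take-off"),
                 send="control-tower.confirm-takeoff(flightID)",
                 outbox=outbox)
```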
Similar to activity diagrams, state diagrams can contain start and stop states. The same
symbols are used in both diagrams. State diagrams model possible behavioral states of an
object of a certain class. As an example, a state diagram of a vending machine is shown in
Figure 14.9.
The initial state is the idle state. There is no stop state. The process is modelled as
a never-ending cycle. A new cycle starts when a coin is inserted. This leads to a state
in which further coins can be inserted and in which one can select an item (e.g., a drink
or a chocolate bar). Once an item is selected the system goes into a state in which the
transaction requested is checked. There are three possible outcomes of this processing
request state, which are all represented as guards. Based on the outcome the system will
proceed with dispensing the item (and possibly some change) or return to the inserting
money state. If the customer presses the cancel button, the system returns the balance.
The same happens if no action is performed by the customer during a certain time period.
Note that the states in the lower part of the diagram have no outgoing events. This means
that the state transitions take place automatically once the work to be done (see the “do”
activity) is completed.
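The guarded outcomes of the processing request state can be written down as a simple function. This illustrates only the guards of Figure 14.9; it is not an implementation of the machine.

```python
def next_state(balance, item_price, item_available):
    """Return the successor state of 'processing request' (cf. Figure 14.9)."""
    if not item_available:
        return "inserting-money"      # guard: [item not available]
    if balance < item_price:
        return "inserting-money"      # guard: [balance < item price]
    if balance > item_price:
        return "dispensing-change"    # guard: [balance > item price]
    return "dispensing-item"          # exact amount inserted
```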
UML state diagrams have facilities for defining both aggregate states and generalized states. In the first case the aggregate stands for a set of concurrent substates; in the case of a generalized state the system is in one of the possible substates.
In practice, we mostly use only concurrent states without an explicit name for the
aggregate state. For this type of concurrency one can use the same notation as used in
the activity diagram: a horizontal bar. Figure 14.10 shows an example of two concurrent
states. The diagram models a cash machine in which card and cash are ejected in parallel
(some cash machines do this in sequence).
Figure 14.9
State diagram for a vending machine.
14.4 Class Diagram
The purpose of a class diagram is to describe the static information structure of the application domain. The class diagram is in fact an extension of traditional entity-relationship modelling. The extensions reflect the increasing requirements that are placed on the expressivity of information models.
The diagram is part of the system analysis process and therefore should be phrased in
domain terminology. The analyst should take care to avoid any commitment toward design
details.
Figure 14.10
Notation for state concurrency. The diagram models a cash machine in which card and cash are ejected in parallel.
The class-diagram notation is the richest UML notation. Here we discuss only the core elements of the class-diagram notation.
The graphical notation for a domain schema in the knowledge model (see Chapter 5) is based on the UML class-diagram notations. There are three differences between a domain schema and a class diagram:
Taking these differences into account, class-diagram notations can be used freely in CommonKADS. The class diagram can also be used in the task model to describe the general information types involved in a certain task or business process.
14.4.2 Class
Classes are the central objects in class diagrams. Classes represent groups of objects in the
application domain that share a similar information structure. Classes are shown as boxes
with three compartments:
1. The top compartment contains the name of the class (in boldface type). This name should be as precise as possible. Choosing the right names is an important skill for an analyst.
2. The middle compartment contains a specification of the ATTRIBUTES of the class.
An attribute is some simple piece of information that is common to all objects be-
longing to a class. For each attribute a value set is specified, indicating the range of
possible values of the attribute.
3. The lower compartment specifies operations that can be carried out on objects of the
class. Operations may have parameters and a return type, but neither is necessary.
The ID of the object is always implicitly assumed to be a parameter, and need not be
listed explicitly. The syntax for the specification of operations is as follows:
operation-name(parameter-list): return-type
For attribute value sets, the following predefined types are available:
STRING: list of printable characters, started and ended with a ” character
NUMBER: any numeric value
INTEGER: value belongs to a subset of the integer numbers
BOOLEAN: value is either “true” or “false”
DATE: value is some calendar date
UNIVERSAL: any value is allowed
You can also define your own value set. Enumeration types are most frequent (see the {hard-cover, paperback} example for the attribute cover-type of the class library-book in Figure 14.11).
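As a sketch of how such a class declaration might carry over into software (our own mapping; the class name, the cover-type attribute, and the available() operation follow Figure 14.11, the remaining attribute names are assumed):

```python
from dataclasses import dataclass

# User-defined enumeration value set for the cover-type attribute.
COVER_TYPES = {"hard-cover", "paperback"}

@dataclass
class LibraryBook:
    title: str                  # attribute with value set STRING
    cover_type: str             # attribute with the enumeration value set
    on_loan: bool = False       # attribute with value set BOOLEAN

    def available(self) -> bool:
        # Operation with a return type and no parameters; the object
        # itself (self) is the implicit parameter mentioned in the text.
        return not self.on_loan

book = LibraryBook(title="Knowledge Engineering and Management",
                   cover_type="paperback")
```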
Figure 14.11
Notation for classes.
14.4.3 Association
Direction Names of an association may indicate a certain direction. For example, the association owned-by between a car and a person is directional: it can only be read as “car X is owned by person Y.” The married-to association, on the other hand, does not imply a direction. Directionality of an association is indicated by carets attached to the association name.
Figure 14.12
Notation for an association. The diamond notation is the general one. The diamond symbol can be omitted in
case of a binary association.
Figure 14.13
Examples of different types of cardinality in associations defined for a “student” object class.
Argument role When we specify an association, it can be useful to specify the role played by objects in the association. The married-to association provides a clear example of roles: “husband” and “wife” are roles played by the man and woman objects in this association. The roles “employer” and “employee” in the works-for association in Figure 14.15 (see further) are another clear example of roles played by arguments in an association.
Figure 14.14
Notation for an association class.
An interesting feature of associations is that they can have attributes of their own. For
example, the wedding date is an attribute of the married-to association. It turns out that
in many cases it is useful to treat associations as information objects in their own right.
Associations act in many applications as kinds of structured, complex objects, for which
one can define attributes, operations, and all the other stuff connected to object classes. In
UML this can be achieved by defining an ASSOCIATION CLASS.
The graphical notation for an association class is shown in Figure 14.14. The asso-
ciation name is placed in a class box. This class box is linked with a dashed line to the
association. The analyst is free to add attributes, methods and the like to the associa-
tion class. In fact, from a modelling point of view there is no limitation on association
classes when compared to “normal” classes. Association classes are an important abstrac-
tion mechanism in modelling applications and occur in almost every model with a certain
degree of complexity.
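In code, an association class simply becomes a class of its own whose instances refer to the two argument objects. A minimal Python sketch of the married-to example of Figure 14.14 (the class and attribute names are our own mapping):

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str

@dataclass
class Marriage:
    """Association class for 'married-to' (cf. Figure 14.14)."""
    husband: Person     # role played by the man object
    wife: Person        # role played by the woman object
    date: str           # attribute of the association itself
    city: str           # simplification of the 'registered in' association

m = Marriage(husband=Person("John"), wife=Person("Mary"),
             date="1999-05-01", city="Amsterdam")
```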
The need for an association class arises in the case where attributes cannot be placed in one of the association arguments. In the example in Figure 14.15 the attributes salary and job-title can only be placed in the person object if we assume that a person works for only one company. If this is not true, we have to create an association class and move the attributes to this class. Some analysts would actually say that the association class is the preferred modelling method in both cases, because it captures the domain structure better.
Figure 14.15
The need for an association class arises if attributes cannot be placed in one of the association arguments. In this example the attributes salary and job title can only be placed in the “person” object if we assume that a person works for just one company (as in the upper diagram). If this is not true (cf. the lower diagram), we have to create an association class, and move the attributes to this class.
Figure 14.16
Notation for generalization: an open arrowhead. In this example the association “executer-of” is also defined for all subclasses of agent.
14.4.5 Generalization
Generalization is one of the most common constructs in class diagrams. With generaliza-
tion we can build class hierarchies. We usually assume inheritance of object-class charac-
teristics (attributes, operations, and associations) from superclasses to subclasses.
Figure 14.16 shows the notation for generalization: an open triangular arrow. In this
example the association executer-of is inherited by all subclasses of agent. In Figure 14.17
you find a second example of generalization concerning paragraph types in a document.
We see here that object attributes are specified as high as possible in the class hierarchy.
An important reason for introducing generalization is parsimony: specify an object characteristic at the right level of abstraction. In many applications we find that a single hierarchy is too limited to capture the information structure of the domain adequately. Therefore, UML offers a number of advanced techniques for generalization, including multiple inheritance and the specification of multiple hierarchies along different dimensions. These issues are treated in more depth in Chapter 13.
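The inheritance of associations from superclass to subclass can be illustrated directly with a programming-language class hierarchy (an analogy, not a formal mapping). Here the executer-of association of Figure 14.16 appears as an attribute defined once on agent; the task name is hypothetical.

```python
class Agent:
    """Superclass: defines the executer-of association once."""
    def __init__(self):
        self.executes = []            # executer-of: agent -> tasks (0+)

    def execute(self, task):
        self.executes.append(task)

# Subclasses inherit attributes, operations, and associations.
class Human(Agent): pass
class ComputerProgram(Agent): pass
class Man(Human): pass                # inherits executer-of transitively

operator = Man()
operator.execute("monitor-process")   # hypothetical task name
```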
Figure 14.17
Subclasses of paragraph in a document. Attributes of the superclass are inherited by the subclasses, meaning that all paragraphs have a paragraph number. For some subclasses new attributes are introduced.
14.4.6 Aggregation
An aggregation can best be viewed as a predefined binary association in which one argument plays the role of “aggregate” or “whole,” and the other argument constitutes the “part.” Part-whole relations occur in many domains. These relations can be used to model both physical and conceptual aggregates. An example of an aggregation is shown in Figure 14.18.
The notation used is that of a line with a diamond symbol attached to the “whole”
side of the association. Like any other association, cardinality can be specified for an
aggregation relation. In the case of the audio system shown in Figure 14.18 we see that
the system should have an amplifier, may have either two or four speakers, and optionally
includes a set of headphones.
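One way to make the aggregation cardinalities of Figure 14.18 operational is to check them when the whole is assembled. The following sketch is illustrative only; the class and parameter names are ours.

```python
class AudioSystem:
    """Whole side of the aggregation; parts are passed in at creation."""
    def __init__(self, amplifier, speakers, headphones=None):
        if amplifier is None:
            raise ValueError("an audio system must have an amplifier")  # cardinality 1
        if len(speakers) not in (2, 4):
            raise ValueError("two or four speakers required")           # cardinality 2,4
        self.amplifier = amplifier
        self.speakers = list(speakers)
        self.headphones = headphones    # optional part (cardinality 0-1)

system = AudioSystem(amplifier="amp", speakers=["left", "right"])
```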
Figure 14.18
Notation for aggregation. The example concerns an old-fashioned audio system consisting of a number of differ-
ent components.
14.4.7 Object
It is sometimes useful to include objects of a certain class in class diagrams.1 The notation used is shown in Figure 14.21. The object name is bold underlined and followed by a colon
and the name of the class it belongs to. One can also define an anonymous object by just writing :class-name in the object box. The object notation is also used in other diagrams, such as the activity diagram.
1 Strictly speaking, these diagrams are called object diagrams.
Figure 14.19
Notation for composition.
Figure 14.20
Combining aggregation and generalization often provides an elegant modelling method. In this figure we can show that at least one of the four input systems for sound carriers needs to be part of the audio system.
Figure 14.21
Notation for class objects.
14.5 Use-Case Diagram
Use-case diagrams are typically used in the early phases of system development. The diagrams show what kind of services a customer and/or user can expect from the system to be developed. Use-case diagrams are therefore mainly a tool for the initial requirements-engineering phase. They describe the system functionality from the outsider’s point of view and are useful as a communication vehicle between the developer and the customer.
Use-case models are similar to viewpoint-oriented analysis models (Sommerville 1995, Chapter 5). The diagram fits well with the agent model (see Chapter 3), where it can be used as a summary of agent interactions with the prospective system. Use-case diagrams can also be used as a technique to present the proposed solution to the customer or to other stakeholders.
Figure 14.22
Notation for use cases. Left: general notation for use cases and system. Right: use cases in a library system.
A use case is a service provided by a system.2 The system can be a software system or some other system in the world. Use cases interact with actors (see further) who are not part of the system.
A use case is shown graphically as an ellipse with the name of the use case as a label.
A use case is always placed in a rectangular box denoting a particular system. The name
of the system is written in the upper part of the box. Figure 14.22 shows examples of the
notation.
14.5.3 Actor
Actors are agents (i.e., humans or computer programs) that interact with the system. An
actor makes use of services provided by the system or provides information for system
services. Actors are defined at the “class” level, meaning that an actor stands for a group
of actor objects. In a library system example actors would be “lender” and “librarian,” but
not individual lenders or librarians.
2 Our advice for people having problems with understanding the term “use case” (like ourselves): replace it in your mind with the term “service.”
Example actors:
Figure 14.23 shows the UML notation for actors. The name of the actor class is placed
at the bottom of the actor figure.
14.5.4 Relationships
The most common relationship in use-case diagrams is the INTERACTION relation be-
tween an actor and a use case. This is shown as a simple solid line between an actor and a
use case. There can be many-to-many relations between actors and use cases.
Figure 14.24 shows an example of a use-case diagram for the library system. In this
case the actors are human agents. Typically, actors should be seen as roles played in
an application setting. One human could play multiple roles, and thus take the form of
multiple actors.
In addition, the analyst can define generalization relationships. Generalization is
treated in more detail in the section on the class diagram. In use-case diagrams the same
notation is used (an open triangle arrow). A generalization can be defined between actors
as well as between use cases. The latter is the most common form. In the case study in
Section 14.7 we see an example of use-case generalization (cf. Figure 14.27).
Figure 14.24
Notation for a use-case diagram.
14.6.1 Stereotypes
Stereotypes are a built-in extendibility mechanism of UML. A stereotype allows the user
to define a new metatype. The standard metatypes of UML are the basic constructs like
CLASS and ASSOCIATION (we use a bold/uppercase notation for metatypes throughout
this document). Sometimes, we like to distinguish certain subsets of classes, activity states,
and so on. For example, if we use activity states to model a business process, we may
want to distinguish between primary and secondary processes (see, e.g., Figure 10.2). We
can achieve this by introducing stereotypes. Figure 14.25 shows the notation used for stereotypes.3 The name of a stereotype is placed above the construct involved (activity state, class, ...), enclosed in two angle brackets on each side.
Stereotypes can loosely be viewed as kinds of supertypes. They are particularly helpful
in increasing the readability of UML diagrams.
14.6.2 Annotations
The UML constructs are meant to convey the maximum amount of useful information
for a particular purpose (e.g., expressing the static information structure for class-diagram
constructs). However, sometimes we feel the need to include an additional piece of information that is not easily modelled with a predefined notational construct. For such situations UML offers a general ANNOTATION construct. The graphical form is shown in Figure 14.26. Annotations can be included in every UML diagram. They do not have any formal status.
3 We use two angle brackets to indicate stereotypes. The official notation uses guillemets, but these are not supported by many word-processing systems. Two angle brackets are a reasonable approximation.
Figure 14.25
Notation for stereotypes.
Figure 14.26
A UML annotation.
A university department offers about thirty courses for students. Most of the students follow the major program offered by the department; in addition, students from other programs take the courses (typically some 20%).
Like many other departments, the department wants to have software for course enroll-
ment, storing and retrieving exam results, and other administrative stuff related to courses
and students. The prospective system has received the name “CAS” (for course adminis-
tration support). It is the purpose of this case study to show how one can specify the data
and functions for CAS, using the UML notations discussed earlier.
We have not striven in this case study for completeness. We use the case study mainly
as a means of demonstrating the use of the four diagramming techniques.
In a use-case diagram we can express the services that are expected from the CAS sys-
tem. First, we have to identify a number of actors that interact with the system. In this
example we have limited the set of actors to student, tutor, and the administrative staff of the
department.
The system is expected to provide the following services to these actors:
Personal student data A student can change his or her own personal data, such as home
address and telephone number. The administrative staff should be able to do this as
well.
Course information Students can access information about the courses that are being
taught by the department. The tutors should be able to both access and update the
course information.
Course enrollment Students can enroll in a course. Tutors and administrative staff
should be able to look at the enrollment status of courses.
Exam results A student can get access to his or her personal exam results. The tutors and the administrative staff have access to all exam results. The exam results can only be entered by the administrative staff.
The use-case diagram in Figure 14.27 shows how these services can be represented as
use cases with which actors interact. The use cases at the top of the system box are an
example of the use of a generalization relation between use cases. The use case browse
exam results is a generalization of the browsing of results on an individual basis (this is a
service for all three actors) and the browsing of results per course (which is only allowed
for tutors and administrative staff).
In the class diagram in Figure 14.28 we have focused on the static information structure,
and not paid attention (yet) to operations. This is typical of the early phases of analysis, as
operations are often only added at a later stage.
In the class diagram we see the main object classes of this domain. The class course is a central entity. Courses have tutors. “Tutor” is defined here as a role a university staff member plays in the teaches relation. From the cardinality specification we can see that staff members may teach any number of courses (including zero), and that courses have
at least one tutor, but possibly more. Courses can also be related to other courses through the requires association; this can be used to define prerequisites for courses.
Figure 14.27
Use-case diagram for the CAS application.
Students and courses can be related through the enrollment association. We see here a typical example of an association class. An enrollment is an information object in its own right: for example, we want to store data about the enrollment date. Also, we can link an enrollment object to the exam results of a student for a course. Note that the purpose of building a class diagram is to capture an adequate view of the domain information structure. If we were to design a database schema, other considerations would come into play. That, however, is not our present concern.
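The enrollment association class could be sketched as follows. The attribute names follow Figure 14.28; the structure is simplified and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Enrollment:
    """Association class linking a student to a course (cf. Figure 14.28)."""
    student_card: str                  # identifies the student argument
    course_code: str                   # identifies the course argument
    date: str                          # attribute of the association itself
    exam_results: list = field(default_factory=list)  # linked course-exam results

e = Enrollment(student_card="S-123", course_code="KE-101", date="1998-09-01")
e.exam_results.append(8)               # result on the 0..10 scale
```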
Figure 14.28
Class diagram for the CAS application.
The diagram is somewhat simplified for presentation purposes. For example, the stu-
dent information is in reality more extensive.
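By way of illustration, the static structure of Figure 14.28 can be paraphrased in code. The following is a minimal Python sketch of our own (not part of CommonKADS); the attribute names follow the diagram, and the association class enrollment becomes an ordinary class holding references to its two ends:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Student:
    student_card_no: str        # student-card# in the diagram
    name: str
    address: str = ""
    major: str = ""

@dataclass
class Course:
    course_code: str
    year: int
    trimester: int                                   # 1-3
    tutors: list = field(default_factory=list)       # at least one staff member
    requires: list = field(default_factory=list)     # prerequisite Course objects

@dataclass
class Enrollment:
    """Association class: an information object in its own right."""
    student: Student
    course: Course
    enrollment_date: date
    exam_results: list = field(default_factory=list)  # linked course-exam objects
```

Note that the cardinality constraints of the diagram (at least one tutor, at most maximum-#students enrollments) are not enforced by this sketch; in a design model they would become explicit checks.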
14.7.4 Activities
Activity diagrams are useful when modelling procedures and processes within a system.
An example in the CAS application is the enrollment procedure. An activity diagram for this
procedure is shown in Figure 14.29.
The first activity in this diagram is concerned with entering the enrollment request.
Once this piece of work is finished, two parallel “check” activities are started, namely
(1) checking whether the student has fulfilled the course preconditions, and (2) checking
whether the student limit for the course is not exceeded. Both activities are followed by a
decision point. Only if both checks are OK (see the horizontal bar for control joining) is
the enrollment registered.

Figure 14.29
Activity diagram for course enrollment. [The initial activity submit enrollment request is
followed by two parallel check activities, check preconditions and check student limit; the
guards [preconditions not OK] and [above limit] lead to the activities inform about
prerequisites and inform about student limit, respectively.]

This enrollment is a typical process for activity modelling. The process is not governed
by events from outside. If the latter is the case, it is better to use a state diagram.
14.7.5 States
The CAS system is basically a query system, and thus from the information-processing
point of view not really dominated by external events (for the user-interface side this may
be different). Therefore, there is not much need for state diagrams in this application. We
have included one state diagram for the “update student data” procedure.
Figure 14.30
State diagram for updating personal student data. [The diagram shows a waiting state with a
state variable timer, followed by a local processing state.]
Student personal data are stored in a general university database. For this reason, the up-
date service of the CAS system should first send a change request to the university database,
and only change the local database if an acceptance answer is received. The corresponding
state diagram is shown in Figure 14.30. We have included a state variable timer in the
waiting state. If no answer is received from the university database within a certain time
limit, the update process is cancelled.
This procedure could also have been modelled with an activity diagram in which two
send/receive signals are placed. In case of a limited number of external events, the choice
between the two types of diagrams is mainly a matter of personal preference.
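The behaviour of Figure 14.30 amounts to a request-wait-commit protocol with a timeout. As a sketch (all names below are our own; the communication with the university database is abstracted into two callables), it could look like:

```python
def update_student_data(change, send_request, receive_answer, apply_locally,
                        timeout_seconds=30):
    """Send a change request to the university database first; update the
    local CAS database only if an acceptance answer is received in time.

    `send_request` and `receive_answer` stand in for the (unspecified)
    communication mechanism with the central university database.
    """
    send_request(change)                              # leave the initial state
    answer = receive_answer(timeout=timeout_seconds)  # 'waiting' state with its timer
    if answer == "accepted":
        apply_locally(change)                         # 'local processing' state
        return "updated"
    return "cancelled"                                # timeout or rejection
```

The timer state variable of the diagram corresponds to the `timeout` argument: if no answer arrives within the limit, `receive_answer` returns nothing and the update is cancelled.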
For a full UML overview the reader is referred to textbooks such as Booch et al.
(1998) and Eriksson and Penker (1998).
15
Project Management
Figure 15.1
The classic “waterfall” life cycle for software engineering. [The phases follow in a linear
sequence: strategy phase, information analysis, system design, program and test, and
operation/maintenance.]
The strategy phase, for example, ends with a document outlining the results of a fea-
sibility study, the project brief, i.e., the business goals the project is supposed to meet,
and the project plan. The information analysis phase starts from here, and delivers a re-
quirements document based on the structure, flow, and control of the information that is
to be processed by the prospective system. Then the design stage turns this into a tech-
nical system specification of the architecture and software module structure in relation to
the chosen software and hardware platform. Next, the system is programmed accordingly,
integrated, and tested, after which it is handed over to the user organization. At this point,
system development is completed, and the operational life of the system has started. From
the software engineering point of view, the work done on the system is called maintenance,
although in practice often many new requirements and new functionalities are introduced
in the course of time. The system lifecycle ends when it is phased out or decommissioned.
Thus, characteristic of the waterfall approach is its linear sequence of prefixed phases.
The result of each phase has to be accepted and signed off by the customer. In project
management terms, therefore, the end of each phase usually represents a milestone of the
project at which a go/no-go decision is taken for the next phase. If one were to carry out
a knowledge-system project according to the CommonKADS methodology, but within a
waterfall framework, it would probably be phased as follows:
1. Feasibility study (organization model, Chapter 3);
2. Impact and improvement study (task and agent models, Chapter 3);
3. Knowledge analysis (knowledge model, Chapter 5);
4. Communication interface analysis (communication model, Chapter 9);
5. System design (design model, Chapter 11);
6. Knowledge-system implementation (Chapter 12).
A strong advantage of the waterfall model is that it provides a very clear-cut handle
for managerial control. However, practice has shown that it has a number of disadvantages
as well:
The early phases are mainly document-oriented, and visible and operational results
in terms of software that can be tried out and judged by end users appear only rather
late in the lifecycle. Hence, it is sometimes difficult to see progress and to maintain
the confidence of stakeholders such as managers, clients, and prospective users in
the project.
Prefixed phases make changes during the project — owing to changed external cir-
cumstances, new insights gained from ongoing work in the project, changing needs
and requirements — very difficult and costly. So, the waterfall model is very rigid.
It is adequate for applications for which the road ahead is clear well in advance,
for example, yet another database or spreadsheet application that is based on many
similar previous experiences within the organization. It does not work for advanced
information systems or innovative projects where uncertainty or change plays a role.
Therefore, it is also not very well suited for knowledge-intensive systems projects.
Other models for the software process have therefore been developed that produce
useful results at an earlier stage and are more flexible in dealing with uncertainty and
change. One such model, called an evolutionary or prototyping approach, is shown in
Figure 15.2. It may be considered as the extreme opposite of the waterfall model: it aims to
produce practical results quickly in a number of iterative improvements based on learning
from the previous cycle. So it is highly adaptable and experimental. This is its strong
point as well as its weak spot: due to its lack of structure it is not really possible to come
up with sound project goals and plans in advance. The prototyping approach was in
vogue for quite some time in the early years of expert systems, but experience has
shown that it is hard to keep managerial control over such projects. Rather than being
planned and controlled, they emerge and unfold organically over time. But, as with the
waterfall approach, this is not what you really want for projects that aim at industry-quality
knowledge systems. Neither extreme rigidity nor extreme flexibility yields the solution for
knowledge projects.
A model for software development that attempts to combine the good features of both
the waterfall and prototyping approaches has been proposed by Barry Boehm (1988). It
stems from the area of complex large-scale information systems, as found in large
government software projects, and in defense and high-tech industries such as aerospace
and so on.

Figure 15.2
Rapid, evolutionary prototyping approach to software system development. [The cycle runs:
gather expert data, implement prototype, validate, get feedback, iterate.]

As the straight line symbolizes the waterfall approach and the circle represents the
prototyping approach, this intermediate approach is known as the spiral model, as depicted in
Figure 15.3. The four quadrants indicate recurring and structured steps of project manage-
ment activity. Through this, the aim is to achieve progress by means of subsequent cycles
that may be adapted on the basis of experience from previous cycles. Depending on the
situation, one may decide for analysis and design documents as in the waterfall model, but
also for prototyping activities if these are judged to be more illuminating or useful. In this
way, the spiral model aims at striking a balance between structured control and flexibility.
The CommonKADS approach to the management of knowledge projects has grown out of
this idea of a spiral development.
The CommonKADS life-cycle approach is based on the following principles:
Project planning concentrates first of all on products and outputs to be delivered,
rather than on activities or phases.
Project planning is done in a configurable and adaptive manner in terms of spiral-like
cycles, which are driven by a systematic assessment of the risks to the project.
Figure 15.3
The spiral model for the software life cycle. [The four quadrants are labelled REVIEW, RISK,
PLAN, and MONITOR; successive cycles (cycle-0 through cycle-3) spiral outward.]
1. Review. This is the first stage in the project management at each cycle. The current
status of the project is reviewed, and the main objectives for the upcoming cycle
are established. For the initial round, cycle-0, an overall project plan, including
a quality plan, is developed. Internal and external constraints on the project are re-
viewed and alternatives are investigated by the project manager. An important task to
close off the review stage is to ensure the commitment of the various stakeholders of
the project, which may include the managers and decision-makers involved, customers,
prospective users, and experts.
Figure 15.4
The CommonKADS configurable life cycle, based on the spiral model, for knowledge system
projects. The four quadrants indicate the stages and activities to be carried out in project
management. [Review: review progress, set cycle objectives, consider constraints, investigate
alternatives, commit. Monitor: monitor development work, prepare acceptance assessment,
evaluate cycle results.]
2. Risk. The general directions for the project as set at the review stage constitute the
input for the second project management stage: risk assessment. The obstacles that
are potentially in the way of success of the project are identified, and their signifi-
cance is assessed. How to do this is discussed in detail in the next section. Needed
counteractions are decided upon by the project manager, and fed into the subsequent
stage: planning.
3. Plan. Given a clear view obtained in the previous two stages on objectives, existing
risks, and associated actions to be undertaken, the next step is to make a detailed
plan for the next cycle. This covers the standard planning activities in project man-
agement, including establishing a work breakdown structure in terms of tasks, a
schedule of these tasks, e.g., with the help of a Gantt chart, allocating the needed
resources and personnel to these tasks, and agreeing on the acceptance criteria for
the work to be carried out.
4. Monitor. After this, the next cycle of development work commences. The progress
of this work is monitored and, where needed, steered by the project manager. The
meetings with stakeholders relating to the acceptance of the work in the current
cycle are prepared, and the produced outputs are evaluated. The results of this
evaluation are then fed into the next stage, the review part of the next cycle.

Table 15.1
Worksheet PM-1: Worksheet for carrying out project risk identification and assessment.
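The four recurring stages can be paraphrased as a simple control loop. The sketch below is our own schematic rendering (the stage functions are hypothetical placeholders for the management activities described above, not part of the CommonKADS worksheets); it shows how each cycle feeds the evaluated results of the monitor stage back into the review of the next cycle:

```python
def run_spiral(n_cycles, review, assess_risks, plan, monitor):
    """Schematic CommonKADS management loop: review -> risk -> plan -> monitor,
    where each cycle starts from the evaluated results of the previous one."""
    results = None
    for cycle in range(n_cycles):
        objectives = review(cycle, results)   # set objectives, commit stakeholders
        risks = assess_risks(objectives)      # identify and assess obstacles
        cycle_plan = plan(objectives, risks)  # tasks, schedule, acceptance criteria
        results = monitor(cycle_plan)         # steer work, evaluate cycle outputs
    return results
```

The loop makes the configurability of the approach visible: what `plan` produces in a given cycle depends on the risks identified in that cycle, not on a phase sequence fixed in advance.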
might be countered by deciding to develop and demonstrate a prototype for the graphical
user-interface. Thus, the actions following from such a risk assessment will help to shape
project planning for the next cycle.
It is important to note that the first three questions make the project lifecycle con-
figurable and scalable. It is at the discretion of the project manager to decide on which
model, model components, and/or software modules the project will work on in the next
cycle. This is based on the outcomes of previous cycles, so that the project is able to learn
from experience, and can adapt to changing needs or circumstances in a flexible way.
An example of a project management cycle is depicted in Figure 15.5. Suppose we
are at the very beginning of a knowledge project. Then, it is often necessary to gather
relevant information from the different parties involved to understand the current situation
better (review stage, upper left). A major risk may be that the problem to be solved,
as it is perceived by the various parties in the organization, is not really fully clear for
the project team (risk stage, middle upper left). For example, the project team may
come from a different part of the organization, as is often the case with separate IT
development departments. Thus, in planning the first cycle of the project (plan stage,
upper right), it is important to achieve a state whereby the problem description has been
validated by the relevant outside parties to the project (e.g., the key decision-makers at
the management level). Development activities are planned accordingly, for example, by
scheduling meetings or interviews with those decision-makers.

Table 15.2
Worksheet PM-2: How to describe a model state, as an objective to be achieved by the project.

The bottom part of Figure 15.5 indicates
the monitor stage of the project management cycle, where the actual development activities
are carried out. Here we see example components from organization and task modelling,
as treated in Chapter 3. The results of the development work are then evaluated against the
project and quality plan, after which a new project cycle begins with a new review stage
(middle left). The next round of the spiral commences.
Figure 15.5
Example project management cycle with associated development activities. The arrows indicate
the sequencing of activities. OM, organization model; TM, task model. [The cycle shown runs
from setting objectives (understand current situation), via risk identification (problem
description incomplete) and planning (target model state: OM problem description validated),
to the development activities and quality control on the organization-model components.]
Acting as a project manager requires active involvement in a wide, even disparate, variety
of tasks. To summarize what a project manager should take care of, it is helpful to give
a concise overview of the project management documentation that is typically produced
in the course of a knowledge project. We will do that in this section, along with some
supporting checklists.
CommonKADS project documentation is listed in Table 15.3. The documentation of
each cycle is outlined in Table 15.4.
Table 15.3
Overview of project documentation.

Project plan The overall project plan, produced at the initiating cycle of the project
(cycle-0), covers:
- Project deliverables.
- An overall work breakdown, covering the list of project cycles and an associated description
of project tasks and schedule. A more detailed description for each cycle is given in the cycle
documentation (see below).
- Overall resources available to the project within any established budget.
- Project organization, personnel, external dependencies, reporting relationships, training, and
experience.
- References to the contract and other relevant external or background material.

Quality plan This plan is also produced at the initiating cycle of the project (cycle-0), in
conjunction with the above overall project plan. The important elements of the quality plan are
treated separately later in this section.

Cycle documentation For each cycle, a more detailed management document is produced. Its
structure and content are discussed in the separate box immediately below.

Project close-down report At the end of a project, it is worthwhile to produce a document that
evaluates the project as a whole. Such a report will cover the lessons learned for the
organization from the project, and indicate recommendations and proposed guidelines for the
future. This may refer to different areas, e.g., follow-up work, improvements to the quality
system, intercompany and client cooperation, or staffing, resourcing, and training issues.
As we may expect, much of the documentation applies generally to any project. The
cycle reports constitute the project management documentation that is most specific to
knowledge projects, as it follows from the CommonKADS life-cycle model. But note that
similar documentation will occur in other areas of information-system development, where
a spiral approach is employed as the model for project management.
The quality plan, as indicated above, is an important part of any project plan as it
is developed in the initial cycle (cycle-0). Quality attributes relevant to knowledge sys-
tem projects are presented in Figure 15.6. The lower part is representative of information
systems in general. The upper two branches of the tree are characteristic of knowledge-
oriented projects. The branch indicated as knowledge capture refers to the quality features
of the activities of knowledge acquisition, modelling and validation as carried out by the
knowledge engineers in the project team. The knowledge usability branch denotes the qual-
ity features of the knowledge as it will be embedded in the prospective system. Thus, this
branch represents the view of future users and beneficiaries of the knowledge, which often
are mainly outside the project.
For the most part, therefore, the quality plan for knowledge projects has the same
Figure 15.6
Tree of quality features. [The tree reads as follows:]

Quality Feature
- Knowledge Capture: adequacy, structuredness, validity, coverage, testability
- Knowledge Usability: effectiveness, completeness, reliability, certainty, accessibility,
transferability
- Functionality: suitability, interoperability, accuracy, compliance, security
- Reliability: maturity, fault tolerance, recoverability
- Usability: understandability, learnability, operability
- Efficiency: time behaviour, resource behaviour
- Maintainability: analyzability, changeability, stability, testability
- Portability: adaptability, installability, conformance, replaceability
Table 15.4
Overview of project cycle documentation.

Review
– Position and purpose of the cycle within the overall project plan
– Summary of the outcome of the previous cycle, defining the starting point of the current cycle
– Cycle objectives and outline plan
– Constraints, considered alternatives, choices made for the cycle

Risk
– List and explanation of identified risks
– Risk assessment according to the worksheet PM-1 given in Table 15.1
– Resulting conclusions for the cycle plan and development approach

Plan
– Cycle plan, covering task breakdown, resource allocation, and cycle outputs, accounting for
the risk assessment and detailing the overall project plan
– The cycle outputs are based on the concept and definition of CommonKADS model states,
according to the worksheet PM-2 given in Table 15.2
– A description of the (agreed) acceptance criteria, on the basis of which the planned cycle
outputs will be evaluated

Monitor
– Periodic progress reports, as standardly required by the organization
– Records of acceptance assessment meetings evaluating the cycle outputs
– Concluding review of the actual results measured against the expected project progress, as
an input to the next cycle
structure and covers the same topics as for other information systems. Table 15.5
provides a checklist of the main elements in a quality plan.
A key element in the contract and the project plan will always be the set of deliverables
that the project is required to produce. A default list is given in Table 15.6.
Note that this standard list is only to be used as a general guideline. Depending on
the size and nature of the project, changes in the list will have to be made by the project
manager. For example, in exploratory projects it may be that only the first two deliverables
are needed. In small projects it might be useful to combine deliverables (for example, on
knowledge modelling, communication, design, and test), while in large projects it may be
wise to split up deliverables (for example, split up the knowledge-model document into
parts regarding task structure, domain knowledge, and problem-solving methods). This
possibility of breaking up or combining deliverables is precisely the idea behind a spiral
development approach. The decision on this rests with the project manager, in agreement
of course with the outside decision-makers that are involved. This is an important element
in the configurability and scalability of the CommonKADS lifecycle.
Table 15.5
Topics covered in a quality plan.
Table 15.6
Default list of project deliverables.
Table 15.7
A conventional requirements document and how it maps onto the CommonKADS model suite.
Data that are available suggest that the economy of knowledge projects is similar to that
of other advanced complex information systems. Typical, but rough, indicators are given
in Table 15.8. The more knowledge-intensive an application is, the more effort has to be
spent, relatively, on knowledge acquisition, modelling, and validation. This is analogous to
information systems in general, where a global rule is that information analysis and design
take a greater portion of the total effort with increasing complexity of the application.
Table 15.8
Rough distribution of efforts spent in the various activities in a typical knowledge system project.
Figure 15.7
Schematic diagram of a nuclear reactor. The type shown here is a so-called pressurized water
reactor. [The diagram shows the steam generator, reactor vessel, core, pumps, and neutron
detectors.]
that noise analysis experts are usually involved. Therefore, the idea sparking off the project
was that a knowledge system might be a useful tool in enhancing existing mathematical
software by supporting the diagnostic interpretation of the system condition. This would
make noise analysis more widely applicable. The project brief was to investigate to what
extent this was indeed the case.
In the initiating cycle-0 of the project, the broad overall project goals and approach were
worked out and agreed with the specialist group. In the review stage of the first cycle, the
central objective defined was to get a detailed understanding of the state of the art and how
a knowledge system can advance this. Next, a risk analysis was carried out. It revealed
the following main risks (see Table 15.9): (1) insufficient acquaintance with the domain
by the knowledge engineer; (2) complexity of the noise analysis task unknown; (3) limited
Figure 15.8
An example of a noise spectrum of a nuclear reactor, as measured by the neutron detectors.
[The plot shows the spectral density (nAPSD, on a logarithmic scale from 1.0E-10 to 1.0E-04)
against frequency f (Hz), from 0 to about 31 Hz.]
availability of the expert. The first and third risks are very common in knowledge-system
projects. The second risk, task complexity, resulted from inspection of early knowledge
elicitation data. These data appeared ambiguous, in the sense that the noise interpretation
task could be quite simple, like a classification task. Alternatively, it might be extremely
complex, as a form of heavy mathematical model-based reasoning, or it might be of
intermediate complexity, as in assessment-type tasks. Different parts of the elicitation data
could be interpreted to support any of these positions.
As a result of this risk assessment, in the planning stage a cycle plan was developed
that incorporated the risk countermeasures listed in Table 15.9. Outputs of the cycle
are in terms of CommonKADS model components and their states. The resulting plan is
shown in the form of a Gantt chart in Figure 15.9, and in summary worksheet form in
Table 15.10. The following activities were defined in the cycle-1 plan:
KM-a As a preliminary part of the domain layer of the knowledge model, make a
Table 15.9
Risk analysis: worksheet PM-1 for the nuclear reactor noise analysis and interpretation system.

Figure 15.9
Gantt chart of the plan for the first cycle in the noise-analysis knowledge project. The numbers
denote the estimated hours of effort for an activity: OM-a (40), OM-b (8), TM-a (8), TM-c (8),
KM-a (24), TM-b (60).
Table 15.10
Cycle 1 planning: worksheet PM-2 for the nuclear reactor noise analysis and interpretation system.
In the subsequent develop/monitor stage, activities were carried out following the cy-
cle plan. Knowledge acquisition was done by means of an array of different methods,
including open and structured interviews, think-aloud protocols by the expert, consulting
other specialists, collecting and studying technical domain literature, and on-the-job train-
ing in the expert group by the knowledge engineer, actually processing measured real-time
data from a nuclear-power station. This approach was indeed successful in counteracting the
risks mentioned in Table 15.9. A highly interesting conclusion was that reactor-noise inter-
pretation is very close to an assessment type of task. So it could be concluded that, given
the fact that the task is not overly complex for a knowledge system, further work on the
project was warranted. Secondly, it was concluded that further work could be based on the
reusable task template discussed in Chapter 6. This indeed turned out to be the case, and it
made things significantly easier later on.
Even with the flexible but sound planning due to the spiral project management ap-
proach, project monitoring is needed during development (i.e., the fourth quadrant of the
spiral) because unexpected things might happen along the road. In the noise-analysis
project, although the activities did follow the cycle plan, the total effort (indicated in Fig-
ure 15.9) appeared to be underestimated. This was mainly due to the time it took to consult
other experts from outside organizations. The benefit of doing this was, however, that a full
second case study could be carried out, based on another nuclear reactor of a different type
and situated at a different location. The results were a clear validation of the conclusion
that reactor noise interpretation is an assessment type of task. Later on, it turned out to be
possible to build a single generic task-inference model covering both cases, and based on
the reusable assessment task template.
Another interesting experience was that initially the expectations of expert and knowl-
edge engineer differed more than was anticipated, although they had formally agreed on
project brief and approach beforehand. This became visible in the transcripts of an inter-
view dialogue such as the following:
In response, the expert initially tended to find such questions not very relevant for
developing a knowledge system, and came up with counterquestions such as:
Expert: In what language are you going to implement the system? On a VAX
or a PC?
This shows — and this is a very general experience, especially in open-ended projects as
many knowledge projects are — that expectation management with respect to the various
project parties and stakeholders is a crucial activity.
The first project cycle concentrated on task-domain content and complexity, as the main
risks were perceived to be related to these aspects. Now that these risks were seen to
be quite well under control, the second cycle of the noise analysis project focused on
the economic cost-benefit aspects. The reason was that, given the technical feasibility
established in the first cycle, the main risk was now considered to lie in the danger that the
initial estimates of the economic feasibility might be overly crude. Thus, the cycle-2 plan
aimed to cater for this by two activities: OM-c, detailing the organization model especially
with respect to its problems/opportunities and resources components (see Chapter 3), and
TM-d, comparing the envisaged system task model with the capabilities and associated
Table 15.11
Cycle 2 planning: worksheet PM-2 for the nuclear reactor noise analysis and interpretation system.
costs of existing commercial systems on the market (see Table 15.11 and Figure 15.10,
left). This resulted in a more detailed insight into added value vs. cost of the prospective
noise knowledge system.
The results of the second cycle were not unequivocal. A market study done by an
external company indicated the potential for significant savings on a worldwide basis. It
was also clear, however, that national interests and political issues related to nuclear energy
were a complicating factor, difficult to quantify in financial terms. Visits of the knowledge
engineer to potential end users revealed some, not unexpected, reluctance to change ex-
isting work procedures and habits, as well as some, again not unexpected, differences in
attitudes between engineers on the work floor and their managers. The comparison with
existing commercial systems indicated a quite clear ceiling cost of the knowledge system
of some tens of thousands of dollars (a figure which was estimated to be achievable). Fur-
thermore, gradually it became clear that there was some commercial potential in industrial
sectors other than nuclear energy, such as the offshore business. Overall, a moderate further
investment in noise knowledge systems was considered to be justified.
Next, the third project cycle again focused more on domain-knowledge content: the
main risk now seen was whether the delicate and rather technical aspects of this domain
could indeed be adequately converted into forms and rules suitable for computer treatment.
Accordingly, the cycle-3 plan concentrated on filling in the various components of the
knowledge and design models, as seen in Figure 15.10 (right). Here, the activities defined
were KM-b: describe the assessment task instances in both cases; KM-c: describe the
related inference structures; KM-d/e: describe/complete the domain models in the domain
layer of the knowledge model; DM: identify ensuing architecture, platform, and application
design decisions (cf. Chapter 11).
These activities involved some significant technical effort, but did not lead to any
surprises. As the focus was on domain-knowledge content, regular contact with
the noise-analysis expert group was maintained to ensure the quality of the work. From the
Figure 15.10
Gantt charts of the second (left) and third (right) cycles of the noise-analysis project planning.
For an explanation of the activities, see the text. [Estimated hours of effort: cycle-2: OM-c
(40), TM-d (40); cycle-3: KM-b (20), KM-c (20), KM-d (20), KM-e (20), DM (10).]
third cycle onward, the project lifecycle became rather predictable, and could increasingly
be managed in the waterfall form discussed earlier in this chapter. For the implementation
of the noise knowledge system, it appeared that any standard rule-based software
architecture and environment was suitable, with a characteristic size of several dozens of
rules for a single application.
The project case study described above is, we believe, quite typical of many knowledge
projects. Especially in the initial stages, the project manager has to deal with a lot of un-
known or uncertain factors. Here, not even the task type and complexity of the application
were really clear in advance. In fact, it was taken into account that the review of the first
cycle could well lead to a recommendation to stop the project, should the task appear too
complex for an information system.
These elements of uncertainty present a compelling case for a flexible and configurable
project management approach, such as the proposed CommonKADS version of the risk-based
spiral model. Even a superficial look at the case study reveals that it leads to a clearly
nonwaterfall project. In knowledge projects, iterative elements must be built in by design,
often in the course of the learning process itself. Further testimony to this is to imagine
what the project plan would have looked like if outcomes of a previous cycle had been
slightly different (for example, more or less complexity of the task, more or less optimistic
economic feasibility estimates, etc.).
The case study also emphasizes the importance of analyzing the business and organizational aspects of knowledge-system development and introduction. These aspects are often underestimated by computer scientists and software engineers, who tend to have a bias toward the content-related technical aspects. The danger is then poor handling of what we have called the expectation management of end users and customers. Managers, marketers, and people with a background in business administration are often more sensitive to these aspects. For them, a common pitfall is a limited appreciation of the technical aspects of a knowledge project. As hinted at in the case study, reasonable technical competence is needed to make adequate economic and marketing forecasts, at least in a second, more refined round of the project spiral.
Sometimes there are unanticipated positive surprises. Reactor noise analysis and housing application or credit-card fraud assessment are obviously very disparate domains. Nevertheless, it turned out that the assessment task template, originally developed in financial and policy applications, could be reused in this highly technical noise-analysis domain. Such a finding is gratifying: something a project team can hope for, strive for, but never plan for. It is a showcase for the unexpectedly wide range and potential of the generic and reusable models we have argued for at several places in this book. It also points to a personal quality of good knowledge project managers: being able to surf good project waves even when you did not expect them, as if you had expected them.
1. One of the tedious aspects of being a project manager is that you have to balance
the different, and often conflicting, interests of outside stakeholders such as clients,
users, experts, and department managers. This is extremely time-consuming and
slows down the technical progress of your project. So the best strategy is not to
waste too much time on these stakeholders. Since they usually are ignorant about
knowledge systems, they are not of much help to you anyway.
2. Cautious project managers live by so-called 80-20 rules. For example, 80% of the
system functionality can often be delivered with only 20% of the project budget,
while the final 20% of functionality consumes the remaining 80% of the budget.
Such rules are used to argue in favor of modest improvements in the degree of au-
tomation. You, on the other hand, will of course make the real difference. So, go for
the full 100%. If you cannot think big, you will never act big.
3. Knowledge projects are often viewed by outsiders as innovative, and sometimes as complex and risky. This may lead to some resistance. To overcome this, you as a project manager have to sell the project well. Do this by raising high ambitions and expectations among all stakeholders right from the start. In this era of business process redesign, one must go for quantum leaps of improvement, not limited steps. After all, as a project manager you will be remembered only for a big success, not for a number of small ones. (The drawback of this strategy is that you may also be remembered for a big mistake.)
4. The power, image, and position of a project manager within an organization in practice depend on the budget he controls. This is why a spiral approach is not adequate: the danger is that you get your budget only in portions, cycle by cycle, and that each time you have to make a case for it based on results. To improve your status in the organization, you are better off going for one big budget right at the start, based on big expectations.
5. We are happy to include some more real-life recipes for disaster from you ... Of course, the above “guidelines” are meant ironically. We hope this helps to get the message across about careful project management: situations like those indicated here are not just Dilbert-like cartoons; regrettably, they really do happen in projects.
Aben, M. (1995). Formal Methods in Knowledge Engineering. Ph.D. thesis, University of Amsterdam, Faculty
of Psychology.
Akkermans, J., Gustavsson, R., and Ygge, F. (1998). An integrated structured analysis approach to intelligent
agent communication. In Cuena, J., editor, Proceedings IFIP 1998 World Computer Congress, IT&KNOWS
Conference, London. Chapman & Hall.
Akkermans, J., Ygge, F., and Gustavsson, R. (1996). Homebots: Intelligent decentralized services for energy
management. In Schreinemakers, J., editor, Knowledge Management – Organization, Competence and Method-
ology, pp. 128–142. Würzburg, Germany, Ergon Verlag.
Angele, J., Fensel, D., Landes, D., and Studer, R. (1998). Developing knowledge-based systems with MIKE.
Journal of Automated Software Engineering.
Argyris, C. (1993). Knowledge for Action. San Francisco, Jossey-Bass.
Ben-Natan, R. (1995). CORBA: A Guide to Common Object Request Broker Architecture. New York, McGraw-
Hill.
Benjamins, V. R. (1993). Problem Solving Methods for Diagnosis. Ph.D. thesis, University of Amsterdam,
Amsterdam.
Benjamins, V. R. and Fensel, D. (1998). Editorial. International Journal of Human-Computer Studies, 49(5).
Benus, B. and de Hoog, R. (1994). Knowledge management with CommonKADS models. In Liebowitz, J.,
editor, Moving Towards Expert Systems Globally in the 21st Century, pp. 193–200.
Boehm, B. (1981). Software Engineering Economics. Englewood Cliffs, NJ, Prentice-Hall.
Boehm, B. (1988). A spiral model of software development and enhancement. Computer, pp. 61–72.
Booch, G. (1994). Object-Oriented Analysis and Design with Applications. Redwood City, CA, Benjamin
Cummings.
Booch, G., Rumbaugh, J., and Jacobson, I. (1998). The Unified Modelling Language User Guide. Reading, MA,
Addison-Wesley.
Bradshaw, J. (1997). Software Agents. Menlo Park, CA, MIT Press.
Breuker, J. A. and Van de Velde, W., editors (1994). The CommonKADS Library for Expertise Modelling.
Amsterdam, IOS Press.
Breuker, J. A., Wielinga, B. J., van Someren, M., de Hoog, R., Schreiber, A. T., de Greef, P., Bredeweg, B.,
Wielemaker, J., Billault, J. P., Davoodi, M., and Hayward, S. A. (1987). Model Driven Knowledge Acquisition:
Interpretation Models. ESPRIT Project P1098 Deliverable D1 (task A1), University of Amsterdam and STL Ltd.
Chandrasekaran, B. (1988). Generic tasks as building blocks for knowledge-based systems: The diagnosis and
routine design examples. The Knowledge Engineering Review, 3(3):183–210.
Chandrasekaran, B. (1990). Design problem solving: A task analysis. AI Magazine, 11:59–71.
Chandrasekaran, B. and Johnson, T. R. (1993). Generic tasks and task structures: History, critique and new
directions,. In David, J. M., Krivine, J. P., and Simmons, R., editors, Second Generation Expert Systems. Berlin,
Springer Verlag.
Checkland, P. and Scholes, J. (1990). Soft Systems Methodology in Action. Chichester, UK, Wiley.
Chi, M. T. H., Glaser, R., and Farr, M. (1988). The Nature of Expertise. Hillsdale, NJ, Erlbaum.
Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27:289–350.
Davenport, T. and Prusak, L. (1998). Working Knowledge. Boston, Harvard Business School Press.
de Hoog, R., Benus, B., Vogler, M., and Metselaar, C. (1996). The CommonKADS organization model: content,
usage and computer support. Expert Systems With Applications, 11(1):29–40.
Drucker, P. (1993). Post-Capitalist Society. Oxford, UK, Butterworth-Heinemann.
D’Souza, D. F. and Wills, A. C. (1998). Objects, Components and Frameworks with UML: The Catalysis Ap-
proach. Reading, MA, Addison-Wesley.
Edvinsson, L. and Malone, M. S. (1997). Intellectual capital. New York, Harper Business.
Ericsson, K. A. and Simon, H. A. (1993). Protocol Analysis: Verbal Reports as Data, revised edition. Cambridge,
MA, MIT Press.
Eriksson, H.-E. and Penker, M. (1998). UML Toolkit. New York, Wiley.
Feltovich, P., Ford, K., and Hoffman, R. (1997). Expertise in Context – Human and Machine. Cambridge, MA,
MIT Press.
Fensel, D. and van Harmelen, F. (1994). A comparison of languages which operationalise and formalise KADS
models of expertise. The Knowledge Engineering Review, 9:105–146.
Fletcher, S. (1997). Analysing Competence – Tools and Techniques for Analyzing Jobs, Roles and Functions.
London, Kogan-Page.
Gaines, B. R. (1997). Using explicit ontologies in KBS development. International Journal of Human-Computer
Studies, 45(2).
Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-
Oriented Software. Reading, MA, Addison Wesley.
Goldberg, A. (1990). Information models, views, and controllers. Dr. Dobbs Journal, July:54–61.
Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5:199–
220.
Gruber, T. R. (1994). Towards principles for the design of ontologies used for knowledge sharing. In Guarino, N.
and Poli, R., editors, Formal Ontology in Conceptual Analysis and Knowledge Representation. Boston, Kluwer.
Guarino, N. (1995). Formal ontology in the information technology. International Journal of Human-Computer
Studies, 43(5/6).
Haddadi, A. (1995). Communication and Cooperation in Agent Systems. Berlin, Springer-Verlag.
Hall, E. (1998). Managing Risk – Methods for Software Systems Development. Reading, MA, Addison-Wesley.
Harrison, M. (1994). Diagnosing Organizations — Methods, Models and Processes. Thousand Oaks, CA, Sage
Publications.
Hayes-Roth, F., Waterman, D. A., and Lenat, D. B. (1983). Building Expert Systems. New York, Addison-Wesley.
Hori, M. (1998). Scheduling knowledge model in CommonKADS. Technical Report Research Report RT0233,
Tokyo Research Laboratory, IBM Japan.
Hori, M., Nakamura, Y., Satoh, H., Maruyama, K., Hama, T., Honda, Takenaka, T., and Sekine, F. (1995).
Knowledge-level analysis for eliciting composable scheduling knowledge. Artificial Intelligence in Engineering,
9(4):253–264.
Hori, M. and Yoshida, T. (1998). Domain-oriented library of scheduling methods: Design principle and real-life
application. International Journal of Human-Computer Studies, 49(5):601–626.
Jacobson, I., Christerson, M., Jonsson, P., and Övergaard, G. (1992). Object-Oriented Software Engineering.
Reading, MA, Addison-Wesley.
Johansson, H., McHugh, P., Pendlebury, A., and Wheeler III, W. (1993). Business Process Reengineering. New
York, Wiley.
Kelly, G. A. (1955). The psychology of personal constructs. New York, Norton.
Kirwan, B. and Ainsworth, L. (1992). A Guide to Task Analysis. London, Taylor & Francis.
Klinker, G., Bhola, C., Dallemagne, G., Marques, D., and McDermott, J. (1991). Usable and reusable program-
ming constructs. Knowledge Acquisition, 3:117–136.
Linster, M. (1994). Sisyphus’91/92: Models of problem solving. International Journal of Human Computer
Studies, 40(3).
Marcus, S., editor (1988). Automatic Knowledge Acquisition for Expert Systems. Boston, Kluwer.
Martin, B., Subramanian, G., and Yaverbaum, G. (1996). Benefits from expert systems: An exploratory investi-
gation. Expert Systems With Applications, 11(1):53–58.
Martin, J. (1990). Information Engineering. Englewood Cliffs, NJ, Prentice-Hall. Three volumes. See especially
Vol. 2: Planning and Analysis.
McGraw, K. L. and Harrison-Briggs, K. (1989). Knowledge Acquisition: Principles and Guidelines. Prentice-
Hall International.
Meyer, M. A. and Booker, J. M. (1991). Eliciting and Analyzing Expert Judgement: A Practical Guide, Vol. 5
of Knowledge-Based Systems. London, Academic Press.
Mintzberg, H. and Quinn, J. (1992). The Strategy Process – Concepts and Contexts. Englewood Cliffs, NJ,
Prentice-Hall.
Motta, E., Stutt, A., Zdrahal, Z., O’Hara, K., and Shadbolt, N. (1996). Solving VT in VITAL: a study in model
construction and reuse. International Journal of Human-Computer Studies, 44(3/4):333–372.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18:87–127.
Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford, UK, Oxford University Press.
Parnas, D. L. and Clements, P. C. (1986). A rational design process: How and why to fake it. IEEE Transactions
on Software Engineering, 12:251–257.
Peratec (1994). Total Quality Management. London, UK, Chapman & Hall.
Porter, M. (1985). Competitive Advantage. New York, Free Press.
Post, W., Koster, R. W., Sramek, M., Schreiber, A. T., Zocca, V., and de Vries, B. (1996). FreeCall: A system for
emergency-call handling support. Methods of Information in Medicine, 35(3):242–255.
Post, W., Wielinga, B., de Hoog, R., and Schreiber, A. (1997). Organizational modeling in CommonKADS: The emergency medical service. IEEE Intelligent Systems, 12(6):46–52.
Puppe, F. (1990). Problemlösungsmethoden in Expertensystemen. Studienreihe Informatik. Berlin, Springer-
Verlag.
Quinn, J. (1992). Intelligent Enterprise. New York, Free Press.
Ricketts, I. (1998). Managing your Software Project. London, Springer-Verlag.
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., and Lorensen, W. (1991). Object-Oriented Modelling and
Design. Englewood Cliffs, NJ, Prentice Hall.
Schein, E. (1992). Organizational Culture and Leadership. San Francisco, Jossey-Bass.
Schreiber, A. T. (1994). Applying KADS to the office assignment domain. International Journal of Human-
Computer Studies, 40(2):349–377.
Schreiber, A. T. and Birmingham, W. P. (1996). The Sisyphus-VT initiative. International Journal of Human-
Computer Studies, 43(3/4).
Scott-Morgan, P. (1994). The Unwritten Rules of the Game. New York, McGraw-Hill.
Shadbolt, N. R. and Burton, A. M. (1989). Empirical studies in knowledge elicitation. SIGART Special Issue on
Knowledge Acquisition, ACM.
Shaw, M. L. G. and Gaines, B. R. (1987). An interactive knowledge elicitation technique using personal construct
technology. In Kidd, A. L., editor, Knowledge Acquisition for Expert Systems: A Practical Handbook, New York.
Plenum Press.
Sommerville, I. (1995). Software Engineering. Harlow, UK, Addison-Wesley.
Sommerville, I. and Sawyer, P. (1997). Requirements Engineering – A Good Practice Guide. Chichester, UK,
Wiley.
Spencer, L. M. and Spencer, S. M. (1993). Competence at Work. New York, Wiley.
Steels, L. (1990). Components of expertise. AI Magazine, Summer.
Steels, L. (1993). The componential framework and its role in reusability. In David, J.-M., Krivine, J.-P., and
Simmons, R., editors, Second Generation Expert Systems, pp. 273–298. Berlin, Springer-Verlag.
Stefik, M. (1993). Introduction to Knowledge Systems. Los Altos, CA. Morgan Kaufmann.
Stewart, T. (1997). Intellectual Capital – The New Wealth of Organizations. London, Nicholas Brealey.
Studer, R., Benjamins, R., and Fensel, D. (1998). Knowledge engineering: Principles and methods. Data &
Knowledge Engineering, 25:161–198.
Sveiby, K. E. (1997). The New Organizational Wealth: Managing and Measuring Knowledge Based Assets.
Berrett-Koehler.
ten Teije, A., van Harmelen, F., Schreiber, A. T., and Wielinga, B. J. (1998). Construction of problem-solving methods as parametric design. International Journal of Human-Computer Studies, 49:363–389.
Tissen, R., Andriessen, D., and Deprez, F. (1998). Value-Based Knowledge Management. Amsterdam, Addison-
Wesley.
Tu, S. W., Eriksson, H., Gennari, J. H., Shahar, Y., and Musen, M. A. (1995). Ontology-based configuration of problem-solving methods and generation of knowledge acquisition tools: The application of PROTÉGÉ-II to protocol-based decision support. Artificial Intelligence in Medicine, 7(5).
van der Spek, R. and de Hoog, R. (1994). A framework for a knowledge management methodology. In Wiig, K.,
editor, Knowledge Management Methods. Practical Approaches to Managing Knowledge, pp. 379–393. Arling-
ton, TX, Schema Press.
van der Spek, R. and Spijkervet, A. (1994). Knowledge Management: Dealing Intelligently with Knowledge. Technical report, CIBIT, Utrecht, the Netherlands.
van Harmelen, F. (1998). Applying rule-base anomalies to KADS inference structures. Decision Support Sys-
tems, 21(4):271–280.
van Harmelen, F., Wielinga, B., Bredeweg, B., Schreiber, G., Karbach, W., Reinders, M., Voß, A., Akkermans, H., Bartsch-Spörl, B., and Vinkhuyzen, E. (1992). Knowledge-level reflection. In Pape, B. L. and Steels, L.,
editors, Enhancing the Knowledge Engineering Process – Contributions from ESPRIT, pp. 175–204. Amsterdam,
Elsevier Science.
van Someren, M. W., Barnard, Y., and Sandberg, J. A. C. (1993). The Think-Aloud Method. London, Academic
Press.
Vanwelkenhuysen, J. and Rademakers, P. (1990). Mapping knowledge-level analysis onto a computational frame-
work. In Aiello, L., editor, Proceedings ECAI–90, pp. 681–686, London. Pitman.
Waern, A., Höök, K., Gustavsson, R., and Holm, P. (1993). The CommonKADS Communication Model. Es-
prit Project P5248 Deliverable KADS-II/M3/SICS/TR/003, Swedish Institute of Computer Science, Stockholm.
Available from the CommonKADS Website.
Watson, G. (1994). Business Systems Engineering. New York, Wiley.
Weggeman, M. (1996). Knowledge management: The modus operandi for a learning organization. In Schreine-
makers, J., editor, Knowledge Management — Organization, Competence and Methodology, pp. 175–187.
Würzburg, Germany, Ergon Verlag.
Wielemaker, J. (1994). SWI-Prolog 1.9: Reference Manual. University of Amsterdam, Social Science Informat-
ics, Amsterdam. For information see: www.swi.psy.uva.nl/usr/jan/SWI-Prolog.html.
Wielinga, B., Sandberg, J. A. C., and Schreiber, A. T. (1997). Methods and techniques for knowledge management: What has knowledge engineering to offer? Expert Systems With Applications, 13(1):73–84.
Wielinga, B. J. and Schreiber, A. T. (1997). Configuration design problem solving. IEEE Expert, 12(2).
Wielinga, B. J., Schreiber, A. T., and Breuker, J. A. (1992). KADS: A modelling approach to knowledge engineer-
ing. Knowledge Acquisition, 4(1):5–53. Reprinted in: Buchanan, B. and Wilkins, D. editors (1992), Readings in
Knowledge Acquisition and Learning, San Mateo, CA, Morgan Kaufmann, pp. 92–116.
Wiig, K. (1996). Knowledge Management Methods: Practical Approaches to Managing Knowledge. Arlington, TX, Schema Press.
Wiig, K., de Hoog, R., and van der Spek, R. (1997a). Knowledge management (special issue). Expert Systems
With Applications, 13(1).
Wiig, K., de Hoog, R., and van der Spek, R. (1997b). Supporting knowledge management: selection of methods
and techniques. Expert Systems With Applications, 13(1):15–27.
Wolf, M. and Reimer, U., editors (1996). Proceedings of PAKM ’96, Basel.
Wong, B., Chong, J., and Park, J. (1994). Utilization and benefits of expert systems: A study of large American industrial corporations. International Journal of Operations & Production Management, 14(1):38–49.
Ygge, F. (1998). Market-Oriented Programming and its Application to Power Load Management. Ph.D. thesis,
Lund University, Sweden.
Yourdon, E. (1989). Modern Structured Analysis. Englewood Cliffs, NJ, Prentice Hall.
Zweben, M., Davis, E., Daun, B., and Deale, M. J. (1993). Scheduling and rescheduling with iterative repair.
IEEE Transactions on Systems, Man, and Cybernetics, 23(6):1588–1596.
Zweben, M. and Fox, M. S. (1994). Intelligent Scheduling. San Mateo, CA, Morgan Kaufmann Publishers.
Appendix: Knowledge-Model Language
This appendix contains detailed information about the CommonKADS language for speci-
fying knowledge models. Section A.10 contains a full specification of the language syntax
using a BNF notation. Section A.11 contains the full knowledge model for the housing
application (see Chapter 10).
A.9 Overview
The conventions used in the syntax specification are listed in Table A.12.
Comments Comments are not formally part of the syntax of CML. Comments follow
the C style: initiate a comment with /* and terminate with */. Comments can appear
anywhere between the symbols of the CML syntax.
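For instance (an illustrative fragment; the concept name is arbitrary):

/* this comment precedes a definition */
CONCEPT vehicle; /* and this one follows a symbol on the same line */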
Names The most frequently occurring low-level construct is a name. CML defines a
name to start with a letter followed by letters, digits, hyphens, and underbars in an arbitrary
order. If additional characters are required in a name, the entire name must be embedded
in single quotes. For example, the concept Monster of Loch Ness is defined as follows:
CONCEPT 'Monster of Loch Ness';
⟨ X Y ... ⟩   One or more occurrences of X separated by Y. This construct is mainly used to abbreviate
              comma-separated lists. For example, “Name, ...” is short for “Name ⟨, Name⟩*”.
X | Y         One of X or Y (exclusive or).
⟨ X ⟩         Grouping construct for specifying the scope of operators.
symbol        Bold: predefined terminal symbols of the language. In the syntax definition these symbols
              are given in lowercase. In a CML file they must be given in uppercase.
Symbol        Capitalized: user-defined terminal symbols of the language.
symbol        Lowercase: nonterminal symbols.
”Text”        Arbitrary text between double quotes. A double quote inside the text can be escaped with
              a backslash.
‘X’           Escapes the operator symbol (e.g., *) and denotes the literal X.
Table A.12
Conventions for syntax specification.
Hyphens, Underbars, and Spaces The ASCII character set has given us two symbols that are used interchangeably: the hyphen and the underbar. Some languages allow a hyphen in a name (e.g., Lisp), whereas others disallow it (e.g., C). CML allows both characters, as it is a language in which one may want to denote concepts that already have an established notation (e.g., hole-in-1).
Users of CML are advised to use the notation of a concept as it appears in (public) sources.
In general, the consequence is to use hyphens and spaces rather than underbars. To avoid
practical problems when CML is translated into other languages, the CML parser has sev-
eral options through which hyphens and spaces can be converted to underbars automati-
cally.
The table below lists the operators that can be used in expressions. Note that the operator
for equality is == (and not =). The main entry point in the syntax for expressions is
equation (Section A.10.10, p. 443).
Operators are listed in order of increasing precedence. Operators of equal precedence are
grouped between horizontal rules.
Note that a hyphen may be used in names and is also an operator. White space around
hyphens intended as a minus sign may thus be necessary.
CONCEPT vehicle;
ATTRIBUTES:
no-of-wheels: INTEGER;
AXIOMS:
no-of-wheels > 3;
END CONCEPT vehicle;

Operator Description
= Equivalence (mathematics)
:= Assignment (programming)

< Less than (comparison)
<= Less than or equal to (comparison)
> Greater than (comparison)
>= Greater than or equal to (comparison)
== Equal to (comparison)
!= Not equal to (comparison)

-> Implication (logical)
<- Inverse implication (logical)
<-> Double implication (logical)

AND Conjunction (logical)
OR Disjunction (logical)
XOR Exclusive or (logical)

+ Addition (arithmetic)
- Subtraction (arithmetic)

* Multiplication (arithmetic)
/ Division (arithmetic)

** Exponentiation (arithmetic)

- Negation (arithmetic)
NOT Negation (logical)

. Dereference (programming)
‘ Derivative (mathematics)

( ... ) Grouping
[ ... ] Subscript

Table A.13
List of expression operators.
A.10 Syntax
A.10.1 Synonyms
The following table lists the synonyms of terms used in the CML syntax specification.
The term in the left column is used in this document; the term(s) in the right column are the synonyms accepted by the CML parser.
It is usual for PSM knowledge to be defined separately, for example as part of a library of PSMs.
[ terminology ]
⟨ domain-schema |
  ontology-mapping |
  knowledge-base ⟩*
end domain-knowledge [ Domain-knowledge ; ] .
Domain schema
The keyword definitions, which introduces the constructs defined in the schema, is no longer required.
domain-schema ::= domain-schema Domain-schema ;
[ terminology ]
[ use : use-construct , ... ; ]
[ definitions : ] domain-construct*
end domain-schema [ Domain-schema ; ] .
domain-construct ::= binary-relation | concept | mathematical-model |
  relation | rule-type | structure | value-type .
Concept
The notion of concept is used to represent a class of real or mental objects in the domain
being studied. The term concept corresponds roughly to the term entity in ER-modelling
and class in object-oriented approaches.
Every concept has a name (a unique symbol that can serve as an identifier of the concept) and possibly several superconcepts (multiple inheritance is allowed).
concept ::= concept Concept ;
  [ terminology ]
  [ super-type-of : Concept , ... ;
    [ disjoint : yes | no ; ]
    [ complete : yes | no ; ] ]
  [ sub-type-of : Concept , ... ; ]
  [ has-parts : has-part+ ]
  [ part-of : Concept , ... ; ]
  [ viewpoints : viewpoint+ ]
  [ attributes ]
  [ axioms ]
  end concept [ Concept ; ] .

has-part ::= Concept ;
  [ role ]
  [ cardinality ] .
  Concept , ... ;
  [ disjoint : yes | no ; ]
  [ complete : yes | no ; ] .
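As an illustration of the concept construct, the following sketch uses a hypothetical bicycle domain (the names are ours, chosen only to show sub-type-of and has-parts with cardinalities):

CONCEPT bicycle;
SUB-TYPE-OF: vehicle;
HAS-PARTS:
wheel;
CARDINALITY: 2;
frame;
CARDINALITY: 1;
END CONCEPT bicycle;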
Axioms
The axioms slot supports the specification of (mathematical) relationships that are defined
to be true.
axioms ::= axioms :
equation ; ... .
CONCEPT chess-square;
ATTRIBUTES:
rank: INTEGER;
file: INTEGER;
AXIOMS:
1 <= rank <= 8;
1 <= file <= 8;
END CONCEPT chess-square;
This restricts the value of the rank (row) and file (column) of a chess-square to be between
1 and 8.
Attributes
Semantics The cardinality of an attribute defines how many values that particular attribute
may take. If the cardinality is omitted it is assumed to be precisely one. The attribute can
be a differentiation of a superconstruct; both the name and value set of the attribute can be
differentiated. Consider the following example.
Compatibility In the original syntax definition the colon after differentiation-of was
missing.
CONCEPT vehicle;
ATTRIBUTES:
wheels: INTEGER;
END CONCEPT vehicle;
CONCEPT human;
ATTRIBUTES:
legs: INTEGER;
DIFFERENTIATION-OF: wheels(vehicle);
END CONCEPT human;
Rule type
rule-type-body ::= constraint-rule-type | implication-rule-type .
Mathematical model
mathematical-model ::= mathematical-model Mathematical-model ;
[ terminology ]
[ parameters : parameter+ ]
[ equations : equation-list ]
end mathematical-model [ Mathematical-model ; ] .
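As a sketch, a mathematical model for Ohm's law could look as follows (illustrative; we assume here that parameters are declared attribute-style, since the parameter syntax is not reproduced above):

MATHEMATICAL-MODEL ohms-law;
PARAMETERS:
voltage: REAL;
current: REAL;
resistance: REAL;
EQUATIONS:
voltage = current * resistance;
END MATHEMATICAL-MODEL ohms-law;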
Structure
structure ::= structure Structure ;
[ terminology ]
[ sub-type-of : Structure , ... ; ]
form : "Text" ;
[ attributes ]
[ axioms ]
end structure [ Structure ; ] .
The notion of structure is used to describe objects with an internal structure that the knowl-
edge engineer does not want to describe (at this moment) in detail. The form slot can be
used to describe the structure.
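For example, one could leave the internals of a chess board undescribed for the moment (an illustrative fragment building on the chess-square example above):

STRUCTURE chess-board;
FORM: "An 8x8 grid of chess-square instances";
END STRUCTURE chess-board;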
Relation
argument-type ::= domain-construct-type |
  set-of domain-construct-type |
  list-of domain-construct-type .

domain-construct-type ::= built-in-type | user-defined-type .

built-in-type ::= object | concept | rule-type | structure |
  relation | binary-relation |
  mathematical-model | value-type .

user-defined-type ::= Concept | Rule-type | Structure |
  Relation | Binary-relation | Mathematical-model .
Binary relation
relation-type ::= transitive | asymmetric | symmetric |
  irreflexive | reflexive | antisymmetric .
Type range
type-range ::= primitive-type | primitive-range |
  Value-type | String-value , ... .

primitive-type ::= number | integer | natural | real |
  string | boolean | universal | date | text .
Value type
value-type ::= value-type Value-type ;
  [ terminology ]
  [ type : ⟨ nominal | ordinal ⟩ ; ]
  ⟨ value-list : Value , ... |
    value-specification : ⟨ primitive-type | "Text" ⟩ ⟩ ;
  [ attributes ]
  end value-type [ Value-type ; ] .
instance-expression ::= Construct ( Instance , ... ) |
  Instance ‘.’ Property ‘:=’ equation |
  Instance has-part Instance [ role : Role ] .
rule-type-expression ::= equation |
  type-operator rule-type-expression |
  rule-type-expression part-operator rule-type-expression .
type-operator ::= sub-type-of | super-type-of | type-of .
part-operator ::= has-part | dimension | role .
inference-knowledge ::= inference-knowledge Inference-knowledge ;
  [ use : use-construct , ... ; ]
  ⟨ inference |
    knowledge-role |
    transfer-function ⟩*
  end inference-knowledge [ Inference-knowledge ; ] .
Inference
Transfer function
transfer-function ::= transfer-function Transfer-function ;
  [ terminology ]
  type : ⟨ provide | receive | obtain | present ⟩ ;
  roles :
    input : Dynamic-knowledge-role , ... ;
    output : Dynamic-knowledge-role , ... ;
  end transfer-function [ Transfer-function ; ] .
Knowledge role
knowledge-role ::= knowledge-role Knowledge-role ;
  [ terminology ]
  type : ⟨ static | dynamic ⟩ ;
  [ domain-mapping :
    ⟨ dynamic-domain-reference |
      static-domain-reference ⟩ ; ]
  end knowledge-role [ Knowledge-role ; ] .
Task
task ::= task Task ;
[ terminology ]
[ domain-name : Domain ; ]
[ goal : "Text" ; ]
roles :
input : role-description+
output : role-description+
[ specification : "Text" ; ]
end task [ Task ; ] .
Task method
The decomposition of a task method allows the specification of a function if it is not known whether the decomposition element is an inference or a task. This facilitates the construction of flexible libraries.
task-method ::= task-method Task-method ;
[ realizes : Task ; ]
task-decomposition
[ roles : intermediate : role-description+ ]
control-structure : control-structure
[ assumptions : "Text" ; ]
end task-method [ Task-method ; ] .
statement ::= function-call ; |
  control-loop |
  conditional-statement |
  role-operation |
  "Text" ; .

function ::= Task | Inference | transfer-function .
control-condition ::= ⟨ empty Role ⟩ |
  ⟨ control-condition and control-condition ⟩ |
  ⟨ control-condition or control-condition ⟩ |
  ⟨ control-condition xor control-condition ⟩ |
  ⟨ not control-condition ⟩ |
  ⟨ size Role comparison-operator Integer ⟩ |
  ⟨ Role comparison-operator Value ⟩ |
  ⟨ ( control-condition ) ⟩ |
  "Text" .

role-expression ::= Role |
  ⟨ unary-role-operator role-expression ⟩ | ...
PSM
problem-type ::= assessment | assignment | classification |
  configuration | design | diagnosis |
  modelling | monitoring | planning |
  prediction | scheduling | "Text" .
A.10.10 Equations
The equation syntax is adopted from NMF (Neutral Model Format). NMF is an emerging
standard for the definition of mathematical models. The basic entry point is equation
(Section A.10.10, p. 443).
A description of the operators and their precedence is given in operator-precedence (Sec-
tion A.9.3, p. 430).
Equation
equation ::= ‘(’ equation ‘)’ |
  sign-operator equation |
  negation-operator equation |
  equation arithmetic-operator equation |
  equation logical-operator equation |
  equation comparison-operator equation |
  equation dereference-operator equation |
  equation equation-operator equation |
  unsigned-constant |
  variable-expression |
  function-expression |
  conditional-expression .

unsigned-constant ::= Unsigned-integer | Unsigned-real | "Text" .
function-expression ::= Function ( equation , ... ) .
Operators
equivalence-operator ::= ‘=’ .
sign-operator ::= ‘+’ | ‘-’ .

arithmetic-operator ::= ‘+’ | ‘-’ | ‘*’ | ‘/’ | ‘**’ .

comparison-operator ::= ‘<’ | ‘>’ | ‘<=’ | ‘>=’ | ‘==’ | ‘!=’ .

derivative-operator ::= ‘’’ .
A.10.11 Support
Cardinality
cardinality ::= cardinality : cardinality-spec ; .

cardinality-spec ::= any |
  Natural |
  Natural "+" |
  Natural "-" Natural .
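The four forms of cardinality-spec can thus be written as follows (illustrative values; any is a predefined terminal symbol and is therefore written in uppercase in a CML file):

CARDINALITY: ANY; /* unrestricted */
CARDINALITY: 1;   /* exactly one */
CARDINALITY: 0+;  /* zero or more */
CARDINALITY: 0-2; /* at least zero, at most two */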
Terminology
terminology ::= [ description : "Text" ; ]
[ sources : "Text" ; ]
[ synonyms : Name , ... ; ]
[ translation : Name , ... ; ] .
A construct can be annotated with a textual description and sources (textbook, dictionary)
as well as with a list of synonyms and translations.
DOMAIN-KNOWLEDGE residence-domain;
DOMAIN-SCHEMA assessment-schema;
CONCEPT residence;
DESCRIPTION:
"A description of a residence in the database of the
distribution system";
ATTRIBUTES:
number: NATURAL;
category: {starter-residence, follow-up-residence};
build-type: {house, apartment};
street-address: STRING;
city: STRING;
num-rooms: NATURAL;
rent: REAL;
min-num-inhabitants: NATURAL;
max-num-inhabitants: NATURAL;
subsidy-type: subsidy-type-value;
surface-in-square-meters: NATURAL;
floor: NATURAL;
lift-available: BOOLEAN;
AXIOMS:
min-num-inhabitants <= max-num-inhabitants;
END CONCEPT residence;
VALUE-TYPE subsidy-type-value;
TYPE: NOMINAL;
VALUE-LIST: {subsidizable, free-sector};
END VALUE-TYPE subsidy-type-value;
CONCEPT applicant;
DESCRIPTION:
"A person or group of persons (household) registered as
potential applicants for a residence";
ATTRIBUTES:
registration-number: STRING;
applicant-type: {starter, existing-resident};
name: STRING;
street-address: STRING;
city: STRING;
birth-date: STRING;
age: NATURAL;
age-category: age-category-value;
gross-yearly-income: NATURAL;
household-size: NATURAL;
household-type: household-type-value;
AXIOMS:
applicant.age = FLOOR(TODAY() - applicant.birth-date);
END CONCEPT applicant;
VALUE-TYPE age-category-value;
TYPE: ORDINAL;
VALUE-LIST: {’upto 22’, ’23-64’, ’65+’};
END VALUE-TYPE age-category-value;
VALUE-TYPE household-type-value;
TYPE: NOMINAL;
VALUE-LIST: {single-person, multi-person};
END VALUE-TYPE household-type-value;
BINARY-RELATION residence-application;
DESCRIPTION:
"Application of an applicant for a certain residence. ";
ARGUMENT-1: applicant;
CARDINALITY: 0+;
ARGUMENT-2: residence;
CARDINALITY: 0-2;
ATTRIBUTES:
application-date: DATE;
END BINARY-RELATION residence-application;
RULE-TYPE residence-abstraction;
ANTECEDENT:
residence-application;
CARDINALITY: 1+;
CONSEQUENT:
residence-application;
CARDINALITY: 1;
CONNECTION-SYMBOL:
has-abstraction;
END RULE-TYPE residence-abstraction;
CONCEPT residence-criterion;
ATTRIBUTES:
truth-value: BOOLEAN;
END CONCEPT residence-criterion;
CONCEPT correct-household-size;
SUB-TYPE-OF: residence-criterion;
END CONCEPT correct-household-size;
CONCEPT correct-residence-type;
SUB-TYPE-OF: residence-criterion;
END CONCEPT correct-residence-type;
CONCEPT residence-specific-constraints;
SUB-TYPE-OF: residence-criterion;
END CONCEPT residence-specific-constraints;
CONCEPT rent-fits-income;
SUB-TYPE-OF: residence-criterion;
END CONCEPT rent-fits-income;
RULE-TYPE residence-requirement;
ANTECEDENT:
residence-application;
CARDINALITY: 1+;
CONSEQUENT:
residence-criterion;
CARDINALITY: 1;
CONNECTION-SYMBOL:
indicates;
END RULE-TYPE residence-requirement;
CONCEPT residence-decision;
ATTRIBUTES:
value: {eligible, not-eligible };
END CONCEPT residence-decision;
RULE-TYPE residence-decision-rule;
ANTECEDENT:
residence-criterion;
CONSEQUENT:
residence-decision;
CONNECTION-SYMBOL:
implies;
END RULE-TYPE residence-decision-rule;
KNOWLEDGE-BASE system-description;
USES:
residence-abstraction FROM assessment-schema;
EXPRESSIONS:
/* Abstraction rules */
applicant.age < 23
HAS-ABSTRACTION
applicant.age-category = ’upto 22’;
applicant.age >= 65
HAS-ABSTRACTION
applicant.age-category = ’65+’;
applicant.household-size = 1
HAS-ABSTRACTION
applicant.household-type = single-person;
applicant.household-size > 1
HAS-ABSTRACTION
applicant.household-type = multi-person;
END KNOWLEDGE-BASE system-description;
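The HAS-ABSTRACTION rules in the knowledge base above are simple condition-conclusion pairs. As an illustration only, they can be mirrored in Python; the function name `abstract_applicant` and the dictionary representation are ours, and the '23-64' branch is our assumption, completing the value list of age-category-value for ages not covered by the two listed rules.

```python
def abstract_applicant(applicant):
    """Apply the HAS-ABSTRACTION rules of knowledge base
    system-description to an applicant record (a dict).
    Returns the record extended with abstracted features."""
    out = dict(applicant)
    # Age-category abstraction rules.
    if out["age"] < 23:
        out["age-category"] = "upto 22"
    elif out["age"] >= 65:
        out["age-category"] = "65+"
    else:
        out["age-category"] = "23-64"   # assumed middle case
    # Household-type abstraction rules.
    if out["household-size"] == 1:
        out["household-type"] = "single-person"
    else:
        out["household-type"] = "multi-person"
    return out
```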
KNOWLEDGE-BASE measurement-system;
USES:
residence-requirement FROM assessment-schema,
residence-decision-rule FROM assessment-schema;
EXPRESSIONS:
/* Requirements */
residence.description.subsidy-type = free-sector
INDICATES
correct-residence-category.truth-value = true;
residence.description.min-num-inhabitants <=
applicant.household-size
AND
residence.description.max-num-inhabitants >=
applicant.household-size
INDICATES
correct-household-size.truth-value = true;
residence.description.rent <= applicant.gross-yearly-income
INDICATES
rent-fits-income.truth-value = true;
/* Decision rules */
correct-residence-category.truth-value = false
IMPLIES
decision.value = not-eligible;
correct-household-size.truth-value = false
IMPLIES
decision.value = not-eligible;
rent-fits-income.truth-value = false
IMPLIES
decision.value = not-eligible;
residence-specific-constraints.truth-value = false
IMPLIES
decision.value = not-eligible;
END KNOWLEDGE-BASE measurement-system;
END DOMAIN-KNOWLEDGE
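The requirement and decision rules of the measurement-system knowledge base can likewise be sketched in Python. The function names are ours, and the default of "eligible" when no criterion is false is our assumption (the listed IMPLIES rules only derive not-eligible).

```python
def evaluate_criteria(residence, applicant):
    """Evaluate the INDICATES rules of knowledge base
    measurement-system; residence and applicant are dicts.
    Returns a mapping from criterion name to truth value."""
    return {
        "correct-residence-category":
            residence["subsidy-type"] == "free-sector",
        "correct-household-size":
            residence["min-num-inhabitants"]
            <= applicant["household-size"]
            <= residence["max-num-inhabitants"],
    }

def decide(criteria):
    """Apply the IMPLIES decision rules: any criterion that is
    false makes the application not-eligible."""
    if any(value is False for value in criteria.values()):
        return "not-eligible"
    return "eligible"   # assumed default when no rule fires
```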
INFERENCE-KNOWLEDGE assessment-inferences;
KNOWLEDGE-ROLE case-description;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-application;
END KNOWLEDGE-ROLE case-description;
KNOWLEDGE-ROLE case-specific-requirements;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
SET-OF residence-requirement;
END KNOWLEDGE-ROLE case-specific-requirements;
KNOWLEDGE-ROLE decision;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-decision;
END KNOWLEDGE-ROLE decision;
KNOWLEDGE-ROLE abstracted-case;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-application;
END KNOWLEDGE-ROLE abstracted-case;
KNOWLEDGE-ROLE norm;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-criterion;
END KNOWLEDGE-ROLE norm;
KNOWLEDGE-ROLE norm-value;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
residence-criterion;
END KNOWLEDGE-ROLE norm-value;
KNOWLEDGE-ROLE norms;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
SET-OF residence-criterion;
END KNOWLEDGE-ROLE norms;
KNOWLEDGE-ROLE evaluation-results;
TYPE: DYNAMIC;
DOMAIN-MAPPING:
SET-OF residence-criterion;
END KNOWLEDGE-ROLE evaluation-results;
KNOWLEDGE-ROLE abstraction-knowledge;
TYPE: STATIC;
DOMAIN-MAPPING:
residence-abstraction FROM system-description;
END KNOWLEDGE-ROLE abstraction-knowledge;
KNOWLEDGE-ROLE norm-set;
TYPE: STATIC;
DOMAIN-MAPPING:
residence-criterion FROM measurement-system;
END KNOWLEDGE-ROLE norm-set;
KNOWLEDGE-ROLE requirements;
TYPE: STATIC;
DOMAIN-MAPPING:
residence-requirement FROM measurement-system;
END KNOWLEDGE-ROLE requirements;
KNOWLEDGE-ROLE decision-knowledge;
TYPE: STATIC;
DOMAIN-MAPPING:
residence-decision-rule FROM measurement-system;
END KNOWLEDGE-ROLE decision-knowledge;
INFERENCE abstract;
ROLES:
INPUT:
case-description;
OUTPUT:
abstracted-case;
STATIC:
abstraction-knowledge;
SPECIFICATION: "
Input is a set of case data. Output is the same set of data
extended with an abstracted feature
that can be derived from the data using the corpus of
abstraction knowledge.";
END INFERENCE abstract;
INFERENCE specify;
OPERATION-TYPE: lookup;
ROLES:
INPUT:
abstracted-case;
OUTPUT:
norms;
STATIC:
norm-set;
SPECIFICATION:
"This inference is just a simple look-up of the norms";
END INFERENCE specify;
INFERENCE select;
ROLES:
INPUT:
norms;
OUTPUT:
norm;
SPECIFICATION:
"No domain knowledge is used in norm selection: the
selection is a random one.";
END INFERENCE select;
INFERENCE evaluate;
ROLES:
INPUT:
norm,
abstracted-case,
case-specific-requirements;
OUTPUT:
norm-value;
STATIC:
requirements;
SPECIFICATION: "
Establish the truth value of the input norm for the given
case description. The underlying domain knowledge is formed by both
the requirements in the knowledge base as well as additional
case-specific requirements, that are part of the input.";
END INFERENCE evaluate;
INFERENCE match;
ROLES:
INPUT:
evaluation-results;
OUTPUT:
decision;
STATIC:
decision-knowledge;
SPECIFICATION:
"See whether the available evaluation results enable a decision
to be taken. The inference fails if this is not the case.";
END INFERENCE match;
END INFERENCE-KNOWLEDGE
/* Tasks */
TASK-KNOWLEDGE assessment-tasks;
TASK assess-case;
DOMAIN-NAME: assess-residence-application;
GOAL: "
Assess whether an application for a residence by a certain
applicant satisfies the criteria.";
ROLES:
INPUT:
case-description: "Data about the applicant and the residence";
case-specific-requirements: "Residence-specific criteria";
OUTPUT:
decision: "eligible or not-eligible for a residence";
END TASK assess-case;
TASK-METHOD assess-through-abstract-and-match;
REALIZES:
assess-case;
DECOMPOSITION:
TASKS: abstract-case, match-case;
ROLES:
INTERMEDIATE:
abstracted-case: "Original case plus abstractions";
CONTROL-STRUCTURE:
abstract-case(case-description -> abstracted-case);
match-case(abstracted-case + case-specific-requirements -> decision);
END TASK-METHOD assess-through-abstract-and-match;
TASK abstract-case;
DOMAIN-NAME: abstract-applicant-data;
GOAL:
"Add case abstractions to the case description";
ROLES:
INPUT:
case-description: "The ’raw’ case data";
OUTPUT:
abstracted-case: "The raw data plus the abstractions";
END TASK abstract-case;
TASK-METHOD abstract-method;
REALIZES:
abstract-case;
DECOMPOSITION:
INFERENCES: abstract;
CONTROL-STRUCTURE:
WHILE HAS-SOLUTION abstract(case-description -> abstracted-case) DO
/* use the abstracted case as the input in invocation of
the next abstraction inference */
case-description := abstracted-case;
END WHILE
END TASK-METHOD abstract-method;
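The WHILE HAS-SOLUTION loop of abstract-method keeps invoking the abstract inference, each time feeding the abstracted case back in as input, until no new abstraction can be derived. A minimal Python sketch of this fixpoint loop, with a rule representation of our own:

```python
def abstract_until_fixpoint(case, rules):
    """Analogue of WHILE HAS-SOLUTION abstract(...) DO ... END WHILE.
    Each rule is a (condition, feature, value) triple: if condition(case)
    holds and the feature is not yet present, the rule 'has a solution'.
    Stops when no rule can add anything new."""
    while True:
        for condition, feature, value in rules:
            if feature not in case and condition(case):
                case = {**case, feature: value}   # abstracted-case
                break                             # re-enter the loop
        else:
            return case                           # no rule fired: done

# Two of the abstraction rules from knowledge base system-description.
rules = [
    (lambda c: c["age"] < 23, "age-category", "upto 22"),
    (lambda c: c["household-size"] == 1, "household-type", "single-person"),
]
```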
TASK match-case;
DOMAIN-NAME: match-residence-application;
GOAL: "
Apply the norms to the case to find out whether it satisfies
the criteria.";
ROLES:
INPUT:
abstracted-case: "Case description plus the abstractions";
case-specific-requirements: "Criteria specific for a
certain residence.";
OUTPUT:
decision: "Eligible or not eligible";
END TASK match-case;
TASK-METHOD match-method;
REALIZES:
match-case;
DECOMPOSITION:
INFERENCES: specify, select, evaluate, match;
ROLES:
INTERMEDIATE:
norms: "The full set of assessment norms";
CONTROL-STRUCTURE:
specify(abstracted-case -> norms);
REPEAT
select(norms -> norm);
evaluate(norm + abstracted-case + case-specific-requirements
-> norm-value);
evaluation-results := norm-value ADD evaluation-results;
UNTIL HAS-SOLUTION match(evaluation-results -> decision);
END TASK-METHOD match-method;
END TASK-KNOWLEDGE
END KNOWLEDGE-MODEL
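The match-method above realizes match-case through the specify, select, evaluate, and match inferences: evaluate one norm per iteration and stop as soon as the decision knowledge can fire. A Python sketch of that select-and-evaluate loop, under our own naming, with `evaluate` and `match` passed in as functions:

```python
def match_case(case, norms, evaluate, match):
    """Select-and-evaluate loop of the assessment method: take the
    specified norms, evaluate one per iteration, and return as soon
    as match() derives a decision (HAS-SOLUTION) from the results
    gathered so far."""
    evaluation_results = {}
    remaining = list(norms)            # specify: the applicable norms
    while remaining:
        norm = remaining.pop()         # select: pick an arbitrary norm
        evaluation_results[norm] = evaluate(norm, case)   # evaluate
        decision = match(evaluation_results)              # match
        if decision is not None:       # HAS-SOLUTION: decision reached
            return decision
    return match(evaluation_results)   # may fail (None): no decision
```

Evaluating norms lazily like this means a single failed criterion can settle the case without evaluating the remaining norms.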
Synopsis of Graphical Notations
Task Decomposition
[Figure: task decomposition notation — a task is linked to one or more alternative methods; each method decomposes into subtasks, and a leaf method decomposes into inferences.]
Inference Structure
[Figure: inference structure notation — inference functions and transfer functions connected to dynamic and static roles by input/output dependencies.]

[Figures: domain schema notation — concept definitions with attributes and value sets; super/subconcept links, optionally grouped into viewpoints along a dimension; instance definitions and references with attribute values; n-ary relations with typed arguments, attributes, and cardinalities; rule types with antecedent, connection symbol, consequent, and cardinalities; aggregates with parts, part cardinalities, and subconcepts combined across viewpoints.]
Index