
ONTOLOGY-DRIVEN CONCEPTUAL

MODELING:
MODEL COMPREHENSION,
ONTOLOGY SELECTION,
AND METHOD COMPLEXITY

Michaël Verdonck

Supervisor:
Prof. dr. Frederik Gailly

Academic year: 2017 – 2018

A dissertation submitted to Ghent University in partial fulfilment of the requirements for the degree of Doctor of Business Economics.
Copyright © 2018 by Michaël Verdonck

All rights are reserved. No part of this publication may be reproduced or transmitted in any form or by any means
electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system,
without permission in writing from the author.
Doctoral Jury

Prof. dr. Patrick Van Kenhove (Chairman, Ghent University)
Prof. dr. Frederik Gailly (Supervisor, Ghent University)
Prof. dr. Geert Poels (Ghent University)
Prof. dr. Amy Van Looy (Ghent University)
Prof. dr. Sergio de Cesare (University of Westminster, UK)
Prof. dr. Ben Roelens (Open Universiteit, Netherlands)
Prof. dr. Giancarlo Guizzardi (Free University of Bolzano, Italy)
King Arthur: You are indeed brave, Sir knight, but the fight is mine.

Black Knight: Oh, had enough, eh?

King Arthur: Look, you stupid bastard, you've got no arms left.

Black Knight: Yes I have.

King Arthur: Look!

Black Knight: Just a flesh wound.

Monty Python and the Holy Grail


Acknowledgements

Well, we all need someone we can lean on


And if you want it, you can lean on me

Let It Bleed – The Rolling Stones

On a number of occasions, a PhD has been explained to me as a strenuous path, with several
highlights and many more pitfalls, accompanied by an unfaltering desire that your promotor gets

hit by a meteor or such.

Fortunately, this was not the case. On the contrary, I can honestly say that the past couple of

years have been some of the most cheerful I have spent thus far. Through my PhD, I was allowed

to travel to places that belong on a bucket list and meet a great deal of new people; I gained friends
and best friends, and above all, finally saw the benefit of applying a methodology. Besides

acquiring new experiences and knowledge, my PhD enabled me to progress, to enjoy the process

instead of the outcome, and it enlarged my sense of humility – although for some, I'm sure, not
enough. It goes without saying that all of the above could not have occurred without a great deal

of people to whom I’m very grateful. A couple of pages do not suffice to acknowledge and praise
all the persons whom I would like to thank, but here goes:

First and foremost, the person without whom this PhD would simply not have come to pass is
my promotor, Frederik Gailly. I can honestly say that I can’t picture a better working atmosphere

and environment. Whatever the occasion or the hour, you were always available to provide me
with help and feedback – and an abundance of comments. Your patience, faith and honesty in

dealing with my efforts make you more of a mentor than a promotor. Therefore, a sincere thank

you, and may you forever be shielded from meteors and other debris.

Next, I would like to thank Geert Poels and Sergio de Cesare – who served from the very

beginning as advisors in my PhD – and helped shape this dissertation through their guidance and

thoughtful considerations. Furthermore, I would like to thank the members of the doctoral jury:

Amy Van Looy, Ben Roelens and Giancarlo Guizzardi for their questions and feedback that

definitely lifted this dissertation to a higher level. Also, a great deal of gratitude to Martine and
Machteld for always being there in times of need and whose help was always offered with an

unconditional smile.

Moving on to my colleagues and friends at UGent – Steven, David, Ben, Sven, Jan, Xiaji,

Aygun and Gert. I can't count the moments that I simply couldn't hold back my laughter during all

our ventures together: the savage coffee breaks, lunches where not a single topic was deemed
holy, our bowling endeavors, beers at the Geus, Kofschip brutalities, more beers at the Speaker’s

Corner, cross-kitchen culinary discoveries and the unforgettable nights – and mornings – at the

Gentse Feesten.

To continue on the topic of laughter and unforgettable nights, I would like to thank a couple of
friends who have formed an integral part of my life. My comrades from Kortrijk, whom I have known since

childhood – Nicolas, Louis, Vercaemst, Lucas, Bol, Pieter & Chris. From the early scouts

adventures, the first evenings out in t’straatje, “potje van 100” in Paris, hazardous nights in
Brussels, to spending New Year's Eve in Lapland, it has all been grand. May many more come to

follow.

Mis amigos Olivier & Hannes, with whom I have shared some of the most memorable trips

and experiences: camp fires at the beaches of San Sebastian, the crossing of the Făgăraș

mountains, and many elaborate discussions on the composition of olive trees. Your
companionship is of a quality that can only be described as exquisite & utterly sophisticated.

Griet & Tom, thank you for the many wonderful evenings, cozy lunches and the occasional

Gin & Tonic – just one, no more – which was always the unforgiving omen of a ferocious
hangover.
My friends from university – Roeland, Niels & Séba – who have taught me the true meaning

of the phrase ‘work hard, play hard’. I believe all our labors related to our studies and thesis have

been more than adequately compensated by the flamethrowers in Riga, Lava at Vegas, and the

more civilized evenings out in Bruges…

Moving from civilized to barbarous: Ledure, Deman, Matthias, Beren and the notorious Lolo.
My student time would have been pretty dreary without the hilarious circumstances that we

stumbled into on various occasions. Power hours, Dour, Rome, Amsterdam and den
Ullewupper – I’m still figuring out if they left memories or scars.

Louise & Lis for all our highly entertaining breakfasts and brunches together, the latter much

depending on the number of bottles of wine from the evening before. Cynthia & Alisa with
whom I shared countless ‘beers of the month’ at the Trollenkelder, accompanied by priceless

conversations that always boil down to just “HAP”. A big thanks to Emile, whose charismatic,

cheerful approach, together with an unconventional way of thinking and reasoning, has taught
me a great deal of things, and always led to interesting conversations ‘tussen pot en pint’.

Hugo, el malparido whose friendship – and travelling van – I have come to appreciate
enormously. Whatever the topic – climbing, motorcycles or the particular features of a malparido

– you always offer new insights. Furthermore, a group of people that I recently got to know but

whose early friendships I have already come to enjoy a lot, de Dappere Klimmers. Looking
forward to many more trips to Fontainebleau, kayak adventures or simple spaghetti dinners.

Next, I would like to offer my special thanks and sincere gratitude to various people: Oscar

Pastor, my colleagues from the Universitat Politècnica de València and especially Ana, who all

gave me an unforgettable three months in Valencia. I would also like to thank Justine for her
enormous support offered during my PhD, and the many proofreadings which have undoubtedly

made her an expert in ODCM – although I still suspect these proofreadings were performed

with the hidden purpose of falling asleep swiftly…

Last but far from least, I owe a great deal to my family. My parents, for their unflinching support,

who offered me all the opportunities they could give me and who made me the person I am today.

My grandparents for their loving care since I can remember and the sacrifices they made when

I was studying for exams while occupying half of their home. My sisters Julie and Ellen, who

have been my trustworthy sidekicks since childhood and with whom I can always be my playful
self. My brother Anthony, for his relentless reminders about my age – that paradoxically keep

me young – and our elaborate discussions and analyses ranging from Marvel characters to the
constituents of Middle Earth, which I'm sure would impress even the ancient Greek

philosophers.

I couldn’t have said it better than Mick himself, we all need someone we can lean on, so thank

you all for letting me lean on you. I hope one day I can return the favor.

Michaël Verdonck
May 2018

Abstract
The term ‘information systems’ began to emerge around the 1960s and has since then evolved
to incorporate a plethora of applications and research fields because of the high speed of

technological advancements in the area of information hardware and software. Broadly

speaking, information systems research is concerned with examining information technology in


use, and as such is characterized by a large diversity of research approaches and topics. More

specifically, research in information systems focuses on the design and implementation of


modeling languages, process models, algorithms, database systems and software and hardware

for information systems (Bourgeois, 2014). Due to the many information system project failures

in the late 1960s that were the consequence of faulty requirements analysis, the importance of
design has been well recognized in the information systems domain.

More specifically, conceptual modeling was introduced as a means to enable early detection
and correction of errors in order to prevent information system breakdowns. Conceptual

modeling can be described as the activity of representing aspects of the physical and social world

for the purpose of communication, learning and problem solving among human users
(Mylopoulos, 1992). Because of the importance attributed to conceptual modeling as a means to

enable early detection and correction of errors, a wide range of conceptual modeling languages and

methods were developed and introduced. Criticism, however, arose stating that most of these
modeling approaches and techniques were based on common sense and the intuition of their

developers (Siau & Rossi, 2007), therefore lacking sound theoretical foundations (Batra &
Marakas, 1995; Burton-Jones & Weber, 1999).

Ontologies were introduced to provide a foundational theory that articulates and formalizes the

conceptual modeling grammars needed to describe the structure and behavior of the modeled
domain (Wand & Weber, 1993). Although ontologies were originally applied to analyze the

constructs used in the models and evaluate conceptual grammars for their ontological

expressiveness, the role of ontological theories evolved towards improving and extending
conceptual modeling languages. This practice of enriching existing conceptual modeling

languages with methodological guidelines that have their origin in a formalized ontology is

called ontology-driven conceptual modeling (ODCM).

Notwithstanding the successful utilization of ontologies in the field of conceptual modeling,

several research gaps and shortcomings can be identified that still pose challenges for the
further development of the field of ODCM. More specifically, this dissertation identified four

principal shortcomings or research gaps concerning ODCM:

1. The added value of adopting an ODCM technique is not always straightforward, meaning

that it is not always clear for a modeler who wants to develop a conceptual model what the

actual benefits of utilizing an ODCM modeling technique are.

2. Comprehending an ontology-driven model can be quite challenging. Understanding the

philosophical concepts and structures of an ontology (e.g. theory of parthood, types and

instantiations, identity, dependency, unity etc.) can be a strenuous task for the end users of a
model.

3. The selection of an ontology is not always carefully considered. Since ontological theories

form the foundations of ODCM, the selection of a particular ontology can

potentially influence the conceptualizations that are rendered.

4. Adopting ODCM can be rather complex. ODCM can be adopted for various purposes and in
a broad range of domains. This has led to numerous ontologies and ontological analyses

being created and performed, and a plethora of ontology-driven conceptual models that have
been developed. This can cause a great deal of confusion and complexity for researchers and

practitioners conducting ODCM.

To overcome the identified shortcomings and research gaps as defined above, this dissertation
executed four research studies – each forming a chapter of this dissertation – in order to

contribute knowledge-based additions to the field of ODCM:

• Chapter 2: In order to systematically identify and aggregate the research efforts of the past

several years in ODCM, we have performed an extensive literature study of articles dealing

with ODCM. The purpose of this study is to describe and classify what has been produced

by the literature, and to critically examine the contributions of past research, explain its results

and clarify alternative views.

• Chapter 3: This chapter includes an empirical study that investigates and compares the

differences between traditional conceptual modeling (TCM) and ODCM. More specifically,
we differentiated between modelers that were trained in a TCM approach and modelers that

have been taught an ODCM approach.

• Chapter 4: In this chapter, a rigorous investigation of the effects of applying different kinds
of ontologies on the comprehension of their resulting ontology-driven models was

conducted. This empirical study investigated how users interpret and comprehend the

ontology-driven conceptual models that were developed by adopting different ontologies.

• Chapter 5: The last study of this dissertation developed a framework with the purpose of

distinguishing between the different kinds of ontological analyses that exist. The benefit of this
framework lies in its ability to differentiate between the different purposes for performing

an ontological analysis, and to determine which kind of methods can be implemented,

depending on this particular purpose.

These studies are further discussed in the last chapter of this dissertation for their specific
contribution and relevance towards researchers and practitioners in ODCM, their limitations and

the future research opportunities they uncover.

Table of Contents

Acknowledgements ............................................................................................................................... i
Abstract ................................................................................................................................................ v
List of Figures ...................................................................................................................................... xi
List of Tables ..................................................................................................................................... xiii
1. Introduction ................................................................................................................................. 1
1.1. Research Context ............................................................................................................... 2
1.2. Problem Definition............................................................................................................... 6
1.3. Research objectives............................................................................................................ 9
1.4. Research Design ...............................................................................................................11
1.5. Structure of the Dissertation ...............................................................................................13
1.6. Publications .......................................................................................................................16
2. Ontology-driven Conceptual Modeling: A Systematic Literature Mapping and Review ................19
2.1. Introduction ........................................................................................................................20
2.2. Research methodology ......................................................................................................23
2.3. Mapping Study ...................................................................................................................26
2.4. Review Study.....................................................................................................................44
2.5. Discussion .........................................................................................................................55
2.6. Threats to validity...............................................................................................................59
2.7. Conclusion .........................................................................................................................60
3. Comparing Traditional Conceptual Modeling with Ontology-Driven Conceptual Modeling: An
Empirical Study ..................................................................................................................................63
3.1. Introduction ........................................................................................................................64
3.2. Hypothesis development ....................................................................................................67
3.3. Experiment Design.............................................................................................................70
3.4. Results ..............................................................................................................................80
3.5. Discussion .........................................................................................................................86
3.6. Conclusion .........................................................................................................................91
3.7. Validity ...............................................................................................................................92
4. Comprehending 3D and 4D Ontology-Driven Conceptual Models: An Empirical Study ................95
4.1. Introduction ........................................................................................................................96
4.2. Methodology ......................................................................................................................98
4.3. Hypotheses Development ..................................................................................................99
4.4. Experimental Design ........................................................................................................104
4.5. Results of experimental study ..........................................................................................113

4.6. Protocol Analysis .............................................................................................................123
4.7. Discussion .......................................................................................................................129
4.8. Conclusion .......................................................................................................................134
5. An Ontological Analysis Framework for Domain-Specific Modeling Languages .........................137
5.1. Introduction ......................................................................................................................138
5.2. Methodology ....................................................................................................................140
5.3. Formulation of problem definition and research objectives ................................................141
5.4. Design of the Ontological Analysis Framework .................................................................145
5.5. Assessment of the Ontological Analysis framework ..........................................................148
5.6. Refinement of the ontological analysis framework ............................................................155
5.7. Discussion .......................................................................................................................161
5.8. Conclusion .......................................................................................................................162
6. Conclusion ...............................................................................................................................165
6.1. Research Contributions....................................................................................................166
6.2. Relevance for researchers and practitioners.....................................................................169
6.3. Research Limitations .......................................................................................................172
6.4. Future Research ..............................................................................................................173
References .......................................................................................................................................179
Appendix A .......................................................................................................................................191
Appendix B .......................................................................................................................................195
Appendix C.......................................................................................................................................197

List of Figures
FIGURE 1: STEPS UNDERTAKEN WHILE CONDUCTING THE LITERATURE STUDY .........................................................................25
FIGURE 2: MAPPING STUDY SELECTION PROCEDURE........................................................................................................36
FIGURE 3: DESIGN SCIENCE ARTIFACT, CONTRIBUTION AND EVALUATION METHOD OVER TIME ...................................................39
FIGURE 4: DESIGN SCIENCE ARTIFACT AND EVALUATION METHOD .......................................................................................40
FIGURE 5: TYPE OF ONTOLOGY OVER TIME ...................................................................................................................43
FIGURE 6: REVIEW STUDY SELECTION PROCEDURE ..........................................................................................................47
FIGURE 7: EXAMPLE OF DATA EXTRACTION IN THE REVIEW STUDY ......................................................................................48
FIGURE 8: NUMBER OF REFERENCES TO OBJECT OF INTEREST ............................................................................................49
FIGURE 9: COMPARISON OF THE QUALITY TYPES PER OBJECT OF INTEREST AND QUALITY REFERENCE ..........................................51
FIGURE 10: NUMBER OF REFERENCES ACCORDING TO LAYER PER YEAR.................................................................................54
FIGURE 11: OVERVIEW OF EXPERIMENTAL DESIGN .........................................................................................................79
FIGURE 12: ODCM PATTERN - CASE DESCRIPTION EXAMPLE ...........................................................................................89
FIGURE 13: METHOD OF PERFORMING THE EMPIRICAL STUDY ...........................................................................................99
FIGURE 14: OVERVIEW OF EXPERIMENTAL DESIGN ....................................................................................................... 109
FIGURE 15: RESULTS EASE OF INTERPRETATION QUESTIONS 1-3 ..................................................................................... 117
FIGURE 16: RESULTS EASE OF INTERPRETATION QUESTION 4 .......................................................................................... 118
FIGURE 17: RESULTS EASE OF INTERPRETATION QUESTIONS 5-6 ..................................................................................... 119
FIGURE 18: FRAGMENTS OF UFO AND BORO MODELS IN REPRESENTING PROTOCOLS .......................................................... 129
FIGURE 19: DESIGN SCIENCE METHODOLOGY FOLLOWED IN THIS PAPER ............................................................................. 141
FIGURE 20: REFERENCE ONTOLOGICAL ANALYSIS FRAMEWORK ........................................................................................ 148
FIGURE 21: CLASSIFICATION EXAMPLE OF THE ANALYSIS OF (SANTOS ET AL., 2013) TO THE FRAMEWORK .................................. 152
FIGURE 22: DIFFERENT PATTERNS OF ONTOLOGICAL ANALYSIS ......................................................................................... 154
FIGURE 23: COMPARING TYPES OF METHODOLOGY AND PATTERNS OF ONTOLOGICAL ANALYSIS ............................................... 154
FIGURE 24: PRESCRIPTIVE PATTERNS FOR CONDUCTING AN ONTOLOGICAL ANALYSIS ............................................................. 158
FIGURE 25: THE CMQF QUALITY LAYERS AND THEIR QUALITY TYPES, FIGURE OBTAINED FROM (NELSON ET AL., 2012) ................ 192

List of Tables
TABLE 1: DEFINITIONS OF CONCEPTS...........................................................................................................................23
TABLE 2: AVERAGE RESULTS CORRESPONDING TO EFFECTIVENESS .......................................................................................81
TABLE 3: AVERAGE RESULTS CORRESPONDING TO EFFICIENCY ............................................................................................82
TABLE 4: MANN-WHITNEY U RANKS OF EFFECTIVENESS TREATMENTS ................................................................................84
TABLE 5: MANN-WHITNEY U TEST OF EFFECTIVENESS TREATMENTS ..................................................................................84
TABLE 6: MANN-WHITNEY U RANKS OF EFFICIENCY TREATMENTS .....................................................................................86
TABLE 7: MANN-WHITNEY U TEST OF EFFICIENCY TREATMENTS .......................................................................................86
TABLE 8: METAPHYSICAL CHARACTERISTICS OF THE DIFFERENT MODELS ............................................................................. 111
TABLE 9: AVERAGE SCORES OF EXPERIMENT ............................................................................................................... 115
TABLE 10: AVERAGE AMOUNT OF TIME NEEDED TO FINISH THE EXPERIMENT (MM:SS) ........................................................... 116
TABLE 11: MANN-WHITNEY U RANKS OF SCORES ASSIGNMENTS .................................................................... 121
TABLE 12: MANN-WHITNEY U TEST OF SCORES ASSIGNMENTS ...................................................................................... 121
TABLE 13: MANN-WHITNEY U TEST OF TIME PER TREATMENT ....................................................................................... 122
TABLE 14: MANN-WHITNEY U TEST OF EOI RESULTS ................................................................................................... 122
TABLE 15: MANN-WHITNEY U RANKS OF TOTAL SCORE PROBLEM-SOLVING QUESTIONS ....................................... 134
TABLE 16: MANN-WHITNEY U TEST OF TOTAL SCORE PROBLEM-SOLVING QUESTIONS ......................................................... 134
TABLE 17: DIFFERENT METHODS OF ONTOLOGICAL ANALYSIS .......................................................................................... 143
TABLE 18: CLASSIFICATION OF THE REFERENCE ONTOLOGIES ........................................................................................... 150
TABLE 19: CLASSIFICATION OF THE DSMLS ................................................................................................................ 150
TABLE 20: CLASSIFICATION OF THE TYPE OF METHODS APPLIED IN AN ONTOLOGICAL ANALYSIS ................................................. 151
TABLE 21: CLASSIFICATION OF RESEARCH ARTICLES TO THE FRAMEWORK............................................................................ 153
TABLE 22: OVERVIEW OF METHODS WITH THEIR CORRESPONDING PURPOSES ...................................................................... 160
TABLE 23: TOTAL NUMBER OF QUALITY TYPES, DESCRIBED IN (NELSON ET AL., 2012). ......................................................... 192
TABLE 24: QUALITY TYPES DISCUSSED IN THIS LITERATURE REVIEW, DESCRIBED IN (NELSON ET AL., 2012). ................................ 193
TABLE 25: LIST OF ARTICLES OF LITERATURE REVIEW ..................................................................................................... 195

1. Introduction
1.1. Research Context

This PhD addresses certain shortcomings that can be situated in the domain of ontology-driven

conceptual modeling. In order to comprehend this domain, we will first briefly describe the

interrelated fields of information systems, conceptual modeling and ontology, before we address

ontology-driven conceptual modeling and its course of development throughout the years.

Information Systems is a research discipline that is mostly concerned with the socio-

technical systems comprising organizations and individuals that deploy information

technology for business tasks (Recker, 2013). The concept of information system began to
emerge around the 1960s, and has since developed into a broad field of research because of

the high speed of technological advancements in the area of information hardware and software.
In a broad sense, information systems research is concerned with examining information

technology in use, and as such is characterized by a large diversity of research approaches and

topics. More specifically, research in information systems focuses on the design and
implementation of languages, data models, process models, algorithms, software and hardware

for information systems (Bourgeois, 2014). In particular, the importance of design is well

recognized in the information systems literature (Winograd, 1996). For instance, Benbasat &
Zmud (1999) argue that the relevance of information systems research is directly related to its

applicability in design, stating that the implications of empirical information systems research
should be implementable and synthesize an existing body of research to stimulate critical

thinking among information systems practitioners. A key approach in designing information

systems is conceptual modeling.

Conceptual modeling can be described as the activity of representing aspects of the physical

and social world for the purpose of communication, learning and problem solving among human
users (Mylopoulos, 1992). Since the late 1960s, the importance of conceptual modeling grew

substantially due to the many information system project failures that were the consequence of

faulty requirement analysis. As a result, conceptual models were introduced as a means to enable
early detection and correction of errors. Over the years, conceptual modeling has become a
fundamental discipline in several subdomains of computer science. Characteristically, a

conceptual model possesses three features (Stachowiak, 1973): (1) a mapping feature, meaning

that a model can be seen as a representation of the ‘original’ system, which is expressed through

a modeling language; (2) a reduction feature, characterizing the model as only a subset of the

original system and (3) the pragmatics of a model which describes its intended purpose or
objective. Because of the importance attributed to conceptual modeling as a means to enable

early detection and correction of errors, a wide range of conceptual modeling languages and
methods were developed and introduced. Criticism, however, arose stating that these approaches

and techniques still lacked a comprehensive and generally acknowledged understanding

(Moody, 2005). In addition, many conceptual models fell short of an adequate specification of
the semantics of the terminology of the underlying models, which led to inconsistent

interpretations and uses of knowledge (Grüninger, Atefi, & Fox, 2000). In order to provide a

foundation for conceptual modeling, ontologies were introduced.

Ontology can be broadly defined as “the set of things whose existence is acknowledged by a

particular theory or system of thought” (Honderich, 2006). Research on ontologies has become
increasingly widespread in the computer science community, gaining importance in research

fields such as knowledge engineering (Uschold & Gruninger, 1996), knowledge representation

(Sowa, 1999) and information modeling (Ashenhurst, 1996). More specifically in the field of
conceptual modeling, ontologies provide a foundational theory that articulates and formalizes

the conceptual modeling grammars needed to describe the structure and behavior of the modeled
domain (Wand & Weber, 1993). This foundation manifests itself by means of a formal

specification of the semantics of models and describes precisely which modeling constructs

represent which phenomena (Opdahl et al., 2012). Different types of ontologies can also be
distinguished, based upon their level of dependence on a particular task or point of view

(Guarino, 1998):

• Top-level or foundational ontologies describe very general concepts like space, time, matter,

object, event, action, etc., which are independent of a particular problem or domain;

• Domain ontologies and task ontologies describe, respectively, the vocabulary related to a

generic domain (like medicine or automobiles) or a generic task or activity (like diagnosing

or selling), by specializing the concepts introduced in a top-level ontology;

• Application ontologies describe concepts that depend both on a particular domain and task,

and often combine specializations of both the corresponding domain and task ontologies.
These concepts often correspond to roles played by domain entities while performing a

certain task, like replaceable unit or spare component.

Moreover, ontologies can also be differentiated based upon their application. For example,
Uschold & Jasper (1999) classified ontologies according to the purpose they

fulfill: to assist in communication between human agents, to achieve interoperability, or to

improve the process and/or quality of engineering software systems. When applying an ontology
for the purpose of communication, we can think of ontologies providing real-world semantics

for language constructs or assessing the adequacy and sufficiency of modeling constructs for
representing concrete problem domains. Concerning interoperability, ontological theories are

well suited to establish a common understanding of the semantics of context

elements and their associated metadata and to facilitate the sharing of this understanding. Finally,
ontologies can be adopted to improve the process and/or quality of engineering information

systems for instance by aiding in the construction of a flexible and configurable software
environment or organize and utilize resources dynamically. The implementations of these new

practices and approaches derived from the use of ontological theories gave rise to a new research

field, often referred to as ontology-driven conceptual modeling.

Ontology-driven conceptual modeling (ODCM) is defined as the utilization of ontological

theories, coming from areas such as formal ontology, cognitive science and philosophical logics,

to develop engineering artifacts (e.g. modeling languages, methodologies, design patterns and
simulators) for improving the theory and practice of conceptual modeling (Guizzardi, 2012). All

of these techniques have in common that ontologies are adopted in some capacity (e.g. evaluation, analysis,
theoretical foundation or interoperability) to improve the mapping, reduction or pragmatic

feature of either the conceptual modeling process or the output of this process – the conceptual

model.

Over the years, the adoption of ODCM as a modeling practice materialized steadily in

different trends or phases. Initially, ontologies were introduced in the field of conceptual

modeling as a way to evaluate the ontological soundness of a conceptual modeling language.


With respect to the evaluation of conceptual modeling languages and more specifically the

evaluation of their conceptual grammars, ontologies proved quite useful in assessing whether
different conceptual modeling procedures are likely to lead to good representations of real-world

phenomena. For instance, ontological theories such as those of Heller & Herre (2004), Chisholm

(1996) and Bunge (1977), have been successfully applied for the evaluation of conceptual
modeling languages or frameworks (Guizzardi & Halpin, 2008).

Gradually, however, a second trend in the usage of ontologies emerged, in the sense that an

ontology would express the fundamental elements of a domain, thereby providing the
theoretical foundations of a conceptual modeling language (Guarino, 1998). This new way of

applying ontologies led to a growing interest in the role that ontologies can fulfill in the
improvement of conceptual modeling languages (Opdahl et al., 2012), by adding structuring

rules to existing languages (Evermann & Wand, 2005a), and by proposing conceptual modeling

patterns and anti-patterns (Falbo, Barcellos, Nardi, & Guizzardi, 2013).

Furthermore, a third trend in ODCM takes it another step further, in the form of not only

evaluating or supporting a conceptual modeling technique, but by evolving into a proper
conceptual modeling technique itself, thereby introducing modelers to the ontological way of
thinking. These new techniques are often founded on existing modeling notations and enhance
thinking. These new techniques are often founded on existing modeling notations and enhance

the metamodel of this notation by incorporating formal ontological constraints that correspond
to the ontology’s axiomatization. An example of such a new techniques is OntoUML (Guizzardi

& Zamborlini, 2013). OntoUML is a modeling language that reflects the ontological distinctions

prescribed by UFO (Unified Foundational Ontology) by incorporating the axiomatization of the


UFO ontology by means of formal constraints in the UML class diagram metamodel. With these

techniques, modelers are introduced to an ontological way of thinking by learning to perceive

and interpret the world in ontological concepts and rules.
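To make the idea of such metamodel constraints more concrete, the short Python sketch below illustrates, in a deliberately simplified form, what checking one OntoUML-style rule could look like: a class stereotyped as a role should specialize a class stereotyped as a kind. This sketch is an illustration only; it is not part of the dissertation and does not reproduce the actual axiomatization of UFO or the OntoUML metamodel, and all class names in it are hypothetical.

    # Minimal illustrative sketch (not the actual UFO/OntoUML axiomatization):
    # a toy check of one simplified OntoUML-style rule, namely that a class
    # stereotyped as a role should specialize a class stereotyped as a kind.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ModelClass:
        name: str
        stereotype: str                      # e.g. "kind" or "role"
        parents: List["ModelClass"] = field(default_factory=list)

    def violates_role_rule(cls: ModelClass) -> bool:
        # Simplification: only direct parents are inspected here.
        return cls.stereotype == "role" and not any(
            p.stereotype == "kind" for p in cls.parents
        )

    person = ModelClass("Person", "kind")
    student = ModelClass("Student", "role", parents=[person])
    customer = ModelClass("Customer", "role")   # no kind among its parents

    for c in (student, customer):
        print(c.name, "violates the role rule:", violates_role_rule(c))

In OntoUML itself, such constraints are of course expressed formally in the metamodel rather than in program code; the sketch only conveys the flavor of rules that embody an ontological way of thinking.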

Finally, the practical relevance of ODCM can be related to its supportive function in the

design of conceptual models, which consequently should result in better performing information

systems. More specifically, ODCM offers a straightforward approach to quality assurance for a
conceptual model by establishing the ontological foundations of its core concepts to clarify its

real-world semantics. By clearly defining the semantics of a conceptual model and its
corresponding domain, we increase its overall quality, which in turn leads to higher

performing information systems in terms of comprehensibility, maintainability, interoperability

and evolvability.

1.2. Problem Definition

Notwithstanding the successful utilization of ontologies in the field of conceptual modeling, several

research gaps and shortcomings can be identified that still pose challenges for the further
development of the field of ODCM.

First, the added value of adopting an ODCM technique is not always straightforward.

More specifically, it is not always clear for a modeler who wants to develop a conceptual model
what the actual benefits of utilizing an ODCM modeling technique are. For example, while

some empirical evidence (Bera, 2012) has confirmed that ontological rules can alleviate
cognitive difficulties when developing conceptual models and that modelers commit fewer

modeling errors when applying these ontological rules, other studies such as that of Hadar

and Soffer (2006) obtained less promising results. Their results agreed with those of Bera (2012)
that the use of ontology-based modeling rules can indeed provide guidance in developing a

conceptual model and can reduce modeling variations, although the overall effect of these rules
was not convincingly significant and did not always seem sufficient. More specifically,

the study results even showed that difficulties were experienced in the application of the rules,

especially with large sets of these rules. These ambiguous findings add to the uncertainty of the

added value of investing time and effort in understanding and applying ODCM compared to

more straightforward traditional conceptual modeling approaches.

Second, another shortcoming and substantial research gap in ODCM can be related to the

model comprehension of ontology-founded models. More specifically, comprehending an

ontology-driven model can be quite challenging. Understanding the philosophical concepts and
structures of an ontology (e.g. theory of parthood, types and instantiations, identity, dependency,

unity etc.) can be a strenuous task for the end users of a model. For instance, the empirical test
of Cockcroft (2005) examined how well a conceptual model that has been

enhanced with ontological principles is understood. To perform this test, the study compared one group reviewing

a conceptual model that was constructed without ontological rules and another group reviewing
a model recast according to ontological principles. Their results indicated that with the exception

of the use of entities to represent events, the conceptual model without ontological rules was

better understood by domain experts than the ontology-enhanced conceptual model.


Additionally, research concerning the comprehension of

models remains scarce (Verdonck, Gailly, De Cesare, & Poels, 2015). More research on this aspect would
nonetheless be quite beneficial for the field of ODCM, since the principal purpose of a

conceptual model is to be understood and comprehended by anyone who uses it.

Next, another drawback in ODCM can be attributed to the often insufficiently careful
consideration given to the selection of an ontology, which is largely due to the limited amount of research that has

been conducted in this area. Since ontological theories form the foundations of ODCM,
the selection of a particular ontology can potentially influence the

conceptualizations that are rendered. For example, the research of De Cesare, Henderson-

Sellers, Partridge, and Lycett (2015) examined the way in which two different ontologies represent
temporal changes, concluding that each of the ontologies can lead to different representations

and interpretations. Additionally, an interesting observation was made by Soffer

and Hadar (2007), who analyzed two ontology-based modeling frameworks in order to
evaluate their potential contribution to a reduction in variations and thus facilitate model

understanding. Their findings highlight contradictions in the guidance provided by the different
ontological frameworks, where differences in the underlying ontology exist. These results

indicate that the choice of an ontology may affect the resulting model and that not all ontologies

are equivalent in terms of modeling guidance. While these research efforts have demonstrated

that applying different ontologies can lead to diverse kinds of conceptualizations, overall there

exists little research that profoundly investigates the impact of applying these different types of
ontologies on the resulting models.

Finally, we can identify a last issue, namely the complexity of applying
ODCM. As mentioned above, ODCM can be adopted for various purposes and in a broad range

of domains. This has led to numerous ontologies and ontological analyses being created and

performed, and a plethora of ontology-driven conceptual models that have been developed. This
can cause a great deal of confusion and complexity for researchers and practitioners conducting

ODCM. For example, in the case where ontologies are seen as key to successfully achieving

semantic interoperability between models and languages, this proliferation of techniques,


models and analyses has re-introduced the interoperability problem (Khan & Keet, 2013).

Especially in the long term, this raises ambiguity between different ontology-founded
models, increases terminological confusion and, consequently, leads to more complexity.

Another example of such confusion and complexity can be found in conducting an ontological

analysis. Nowadays, the term ‘ontological analysis’ encompasses a great variety of different
types of purposes, techniques or methods, and can thus be performed in many different ways,

currently without maintaining a clear distinction. Thus, in order to assist researchers and
practitioners in adopting an ontology-founded modeling technique, one approach

is to provide modeling support in the form of methodologies, methods and frameworks that guide

modelers in the process of creating an ontology-founded conceptual model.

1.3. Research objectives

In order to overcome the identified shortcomings and research gaps as defined above, we

formulate the following research objectives that aim to contribute to the field of ODCM.

Our first objective is to gain more insight into the domain of ODCM itself. Since ODCM

is still a relatively new research domain in the field of information systems, there is still much
discussion on how the research in ODCM should be performed and what the focus of this

research should be (Guizzardi & Halpin, 2008; Saghafi & Wand, 2014). Additionally, gaining

more insight into the kind of research that has been performed in ODCM, and the kind of
research that is still required will allow us to better understand the research gaps that exist, and

which improvements could be implemented in order to overcome them.

Second, since the added value of ODCM can be ambiguous and because there exists little

research into this aspect, we will investigate the impact of adopting an ODCM technique

when constructing a conceptual model. More specifically, we will examine the effect on a
conceptual model when adopting an ODCM technique compared to a TCM technique and

observe the resulting quality of the models and the effort spent by the modelers to create such

models. Few efforts have yet compared the difference in modeling between ODCM and more
traditional conceptual modeling techniques that do not apply an ontology. The research efforts

that have compared ODCM to traditional conceptual modeling techniques were often either
partial or incomplete, meaning that only certain aspects of an ontology or a limited set of

ontological concepts or rules were compared. By performing such a study, we could then

establish in which aspects an ODCM technique may prove to have more added value than a traditional
conceptual modeling technique.

Third, we are going to investigate the impact of adopting different kinds of ontologies on
the resulting conceptual models and the corresponding influence on the model

comprehension of their end users. While ontologies were introduced to increase the overall

quality of conceptual models, past research has mainly emphasized the semantic quality of

models and has spent little effort in examining the comprehension of models. Furthermore, to

determine the effect of adopting two rather different ontologies, we intend to compare their

impact on the resulting conceptual models and measure the comprehension of these models. If

the choice of an ontology matters, this would as a result have a substantial impact on the

development of the model itself, and as a result would influence model comprehension. Thus,
by being able to demonstrate that a certain ontology can influence model comprehension, we

could provide more insights into the importance of this choice, allowing researchers to better
motivate why certain ontologies are adopted.

Finally, since complexity can arise due to the multiple purposes and various different domains

in which ODCM can be applied, we intend to methodologically structure the application of


ODCM. While there exist plenty of methods that aid a researcher or practitioner in performing

ODCM, it is not always clear in which case a certain method or framework is preferable over

another. Therefore, we will methodologically support a practitioner in the process of performing


ODCM. More specifically, we will focus on structuring the process of conducting an ontological

analysis. Since the seminal paper of Wand and Weber (1993), various ontological analyses have
been conducted on a plethora of modeling languages. Accordingly, different kinds of methods

and guidelines have been introduced to perform such an analysis, making it difficult for a

practitioner to select the appropriate method, especially considering the specific use of a certain
ontology for a certain purpose. By first gaining more insight into how an ontological

analysis has been performed in the past, and which techniques were utilized to structure the
analysis, we can develop a methodological approach that associates purpose and method, and

which would assist a researcher or practitioner in successfully performing an ontological analysis.

1.4. Research Design

Having identified the research gaps and shortcomings that exist in the domain of ODCM, we

will perform several research efforts in order to ground and deliver knowledge-based solutions

to address these research gaps. To clearly specify the objective of each of these research efforts,

we will translate the research gaps into specific knowledge questions, which will form the basis
for the further development of this dissertation. More specifically, the knowledge questions that

we will address in this dissertation are the following:

• Knowledge Question 1: What kind of research has been performed over the years in the
domain of ODCM, what is the nature of its research contributions and what is the current

state of the art?

• Knowledge Question 2: What are the effects and the principal differences of applying an

ontology-driven conceptual modeling technique compared to a traditional conceptual

modeling technique?

• Knowledge Question 3: What is the influence of applying different kinds of ontologies on

the model comprehension of the resulting ontology-driven conceptual models?

• Knowledge Question 4: Depending on the purpose of an ontological analysis, how should


this analysis be performed and structured?

Since each of these knowledge questions engages with a specific shortcoming, each requires
a different kind of methodology in order to overcome the identified problem.

Thus, it is our objective to execute several unique research studies – each characterized by their

own methodology and approach – in order to contribute knowledge-based additions
to the field of ODCM. While we will not go into great depth in how each research study will be

executed, we briefly describe the methodology that each will follow:

• Research Study 1: In order to systematically identify and aggregate the research efforts of

the past several years in ODCM, we will perform in this study a systematic mapping review

(SMR) and a systematic literature review (SLR) of articles dealing with ODCM. We base
our method of conducting this literature mapping and review on the systematic literature

review methods described in (Dybå, Dingsøyr, & Hanssen, 2007; Kitchenham & Charters,

2007; Petersen, 2011). Mapping and review studies have different purposes. While the

purpose of an SMR is to summarize prior research and to describe and classify what has been

produced by the literature, the SLR aims at critically examining contributions of past
research, to explain the results of prior research and to clarify alternative views of past

research.

• Research Study 2: The purpose of this research effort is to conduct a study that investigates

and compares the differences between TCM and ODCM. More specifically, we would like

to differentiate between modelers that are trained in a TCM approach and modelers that have
been taught an ODCM approach. To properly measure these effects, we will conduct an empirical

study. In order to properly plan and design how the

experiment is conducted, we will base ourselves upon the experimental design described in
Wohlin et al. (2012). The design of an experiment can be divided into several steps such as

development of hypotheses, definition of the independent, dependent and control variables,


selection of subjects, and instrumentation, which involves the practical implementation of the

experiment.

• Research Study 3: In this study, we will perform a rigorous investigation of the effects of
applying different kinds of ontologies on the comprehension of their resulting ontology-

driven models – also known as the pragmatic quality. Again, we will execute an empirical
study that will investigate how users interpret and comprehend the ontology-driven

conceptual models that were developed by adopting different ontologies. To arrive at a

proper experimental design, we will apply the methodology of Wohlin et al. (2012). The
empirical design is rendered as follows: first, a set of hypotheses is defined that will serve

as the basis for the empirical study. Next, the experiment will test these hypotheses. The sole

purpose of the experiment is to collect data to either accept or reject the hypotheses. Finally,
in order to provide additional insights into the results of the experiment, a protocol analysis

will be conducted. Hence, contrary to the experiment, the purpose of the protocol analysis is
not to collect data to either accept or reject the hypotheses, but instead the analysis aims to

collect data to interpret why the hypotheses were rejected or accepted.

• Research Study 4: For our last research effort, we intend to develop an artifact in the form

of a framework, designed and built with the purpose of distinguishing between the

different kinds of ontological analyses that exist. The benefit of developing such a
framework will lie in its ability to differentiate between the different purposes for performing

an ontological analysis, and to determine which kind of methods can be implemented,


depending on this particular purpose. To construct this framework, we adopt the

methodology of Gregor and Hevner (2013) in order to differentiate between two main

knowledge bases for the development of the theoretical foundations of the framework, i.e.
descriptive and prescriptive knowledge. Both knowledge bases will be investigated for the

purpose of (1) rendering the framework and (2) assessing and refining any weaknesses or

ambiguities that still may exist in the framework.

1.5. Structure of the Dissertation

This PhD dissertation follows the structure of a paper-based dissertation, consisting of 6

chapters in total. The current chapter provides an introduction to the domain of ontology-driven
conceptual modeling, its related problems, a set of research objectives that have been defined to

overcome these problems and a research approach that structures the execution of these
objectives. Chapters 2 through 5 are papers, which are either published (i.e. chapter 2 and 5) or

either under review with international peer-reviewed journals (i.e. chapter 3 and 4). Finally,

chapter 6 forms the conclusion of this dissertation, and offers several further research
opportunities that are a consequence of the presented research.

Chapter 2: Ontology-Driven Conceptual Modeling: A Systematic Literature Mapping and


Review

This chapter aims to critically survey the existing literature in order to assess the kind of

research that has been performed over the years, analyze the nature of the research contributions
and establish its current state of the art by positioning, evaluating and interpreting relevant

research to date that is related to ODCM. To understand and identify any gaps and research

opportunities, the literature study is composed of both a systematic mapping study and a

systematic review study. The mapping study aims at structuring and classifying the area that is

being investigated in order to give a general overview of the research that has been performed
in the field. A review study on the other hand is a more thorough and rigorous inquiry and

provides recommendations based on the strength of the evidence found. This chapter is published in the journal Applied Ontology, 10(3–4), 197–227 (2015).

Chapter 3: Comparing Traditional Conceptual Modeling with Ontology-Driven


Conceptual Modeling: An Empirical Study

The objective of this chapter is to conduct a study that investigates and compares the

differences between traditional conceptual modeling (TCM) and ontology-driven conceptual


modeling (ODCM). More specifically, we would like to differentiate between modelers that are

trained in a TCM approach and modelers that have been taught an ODCM approach. These two

groups of modelers will then have to model a scenario that encompasses certain modeling
challenges. Through this study, we will then compare the two modeling approaches by

investigating the quality of the resulting conceptual models, and the amount of effort a modeler

had to spend in order to compose these models. To properly measure these effects, we intend to
conduct an empirical study. Therefore, the principal objective of this chapter is to examine if

there are meaningful differences in the resulting conceptual model and the effort spent to create such a model between novice modelers trained in an ontology-driven conceptual modeling

technique and novice modelers trained in a traditional conceptual modeling technique.

Chapter 4: Comprehending 3D and 4D ontology-driven conceptual models: an empirical


study

This chapter conducts an empirical study that explores the influence of an ontology on the
interpretation and understanding of the resulting conceptual models. More specifically, this

chapter investigates to which degree the pragmatic quality of ontology-driven models is

influenced by the choice of a particular ontology, given a certain understanding of this ontology.

To answer this question, previous research efforts are discussed and distilled into three

hypotheses that distinguish different metaphysical characteristics. Next, these hypotheses are

tested in a rigorously developed experiment, where a total of 158 business administration


students participated. The data collected from the experiment then allowed us to either accept or

reject the hypotheses. In order to provide further insights into the results of our experiment, an
additional protocol analysis is performed. Contrary to the experiment, the purpose of the protocol

analysis is to collect data to clarify why the hypotheses were rejected or accepted. Finally, this

chapter extracts five derivations from the results of the experiment and protocol analysis, to
illustrate the extent to which the pragmatic quality of an ontology-driven model is influenced by

the choice of an ontology.

Chapter 5: An ontological analysis framework for Domain-Specific Modeling Languages

This chapter develops a framework to structure the process of conducting an ontological analysis and to offer instructions, in the form of prescriptive patterns, on how to analyze a DSML. The framework is constructed based on well-accepted theories and techniques from both the descriptive and prescriptive knowledge bases. Next, 17 ontological analyses of DSMLs are classified and described according to the framework, in order to gain more insight into how ontological analyses have been performed in the past and which techniques were utilized to structure them. We discovered that only a few analyses actually implement a method for conducting an ontological analysis. As a result, several patterns are identified that are executed when performing an analysis for a particular purpose. The chapter then relates these purposes and patterns to various methods and techniques for conducting an ontological analysis. Consequently, the framework can aid researchers with future ontological analyses and allows a researcher with a specific purpose to recognize the required patterns and types of methods that can be followed in order to successfully conduct an ontological analysis and thus achieve his or her intended purpose. This chapter is published in the Journal of Database Management, 29(1), 23–42 (2018).

Chapter 6: Conclusion

The last chapter of this dissertation will first summarize the research efforts that have been

performed in every chapter, and briefly describe their findings. Next, we will discuss the

implications of the obtained research results and future research opportunities that our findings
offer.

1.6. Publications

Publications in Peer-Reviewed International Journals

• Verdonck, M., Gailly, F., De Cesare, S., & Poels, G. (2015). Ontology-driven conceptual

modeling: A systematic literature mapping and review. Applied Ontology, 10(3–4), 197–

227.

• Verdonck, M., & Gailly, F. (2018). An Ontological Analysis Framework for Domain-

Specific Modeling Languages. Journal of Database Management, 29(1).

Publications Under Review in Peer-Reviewed International Journals

• Verdonck, M., Gailly, F., Pergl, R., Guizzardi, G., Martins, B., & Pastor, O. (2018).

Comparing traditional conceptual modeling with ontology-driven conceptual modeling: an

empirical study.

• Verdonck, M., Gailly, F., & De Cesare, S. (2018). Comprehending 3D and 4D ontology-

driven conceptual models: an empirical study.

Publications in Peer-Reviewed International Conference Proceedings

• Verdonck, M., Gailly, F., & Poels, G. (2014). 3D vs. 4D Ontologies in Enterprise Modeling.

In Advances in Conceptual Modeling (Vol. 8823, pp. 13–22).

• Verdonck, M., & Gailly, F. (2016a). Insights on the Use and Application of Ontology and

Conceptual Modeling Languages in Ontology-Driven Conceptual Modeling. In Conceptual

Modeling - ER 2016, Lecture Notes in Computer Science (Vol. 9974 LNCS, pp. 83–97).

• Verdonck, M., & Gailly, F. (2016b). An Exploratory Analysis on the Comprehension of 3D and

4D Ontology-Driven Conceptual Models. In Conceptual Modeling - ER 2016, Lecture Notes in Computer Science (Vol. 9975, pp. 163–172).

Other Conferences and Workshops

• Verdonck, M., & Gailly, F. (2014). Guiding the Conceptual Modelling Process with Core

Ontologies. In 8th International Conference on Formal Ontology in Information Systems.

• Verdonck, M., & Gailly, F. (2015). A Requirements Analysis of a Method for Harmonizing

the Conceptual Modeling Language, The Ontology and the Model Setting. In 9th

International Workshop on Value Modeling and Business Ontologies.

• Verdonck, M., & Gailly, F. (2016). Complexity in Ontology-Driven Conceptual Modeling. In

10th International Workshop on Value Modeling and Business Ontologies.

• Verdonck, M., & Gailly, F. (2016). Design of an Empirical Study to Measure the

Comprehension of 3D and 4D Ontology-Driven Models. In 11th International Workshop

on Value Modeling and Business Ontologies.

2. Ontology-Driven Conceptual Modeling: A Systematic Literature Mapping and Review
2.1. Introduction

Conceptual models were introduced to increase understanding and communication of a system or

domain among stakeholders. According to Stachowiak (1973), a conceptual model possesses three

features: (1) a mapping feature, meaning that a model can be seen as a representation of the ‘original’

system, which is expressed through a modeling language; (2) a reduction feature, characterizing the

model as only a subset of the original system and (3) the pragmatics of a model which describes its
intended purpose or objective. Conceptual modeling is the activity of representing aspects of the

physical and social world for the purpose of communication, learning and problem solving among
human users (Mylopoulos, 1992). Conceptual modeling has gained much attention especially in the

field of information systems, for design, analysis and development purposes. Their importance was

understood in the 1960s, since they facilitate detection and correction of system development errors
(Wand & Weber, 2002). The higher the quality of conceptual models, the earlier the detection and

correction of these errors occurs. This increase in attention and importance attributed to conceptual

modeling led to the development and introduction of a wide range of various conceptual modeling
approaches and techniques. Criticism arose, however, stating that these approaches and techniques

still lacked a comprehensive and generally acknowledged understanding (Moody, 2005). In


addition, many conceptual models lacked an adequate specification of the semantics of the

terminology of the underlying models, which led to inconsistent interpretations and uses of

knowledge (Grüninger et al., 2000).

In order to provide a foundation for conceptual modeling, ontologies were introduced. As

mentioned in (Gruninger, Bodenreider, Olken, Obrst, & Yim, 2008), the appellation of “ontology”
encompasses many different types of artifacts created and used in different communities. For

keeping a broad interpretation, we adopt the characterization of ontologies as described by

Honderich (2006), which defines ontology as “the set of things whose existence is acknowledged
by a particular theory or system of thought”. Research on ontologies has become increasingly
widespread in the computer science community, gaining importance in research fields such as

knowledge engineering (Uschold & Gruninger, 1996), knowledge representation (Sowa, 1999) and

information modeling (Ashenhurst, 1996). This resulted in the development of different types of
ontologies of which some are also used for conceptual modeling.

With respect to the evaluation of conceptual modeling languages and more specifically the
evaluation of their conceptual grammars, ontologies proved quite useful in assessing whether

different conceptual modeling procedures are likely to lead to good representations of real-world

phenomena. Therefore, ontologies were quickly introduced in the field of conceptual modeling,
originally as a way to evaluate the ontological soundness of a conceptual modeling language and its

corresponding concepts and grammars. For instance ontological theories, such as those of Heller &

Herre (2004), Chisholm (1996) and Bunge (1977), have been successfully used for the evaluation
of conceptual modeling languages or frameworks (e.g. UML, ORM, ER, REA, OWL) (Guizzardi

& Halpin, 2008). The usage of ontologies goes further than only evaluating conceptual modeling,
in the sense that an ontology expresses the fundamental elements of a domain, thereby

becoming the theoretical foundations of a conceptual model (Guarino, 1998). This way of applying

ontologies led to a growing interest in the role that they can fulfill in improving the quality of
conceptual models. For example, ontologies were used for the development of new conceptual

modeling languages (Opdahl et al., 2012), for adding structuring rules to existing languages
(Evermann & Wand, 2005a), and for proposing conceptual modeling patterns and anti-patterns (R.

Falbo et al., 2013). Additionally, ontologies can also be applied as a way to improve (semantic)

interoperability. In other words, ontologies can be used as a means of translating or interchanging
information. This is achieved, for example, by applying an ontology to attain semantic integration

for translating between different models, methods, languages or paradigms (Bittner, Donnelly, &

Winter, 2005; Green, Rosemann, Indulska, & Manning, 2007).

In this paper all these techniques are called ontology-driven conceptual modeling (ODCM)

approaches. We define ontology-driven conceptual modeling as the utilization of ontological

theories, coming from areas such as formal ontology, cognitive science and philosophical logics, to

develop engineering artifacts (e.g. modeling languages, methodologies, design patterns and

simulators) for improving the theory and practice of conceptual modeling (Guizzardi, 2012). Hence,
all of these techniques have in common that ontologies are used (e.g. for evaluation, analysis, theoretical

foundation or interoperability) to improve the mapping, reduction or pragmatic feature of either the
conceptual modeling process or the output of this process, the conceptual model. A summary of the

definitions related to ODCM can be found in Table 1.

Since ODCM is still a relatively new research domain in the field of information systems, there
is still much discussion on how the research in ODCM should be performed and what the focus of

this research should be (Guizzardi & Halpin, 2008; Saghafi & Wand, 2014). Therefore, this article

aims to critically survey the existing literature in order to assess the kind of research that has been
performed over the years, analyze the nature of the research contributions and establish its current

state of the art by positioning, evaluating and interpreting relevant research to date that is related to
ODCM. In order to systematically identify and aggregate evidence of the past trends in ODCM, we

present in this paper a systematic mapping review (SMR) and a systematic literature review (SLR)

of papers dealing with ODCM. We can distinguish both methods according to the goal they aim to
achieve (Rowe, 2014). The purpose of a SMR is to summarize prior research and to describe and

classify what has been produced by the literature. The SLR aims at critically examining
contributions of past research, to explain the results of prior research and to clarify alternative views

of past research. The SMR and the SLR make the following contributions to the research field of

ODCM:

• Provide a classification scheme founded on previously developed research dimensions that deal

with ontologies and conceptual modeling. These dimensions can also be applied to position and
describe future research activities.

• Analyze how the research in ODCM is performed and how it is being applied in the field.

• Analyze the past focus and the change of research trends over time.

• Identify any gaps and opportunities that could be areas for further research or improvement.

Table 1: Definitions of concepts

Conceptual model: A conceptual model is composed of (1) a mapping feature, meaning that a model can be seen as a representation of the ‘original’ system, which is expressed through a modeling language; (2) a reduction feature, characterizing the model as only a subset of the original system; and (3) the pragmatics of a model, which describes its intended purpose or objective (Stachowiak, 1973).

Ontology: Ontology can be defined as the set of things whose existence is acknowledged by a particular theory or system of thought (Honderich, 2006).

Conceptual modeling: Conceptual modeling is the activity of representing aspects of the physical and social world for the purpose of communication, learning and problem solving among human users (Mylopoulos, 1992).

Ontology-driven conceptual modeling: Ontology-driven conceptual modeling is the utilization of ontological theories, coming from areas such as formal ontology, cognitive science and philosophical logics, to develop engineering artifacts (e.g. modeling languages, methodologies, design patterns and simulators) for improving the theory and practice of conceptual modeling (Guizzardi, 2012).

2.2. Research methodology

In order to achieve a rigorous literature study, we based our method of conducting this literature

mapping and review on the systematic literature review methods described in (Dybå et al., 2007;
Kitchenham & Charters, 2007; Petersen, 2011). Mapping and review studies have different

purposes. The SMR aims at structuring the area that is being investigated and displays how the work

is distributed within this structure. The aim of the SLR on the other hand is to provide
recommendations based on the strength of evidence. Hence, the mapping study is concerned with

the structure of the research area, while the review study is concerned with the evidence (Petersen,
Feldt, Mujtaba, & Mattsson, 2008). In order to properly distinguish the approaches followed during

the SMR and SLR, this section first describes the general methodology that was being followed in

both studies. Sections 3 and 4 then go into more detail, each discussing respectively how both studies
were performed. To collect the articles for this literature study, we adopted the guidelines of

(Kitchenham & Charters, 2007): (1) definition of the research questions; (2) formulation of a search

strategy and the paper selection criteria; (3) construction of the classification scheme; (4) extraction

of data and (5) synthesis of the results. Figure 1 displays the steps we have followed for conducting

this literature study.

The research questions act as the foundation for all further steps of the literature study. The

research questions should be formulated in such a way that they represent the objectives of the
literature study. First, we defined a set of research questions for performing the mapping study,

which we refer to now as mapping questions. The underlying reason for the mapping questions was

to determine how research has been performed in ODCM, the kind of research that has been
performed and how ODCM has evolved over the years. Based upon these mapping questions, we

then formulated a search strategy based on the different phases from (Dybå et al., 2007) to collect

our initial set of papers. The first phase defines a search string to search relevant databases and
proceedings for articles related to the classification questions. The second and third phases define

the inclusion and exclusion criteria. In the mapping study, we evaluate the inclusion and exclusion
of articles based upon their titles and abstracts. After having collected and selected the papers in the

search strategy, we constructed a classification scheme in order to categorize the papers. This

classification scheme was based upon our pre-defined mapping questions and can be seen as a
structure of research studies performed in the domain of ODCM. A good classification/taxonomy is

characterized by certain quality attributes (Petersen, 2011). We have adopted these attributes by first
performing a literature search to identify useful classifications for this literature review and by

selecting classifications that are generally well-accepted in the domain of ODCM. The next step is

the extraction of data according to the classification scheme that we have constructed.

To extract the data, we followed an approach similar to (Bandara, Miskon, & Fielt, 2011), as a

reference of how to best extract relevant content from identified papers and how to synthesize and

analyze the findings of a literature review. We first gathered all the collected literature from our
search strategy into the reference manager Mendeley¹, to organize the general demographic
information such as title, author, publication year etc. Next, the extraction was performed through

the qualitative analysis tool Nvivo² to analyze and structure our data by means of nodes

and classifications. A node can be described as a collection of references dealing with a specific
topic and is used to group articles and papers. In our research, each node represents a classification

linked to one mapping question for categorizing the data. Both the data from Mendeley and Nvivo
were then merged in the statistical software tool SPSS³ to conduct some additional quantitative

analyses. After all the data was extracted, we could commence with the analysis and synthesis of

the results from the mapping study and present the mapping results. The results and implications of
the mapping study serve as the basis for our review study. The methodology followed runs parallel

with the methodology from the SMR. First, our mapping results generated some new questions

about ODCM, which we defined as review questions. We then defined a search strategy based on
these review questions, where we further distilled the papers of the SLR based upon newly defined

inclusion and exclusion criteria. We then created a classification scheme based upon existing
literature, which was used to structure and categorize the articles. In our data extraction, we collected

the papers in Mendeley, structured and classified these papers in Nvivo in order to ultimately

analyze the papers in SPSS. Finally, the analysis and synthesis of the results were presented. To
conclude this section, all articles, classifications and other data of both the SMR and SLR can be

found at http://www.mis.ugent.be/AppliedOntology2015/.

Figure 1: Steps undertaken while conducting the literature study

1 https://www.mendeley.com/
2 http://www.qsrinternational.com/products_nvivo.aspx
3 http://www-01.ibm.com/software/be/analytics/spss/

2.3. Mapping Study

2.3.1. Mapping Questions

In the SMR, existing ODCM literature will be structured from a design science perspective. Strictly

defined, design science research creates and evaluates IT artifacts intended to solve identified

organizational problems (Hevner, March, Park, & Ram, 2004). In this paper, the broader

interpretation of Hevner (2007) is followed. According to Hevner (2007), design science research
must be considered as an embodiment of three closely related cycles of activities: (1) The Relevance

Cycle, which inputs requirements from the contextual environment into the research and introduces
the research artifacts into environmental field testing; (2) The Rigor Cycle, providing grounding

theories and methods along with domain experience and expertise from the foundations knowledge

base into the research, and adding new knowledge generated by the research back to the growing
knowledge base; (3) the central Design Cycle, which supports a tighter loop of research activity for

the construction and evaluation of design artifacts and processes.

Based on the definition of ODCM by Guizzardi provided in the introduction, ODCM can be
classified as a design science project that has the general goal of improving the theory and practice of

conceptual modeling. This goal is realized by developing different engineering artifacts, which
ideally should be evaluated with respect to this goal. In line with both Hevner (2007) and Baskerville

et al. (2015), we also recognize that besides the artifact development goal, design science projects

can also have a knowledge production goal. Many ODCM design science projects focus on
generating knowledge which is needed for the development of these artifacts or which might be the

result of the design cycle. As a consequence of this broad interpretation of Design Science, we
assume a ‘helicopter view’ over the research performed in ODCM. This means that ODCM is

considered as a covering design science project and that different ODCM studies are interpreted as

a part of one of these three closely related cycles. An example of a similar approach for analyzing
design science research can be found in (Guido L. Geerts, 2011). He integrates different papers’

contributions and artifacts and displays the performed research as part of the REA artifact research

network. This allows a better understanding of each paper's contribution, of how the research area

evolves and illustrates the different types of research interactions. Similarly, we will assess different
paper’s contributions and artifacts and examine which research approaches have been followed.

Below, we have displayed the questions related to the structuring of the research. The mapping
questions serve two purposes: the first question aims to gain more insight into the kind of design

science research that has been performed in ODCM. More specifically, we want to answer

questions such as: what type of artifact did the research produce, develop or improve? What kinds
of contributions have been made to the research field? And what kind of research method was being

followed? The second mapping question aims to discover the application or function of ODCM.

This latter mapping question asks about more specific content related to ontologies and conceptual
modeling. We would like to assess the context and setting in which conceptual modeling takes place

and discover the intended purpose of the conceptual model and the ontology. Thus, while the first
mapping question aims to discover information about how the process of research in ODCM is

conducted, the second question aims to explore how the final model, or the research result has been

applied.

• MQ1: How is design science research performed in ontology-driven conceptual modeling?

• MQ2: How is research in ontology-driven conceptual modeling applied?

2.3.2. Search strategy and paper selection criteria

As mentioned in section two, we perform three phases to define a comprehensive search strategy.

The first phase defines a search string to search relevant databases and proceedings for articles

related to the mapping questions. The second and third phases define the inclusion and exclusion
criteria. We evaluated articles based upon their titles and abstracts.

Phase 1 – Search articles in relevant databases based upon title and keywords

Our choice of electronic collections was determined by the variety of computer science and

management information systems journals they cover: Science@Direct, IEEE digital library, ACM digital

library, Springer database, Web of Science and AIS electronic library. The search terms which we
used to extract literature from the electronic collections were constructed using the following steps,

as described in (Brereton, Kitchenham, Budgen, Turner, & Khalil, 2007): (1) definition of the major
terms; (2) identification of alternative spellings, synonyms or related terms for the major terms; and

(3) use of the Boolean AND to link the major terms.

As major terms we selected ‘Ontology-driven’ and ‘conceptual modeling’ with alternative terms
‘ontological analysis’ and ‘conceptual analysis’. For the search in titles and keywords the terms

were reformulated using Boolean algebra, so that the terms were connected with the AND operator.

Our search strings were therefore: "Ontology driven" AND "Conceptual modeling", "Ontological
analysis" AND "Conceptual modeling", "Ontology driven" AND "Conceptual analysis". We used

the term ‘Ontology driven’ without the hyphen since this term generated more results. In our search,
we specified that papers needed to be written in English and that they were published from 1993 to

2014. Since the paper of Wand and Weber (1993) can be seen as the first to introduce ontologies

in evaluating conceptual modeling languages, we did not search for papers written before 1993.
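
For illustration, the following minimal Python sketch (not part of the original study) assembles the three Boolean query strings from the selected term pairs and records the restrictions that were applied in every electronic collection:

# Illustrative sketch only: the term pairs, the AND operator and the filters below
# are taken from the description above; the code itself is not part of the study.
TERM_PAIRS = [
    ("Ontology driven", "Conceptual modeling"),
    ("Ontological analysis", "Conceptual modeling"),
    ("Ontology driven", "Conceptual analysis"),
]

# Restrictions applied in every electronic collection.
FILTERS = {"language": "English", "from_year": 1993, "to_year": 2014}

# Link each pair of major terms with the Boolean AND operator.
queries = [f'"{left}" AND "{right}"' for left, right in TERM_PAIRS]

for query in queries:
    print(query, FILTERS)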

Phases 2 and 3 - Inclusion and exclusion of articles based upon title and abstract reading

In these phases, we evaluated the titles and abstracts of the returned articles of phase one. For the
inclusion of an article, the following topics were to be explicitly mentioned in the abstract:

• Ontology: one or multiple ontologies were a crucial aspect in the research performed in the

paper. An ontology could have been the topic of the paper or could be used as a means to, for

example, improve a model, but had to be an integral part of the development of the paper.

• Conceptual modeling: as with ontologies, one or multiple conceptual models needed to be an

essential aspect of the article. A conceptual model needed to be the topic of the paper or had to
be a deciding factor in the development of the paper. By conceptual modeling, we refer to the

framework of (Wand & Weber, 2002), where research on conceptual modeling is composed of:

(1) a conceptual modeling grammar, i.e. a set of constructs and rules to combine the constructs

to model real-world domains, (2) a conceptual-modeling method that provides procedures by


which a grammar can be used, (3) a conceptual-modeling script, which is the product of the

conceptual modeling process and finally (4) the context, being the setting in which the
conceptual modeling occurs.

• Type of literature: As part of our search strategy, we considered only peer-reviewed journals,

workshops and conferences. Although there is a great deal of additional literature in books, web

pages, magazine articles and working papers, their content has not undergone peer review

and therefore the quality cannot be reliably determined.

From the search results we excluded those studies where ontology-driven conceptual modeling is

not the main focus. If an article focused only on ontologies or only on conceptual modeling, it was
excluded. Also, if ODCM was only used as a means of general introduction it was not included in

the search results. Finally, we also applied a limited form of snowballing. More specifically, we

restricted the search of papers to the first level, meaning that the references of the selected papers
were not automatically included to obtain more papers on ODCM. We did however search the

references of the selected papers for any often-occurring references that were not included in our

initial dataset. In total, we have added six papers that were often cited in the references of our

selected papers but that were not captured by our search strings. The added papers were (Green &

Rosemann, 2000; Milton, Kazmierczak, & Keen, 2001; Opdahl & Henderson-Sellers, 2001; Wand,

1996; Wand & Weber, 1993, 1995).

2.3.3. Classification scheme

The classification scheme that is based upon the mapping questions consists of six facets: (1) design
artifact, (2) research contribution and (3) design evaluation method, which are related to MQ1, while (4)

purpose of the conceptual model, (5) purpose of the ontology and (6) type of ontology relate to
MQ2. Each of these classifications is discussed in more detail below.

Design Science Artifact

The creation of a purposeful artifact should be the result of any design science research in order

to address an important problem. It should be described effectively, enabling its implementation and
application in an appropriate domain. Hevner et al. (2004) indicate that design science research must

produce a viable artifact in the form of a construct, a model, a method, or an instantiation.

• Constructs provide the vocabulary and symbols used to define problems and solutions. They

have a significant impact on the way in which tasks and problems are conceived and enable the
construction of models or representations of the problem domain. In our mapping study, we

added any analysis or discussion on conceptual modeling grammars and/or any ontological

constructs to this category. For example, an analysis of the relationship construct, the description
of mutual properties instead of association properties or an evaluation of ontological rules for

developing UML diagrams would fall in this category.

• Models are made up from these constructs in order to generalize specific situations into patterns

for application in similar domains. We included any abstractions, representations, conceptual


models and/or meta-models in this category. Thus, a conceptual model constructed for the

analysis of a specific information system or research that investigates the potential of a meta-
model for software engineering would be included in this category. Meta-models can be

distinguished from ontologies since the latter are acknowledged by a particular theory or system

of thought (Honderich, 2006), which is not the case for a meta-model.

• Methods can be seen as the blueprints for building models of specific situations. Therefore,

frameworks, modeling approaches and/or modeling guidelines and best practices can be seen as

methods for building models or guiding the process of constructing models. Examples are the

description of a three-stage approach to develop a certain ontology-based system or the


development of a methodology for ontological analysis.

• Instantiations demonstrate the feasibility of both the design process and of the designed product.

Instantiations may occur in the form of prototype systems or intellectual or software tools aimed

at improving the process of information systems development. For example, a plug-in tool that

supports designers in developing design solutions in the conceptual design phase would be
counted as an instantiation.

Again, since we see research performed in ODCM as part of one of the three closely related

design science cycles, we would like to emphasize that papers will also be assigned to one of the
above categories even if they did not actually construct or originally develop the specific artifact.

Design Science Contribution

Having classified the artifacts that were constructed in the papers, it is useful to further clarify
the relationship between the kind of artifact that was created in a paper and the research contribution

that was made. For classifying the design science contributions, we adopt the design science
research knowledge contribution framework of (Gregor & Hevner, 2013). Their framework

distinguishes four kinds of contributions: improvement, invention, exaptation and routine design.

Each of these contributions is discussed in more detail below:

• Improvement: the contribution aims to develop better solutions in the form of more efficient and

effective products, processes, services, technologies, or ideas. Researchers must contend with a

known application context for which useful solution artifacts either do not exist or are clearly

suboptimal. They draw from a deep understanding of the problem environment to build
innovative artifacts as solutions to important problems.

• Invention is a radical breakthrough, a clear departure from the accepted ways of thinking and

doing. The invention process can be described as an exploratory search over a complex problem

space that requires cognitive skills of curiosity, imagination, creativity, insight, and knowledge
of multiple realms of inquiry to find a feasible solution. The result is an artifact that can be

applied and evaluated in a real-world context and where substantial new knowledge is

contributed.

• Exaptation contributions can be described as known solutions that extend to new problems. This

often occurs in a research situation in which artifacts required in a field are not available or are
suboptimal but where effective artifacts may exist in related problem areas that may be adapted

or, more accurately, exapted, to the new problem context. In other words, contributions in this

category are design knowledge that already exists in one field and is extended or refined so that
it can be used in some new application area.

• Routine design occurs when existing knowledge for the problem area is well understood and

when existing artifacts are used to address the opportunity or question. Research opportunities

are less obvious, and these situations rarely require research methods to solve the given problem.

Since both inventions and exaptations are rather rare and often are not easy to distinguish, we

merge these two categories into one. For example, the paper of (Wand & Weber, 1993), where the

Bunge ontology was first introduced to give a foundation for conceptual modeling, would be
classified in the category Exaptation & Invention. Thus, a paper can be classified as an

improvement, as exaptation & invention or as routine design.

Design Science Evaluation Method

The utility, quality, and efficacy of a design artifact are demonstrated through well-executed

evaluation methods. As mentioned above, a design artifact is constructed to address a specific


problem, and therefore the artifact can only be considered complete and effective when it satisfies

the requirements and constraints of the problem it was meant to solve. The development and
evaluation of designed artifacts are often performed through the use of different kinds of

methodologies. For classifying these various evaluations methods, we adopt the design evaluation

methods classification of (Hevner et al., 2004). They distinguish five different methods for

evaluating a design artifact: observational, analytical, experimental, testing and descriptive. The

differences between these methods are explained below:

• Observational: Can be in the form of a case study or field study. A case study explores the

artifact in depth in its environment while a field study monitors the use of the artifact in multiple
projects.

• Analytical: Consists of an analysis of the artifact in, for example, an IS architecture or in the form

of a study of the structure and qualities of the artifact. Hevner et al. (2004) distinguish further

between static, architecture and dynamic analyses. In order not to fragment our classification
scheme, we generalized the different analyses described above into just one category.

• Experimental: This can be a controlled experiment where the artifact is studied in a controlled

environment for qualities (e.g. usability), or a simulation, where the artifact is executed with

artificial data.

• Testing: We distinguish between functional (black box) testing and structural (white box)

testing. The former executes the artifact interfaces to discover failures and identify defects while
the latter performs coverage testing of some metric in the artifact implementation.

• Descriptive: This can be done through scenarios, where the utility of the artifact is demonstrated

through detailed scenarios, or through informed arguments, where information is used from the

knowledge base (e.g. relevant research) to build a convincing argument for the artifact’s utility.

Conceptual modeling purpose

As mentioned above, one of the main features of a model is its purpose or objective (Stachowiak,

1973). We will therefore investigate the different purposes of the models in the field of ODCM.
Wand and Weber (2002) classify conceptual models into four generic purposes. We have adopted

this classification because of our diverse set of papers, which require a more general classification
scheme. The four purposes are: (1) supporting communication between developers and users; (2)

helping analysts understand a domain; (3) providing input for the design process and (4)

documenting the original requirements for future reference. As an example, if the purpose of a

conceptual model was to maximize expressivity, clarity and truthfulness of the concepts, we

categorized this as a communication purpose. If, however, the purpose of the model was
described as modeling requirements engineering for software configuration, we would classify this

model as providing input for the design process. If a paper described the purpose of the model
as a means for problem-solving analysis, we interpret this as an implicit purpose of

understandability. Finally, if a paper gives a rather vague description such as ‘conceptual models

are often used as a basis for the construction and integration of information systems or to gain
process knowledge’ or mentions multiple purposes such as ‘models represent the application domains

and are created for the purpose of analyzing, understanding, and communicating about the

application domain and are input to the requirements specification phase for IS development’, we
leave this category blank. This means that this paper does not fall into one of the specified categories.

Type of ontology

This classification distinguishes the kind of ontology that has been applied in ODCM. To

categorize different types of ontologies, we adopted the classification of (Guarino, 1998), where

ontologies are distinguished based upon their level of dependence on a particular task or point
of view:

• Top-level ontologies describe very general concepts like space, time, matter, object, event,

action, etc., which are independent of a particular problem or domain;

• Domain ontologies and task ontologies describe, respectively, the vocabulary related to a generic

domain (like medicine or automobiles) or a generic task or activity (like diagnosing or selling),
by specializing the concepts introduced in a top-level ontology;

• Application ontologies describe concepts that depend both on a particular domain and task, and

often combine specializations of both the corresponding domain and task ontologies. These

concepts often correspond to roles played by domain entities while performing a certain task,

like replaceable unit or spare component.

Purpose of the ontology

Since this literature research is conducted to investigate the performed research in ODCM, it

would be interesting to discover in which ways ontologies have been applied in this context. For
classifying the purpose of applying an ontology, we adopted the classification of (Uschold & Jasper,

1999). They classified ontologies more specifically according to the purpose they fulfill: to assist in

communication between human agents, to achieve interoperability, or to improve the process and/or
quality of engineering software systems.

To explain these purposes more specifically in the context of ODCM, we will classify an

ontology in the category of communication if the purpose is described as: ‘providing real-world
semantics for language constructs’; ‘clarifying the structure of knowledge’ or ‘ontological theory is

well-suited for benchmarking the adequacy and sufficiency of modeling constructs for representing
concrete problem domains’. Thus, if the purpose of an ontology is described rather generically (i.e. for

the purpose of a clear representation of a domain), then we classify the ontology in the category of

communication. However, when the purpose of the ontology is mentioned to improve or support
system development, the ontology is classified under system engineering benefits. Examples are: ‘for

the purpose of constructing a flexible and configurable software environment, ontology is built to
organize and utilize resources dynamically’ or ‘ontology is applied to support the retrieval and the

re-use of project information’. Finally, an ontology is classified in the category of interoperability

if the purpose is described as: ‘ontologies facilitate the establishment of a common understanding
of the semantics of context elements and their associated metadata and therefore boost

interoperability’ or ‘a task ontology was developed to be used for addressing semantic

interoperability problems’.
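
To summarize the complete classification scheme in a compact form, the sketch below restates each facet, the mapping question it serves and its admissible categories as a Python data structure (an illustrative summary only; the validation helper is a hypothetical addition and not part of the study's tooling):

# The facet names and categories are taken from the descriptions above.
CLASSIFICATION_SCHEME = {
    "design artifact": ("MQ1", ["construct", "model", "method", "instantiation"]),
    "research contribution": ("MQ1", ["improvement", "exaptation & invention", "routine design"]),
    "design evaluation method": ("MQ1", ["observational", "analytical", "experimental", "testing", "descriptive"]),
    "purpose of the conceptual model": ("MQ2", ["communication", "understanding", "input for the design process", "documentation"]),
    "type of ontology": ("MQ2", ["top-level", "domain", "task", "application"]),
    "purpose of the ontology": ("MQ2", ["communication", "interoperability", "system engineering benefits"]),
}

def is_valid_tag(facet, category):
    """Check whether a tag names a known facet and an admissible category for it."""
    return facet in CLASSIFICATION_SCHEME and category in CLASSIFICATION_SCHEME[facet][1]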

2.3.4. Data Extraction

Figure 2 displays the number of included articles after each phase of the article selection process.

We conducted our search of articles at the end of January 2015. In total, we identified 749 articles,
which we reduced to 707 after removing duplicates and papers that were not written

in English. Thereafter, articles were selected based on the inclusion and exclusion criteria, leaving
180 articles for review. The reason for the high number of excluded papers is that many articles

focused only on either conceptual modeling or ontologies.

Figure 2: Mapping study selection procedure

After carrying out the inclusion and exclusion of all the papers, we applied our classification
scheme. Nvivo allows the classification of data by means of nodes. A node can be described

as a collection of references dealing with a specific topic and therefore is particularly useful to group

articles and papers according to this topic. Our node structure was as follows: we created six parent
nodes, each bearing the overall classification facets: design artifact, research contribution, design

evaluation method, purpose of the conceptual model, type of ontology and purpose of the
ontology. Next, we created for every main aspect of the classification scheme different sub-

categories, each also represented by a node. Thus, in Nvivo, we can assign certain fragments of an

article to a specific node. As a consequence, this reference is then classified according to


the classification facet that this node represents. For example, in the parent node type of ontology, there

are four child nodes: top-level ontology, domain ontology, task ontology and application ontology.
When a certain paper for example dealt with a foundational ontology, we could select and highlight

the text in the paper that refers to this foundational ontology and assign it to the child node ‘top-

level ontology’. This piece of text could then be found in the contents of the specific child node.

Also, when a child node was selected, the parent node was also automatically assigned to that

specific paper. After having classified all papers according to the classification scheme, we exported

all the obtained data from Nvivo to the quantitative analysis tool SPSS for a more thorough analysis
of the data. The results of this analysis can be found in the section below.
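
To illustrate the kind of analysis that follows this export (a minimal Python sketch with hypothetical records, not the actual data or tooling of the study), a cross-tabulation of two classification facets, such as design artifact against design evaluation method (cf. Figure 4), can be computed as follows:

import pandas as pd

# Hypothetical placeholder records; each row stands for one classified paper.
papers = pd.DataFrame([
    {"artifact": "construct", "evaluation": "analytical"},
    {"artifact": "construct", "evaluation": "experimental"},
    {"artifact": "method", "evaluation": "descriptive"},
    {"artifact": "method", "evaluation": "testing"},
    {"artifact": "model", "evaluation": "descriptive"},
])

# Count how many papers fall into each artifact / evaluation-method combination.
print(pd.crosstab(papers["artifact"], papers["evaluation"]))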

2.3.5. Mapping results

MQ1: How is design science research performed in ontology-driven conceptual modeling?

In order to answer our first mapping question, we take a look at the developed design science

artifacts over time in the upper panel of Figure 3. We can clearly see that constructs and methods
have a decisive share in the kind of artifacts that have been developed. Especially in the 1990s, most

of the papers dealt with only constructs while the development of methods started several years

later, around the early 2000s. The construction of models and instantiations only really started
around the period 2005-2009. This evolution seems logical, the focus in the 1990s and beginning of

2000s being on developing theoretical bases and foundations while over time artifacts such as
models and instantiations were derived from these theories and foundations. However, the share of

more applied artifacts such as models and instantiations remains much lower compared to the share

of the more theoretical developments such as constructs and methods. Of all design artifacts, 43.3%
were constructs, 37.6% methods, 14.6% models and 4.5% were instantiations. We thus identify a

gap in the research of ODCM, where theoretical developments take a much larger share compared
to empirical developments. The figure also clearly displays the growth of the number of papers

dealing with ODCM. This observed increase in the number of papers indicates that ODCM is a fast-

growing research domain.

When we analyzed the research contribution of our papers (see middle panel figure 3), we noticed

that contributions in the early years of ODCM were mainly improvements, with a minor share of

exaptations and inventions. After a couple of years, we noticed a rise of more routine design in the
domain of ODCM. This trend makes sense: after certain theoretical foundations have been established and improvements have been made, more routine design builds upon the already existing research. In total, it is however clear that the research field of ODCM consists mostly of improvements (62.1%), followed by routine design (34.5%) and exaptations and inventions (3.4%).

There are multiple reasons why there are fewer inventions and exaptations. First, inventions and
exaptations are generally rarer than improvements and routine design. Second, many inventions or

exaptations on which much research in ODCM is founded, were written in papers that did not belong
to the topic of this literature study. With this we mean that, for example ontologies (e.g. Chisholm’s

ontology (Chisholm, 1989)) or conceptual modeling languages (UML, BPMN etc.), were introduced

in papers that focused only on either ontology or conceptual modeling, therefore not meeting the
inclusion criteria of this literature study. Finally, certain inventions or exaptations that did belong to

the field of ontology-driven conceptual modeling could not be included in this study since they

were published in a type of literature that did not meet our inclusion criteria, such as a PhD thesis.
For example, the thesis of Guizzardi (2005) introduced both the ontology UFO and the

extension of UFO, OntoUML, which can definitely be classified in the category exaptation &
invention. To gain more insight into the evaluation methods that have been applied to the design

artifacts, we constructed another line graph in the lower panel of Figure 3 that displays the applied

method of every article over the years. As the graph demonstrates, there was more analytical
research performed in the 90s compared to any other design science evaluation method. However,

starting from 2000, more descriptive research was being applied. After the year 2005, descriptive

research methods were the most applied in ODCM. Analytical research however did increase in a

rather linear trend. As for experimental, observational and testing methods, they started to be applied

around the years 2000–2005. However, as we can see from the graph, the number of experimental,
observational and testing methods is considerably lower than the number of analytical and descriptive

research methods. Therefore, we consider this lack as our second research gap of ODCM. These

observed trends in Figure 3 align with our design science artifact and design science contribution
observations above, indicating that theoretical and foundational work was performed first,

applying mostly analytical and descriptive design science methods.

Figure 3: Design science artifact, contribution and evaluation method over time

After several years, more empirical research was performed, applying these theoretical and

foundational theories and evaluating them with experimental, observational and testing design

science methods. It would be interesting to compare these results to the publication of studies on
research methodologies in information systems or papers that for example introduce a new

paradigm. Determining the influence of such studies and papers on the overall research trends in the
domain could be an interesting area for further research.

In order to gain more insight into the design science evaluation methods that were applied

according to the design science artifact, we composed this relation in Figure 4. As we can see from
the size of the circles, most analytical research was performed for constructs while most descriptive

research was performed for methods. Another interesting observation is that most experiments can

be associated with constructs while most testing is applied to methods.

Figure 4: Design science artifact and evaluation method

MQ2: How is research in ontology-driven conceptual modeling applied?

To determine how research is applied in ODCM, we first explore why ontology-driven conceptual models are being developed and for which purposes they are applied. In total, 43% of the purposes were classified as understanding, followed

by 35% as input for the design process, 17% as communication and finally 5% as documentation. If we refer again to the definition of conceptual modeling, as given by

(Mylopoulos, 1992): ‘Conceptual modeling is defined as the activity of representing aspects of the
physical and social world for the purpose of communication, learning and problem solving among

human users’. Our results are clearly in line with this definition, since 60% of our articles mentioned
either the purpose of communication or the purpose of understanding. An interesting observation is

that only 145 articles indicated a specific purpose for the conceptual model. This mapping dimension

is therefore the dimension with the lowest completeness in terms of assigned articles. We also have
to mention that during the classification, we were rather flexible and tolerant when assigning a

purpose for a conceptual model. With this we mean that if a paper implicitly mentioned the purpose

of the conceptual model, we also categorized this paper. For example, if a paper had constructed a
model and tried to assess this model in terms of its problem-solving capabilities, we assigned a

purpose of understanding to this article, even though the paper did not explicitly mention this
purpose. If we had only assigned purposes to papers that explicitly mentioned a purpose, the total

amount of articles for this mapping dimension would have been significantly lower. Thus, we can

observe that many articles and papers either do not (explicitly) give a specific purpose for their
intended conceptual model or just the opposite, that they assign practically every purpose of a

conceptual model to their intended model. Hence, we identify this as our third gap in the domain of

ODCM. Clearly mentioning the specific purpose of the intended model or performed research for a

specific conceptual model is essential for any further steps in the design science cycle of that

particular model or research. If no clear and specific purpose is mentioned for a model, method or
any artifact for that matter, how can one evaluate and test this model? Therefore, we believe that the

influence of the role and the purpose of a conceptual model is a future research possibility.

To gain more insight into our mapping question, we look at which type of ontology is used in the
research on ODCM. We created a line graph that displays the adopted ontology over the years,

which can be found in Figure 5. We can see that overall, top-level ontologies have been used most

in the field of ODCM. Already in the 1990s, top-level ontologies dominated the research field and

kept on rising over the years. We can however see that the rate of increase between the years 2005 and

2014 has somewhat declined compared to previous years. Another interesting observation is that
starting from the years 2000-2004, domain ontologies have been applied increasingly more in

research on ODCM, experiencing a relatively stronger increase over the last years compared to top-
level ontologies. Overall, a total of 90 articles applied top-level ontologies, 57 used domain

ontologies, 16 articles applied task ontologies and finally 8 articles adopted application ontologies.

In order to assess how ontologies were applied in combination with conceptual models, we take a
closer look at the given purpose for adopting an ontology. In total, we classified 98 articles in the

category of System Engineering Benefits, 51 in Communication and 29 in Interoperability. These

numbers indicate that most of the ontologies were applied for supporting systems in their
development and implementation. In Figure 6, we related the purpose of the ontology with the type

of ontology that was being applied in the article. We only selected top-level ontologies and domain
ontologies since they were the most represented in our articles. The differences in the sizes of the

circles clearly show that both domain and top-level ontologies are being applied mostly for system

engineering purposes. However, while many top-level ontologies can be found in the category of
communication, relatively more domain ontologies are being found for the purpose of

interoperability.

These results are rather logical, since top-level ontologies are more general descriptions,

independent of a particular problem or domain and are thus more convenient for the purpose of

communication than a domain ontology. Another interesting observation we could deduce from our
data was that top-level ontologies were adopted more often in the early years of ODCM, whereas

the adoption of domain ontologies started to increase around the years 2005-2006. When we

reconsider the results above concerning the purpose of a conceptual model, most of the articles
indicated that the purpose was mainly for either understanding or input for the design process or

communication. Therefore, we can conclude that the purposes of the model and the purpose of the

ontology are aligned. However, we again noticed a similar issue as with the purpose of a

conceptual model, i.e. that many researchers are rather vague in defining the specific application

of the ontology and in motivating their choice of ontological theories for the intended purpose. We
consider this observation as our fourth research gap. This makes us wonder that if one does not

clearly define the intended application of an ontology, how can one then clearly define the meaning
of constructs and statements of a representation that must represent the phenomena of the application

domain they are intended to describe? Therefore, we believe that the role and purpose of an ontology

is an interesting area for future research in the domain of ODCM. Relating the role and purpose
of an ontology with the role and purpose of a conceptual model is also a future research
possibility. For example, research could focus on where the limits of ontology-based theories lie with respect
to the field of conceptual modeling (Recker & Niehaves, 2008).

Figure 5: Type of Ontology over time

2.4. Review Study

2.4.1. Review Questions

When referring again to the definition of ODCM by (Guizzardi, 2012), ODCM uses ontological

theories to develop artifacts in order to improve the theory and practice of conceptual modeling. Our

mapping study clearly shed light on the kind of artifacts that were developed and how they were

developed for each contribution. However, we still have questions concerning the application of
ontological theories and models and concerning the kind of improvements they are intended for. As

our second mapping question demonstrated, the intended purpose of both the model and ontology
was not always clearly identified. Therefore, we formulate two new review questions to assess: (1)

what the research in ODCM intends to improve with the applied ontological theories and models

and (2) how ODCM intends to improve the conceptual modeling process and model. The
formulation of our review questions can be found below. Thus, while the mapping study assessed

the kind of research that has been conducted in papers and aimed to give a comprehensive structure

of the domain, the review study aims to provide a more thorough assessment of the papers that
significantly improved the research field, created new insights or had a convincing impact on the

further directions of ODCM research. The review study aims to analyze the quality of a paper,
evaluate its contents and link the research with other research performed in ODCM.

• RQ1: What does the research in ODCM intend to improve?

• RQ2: How does ODCM improve the conceptual modeling process and model?

2.4.2. Search strategy and paper selection criteria

We again apply the same phases to define a comprehensive search strategy as introduced in

section 2. However, since we will derive our papers from the papers selected in the mapping study,
it is unnecessary to reproduce every phase identically as in the mapping study. Instead, we selected

papers based upon their classification in the mapping study to arrive at a set of papers that made a

significant contribution to the field of ODCM. In order to identify the papers that have tackled

generic problems and made substantial improvements in the domain of ODCM, we selected a subset
of papers from the mapping study based upon the following classifications: (1) a paper had to be
either classified as ‘improvement’ or ‘exaptation & invention’ concerning the design science

contribution and (2) the design science method had to be ‘analytical’, ‘experimental’,
‘observational’ or ‘testing’. The selection of these categories is derived from our search for

contributions that make a new and/or significant addition to the field of ODCM. As mentioned in

(Hevner et al., 2004), routine design and descriptive evaluation methods apply existing knowledge
to organizational problems or to build a convincing argument for the artifact. However, they do not

address unsolved problems or create new knowledge. Hence, descriptive evaluation methods and

routine design were excluded from the review study. Next, as per our second and third phases, we
defined our inclusion and exclusion criteria to apply to our selection of papers from the mapping
study. Again, these criteria differ from the inclusion and exclusion criteria as defined in the SMR,
since all papers already belong to the domain of ODCM and are of the correct type of literature. We
evaluated the articles based on their entire content, and in order to be included in the review study, they

had to fulfill the following criteria:

• A generic interest of improvement was addressed: Since it is our goal to identify overall

improvements that research has focused upon over time, we will select only papers that deal

with general and overall issues of the ODCM domain.

• Focus on quality improvement: As we mentioned in our introduction, ODCM approaches either

aim to improve the conceptual modeling process or the output of this process, the conceptual
model. We thus include papers that explicitly attempt to improve the quality of conceptual

models or the quality of the conceptual modeling process.

Since we have already excluded all the papers that do not belong to the topic of this literature
review in the SMR, we have formulated no additional exclusion criteria in this section.
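The first, classification-based step of this selection (restricting the design science contribution and method categories) can be expressed as a simple filter over the mapping-study data. The sketch below is illustrative only, assuming each paper carries a `ds_contribution` and a `ds_method` label from the mapping-study classification; the field names and sample records are placeholders.

```python
# Selection rules derived from the mapping-study classification (illustrative field names).
ACCEPTED_CONTRIBUTIONS = {"improvement", "exaptation & invention"}
ACCEPTED_METHODS = {"analytical", "experimental", "observational", "testing"}

def eligible_for_review(paper: dict) -> bool:
    """A mapping-study paper qualifies when both classification rules are satisfied."""
    return (paper["ds_contribution"] in ACCEPTED_CONTRIBUTIONS
            and paper["ds_method"] in ACCEPTED_METHODS)

# Placeholder records, not the actual mapping-study papers.
mapping_study_papers = [
    {"id": "P01", "ds_contribution": "improvement", "ds_method": "experimental"},
    {"id": "P02", "ds_contribution": "routine design", "ds_method": "descriptive"},
]

review_candidates = [p for p in mapping_study_papers if eligible_for_review(p)]
print([p["id"] for p in review_candidates])  # -> ['P01']
```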

2.4.3. Classification scheme

In order to develop our classification scheme, we derive our classifications from the review

questions defined above. To classify the papers, we have adopted the Conceptual Modeling Quality
Framework (CMQF) of (Nelson, Poels, Genero, & Piattini, 2012). Their comprehensive quality

framework is the synthesis of two other well-known quality frameworks: the framework of
Lindland, Sindre, & Solvberg (LSS, 1994) and that of Wand & Weber (BWW, 1990) based on

Bunge’s ontology. By unifying both frameworks, the CMQF is useful for evaluating not only the

end result of the conceptual modeling process, i.e. the conceptual representation, but also the quality
of the modeling process itself. The advantage of adopting this framework as our classification

scheme is that it can be used to address both our review questions at once. The framework identifies

both the generic issue that is being addressed and the type of quality measure that is being applied
to solve this issue. We have briefly explained the CMQF framework in Appendix A and have added

two tables, one that describes all the quality types as they occur in the CMQF, and another that
describes and defines all the quality types that have been identified in this review study. For a

complete explanation of the framework, see (Nelson et al., 2012). To give an example, the physical

layer has seven Quality Types, of which the second type (P2) represents the Ontological Quality –
see Appendix A. So, if a paper analyzed certain foundational constructs of an ontology in order to
achieve a better ontological representation of certain phenomena, this paper would be categorized
under P2 Ontological Quality. In other words, the problem that is being investigated

(the Object of Interest) can be situated in the physical language, i.e. the constructs of a conceptual

modeling grammar. The improvement of quality can be situated in the relationship between the
Object of Interest (i.e. the physical language) and the Quality Reference (i.e. the physical model),

in this example achieving a better ontological foundation of grammar constructs. The framework

thus clearly relates the interest of improvement that is being investigated by a paper (i.e. Object of
Interest) with the kind of improvement that is generated (i.e. the Quality Type).
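The way a classification entry combines an Object of Interest with a Quality Reference can be captured in a small data structure. The sketch below is illustrative only: it hard-codes a partial excerpt of the CMQF quality types mentioned in this chapter, not the full framework of (Nelson et al., 2012).

```python
from dataclasses import dataclass

# Partial, illustrative excerpt of CMQF quality types referred to in this chapter.
QUALITY_TYPES = {
    ("physical language", "physical model"): "P2 Ontological Quality",
    ("physical representation", "model knowledge"): "D5 Applied Model Knowledge Quality",
    ("representation knowledge", "physical representation"): "L4 Pragmatic Quality",
}

@dataclass
class Classification:
    paper: str
    object_of_interest: str
    quality_reference: str

    @property
    def quality_type(self) -> str:
        return QUALITY_TYPES[(self.object_of_interest, self.quality_reference)]

# The P2 example discussed above: a paper analyzing foundational constructs of an ontology.
example = Classification("example paper", "physical language", "physical model")
print(example.quality_type)  # -> 'P2 Ontological Quality'
```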

2.4.4. Data Extraction

Figure 6 displays the number of included articles after each phase of the review selection process.

We conducted our search of articles with the set of articles that we gathered in the mapping study.
In total, we identified 72 articles, based upon their classification in the mapping study. Next, articles

were assessed based on our inclusion and exclusion criteria, leaving 38 articles in total for review.
All 38 articles can be found in the bibliography of this paper and in Appendix B of this dissertation.

Figure 6: Review study selection procedure

The further development of the data extraction is similar to that of the mapping study as we again

use nodes in order to group our papers according to the classification. To give an example of how

the classification was performed, we discuss the classification of the paper of (Bera, 2012). First,
the paper identifies and analyzes the way two different groups of modelers develop a conceptual

model with and without the help of ontological rules. Since the paper assesses the development of
a conceptual model, the Object of Interest is the physical representation. The ontological rules aim

to improve the knowledge of the model that underlies the language and the domain for ultimately

arriving at a better final representation. Therefore, the quality reference is model knowledge. Thus,
we can recognize a first Quality Type: Applied Model Knowledge Quality (D5). Next, the paper

identifies and analyzes the cognitive difficulties that two different groups of modelers had, when

using a conceptual model that was either developed with or without the help of ontological rules, by
letting them answer a domain understanding task. Since the paper aims to assess the perception and

comprehension of the modelers who used the final representation, our Object of Interest is the
cornerstone representation knowledge. Our quality reference is the physical representation since two

kinds of models are compared, those developed with ontological rules and those without. Thus, we

can categorize the paper under a second Quality Type, i.e. Pragmatic Quality (L4). Figure 7 graphically displays
the classification of the paper, with the Quality Types shown as arrows between the
Objects of Interest.

Figure 7: Example of data extraction in the Review Study

2.4.5. Review Results

RQ1: Which areas of interest does the research in ODCM improve?

To identify the areas of interest in ODCM that are being improved, we look at the Object of

Interest from the CMQF. In Figure 8, we have grouped the research according to their referred
Objects of Interest. From the figure, we see that the physical language has been the most
investigated Object of Interest. The physical language consists of the grammar and the vocabulary

that are used to construct a conceptual representation. Most of the articles that performed research
in this category employed an ontological analysis to examine the semantics or ontological

deficiencies of modeling constructs and grammars (e.g. Evermann, 2005). Another stream of

research (e.g. Milton et al., 2001) focused on how conceptual modeling languages were used in a
certain context and applied ontologies to investigate and compare different modeling languages.

The second and third most cited Objects of Interest are the physical representation and the

representational knowledge. The physical representation is the users’ description rendered into a

formalized (ER diagram, UML diagram etc.) model-based representation. The representation

knowledge can be described as the users’ cognitive interpretation of the physical representation.

Research related to the physical representation often addresses the lack of theoretical foundations
of modeling constructs, or the failure of a conceptual schema to express the intended meaning and

semantics (e.g. Parsons, 2011). Research related to representational knowledge compares different
versions of a certain conceptual model based on the use of certain ontological constructs or modeling

guidelines. Users’ cognitive interpretations are then examined by measuring the impact of these

differences on for example, the level of understanding obtained by model viewers (e.g. Gemino &
Wand, 2005).

Figure 8: Number of references to Object of Interest

Finally, Language knowledge can be described as the language as understood by those modelers
who are actively involved in the modeling process. Research in this category often addresses the

issue of ontological deficiencies in conceptual modeling grammars (e.g. Recker et al., 2011), by

assessing how some properties of these grammars inform usage beliefs, such as usefulness and ease

of use. We would like to emphasize that due to the selection criteria of this literature study, some

Objects of Interest are more represented than others. For instance, since this literature study does
not focus on papers addressing only ontologies, the physical model and its cognitive counterpart the

model knowledge are therefore less addressed as Objects of Interest.

RQ2: What kinds of improvements have been made in ODCM?

To give an overview of the classification of the papers in the review study, we summarized the

different Quality Types from every paper in Figure 9. Each of these quality types is defined
in Appendix A. We discuss the results according to the different layers an article belongs to in order
to assess the kind of Quality Types that have been investigated. In total, 31 Quality Types were
related to the physical layer, 18 to the knowledge layer, 9 to the development layer and 4 to the
learning layer. It is clear that the majority of the research focused on improvements situated in the

physical layer, i.e. the physical, observable elements that are learned, analyzed and manipulated by
the modelers as they try to understand the domain. When reconsidering the results of the review

question above, the Object of Interest with the highest number of references was the physical

language. As we can see from Figure 9, the quality of the physical language has been investigated
through several different Quality Types. Most of the articles from our literature review aimed at

improving the Ontological Quality (P2). Much research (e.g. Opdahl & Henderson-Sellers, 2002)
in this category suggested ontological improvements of modeling constructs (e.g. UML) based upon

an analysis with an ontological model (e.g. BWW) and a mapping of the phenomena the constructs
represent onto the phenomena of the problem domain. In other words, many
articles aimed to increase the quality (the completeness and validness) between the physical

language and the physical model. The second largest layer, the knowledge layer, is the cognitive

counterpart of the physical layer. The knowledge layer is composed of “the more tacit and individual
elements of the quality framework, which exist only in the minds of the stakeholders involved in

the conceptual modeling process and in the process of using the final representation”. Most of the

research in this layer (e.g. Bera & Evermann, 2012), investigates how the users of a model perceive

the usefulness of ontologically founded conceptual representations. Next, articles belonging to the

development layer, examine how well this knowledge was used to create the physical elements.
Most of the research in the development layer involves designing and testing ontological rules for

assisting information system designers to use them in their conceptual modeling activities. These
ontological rules help a modeler or designer to construct a physical representation based upon the

knowledge of the model that underlies the language and the domain. Finally, the learning layer,

which received the least amount of attention in the articles, measures how well learning,
interpretation and understanding take place. Articles in the learning layer address the

comprehension and understanding of the final physical representation by the stakeholders who use

the model. We consider this lack of research performed in the development layer and learning layer
as the fifth research gap in ODCM.

Figure 9: Comparison of the Quality Types per Object of Interest and Quality Reference

Additionally, to gain more insight into the evolution of the kind of improvements that have been

made in the field of ontology-driven conceptual modeling, we composed a graph that displays the

number of references according to the type of layer the Quality Type belongs to. This evolution is

presented in Figure 10. The graph clearly shows that from the years ’93-’04, most of the articles

tended to discuss elements of the conceptual model that belonged to the physical layer.

In the early years (’93-’00) of ODCM, the emphasis of most papers was on improving the

ontological expressiveness of grammars for describing real-world phenomena completely and


clearly. Different approaches were followed however. Several researchers aimed at improving

ontological clarity by adapting and extending Bunge's ontology to provide theoretical guidelines in

order to capture the relevant knowledge about a domain and facilitate the mapping from the
conceptual model to the design model of a system (Wand, 1996; Y Wand, Monarchi, Parsons, &

Woo, 1995; Wand & Weber, 1993). Guarino on the other hand systematically introduced formal

ontological principles for the practice of knowledge engineering and explored the various
relationships between ontology and knowledge representation (Guarino, 1995). Additionally, the

branch of ontology still had to find its place in conceptual modeling. In order to prove and position
the potential of ontologies, its value as foundation for conceptual modeling was demonstrated and

discussed next to other foundations such as concept theory, speech act theory and epistemology

(Guarino, 1995; Y Wand et al., 1995). Around the years ’00, ontologies were more and more used
to perform ontological analyses and evaluations of conceptual modeling languages to (1) define the

semantics of modeling constructs in terms of the kind of real-world phenomena they are intended
to represent, (2) identify improvements of conceptual modeling languages by identifying ontological

deficiencies in modeling constructs and grammars such as ontological overload or redundancy and

(3) investigate the ontological assumptions underlying conceptual modeling languages. Most of
these ontological analyses were performed with the BWW ontology as proposed by Wand and

Weber (Milton et al., 2001; Opdahl & Henderson-Sellers, 2001, 2002). Accordingly, articles to
support and improve ontological analysis were also introduced. The paper of (Rosemann & Green, 2002),
for example, tackles two issues concerning the use of the BWW model, i.e. the understandability of the
constructs in the model and the difficulty in applying the model to a modeling technique. Welty
& Guarino (2001) also introduced a methodology for ontological analysis based upon several notions of
formal ontology.

Starting from ’04, we see a clear upward trend of research performed in the knowledge layer. It
seems that after years of research focusing on the physical elements of the conceptual model, the

focal point shifted to how these physical elements of the conceptual model were perceived by
individual users and modelers. Parallel to research of the physical layer that focused on the

semantics and ontological deficiencies of modeling constructs and grammars, there was a similar

stream of research in the knowledge layer that aimed at assessing the perceptions of users and
modelers of these modeling constructs and grammars. To assess these perceptions, experimental

studies were performed to observe “how users and modelers perceive ontological constructs and

more specifically to determine their perception of the clarity and comprehensibility of these
grammars and constructs” (Evermann & Halimi, 2008; Gemino & Wand, 2005; Parsons, 2011; G.

Shanks, Tansley, Nuredini, & Tobin, 2008). Another line of research in the knowledge layer focused
more on the shortcomings of conceptual modeling languages for representing certain domains.

While similar research in the physical layer conducted only ontological and hence more theoretical

analyses, research in the knowledge layer validated their theoretical contribution with additional
empirical evidence (Recker, Indulska, Rosemann, & Green, 2005, 2006; Recker, Rosemann,

Boland, Limayem, & Pentland, 2008a). As a consequence of conducting more empirical studies
concerning the perceptions of users, more attention was also given to modelers and designers and
how they perceived the conceptual modeling process and the overall construction of a conceptual

model. Around the year ’05, we can see a new evolution in the research of ODCM, where more
emphasis is put on the developmental aspect of conceptual modeling. Especially the creation and

adoption of ontological rules and guidelines in the conceptual modeling process has

received much attention (Bera, 2012; Bera, Burton-Jones, & Wand, 2009; Bera & Evermann, 2012;
Evermann & Wand, 2011; Recker, Indulska, Rosemann, & Green, 2010). The purposes of these

rules and guidelines are (1) to help analysts create conceptual models that convey semantics more

accurately and more clearly and/or (2) to improve the effectiveness of the created models as ways

to communicate and reason about the domain. Some empirical evidence (Bera, 2012) has already

confirmed that ontological rules can alleviate cognitive difficulties when developing conceptual
models and that modelers commit fewer modeling errors when applying these ontological rules.

Figure 10: Number of references according to layer per year

However, (Hadar & Soffer, 2006) obtained less promising results. Their results agreed with those

of (Bera, 2012) that the use of ontology-based modeling rules can indeed provide guidance in
developing a conceptual model and can reduce modeling variations, although the overall effect of

these rules was not convincingly significant and did not always seem sufficient. Similarly,

(Guizzardi, Das Graças, & Guizzardi, 2011) noticed that complexity posed a significant issue

for novice modelers who were using the ontologically founded conceptual modeling language

OntoUML. However, as noted by (Gemino & Wand, 2005), we cannot solely focus on the
complexity and comprehension of models without considering the domain understanding obtained

through these models. For example, their study indicated that the use of mandatory properties with
subtypes added to the overall complexity of the model but did provide a better understanding and

comprehension of semantics of the model.

2.5. Discussion

In order to improve and contribute to the field of ODCM, we discuss certain shortcomings and

possible research opportunities that have been identified within this literature study.

Research opportunity 1: As mentioned before in this paper, we considered ODCM as design


science research. Evaluation is a “central and essential activity in conducting rigorous design science

research” (Venable, Pries-heje, & Baskerville, 2012). The validity of any resulting artifacts must be

justified, which is often performed through empirical methods (Baskerville, Kaul, & Storey, 2015).
Although we can deduce an increase of empirical research over the last couple of years in articles

belonging to the knowledge layer and development layer, we still agree with (Moody, 2005) that
there is an overall lack of empirical research in the field of ODCM. In MQ1, more specifically in

the upper and lower panel of Figure 3, we encounter a much larger number of theoretical

contributions compared to the number of empirical research studies that are being performed.
Empirical studies, however, are essential to perform design science research, since they allow the

validation of research ideas, testing of theoretical arguments and theories and the evaluation of the
efficacy of new practices.

Research opportunity 2: We have noticed in the articles of this literature study, especially those

papers situated in the knowledge layer and learning layer of the SLR, that many of these empirical
results often encounter the issue of complexity in the process of ontology-driven conceptual

modeling (Gemino & Wand, 2005; Guizzardi et al., 2011). In order to tackle this ill-favored effect

of complexity, we agree with (Guizzardi & Halpin, 2008) that research in ontology-driven

conceptual modeling on the one hand needs to provide theoretically sound conceptual tools with
precisely defined semantics but on the other hand must hide as much as possible the complexity that

arises from these ontological theories. It is on this aspect that ontological rules or modeling guidelines
seem promising, since it is their aim to support the conceptual modeling process to arrive at clearer,

more effective and more understandable models.

Research opportunity 3: Perhaps the cause for this perceived complexity in ODCM could be
traced to our findings on the scarcity of research concerning the pragmatic quality of conceptual

models. As the graph in Figure 10 demonstrates, much research has been performed in the physical

and knowledge layer, however, we notice an overall shortage of research performed in the
development layer and especially the learning layer of conceptual modeling. Articles in this last

layer measure how learning, interpretation and/or understanding takes place. It is rather odd that one
of the most frequently given definitions of conceptual modeling (Mylopoulos, 1992) states that the

purposes of conceptual models are communication, learning and problem solving, but that there is

relatively little research conducted into how the learning, interpretation and understanding in
conceptual modeling take place. As our mapping results also confirmed, 60.5% of our articles

mentioned the purpose of conceptual modeling as either communication or understanding.


Therefore, more research in the learning aspect of conceptual modeling would be beneficial for the

field of ODCM, since the principal purpose of a conceptual model is to be understood and

comprehended by anyone who uses it. Additionally, the process of learning, interpreting and
understanding a conceptual representation is a complicated matter and much influenced by

individual and contextual factors. Therefore, capturing how and to what extent the stakeholder
completely and accurately understands the conceptual model, and identifying which contextual and
individual factors encourage or discourage this comprehension, is a research opportunity in the field

of ODCM that still needs further investigation.

Research opportunity 4: A particularly interesting observation was made by the research of

(Hadar & Soffer, 2006), where they analyzed two ontology-based modeling frameworks in order to
evaluate their potential contribution to a reduction in variations and thus facilitate model

understanding. Their findings highlight contradictions in the guidance provided by the different
frameworks, where differences in the underlying ontology exist. These results indicate that the

choice of an ontology may affect the resulting model and that not all ontologies are equivalent in

terms of modeling guidance. We believe that careful consideration of the choice of ontology applies even
more for foundational ontologies than for domain ontologies, since foundational ontologies are often

used to provide guidance in the conceptual modeling process. This observation is equivalent to

Quality Types such as Applied Domain-Model Appropriateness (D1), Pedagogical Quality (L2) and
the (perceived) Model-Domain Appropriateness (P1 and K1), which address the appropriateness of

an ontology to the understanding and mindset of a certain modeler. In our review study however,
we did not identify any articles performing research into these aspects of ODCM. Similarly, in figure

6 of our second mapping question, we noticed that many researchers are also vague in defining the

specific application of the ontology and in motivating their choice of ontological theories for the
intended purpose.

Research opportunity 5: One element of the contextual factors, i.e. the purpose of a conceptual
model, also deserves some additional attention. As the results of our second mapping question

indicated, many articles do not clearly mention a specific or intended purpose of their model or

performed research. The same observation applies for the purpose of an ontology. Often, when for
example an ontological analysis is performed, or patterns are developed, the given purpose for this

analysis or patterns is usually very broad and opaque. We agree with (Evermann & Halimi, 2008),

that in order to have well-defined meaning of constructs and statements of a representation, these
elements must be defined in terms of the phenomena of the application domain they are intended to

describe. Thus, if one does not clearly state the intended purpose, one cannot clearly define meaning,

which as a result leads to ambiguous or confusing models.

To conclude this section, we would like to discuss the significance and relevance of our research
opportunities and how they reflect upon the field of ODCM. Perhaps, from all the research gaps and

opportunities we have identified, the complexity concerning ODCM (research opportunity 2) is the
greatest challenge research in this field has to face. As mentioned above, we are aware that an

increase of complexity can also be paired with an increase in the understanding of the semantics of

the model, which is by no coincidence one of the main purposes of ODCM. However, evidence
provided by (Davies, Green, Rosemann, Indulska, & Gallo, 2006; Recker, 2010) reports that

perceived usefulness and perceived ease of use (measured as complexity) are the two most

frequently reported factors influencing the decision to continue using conceptual modeling in
practice. Therefore, in order for ODCM to be used and stay used by practitioners in the field of

conceptual modeling, our priority should be on managing the complexity in ODCM by finding a
balance between the increase of the understanding of the semantics of a model through ontological

theories, and the additional increase of complexity that arises from these ontological theories. It is

at this point that the importance of the other research opportunities becomes apparent, since they
can facilitate this balance. For example, if we can clearly identify the purpose of the preferred model

by the end-user, we can adapt our ontology-founded models according to this purpose (research
opportunity 5). For example, if an end-user has to perform a thorough analysis of a certain system

and desires a higher emphasis on the semantics of the model, we can allow an increase in complexity

in order to accomplish the needs of this end-user. Also, some modelers or end-users may prefer or
possess a better understanding of a specific ontology and how this ontology represents real-

world phenomena. By applying the preferred ontology in ODCM, we could produce conceptual

models that are better perceived by these users (research opportunity 4). However, probably the first
step towards finding the adequate balance between an increased understanding of the semantics of

a model and its increased complexity is identifying how learning, interpretation and

understanding of these models takes place (research opportunity 3). Finally, we agree with Gemino

& Wand (2005), that the issue of understanding versus complexity “can be studied by combining

theoretical considerations and empirical methods”. Theoretical contributions and artifacts should be
validated and evaluated by empirical studies that assess the perceived usefulness and perceived ease

of these theoretical contributions (research opportunity 1). This approach enables researchers to
address the quality of a model, the perceived understanding from its users and the given complexity

of the contribution.

2.6. Threats to validity

The main threats to the validity of a SLR are (1) publication selection bias, (2) inaccuracy in data

extraction and (3) misclassification (Sjøberg et al., 2005). We acknowledge that it is impossible to

achieve complete coverage of everything written on a topic. However, we aimed to maximize this
coverage by selecting our papers from six digital sources, including journals, conferences and

workshops that are relevant to ODCM. The scope of journals and conferences covered are

sufficiently wide to attain reasonable completeness in the field studied. To reduce the publication
selection bias, we defined research questions in advance, organized the selection of articles as a

multistage process based upon well-established research and involved four researchers in this
process. Both the inclusion and exclusion criteria and the classification schemes of the SLM and

SLR were carefully evaluated by all researchers and were discussed several times for their impact.

When performing the data extraction for both the SLM and SLR, we first classified our papers into
three categories, according to the inclusion and exclusion criteria: (1) Included: the researcher is

sure that the paper is in scope and meets all inclusion criteria; (2) Excluded: the researcher is sure
that the paper is out of scope and applies to at least one of the exclusion criteria or (3) Uncertain:

the researcher is not sure whether the paper fulfills either the inclusion or exclusion criteria. When

a paper was classified as ‘uncertain’, the paper was given to a fellow author for a second evaluation
and was then discussed whether the paper should be included or excluded. Concerning the

classification, during the SLM, all authors classified several papers independently from one another

and the classification results were afterwards compared for their consistency. Overall, there was a

general agreement on the classification of papers. When necessary, disagreements were resolved
through discussion. Additionally, two authors performed the classification of the SLR, frequently

comparing the classification results with each other for consistency. Also, one of the authors of this
SLR was a co-author of the CMQF framework, increasing the correct application of the

framework in the review study. Although data extraction and classification from prose is difficult at

the outset, we believe that the extraction and selection process was rigorous and that we followed
the guidelines as provided in (Kitchenham & Charters, 2007), (Petersen, 2011) and (Dybå et al.,

2007).
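The consistency of the independent classifications described above could also be quantified with a standard agreement statistic. The sketch below computes raw agreement and Cohen's kappa for two raters over hypothetical 'included/excluded/uncertain' labels; it is an illustration of the idea, not the agreement analysis actually performed in this study.

```python
from collections import Counter

def agreement_and_kappa(rater_a, rater_b):
    """Raw agreement and Cohen's kappa for two raters labeling the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical labels: 'I' = included, 'E' = excluded, 'U' = uncertain.
rater_1 = ["I", "I", "E", "U", "I", "E"]
rater_2 = ["I", "E", "E", "U", "I", "E"]
print(agreement_and_kappa(rater_1, rater_2))  # roughly (0.83, 0.74)
```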

2.7. Conclusion

This paper conducted a literature study, composed of a systematic mapping review and a

systematic literature review, in the field of ODCM. The mapping study aims at structuring the area

that is being investigated and displays how the work is distributed within this structure. The aim of
the review study on the other hand is to provide recommendations based on the strength of evidence.

We searched six digital libraries, producing 180 articles dealing with ODCM. We have provided
two classification schemes founded on previously developed research, both of which attempt to

clearly and thoroughly categorize papers dealing with ODCM. The first classification scheme was

used in the SMR, to provide a general categorization of articles. Our second classification scheme
was applied in the SLR, for a more in-depth categorization of articles. The results of the SMR

identified certain gaps and trends in the domain of ODCM. Based upon these results, we conducted
the SLR to gather more evidence on these results. This led to the identification of five research gaps

that need more attention and five research opportunities that could be future areas for improvement

in the field of ODCM. The research gaps were: (1) a shortage of empirical developments compared
to the theoretical developments, (2) a lack of experimental, observational and testing evaluation

methods, (3) many articles do not clearly mention a specific or intended purpose of their model or

performed research, (4) similar to the purpose of conceptual models, many researchers are also

vague in defining the specific application of the ontology and in motivating their choice of
ontological theories for the intended purpose, and (5) certain areas in ODCM still need more

research, such as studies that measure how well the learning, interpretation and understanding of a
conceptual representation take place. Based upon these research gaps, we formulated five research

opportunities to address these gaps.

3. Comparing Traditional Conceptual Modeling with Ontology-Driven Conceptual Modeling: An Empirical Study

3.1. Introduction

Modeling, in all its various forms, plays an important role in representing and supporting complex

human design activities. Especially in the development, analysis and re-engineering of information
systems, modeling has proved to be an essential element in achieving high-performing
information systems (Karimi, 1988). Conceptual models were introduced to increase

understanding and communication of a system or domain among stakeholders. Some commonly


used conceptual modeling techniques and methods include: Business Process Model and Notation

(BPMN), entity relationship modeling (ER), object-role modeling (ORM), and the Unified
Modeling Language (UML). We refer to these techniques and methods as traditional conceptual

modeling (TCM). Many of these early conceptual modeling techniques however lacked an adequate

specification of the semantics of the terminology of the underlying models, leading to inconsistent
interpretations and uses of knowledge (Grüninger et al., 2000). Additionally, conceptual models

were prone to a high degree of inconsistency, caused by the multiple models or views which

participate in the design and development process (Lucas, Molina, & Toval, 2009). In order to
overcome these issues, ontologies were introduced. Ontologies provide a foundational theory, which

articulates and formalizes the conceptual modeling grammars needed to describe the structure and
behavior of the modeled domain (Wand & Weber, 1993). Furthermore, since an ontology provides

unambiguous definitions for terms used in a domain, it plays a crucial role in the communication

between modelers, as such maintaining consistency between conceptual models and integrating
different user perspectives (Uschold & Gruninger, 1996). In summary, an ontology thus expresses

the fundamental elements of a domain, and therefore enhances the foundations of a conceptual
model (Guarino, 1998). More specifically, we can describe the utilization of ontological theories,

coming from areas such as formal ontology, cognitive science and philosophical logics, to develop

engineering artifacts (e.g. modeling languages, methodologies, design patterns and simulators) for
improving the theory and practice of conceptual modeling, as ontology-driven conceptual modeling

(ODCM) (Guizzardi, 2012).

The benefits of ODCM are presumed to be the most substantial when applied for the design,

analysis and re-engineering of rather large and complex information systems. Their use would lead

to various system engineering benefits such as increased re-usability and reliability (Uschold &
Gruninger, 1996). Additionally, ODCM would aid in the development of a more sophisticated

representation of the domain being modeled, and a higher level of domain understanding by its
modelers and users (Gemino & Wand, 2005). These benefits can be obtained by many different

kinds of techniques and practices that were developed in the field of ODCM. For example,

ontologies were used for the development of new conceptual modeling languages (Opdahl et al.,
2012), for adding structuring rules to existing languages (Evermann & Wand, 2005b), and for

proposing conceptual modeling patterns and anti-patterns (R. D. A. Falbo, Barcellos, Nardi, &

Guizzardi, 2013). However, while many of these ontology-driven techniques have been demonstrated to
be beneficial compared to traditional conceptual modeling practices, the added value of their
application is not always straightforward and there is no clear indication of when it is actually desirable
to adopt these techniques. Understanding the philosophical concepts and structures of an ontology

(e.g. theory of parthood, types and instantiations, identity, dependency, unity etc.) requires time and

encompasses a certain degree of complexity. As noted by (Guizzardi et al., 2011), this complexity
posed a significant issue for novice modelers who were using the ontologically founded conceptual

modeling language OntoUML. Additionally, while it is generally assumed that ontology-based

modeling can indeed enhance the development of a conceptual model, the study of (Soffer & Hadar,

2007) obtained less promising results, acknowledging that the overall effect of ontology-based

modeling rules was not significant. They observed that the utilization of ontologies does not
significantly reduce model variation, which according to them means that ontologies do not

sufficiently support the decision making during the conceptual modeling process. Furthermore –
and perhaps due to the uncertainty of the added value of investing time and effort in understanding
ODCM – its application by professionals and businesses is still scarce. Evidence provided by

(Davies et al., 2006; Recker, 2010) report that perceived usefulness and perceived ease of use –
measured as complexity – are the two most frequently reported factors influencing the decision to

continue using conceptual modeling techniques in practice. Therefore, in order for ODCM to be

fully accepted by businesses and practitioners, we should be able to demonstrate in which

circumstances ODCM is superior to TCM.

Therefore, it is the goal of this paper to conduct a study that investigates and compares the

differences between TCM and ODCM. More specifically, we would like to differentiate between
modelers that are trained in a TCM approach and modelers that have been taught an ODCM

approach. These two groups of modelers will then have to model a scenario that encompasses certain

modeling challenges. Through our study, we will then compare the two modeling approaches by
investigating the quality of the resulting conceptual models, and the amount of effort a modeler had

to spend in order to compose these models. To properly measure these effects, we intend to conduct

an empirical study. Therefore, as the foundation for the further development of this paper, we
formulate our research question as follows: Are there meaningful differences in the resulting

conceptual model and the effort spent to create such a model between novice modelers trained in an
ontology-driven conceptual modeling technique and novice modelers trained in a traditional
conceptual modeling technique? In section 2 of this paper, we formulate our testing hypotheses and

meanwhile discuss previous related empirical research. Next, we will draft our experimental design
to test these hypotheses in section 3. We will then present the results of our experiment in section 4

and discuss their outcome with respect to the hypotheses. Next, in section 5, we will interpret the results of our
experiment, and discuss their consequences and implications. Finally, we will present our

conclusion and future research opportunities in section 7 of this paper.

3.2. Hypothesis development

Based upon our research question, we formulate our testing hypotheses. In order to do so

properly, we will first investigate the different kinds of empirical studies that have been performed

in the field and take a closer look at earlier comparisons between ODCM and TCM.

Over the years, the adoption of ontologies and ODCM as a modeling practice materialized

steadily and in different trends or phases. Originally, ontologies were introduced in the field of
conceptual modeling as a way to evaluate the ontological soundness of a conceptual modeling

language. With respect to the evaluation of conceptual modeling languages and more specifically
the evaluation of their conceptual grammars, ontologies proved quite useful in assessing whether

different conceptual modeling procedures are likely to lead to good representations of real-world

phenomena. Therefore, the first empirical research efforts concerning ODCM and TCM examined
whether the semantic analysis offered by ontologies actually benefited the grammars of conceptual

modeling languages. For example the paper of (Recker et al., 2011b) investigated how users of the

BPMN conceptual modeling grammar perceived existing ontological deficiencies, and how
ontologies could aid in identifying such deficiencies. Other empirical studies such as (Poels, Gailly,

Maes, & Paemeleire, 2005; G. G. Shanks, Tansley, Nuredini, Tobin, & Weber, 2008) performed
similar research, where they measured the existing ontological deficiencies of conceptual modeling

languages, studied their impact on users’ perceptions and how techniques involving ontologies were

applied to analyze such languages to identify these deficiencies.

Gradually however, a second trend for the usage of ontologies emerged, in the sense that an

ontology would express the fundamental elements of a domain, thereby becoming the
theoretical foundations of a conceptual modeling language (Guarino, 1998). This new way of

applying ontologies led to a growing interest in the role that they can fulfill in the improvement of

conceptual modeling languages (Opdahl et al., 2012), by adding structuring rules to existing
languages (Evermann & Wand, 2005a), and by proposing conceptual modeling patterns and anti-

patterns (R. Falbo et al., 2013). Additionally, by capturing the foundational elements of a domain,

ontologies facilitate the communication of these elements between multiple stakeholders and

modelers, as such enabling the design and development of more consistent models. In accordance,
various empirical studies examined these enhanced ways of conceptual modeling. For instance, the

study of (Bera, 2012) tested the effect of ontological modeling rules on the development of
conceptual models. Their results revealed that modelers face cognitive difficulties when developing

conceptual models and that ontological modeling rules can alleviate these difficulties. Additionally,

the study indicated that ontological rules could help modelers to commit fewer modeling errors and
help them to develop conceptual models in a systematic way. Other studies such as (Evermann &

Wand, 2006b) reached similar conclusions, supporting the use of ontological theories and rules in

conceptual modeling. However, not all studies acknowledged the same results. For instance, (Soffer
& Hadar, 2007) performed an explorative study to investigate the effect of applying ontological

modeling rules to the modeling process on model variations. More specifically, their results
showed that difficulties were experienced in the adoption of the ontological concepts and rules
underlying an ontology, especially with large sets of these rules. As a conclusion, they expressed
their belief that further improvements may be achieved by adapting modelers to an ontological way
of thinking, teaching them to perceive and interpret the world in ontological concepts.

This is where the third trend in ODCM aims to deliver a solution, in the form of not only
evaluating or supporting a conceptual modeling technique, but instead by evolving into a proper

conceptual modeling technique itself, as such adapting modelers to the ontological way of thinking.

These new techniques are often founded on existing modeling notations and enhance the metamodel
of this notation by incorporating formal ontological constraints that correspond to the ontology’s

axiomatization. Examples of these new techniques are OntoUML and the O3 language. The O3

language (Pastor & Molina, 2007) can be considered as a natural language that fuses various
ontological concepts based upon the Bunge-Wand-Weber (BWW) ontology together with the

object-oriented paradigm, with the purpose to facilitate automatic development of information

system applications. OntoUML (Guizzardi & Zamborlini, 2013) on the other hand, is a modeling

language that reflects the ontological distinctions prescribed by UFO (Unified Foundational

Ontology) by incorporating the axiomatization of the UFO ontology by means of formal constraints
in the UML metamodel. With these techniques, modelers are adapted to an ontological way of
thinking, by teaching them to perceive and interpret the world in ontological concepts and rules.
However, this requires a modeler to understand the philosophical elements and structures from the

underlying ontology, where its formal axiomatization and constructs can pose a significant

challenge to novice modelers (Guizzardi et al., 2011).
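To illustrate the kind of constraint such an ontology-driven language adds on top of a plain UML class model, the sketch below encodes one simplified OntoUML-style rule: anti-rigid sortals such as roles and phases must (directly or indirectly) specialize a kind that provides their identity. This is a deliberately reduced, illustrative fragment, not the actual OntoUML metamodel or its complete axiomatization.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OntoClass:
    name: str
    stereotype: str                       # e.g. "kind", "subkind", "role", "phase"
    parent: Optional["OntoClass"] = None  # single specialization, for simplicity

def identity_provider(cls: OntoClass) -> Optional[OntoClass]:
    """Walk up the specialization chain looking for an identity-providing kind."""
    current = cls
    while current is not None:
        if current.stereotype == "kind":
            return current
        current = current.parent
    return None

def violations(classes: List[OntoClass]) -> List[str]:
    """Simplified check: roles and phases must inherit their identity from some kind."""
    return [f"{c.name} ({c.stereotype}) has no identity-providing kind"
            for c in classes
            if c.stereotype in {"role", "phase"} and identity_provider(c) is None]

person = OntoClass("Person", "kind")
student = OntoClass("Student", "role", parent=person)  # fine: identity comes from Person
customer = OntoClass("Customer", "role")               # violation: no kind in its chain
print(violations([person, student, customer]))
```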

Hence, it would seem that the ODCM technique on the one hand facilitates the development of

ontologically sound conceptual models, while on the other hand it appears this practice can increase

the complexity of developing a conceptual model. However – to the best of the authors' knowledge –
no empirical research has yet measured the actual impact of adopting an ODCM technique to
develop a conceptual model by observing the resulting models and the effort needed to create them.
Furthermore, no research study has yet compared the difference in modeling between ODCM and

TCM techniques. Most of the empirical studies described above did compare ODCM to TCM,

although this comparison was often either partial or incomplete, meaning that only certain aspects
of an ontology or a limited set of ontological concepts or rules were compared. Additionally,

subjects were either briefly introduced to the ontology or received only minor training in applying
the ontology in the process of conceptual modeling. This results in modelers that are not fully

competent with the respective ontology. It is our perception that modelers should be more

intensively trained in an ontological way of thinking, by teaching them to perceive and interpret
the world in ontological concepts.

As such, the objectives of our study are (1) to have a complete comparison between an ODCM

and a TCM technique, meaning that both techniques are taught in their full scope, and not only
certain aspects of it; (2) to compare subjects that have been properly trained in both techniques, over

a period of several months; (3) to require subjects not only to comprehend these techniques, but also

have them apply the technique in order to construct a conceptual model; and (4) to compare both

the resulting models of each technique, as well as the effort required to construct these models. As

the advantages of ODCM are presumed to be the most beneficial when applied to rather large and
complicated modeling tasks and designs, we assume ODCM to deliver better results when applied

to a more complex modeling task. Thus, based upon previous research efforts, and the assumptions
given above, we formulate our hypotheses as follows:

1. Novice modelers applying an ODCM technique will arrive at higher quality models compared

to novice modelers applying a TCM technique – given a thorough understanding of the


respective technique and a sufficiently complex modeling task.

2. Novice modelers applying an ODCM technique will experience more effort in the process of

developing a conceptual model compared to novice modelers applying a TCM technique – given
a thorough understanding of the respective technique and a sufficiently complex modeling task.

In other words, we believe that the more complex a modeling task becomes, the more
semantically correct conceptual models will be when adopting an ODCM technique over a TCM

technique. However, we do expect that adopting an ODCM approach will also require more effort

compared to a TCM approach. The next section will further specify how we will set up our
experimental design, based upon these hypotheses.

3.3. Experiment Design

Careful planning and design prepare for how the experiment is conducted and are essential in
achieving validated experimental results. Due to the lack of a random assignment of subjects

between our testing groups – infra Experimental Design Type – we would like to emphasize that we
will perform a quasi-experiment, since key characteristics between subject treatments may differ.

As such, when referring to the term ‘experiment’ in the further development of this chapter, we refer

to a quasi-experiment. We base ourselves upon the experimental design described in Wohlin et al.

(2012), where the design of an experiment can be divided into several steps. Based upon our

hypotheses, the selection of the independent and dependent variables takes place. Next, the selection

of subjects is carried out. The experiment design type is chosen based on the hypothesis and
variables selected. Next the instrumentation prepares for the practical implementation of the

experiment. Finally, the validity evaluation aims at checking the validity of the experiment. After
the planning process is iterated, we can conduct the actual experiment, and collect the data in order

to either accept or reject the testing hypotheses.

3.3.1. Variable development

Before designing any experiment, the dependent, independent and control variables should be
selected. Both the independent and dependent variables are derived from our

hypotheses, and consequently from our research question.

Independent Variable

In our study, the independent or affecting variable consists of the two different modeling

techniques or approaches our subjects can apply to construct a conceptual model. In other words, in

our experimental setting we can control if we either assign our test subjects with a traditional
modeling technique or with an ontology-driven technique. More specifically, we will compare the

enhanced entity relationship (EER) modeling technique against the ontology-driven OntoUML
modeling technique. The entity-relationship (ER) approach – initially proposed by Chen (1976) –

still remains the premier model for conceptual design (Fettke, 2009). It is used to represent

information in terms of entities, their attributes, and associations among entity occurrences called
relationships. The EER modeling technique can be applied in combination with several notations.

The UML notation – more specifically class diagram notation – is a widely accepted notation, both

in academia (Elmasri & Navathe, 2015) and in practice by analysts and software developers (Gornik
& IBM, 2003). By enhancing the EER approach with the UML notation, the conceptual model gains

significant benefits, including easier communication and a more truthful representation of a

particular domain. Similarly, OntoUML is a well-known technique in the domain of ODCM and

has been frequently adopted for various purposes. Additionally, OntoUML also applies the UML

notation – again class diagrams – but with the UFO ontology as an underlying foundational theory.
More specifically, the purpose of OntoUML is to improve the truthfulness to reality (i.e. domain

appropriateness) by constructing conceptual models supported by ontological concepts (Guizzardi


& Wagner, 2005). As such, both techniques have been primarily developed to deliver conceptual

models that offer faithful representations of a particular domain. Additionally, both techniques apply

the same UML notation, but are grounded in two different underlying theories – the EER approach
and the UFO ontology.

Dependent variables

The purpose of our experiment is to measure the differences in the resulting conceptual model –

both in quality and in consistency – and the effort required to create such a model, when applying
either a traditional modeling technique or an ontology-driven modeling technique. Therefore, to

properly measure and compare such differences, we rely on the research of (Grüninger & Fox, 1995;

Krogstie, 2012; Moody, 2003), where we make a distinction between the effectiveness and efficiency
of our two techniques. While effectiveness is defined on how well a particular technique achieves

its objectives, efficiency is viewed as the effort required to apply the technique. The former can be
measured by output measures evaluating the quantity and/or quality of the results; the latter can be

measured by a variety of input measures such as time, cost or perception.

Effectiveness

We are going to measure the effectiveness of the TCM and ODCM methods by evaluating the

quality of the resulting models created by the participants. As stated by quality standards such as

ISO 9001, quality is defined as "the degree to which a set of inherent characteristics fulfills
requirements" (ISO/IEC 9001). More specifically in the context of software engineering, quality is

often described as the fitness for purpose. Therefore, we will measure model quality by the degree

to which the participants' models fulfill their purpose. Since both TCM and ODCM have been developed to represent a domain truthfully, we will evaluate the resulting conceptual model on its capacity to represent a domain as truthfully as possible. In order to have a
domain that is recognizable for our participants to model – all our participants are students – we

have opted for the domain of a university.

In order to objectively assess the suitability and truthfulness of a model to represent a domain,

we will rely on the use of competency questions. Originally, competency questions were applied in

ontology development (Grüninger & Fox, 1995), where a particular ontology was deemed adequate to represent a certain domain provided that the ontology could represent and answer a specific set

of competency questions. In our experiment, we will construct several domain requirements that

will be defined in a set of competency questions, to which the resulting conceptual models should
be able to provide an answer in order to be deemed a good representation of the domain.

Furthermore, we will differentiate between two sets of competency questions. One set of questions will measure whether subjects adequately represented the domain as described in the assignment. The second set of questions will measure whether subjects were able to deal with certain 'complications' described in the case, which required subjects to improve their model beyond the literal description given in the case. This corresponds to the work of (Daga et al., 2005; De Cesare & Partridge, 2016), where a distinction is made between competency questions that measure Content Interpretation (CI) and Content Sophistication (CS). While the former is defined as the identification of the entities that exist in the domain by an applicant or modeler, the latter can be seen as the process of gradually

improving the model such that it provides a more precise representation of the world. Tailored to
our experiment, participants will receive a case that describes the university domain. When

modeling the domain, they have to identify the necessary constructs, relationships and cardinalities

that govern this domain – i.e. content interpretation. However, the case (deliberately) contains
ambiguous descriptions or certain complications. Content sophistication can then take place if a

participant responds by improving their model so that it provides a more precise representation of

the university domain – and thereby overcomes these complications or ambiguities. As such, the competency

questions allow us to evaluate the participants’ models in a rather objective way, by distinguishing

between the ‘completeness’ of the model (i.e. content interpretation), and the more innovative
aspects of their models (i.e. content sophistication). These competency questions will be used by the authors to assign scores to the models of the participants.

Efficiency

Based upon previous findings in the literature, such as those of (Soffer & Hadar, 2007), we expect that modelers who have to adopt an ontological way of thinking – perceiving and interpreting the world in ontological concepts and rules – will require more effort, and hence achieve a lower efficiency, compared to modelers adopting a TCM technique, who do not have to concern themselves with such rules and ontological concepts.

The efficiency of each modeling technique will be measured by: (1) assessing the amount of time needed to develop the models, and (2) assessing the usage beliefs of each modeling technique.
More specifically, we will measure perceived usefulness and perceived ease of use, which are key

to understanding modeling usage beliefs (Davies et al., 2006). Perceived ease of use is determined

by the degree to which a person believes that using a particular technique would be free of effort.
Perceived usefulness refers to the degree to which a person believes that a technique will be effective

in achieving the intended modeling objective. Perceived usefulness can therefore also be seen as a
way to measure the actual effectiveness of the technique (Moody, 2003), but since it is determined

by its perceived ease of use we categorize it under efficiency. In our experiment, participants will

have to answer several questions after completing the modeling task – using multiple-item scales,
with five-point Likert scales – which will measure both the perceived usefulness and the perceived

ease of use. The reliability and validity of these questions have already been demonstrated in several research efforts (Davis, 1989; Recker, Rosemann, Green, & Indulska, 2011a).

Control Variables

Since we will be testing participants modeling with a TCM and an ODCM technique, we need to
ascertain that all subjects have an equal understanding of each technique they are modeling with.

Therefore, we apply a control variable to test every subject’s knowledge and understanding of the
modeling technique, before the start of the experiment. The results from the subjects that failed the

knowledge test will not be incorporated into the results of the experiment. Next, to provide a

complex enough modeling case as required in our hypotheses, we have selected a modeling case
that served as an assignment in a modeling course given at Ghent University. The feedback and the final results of the assignment that used this modeling case confirmed that the case is sufficiently complex. Additionally, we have presented the modeling case at the OntoCom
workshop at the 36th International Conference on Conceptual Modeling. During this workshop, the

case has been given to several experts in the domain of conceptual modeling and ontology. Each of these experts then created a conceptual model – often also based upon an ontological theory –

according to their interpretation of the case. Afterwards, the different models were discussed for

their completeness and how they dealt with the challenges or ambiguities that could be found in the
case. During this workshop, many of the competency questions – for both the content interpretation and especially the content sophistication – were derived from the models of the workshop and the feedback from the different experts. Additionally, the experts who modeled the case themselves also labeled the case as sufficiently complex to be applied in an experimental

setting.

3.3.2. Subject Selection

The subjects in our study were all novice conceptual modelers attending two different courses on conceptual modeling at the University of Ghent (Belgium) and the Technical University of Prague (Czech Republic). While the subjects at the University of Ghent were taught how to adopt a TCM

technique to construct a conceptual model, the course at the Technical University of Prague taught
their students the ODCM technique. As stated by (Falessi et al., 2017), using students as participants

remains a valid simplification of reality needed in laboratory contexts. It is an effective way to

advance software engineering theories and technologies but, like any other aspect of study settings,

should be carefully considered during the design, execution, interpretation, and reporting of an
experiment. Consequently, we decided to select students as our test subjects since they have no prior

knowledge of conceptual modeling and can thus be seen as novice modelers who can be trained in
either TCM or ODCM. Selecting students thus enabled us to train subjects without prior experience in another modeling technique, so that we could measure the full impact of the modeling technique being taught.

At Ghent University, students have been taught the EER conceptual modeling technique through

both theoretical classes and practical sessions. In these practical sessions, students were required to

solve modeling assignments of certain scenarios. Additionally, students were required to submit a
rather extensive group assignment, where they had to design and implement an information system.

An important aspect of this assignment was to develop a sound EER conceptual model that forms
the foundation of their database. Similarly, students at the Technical University of Prague received

both theoretical classes as well as practical sessions on a weekly basis. Furthermore, they also had

to complete a work assignment that required them to create sound OntoUML models, to serve as a
foundation for a software system. Moreover, all subjects are of a similar age (early twenties) and the majority of our subjects have a business/technical-oriented background. Concerning motivation, students were asked to participate in the experiment out of self-interest and as an opportunity to improve their skills in conceptual modeling. There was no reward-based incentive. As such, students that participated in our experiment were essentially self-motivated by the inclination to learn more and to improve their skill set. Thus, the specific selection and the education program lead to a controlled sample of subjects, all being novice modelers, properly trained in the respective modeling technique and with no prior knowledge of any other modeling technique.

Finally, in order to determine the number of subjects for our empirical study, we rely on the differences in the average model comprehension scores from the study of (Verdonck & Gailly, 2016a). Based upon the sample size formula of (Shao, Wang, & Chow, 2008), assuming a Type I error (α) of 5% and a power (1−β, where β is the Type II error) of 0.8, we require a

total number of 43 subjects per treatment group. In total, 100 subjects participated in the study, of
which 50 in each treatment. Hence, the number of participants in our experiment is sufficient with

regard to the required statistical minimum.
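The exact formula used is not reproduced in this text; the sketch below is a hedged reconstruction that assumes the standard two-sample comparison-of-means sample size formula commonly attributed to Chow, Shao and Wang. The values of sigma (score standard deviation) and delta (minimal difference to detect) in the example call are hypothetical placeholders, not the values derived from (Verdonck & Gailly, 2016a).

```python
# Hedged sketch of the assumed sample size calculation (not the authors' original script).
from math import ceil
from scipy.stats import norm

def n_per_group(sigma: float, delta: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Subjects per treatment group for a two-sided comparison of two means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2  (assumed formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for a power of 0.80
    return ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

# Example call with hypothetical values for sigma and delta:
print(n_per_group(sigma=1.5, delta=0.9))
```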

3.3.3. Experimental Design Type

An experiment consists of a series of tests of different treatments (Wohlin et al., 2012). To get

the desired results to answer our research question, the series of tests must be carefully planned and
designed. Based on our hypotheses, we can derive two treatments: a UML treatment and an OntoUML treatment. The assignment in each treatment consists of a case study that has to be

modeled by the participants of the respective treatment. We have assigned the participants to these
treatments according to the balancing design principle. By balancing the treatments, we assign an

equal number of subjects to each separate treatment, to arrive at a balanced design. Balancing is
desirable since it both simplifies and strengthens the statistical analysis of the data. However, due

to practical limitations we could not balance the students of the two different universities between

the two treatments, e.g. half of the students of Ghent University being trained in TCM and the other half in ODCM, and likewise for the students at the Technical University of Prague. As such, one group may differ from the

other – e.g. due to the students’ specific profile or the teaching method of the respective professor.
Hence, our type of experiment is a quasi-experiment. The most important consequence of this quasi-experimental design is that our study may suffer from increased selection bias, meaning that factors other than our independent variable (the assigned modeling technique) may have influenced the outcome of our results. As a result, this also impacts the internal validity of our study, which is again emphasized below in the conclusion section. The design type of our quasi-experiment is a one-factor design with two treatments, meaning that we compare the two treatments against each other on two dependent variables – the quality of the conceptual model (i.e. effectiveness) and the effort in constructing the

model (i.e. efficiency). Each subject also takes part in only one treatment. Most commonly, the

means of the dependent variables for each treatment are compared. We will thus assign scores to

the different measures of the dependent variables in order to compare our two different treatments

objectively. This aspect will be discussed in more detail in the section below ‘Instrumentation’.

3.3.4. Instrumentation

The instruments of an experiment provide means for performing the experiment and monitoring it,
without affecting the control of the experiment. Below, we will describe in detail the different phases

a subject goes through when participating in our experiment, and the kinds of instruments we apply

in each of these phases. We would like to note that all materials – the assignments per treatment, the
case description, knowledge assessments, competency questions etc. – that have been applied in this

experiment can be found at our online repository at Open Science Framework (OSF)4.

Assessment of subjects’ knowledge

In order to assess whether the subjects clearly understood the respective modeling technique, we evaluate each subject's understanding with several written statements. Each of these statements describes a certain phenomenon or scenario, for which the subject has to choose the correct corresponding element of the modeling technique. The subjects can choose from four different
multiple-choice answers. In total, six statements were given for each treatment (see OSF repository).

Each of these statements was derived from examples from existing literature or exercises related to
the techniques. If a student failed the assessment, their results were not included in the experiment.

Modeling Assignment

After the completion of the knowledge assessment, subjects could complete the modeling

assignment. The assignment describes a company that desires to develop a software system for

universities. As part of the development process, a conceptual model is required that should be applicable to multiple universities. As a reference case, a description is given of one particular university.

4 osf.io/w7mh2

Subjects are given specific instructions that the concepts and entities of this university should be modeled, but that their model should also be able to represent the structure of other universities. The purpose of the task is thus of a rather businesslike nature, with the objective to deliver a 'complete' representation of the case that is at the same time adaptive enough to fit the structure of other universities. For example, the case describes that a professor
can only work at one department of a faculty. However, this structure is specific for the university

in the case. An adaptive model should also allow a professor to work at different faculties and/or

work at different universities.

Usage belief and perception

As a last phase in the experiment – after completing the modeling assignment – the participants
are asked to fill in a set of 8 questions, which will measure both the perceived usefulness and the

perceived ease of use (see OSF repository). As a summary of this section, Figure 11 gives a more
comprehensive overview of the different aspects of our experimental design.

Figure 11: Overview of Experimental Design

3.4. Results

Below we will first discuss the descriptive results related to the knowledge assessment, the

effectiveness and the efficiency of each treatment. By regarding and discussing the descriptive

results we can get a first indication of the differences that exist between the treatments. However, based upon the descriptive statistics we cannot conclude whether the treatments are significantly different from one another. Therefore, we will perform further statistical testing to test the hypotheses as formulated above and examine whether significant differences can be deduced.

3.4.1. Descriptive statistics

Knowledge Assessment

The results of the knowledge assessment test – which was our control variable – indicate that all
subjects gained a reasonable understanding of the respective technique’s structure and concepts,

with an average score of 97,6% for the TCM treatment and 94,3% for the ODCM treatment. None

of the participating subjects obtained a score lower than 50%, which would have excluded the results of that subject from the experiment.

Effectiveness of the treatments

As for the effectiveness of the treatments, we report the average results of the competency questions in Table 2. More specifically, we have distinguished the total average scores for both the content interpretation questions and the content sophistication questions. The very last column in this table then displays the total average scores for each separate treatment. As the table demonstrates, the scores for the ODCM treatment concerning content interpretation are somewhat higher compared to the average scores for the TCM treatment, although the difference is not substantial. Concerning the average scores of the content sophistication questions, however, we observe a much stronger difference. In total, the ODCM treatment obtained an average score of 46% while the TCM treatment achieved a considerably lower score of 24%. It would thus seem that adopting the ODCM

approach enables subjects to better deal with the challenges and ambiguities that are contained in the

modeling assignment compared to subjects adopting the TCM approach. Consequently, due to this

rather substantial difference in results between the content sophistication scores, the total average
scores of the ODCM treatment are also higher compared to the TCM scores. Hence, based upon the

descriptive results of the effectiveness for each treatment, it would appear that the ODCM treatment
was more effective in representing the domain as truthfully as possible compared to the TCM

treatment – especially concerning content sophistication, which deals with the more challenging or

ambiguous aspects of the case assignment.

Table 2: Average results corresponding to effectiveness

Treatment    Content Interpretation    Content Sophistication    Total Average Score
TCM          83,40%                    24,00%                    53,70%
ODCM         87,50%                    46,00%                    66,75%

Efficiency of the treatments

The average results for the measurements of the efficiency of each treatment can be found in

Table 3. Here, the results are less straightforward in comparison with the results of the effectiveness.
The first column displays the average time required for each subject to complete the modeling

assignment. As we can observe, the average time is a little higher for the TCM treatment (38

minutes) compared to the ODCM treatment (36 minutes). However, this difference in time is rather
small. We would like to note that this time measurement only involves the time required to complete

the modeling assignment, meaning that it does not incorporate the time needed to complete the knowledge assessment or the perception questions. It is strictly the time required to read the modeling assignment and develop the corresponding conceptual model. Additionally, the table reports the average results per treatment. Note that the average is sensitive to outliers in the data – for example, subjects that completed the modeling assignment very quickly or, on the contrary, very slowly. The median is less affected by such outliers; when we calculate the median of both

treatments, we arrive at a greater difference, i.e. 37 minutes for the TCM treatment and 32 minutes

for the ODCM treatment.

Next, we have calculated the total average score of the perceived usefulness and the perceived
ease of use questions for every treatment. Since the questions correspond to a five-point Likert scale

– 1 indicating strongly agree and 5 strongly disagree – this means that the lower the score, the
more the subject perceived the modeling technique as useful or easy to use (i.e. strongly agree with

the statement concerning perceived usefulness or ease of use). As the table indicates, there is no

clear difference between both treatments. The total score for the perceived usefulness is higher for
the ODCM treatment, meaning that subjects perceived the technique as less useful compared to

subjects of the TCM treatment. On the other hand, the perceived ease of use is slightly higher for

the TCM treatment, indicating that this technique was perceived as less easy compared to the subjects adopting the ODCM technique. These scores do not correspond to our second hypothesis, which expected that subjects adopting the ODCM technique would perceive the technique as less easy to apply compared to subjects adopting the TCM technique.

Table 3: Average results corresponding to efficiency

Treatment    Time (hours:minutes)    Total Average Score Perceived Usefulness    Total Average Score Perceived Ease of Use
TCM          00:38                   2,37                                        3,03
ODCM         00:36                   2,60                                        2,91

3.4.2. Hypotheses Testing

Effectiveness of the treatments

In order to test our hypotheses, we are going to compare whether the scores of the competency questions between the two treatments differ significantly. To determine which kind of test we have to apply, we first examine the distributions of our data – the total individual scores per subject, categorized per treatment. In order to identify whether our data is normally distributed, we performed the Shapiro-Wilk test (p-value: 0,000), revealing that the data of both the ODCM and TCM treatment follow a non-normal distribution – indicating that we have to analyze our hypotheses with non-parametric tests. Additionally, we performed the Kolmogorov-Smirnov test, where we also obtained a significant p-value of 0,000. To compare the differences between our treatments, we have chosen the Mann-Whitney U test (McKnight & Najab, 2010). This test imposes the following data requirements: (1) the dependent variable should be measured at the ordinal or continuous level; (2) the independent variable should consist of two categorical, independent groups; (3) independence of observations; and (4) non-normally distributed data. Since our data meets these requirements, we can adopt the Mann-Whitney U test. In Table 4 and Table 5 we have displayed the results related to the Mann-Whitney U test. While Table 4 expresses the mean ranks and the sum of ranks for each set of questions

for both the TCM and ODCM treatment, Table 5 displays the outcome of the test and the associated

p-values. We test our hypotheses at the 95% confidence level. Additionally, since our hypotheses are directional – we test whether one treatment scores higher than the other treatment – we regard the one-

tailed significance level. In line with our first hypothesis, we predict that the total score of the
competency questions of the ODCM treatment will be higher compared to the scores of the TCM

treatment.
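As an illustration of the analysis procedure described above – not the actual analysis scripts or data of this study – the following Python sketch shows how the Shapiro-Wilk normality check and the one-tailed Mann-Whitney U comparison can be run with scipy, using dummy per-subject scores.

```python
# Illustrative sketch of the statistical procedure (dummy data, not the study's results).
from scipy.stats import shapiro, mannwhitneyu

tcm_scores = [10, 12, 9, 14, 11, 13, 8, 12]     # dummy total scores, TCM treatment
odcm_scores = [13, 15, 12, 16, 14, 11, 15, 13]  # dummy total scores, ODCM treatment

# Shapiro-Wilk: a small p-value indicates a departure from normality,
# which motivates the use of a non-parametric test.
print(shapiro(tcm_scores))
print(shapiro(odcm_scores))

# Directional (one-tailed) Mann-Whitney U: H1 expects the ODCM scores to be
# stochastically greater than the TCM scores.
u_statistic, p_one_tailed = mannwhitneyu(odcm_scores, tcm_scores, alternative='greater')
print(u_statistic, p_one_tailed)
```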

To gain more insight into the results, we have also tested for the total scores for both the content
interpretation and the content sophistication questions. From Table 4 we can deduce that the mean

rank and the sum of ranks are all higher for the ODCM treatment compared to the TCM treatment
– i.e. for the interpretation, sophistication and total score of the competency questions. These ranks

are in line with our observations of the descriptive results (supra). When we regard the results of

the Mann-Whitney U test of the content interpretation questions, we observe a p-value equal to
0,161 – meaning that no significant difference can be established at the 95% confidence level

between the scores of the content interpretation questions of the ODCM and TCM treatment.

However, when we retrieve the p-values of the content sophistication questions, we now obtain a
significant result between the two treatments, with a corresponding p-value of 0,00. Finally, when

we regard the total score for the competency questions, we again notice a significant difference at the 95% confidence level between the ODCM and the TCM treatment, with a p-value of 0,00. In other words, we accept H1, thereby confirming – at the 5% significance level – that novice

modelers applying an ODCM technique will arrive at higher quality models compared to novice
modelers applying a TCM technique.

Table 4: Mann-Whitney U Ranks of Effectiveness Treatments

Ranks                               Group   Mean Rank   Sum of Ranks
Content Interpretation Questions    TCM     47,67       2383,5
                                    ODCM    53,33       2666,5
Content Sophistication Questions    TCM     32,84       1642
                                    ODCM    68,16       3408
Total Score Competency Questions    TCM     38,82       1941
                                    ODCM    62,18       3109

Table 5: Mann-Whitney U Test of Effectiveness Treatments

Test Statistics           Content Interpretation Questions   Content Sophistication Questions   Total Score Competency Questions
Mann-Whitney U            1108,5                             367                                666
Wilcoxon W                2383,5                             1642                               1941
Z                         -0,99                              -6,127                             -4,043
Asymp. Sig. (2-tailed)    0,322                              0,000                              0,000
Asymp. Sig. (1-tailed)    0,161                              0,000                              0,000

Efficiency of the treatments

Similar to the section above, we are going to examine whether the effort of developing a conceptual model with an ODCM technique is significantly higher than with a TCM technique. In other words, we will examine whether there exist significant differences in

the time needed to develop the model, and the answers given by our modelers concerning the
perceived usefulness and the ease of use of each respective technique. Similarly, we first investigate

the distribution of our data with the Shapiro-Wilk test, revealing again that our data – the time required to complete the model and the efficiency questions – are non-normally distributed. Consequently, we can apply the non-parametric Mann-Whitney U test to compare our two treatments with each other. In Table 6, we have displayed the ranks of the Mann-Whitney U test, while Table 7 displays the Mann-Whitney U results, for both the time and the two different types of efficiency questions. Since we are performing a one-directional test – effort is higher for ODCM than for TCM – we have

to regard the one-tailed asymptotic significance. First, when viewing the sum of ranks of the time
measurement per treatment, we can see that the sum for the TCM treatment (2711,5) is substantially

higher than for the ODCM treatment (2041,5). When we observe the results of the Mann-Whitney

U Test, the p-value is equal to 0,0295, indicating that there is a significant difference between the
TCM treatment and the ODCM treatment in time required to develop the model, on the 5%

significance level. However, contrary to the hypothesis, it would seem that modelers of the ODCM

treatment needed less time compared to modelers of the TCM treatment.

Next, when we regard the sum of ranks for both types of efficiency, we can observe that the differences between the sums are relatively small compared to the time measurement. Again, we
would like to emphasize that the Mann-Whitney U test has been performed on scores related to the

Likert scale – meaning that the lower the mean rank, the more the subject perceived the modeling

technique as useful or easy to use (i.e. strongly agree with the statement concerning perceived
usefulness or ease of use). When we examine the p-values of both types of efficiency questions, we

observe a p-value of 0,0575 for the perceived usefulness questions and a p-value of 0,2425 for the
questions corresponding to the perceived ease of use. Our results therefore do not confirm – on the

5% significance level – that the perceived usefulness and the perceived ease of use for the ODCM treatment are lower compared to the TCM treatment. Since both these tests are not significant, and our time measurement shows a significant difference (at the 95% confidence level) in the opposite direction of our hypothesis, we reject H2, and cannot confirm – at the 5% significance

level – that novice modelers applying an ODCM technique will experience more effort in the process
of developing a conceptual model compared to novice modelers applying a TCM technique.

Table 6: Mann-Whitney U Ranks of Efficiency Treatments

Ranks                          Group   Mean Rank   Sum of Ranks
Time                           TCM     54,23       2711,5
                               ODCM    43,44       2041,5
Total Perceived Usefulness     TCM     45,53       2276,5
                               ODCM    54,56       2673,5
Total Perceived Ease of Use    TCM     51,98       2599
                               ODCM    47,98       2351

Table 7: Mann-Whitney U Test of Efficiency Treatments

Test Statistics           Time     Total Perceived Usefulness   Total Perceived Ease of Use
Mann-Whitney U            913,5    1001,5                        1126
Wilcoxon W                2041,5   2276,5                        2351
Z                         -1,889   -1,574                        -0,698
Asymp. Sig. (2-tailed)    0,059    0,115                         0,485
Asymp. Sig. (1-tailed)    0,0295   0,0575                        0,2425

3.5. Discussion

In the introduction of this article, we asked ourselves the question – the principal research question of this study – whether there exist any meaningful differences in the resulting conceptual models and the effort spent to create such models between novice modelers trained in an ontology-driven conceptual
modeling technique and novice modelers trained in a traditional conceptual modeling technique.

The findings of our empirical study can now confirm that there do exist meaningful differences.
More specifically, we found that novice modelers applying the ODCM technique arrived at higher

quality models compared to novice modelers applying the TCM technique. On the other hand, we

did not find any significant difference in effort between applying these two techniques. Below, we
list various findings that are derived from the results of this study:

Finding 1. Novice modelers applying an ODCM technique have no additional benefit over a TCM technique when modeling the foundational aspects of a domain.

In our study, we composed a set of competency questions – content interpretation questions –


that measured whether the essential domain requirements of the scenario were met by the developed model of a subject. More specifically, these questions assessed whether all the necessary concepts, relationships and multiplicities were adequately represented by the model, in conformance with the description of the assignment. As indicated by our descriptive results in Table 2, the results of the content interpretation questions were somewhat higher for the ODCM technique (87,50%) compared to the TCM technique (83,40%). However, the subsequent hypothesis testing in Table 5 indicates that this difference is not significant (at the 5% significance level). Therefore, we can conclude that there

exists no additional benefit in employing an ODCM technique over a TCM technique in the case
where we have to model the basic requirements of a certain scenario or domain. These results were

to be expected and are in line with the existing literature. As mentioned by (Gemino & Wand, 2005),
the benefits of ODCM are presumed to be the highest when developing a more sophisticated

representation of the domain being modeled, and should aid in achieving a higher level of domain understanding among its modelers and users. This assertion leads us to our second finding.

Finding 2. Novice modelers applying an ODCM technique have a significant benefit over a TCM technique when modeling the advanced aspects of a domain.

A second set of competency questions – the content sophistication questions – were also

composed with the aim to measure how the models of a certain technique dealt with the more

challenging and ambiguous facets of the case description. In order to score high on the content sophistication questions, subjects were required to go beyond the literal description of the case and improve their model so that it would provide a more precise representation of the domain. The descriptive results in Table 2 already give a first indication that the ODCM technique amplifies content sophistication, since the average result of the content sophistication questions was 46% compared to a total average score of 24% for the TCM treatment. The hypothesis testing displayed

in Table 5 confirmed that the results of the content sophistication questions were significantly higher for the ODCM technique compared to the results of the TCM technique. As such, the results of the empirical study demonstrate that it is advantageous to apply an ODCM technique over a TCM technique when having to model the more challenging and advanced facets of a certain domain or scenario. This clear difference between the techniques can most probably be explained by the way modelers adopt an ontological way of thinking when learning and applying an ODCM technique. Specifically, modelers have to interpret and recognize the domain that they wish to model in the ontological concepts and rules that correspond to this technique. These ontological rules and concepts are governed by the axioms, constraints and patterns of the underlying ontology. In other words, these patterns and constraints aid modelers in recognizing and coping with certain modeling pitfalls against which modelers that adopt a non-ontological modeling technique are less well protected. An example of such a pattern is displayed in Figure 12. In this figure, a typical pattern of

the UFO ontology is displayed. Without going into much detail about the specific structure of the UFO ontology, a Kind can be seen as a 'rigid type', meaning that it is an existentially independent concept that 'contains' its own principle of identity. A Phase is always a specialization of a rigid type – in our case a Kind – where the specialization condition is always an intrinsic one. For instance, a child can be seen as a phase of a person, where the specific range for categorizing someone as a child can be precisely determined. Hence, modelers adopting the UFO ontology, and therefore

also OntoUML, will model concepts such as childhood, adolescence and adulthood as phases of a

person. Similar to the case description of our empirical study, modelers applying the OntoUML

technique will have the tendency to model the different states of a course, i.e. ‘Active’ and
‘Inactive’, as phases of a course, and consequently as specializations of a course itself. Another way

of modeling this description would be to simply assign active/inactive as a property of a course.

However, when we relate other concepts such as an exam or an exam date to a course, we can end up in the conflicting situation where an exam and an exam date are scheduled for an inactive course. Therefore, the ontological pattern that recognizes active and inactive states as further specializations of a course prompts modelers to consider more carefully the structure and order of

their concepts and the relationships that intertwine them. The impact of such patterns is also clearly visible in the answers to the competency questions. For instance, when regarding the tenth content sophistication question – "Can exams and exam dates be associated only to active courses?" – the ODCM treatment scored a total of 74% on this question, compared to 45% for the TCM treatment.

Figure 12: ODCM Pattern - Case Description Example
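To make the contrast between the two modeling options concrete, the sketch below renders them as plain Python classes. This is only an illustrative analogy under assumed names (Course, Exam, ActiveCourse, InactiveCourse); it is neither OntoUML nor part of the experimental material.

```python
# Illustrative analogy of the two modeling options discussed above (assumed names).
from dataclasses import dataclass

# Option 1 - attribute-based: 'active' is a plain property of Course. Nothing in the
# structure prevents scheduling an exam for an inactive course.
@dataclass
class Course:
    name: str
    active: bool

@dataclass
class Exam:
    course: Course        # any course, active or inactive, can be referenced
    exam_date: str

# Option 2 - phase-based: 'Active' and 'Inactive' are modeled as specializations
# (phases) of the kind Course, and the exam association is typed to the active phase.
@dataclass
class CourseKind:
    name: str

@dataclass
class ActiveCourse(CourseKind):
    pass

@dataclass
class InactiveCourse(CourseKind):
    pass

@dataclass
class PhaseAwareExam:
    course: ActiveCourse  # a static type checker flags exams attached to an InactiveCourse
    exam_date: str

# The attribute-based model admits the inconsistent instance below, while the
# phase-based model rules it out at the model (type) level:
inconsistent = Exam(course=Course(name="Logic", active=False), exam_date="2018-06-15")
consistent = PhaseAwareExam(course=ActiveCourse(name="Logic"), exam_date="2018-06-15")
```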

Finding 3. Novice modelers applying an ODCM technique will not experience more effort in the

process of developing a conceptual model compared to novice modelers applying a TCM technique

– given a thorough understanding of the respective technique.

Our last finding is contrary to our proposed hypothesis, where we assumed that applying an ODCM technique would result in more effort due to the additional philosophical rules and concepts that have to be applied in the process, compared to a TCM technique where this is not the case. This finding is also
contrary to previous research efforts, such as the one of (Soffer & Hadar, 2007), where they found

that difficulties were experienced in the adoption of the ontological concepts and rules underlying
an ontology, especially with large sets of these rules. However, a key difference between this empirical study and previous research efforts investigating ODCM is that the subjects adopting the ODCM approach were trained and taught in this technique over a period of several months. Previous studies did also train their subjects in the ODCM technique, but this training mostly occurred over a

rather short period of time. Presumably, when a modeler has been sufficiently familiarized with the
ODCM technique, the different philosophical terms and rules no longer feel strenuous when

developing a conceptual model. In fact, our results even revealed a significant difference in the

time required to model the assignment, with a median time of 37 minutes for the TCM treatment

and 32 minutes for the ODCM treatment. When examining the descriptive results in Table 3
concerning the efficiency questions, we can observe that subjects from the TCM treatment perceived their technique as slightly more useful to apply compared to the subjects applying the ODCM technique. On the other hand, subjects from the ODCM treatment rated their technique as easier to use compared to subjects from the TCM treatment. These results seem quite contrary to the findings related to the effectiveness of each technique. The ODCM technique clearly assists a modeler in tackling the more challenging aspects of a domain, but at the same time it would seem that the modeler does not therefore perceive the technique as more useful. One could argue that perhaps

modelers are still unaware of the potential benefit an ODCM technique can have in achieving higher
quality models. Perhaps even more surprising is that when we regard the specific questions (i.e. PU2

and PEU2) related to the effort of learning the technique, the results indicated that subjects of the ODCM treatment perceived their technique as easier to learn than subjects of the TCM treatment did. On the other

hand, subjects from the TCM treatment did find their technique more useful to learn compared to

the subjects of the ODCM treatment. The results of these questions are quite surprising since one would expect that the ODCM technique would be perceived as more difficult to learn compared to the TCM technique. Perhaps, when subjects are taught the ODCM technique over a longer period of time, with regular practice and proper instructions, the difference in effort between learning a TCM technique and an ODCM technique fades. The results of the hypothesis testing in Table 7 also confirmed these observations, indicating that no significant difference – at the 5% significance level – can be found in the effort spent to construct a model between novice

modelers trained in an ontology-driven conceptual modeling technique and novice modelers trained

in a traditional conceptual modeling technique.

3.6. Conclusion

While many ontology-driven techniques have been demonstrated to be beneficial compared to traditional conceptual modeling practices, the added value of their application is not always straightforward and there is no clear indication of when it is actually desirable to adopt these

techniques. Therefore, this paper conducted an empirical study that investigated the differences

between adopting a TCM technique and an ODCM technique with the objective to understand and
identify in which modeling situations an ODCM technique can prove beneficial compared to a TCM

technique. More specifically, we trained two groups of novice modelers in each technique
respectively and assigned both groups an identical case description that had to be modeled

with the corresponding technique. We then compared the two modeling approaches by investigating

the quality of the resulting conceptual models, and the amount of effort a modeler had to spend in
order to compose these models. The findings of our empirical study can now confirm that there do

exist meaningful differences. However, since we are performing a quasi-experiment – meaning that

key characteristics may differ between our treatments – we would like to emphasize that other
effects such as the professor teaching the specific course or subject-specific characteristics can

influence the outcome of our experimental results. Taking into account these limitations, our results
revealed that novice modelers applying the ODCM technique arrived at higher quality models

compared to novice modelers applying the TCM technique. More specifically, the results of the

empirical study found that it is advantageous to apply an ODCM technique over a TCM technique when
having to model the more challenging and advanced facets of a certain domain or scenario. This

additional benefit can most probably be explained by the way modelers adopt an ontological
way of thinking when learning and applying an ODCM technique. The patterns and rules

corresponding to this ontological mindset aid modelers in tackling the more challenging aspects of

modeling a certain domain. Moreover, we did not find any significant difference in effort between applying these two techniques. Presumably, this can be attributed to the fact that both our subject groups were trained in their respective technique over a period of several months with regular practice, consequently causing the difference in effort between learning a TCM technique and an ODCM technique to fade.

3.7. Validity

Internal Validity

In order to mitigate threats to validity, we have carefully designed and monitored the conduct of

this experiment. Several experimental standards were also implemented to strengthen the validity
of the experiment: (1) We applied the balancing design principle in order to balance between our

treatments. However, since balancing within the treatments was not possible due to practical
limitations, we emphasize that this is a quasi-experiment, which reduces the internal validity due to

key characteristics that may differ between the two treatments and as a result can have an impact on

the experimental results; (2) subjects were selected from a ‘controlled’ environment, meaning that
they all share a similar background and were novice modelers in the field of conceptual modeling;

(3) none of the subjects had any prior knowledge of either of the modeling techniques that were

applied in the treatments; (4) we inserted a control variable in the experiment to verify that subjects
had a similar understanding of the techniques before commencing the experiment; (5) our modeling

task has been evaluated by a large number of students before the actual experiment took place, in order to ensure the modeling task was sufficiently complex; and finally (6) the scoring of the answers to the competency questions – which are already rather objective in themselves – has been conducted by several authors of this article, and the correlations between these authors' results have been calculated to ascertain that the models were rated as objectively as possible.

External Validity

Concerning external validity, we are well aware that by conducting our experiment on students, we

limit the overall generalizability of our results. However, as stated by (Falessi et al., 2017), using

students as participants remains a valid simplification of reality needed in laboratory contexts. It is

an effective way to advance software engineering theories and technologies but, like any other

aspect of study settings, should be carefully considered during the design, execution, interpretation,

and reporting of an experiment. Consequently, we decided to select students as our test subjects
since they have no prior knowledge of conceptual modeling and can thus be seen as novice modelers

who can be trained in either TCM or ODCM. Furthermore, although we have balanced our number
of subjects across our treatments, an even better approach would be to also balance subjects of the

different universities over each treatment. In our current setup, only one type of technique was taught

at each university. This was due to the practical organization of the classes given at the universities.
We therefore acknowledge that dividing students over the different treatments per university would

have increased the external validity of this study. We would like to remark however, that the nature

of our results quite accurately follows the distinctions that exist between the techniques that have
been applied in this study. For instance, the results of some competency questions can be clearly

attributed to the existence of the ontological patterns that exist in the ODCM technique. Finally, we
have presented the modeling case at the OntoCom workshop at the 36th International Conference

on Conceptual Modeling, in order to have our case and the related competency questions evaluated by several experts in conceptual modeling and ontology. During this workshop, many of the competency questions – both for the content interpretation and especially the content sophistication – were derived from the feedback from these different experts. We also deliberately chose our assignment to deal with the university domain, since students are well acquainted with this domain, so that there would not exist an additional advantage in modeling between the students.

4.

Comprehending 3D and 4D
Ontology-Driven Conceptual
Models: An Empirical Study

4.1. Introduction

Conceptual modeling is the activity of representing aspects of the physical and social world for the

purpose of communication, learning and problem solving among human users (Mylopoulos, 1992).

Especially in enterprises, conceptual modeling has gained much attention for the design, analysis

and development of information systems and business processes. Since a conceptual model is used

as a communication, analysis and documentation tool for domain knowledge and system
requirements, the quality of the model affects the quality of the developed system or process (Endres

& Rombach, 2003). As a way to improve the quality of conceptual models, ontologies were
introduced. In this paper we shall refer to all techniques where ontologies are applied (e.g.

evaluation, analysis or theoretical foundation) to improve either the quality of the conceptual

modeling process or the quality of the conceptual model, as ontology-driven conceptual modeling
(ODCM). An ontology supports the construction of explicit models of conceptualizations in the

form of concrete guidelines for selecting which concepts should be represented as language

constructs and how they should be applied (Guizzardi, Pires, & Sinderen, 2002). Whilst there exist
different types of ontologies – e.g. domain ontologies, task ontologies etc. – this paper will focus

mainly on foundational ontologies. Foundational ontologies have been developed by adapting and
extending a number of theories coming primarily from formal ontology in philosophy, but also

from cognitive science, philosophical logics and linguistics (Guizzardi, 2012). They describe

general concepts like space, time and matter, and are independent of a particular problem or domain
and are frequently applied in the field of ODCM.

Different kinds of foundational ontologies can be adopted in order to perform ODCM. For
instance, based upon the endurantism-perdurantism paradigm, we can differentiate between 3D and

4D ontologies. 3D ontologies view individual objects as three-dimensional, having only spatial parts and wholly existing at each moment of their existence. 4D ontologies, in contrast, see individual objects as four-dimensional, having spatial and temporal parts and existing immutably in

space-time (Hales & Johnson, 2003). While most research in ODCM has been performed with 3D

ontologies (Verdonck & Gailly, 2016b), 4D ontologies have gained more popularity in recent years

(Al Debei, 2012; De Cesare & Geerts, 2012; De Cesare et al., 2015). Although for example the
studies of (De Cesare et al., 2015; Hadar & Soffer, 2006) have already demonstrated that applying

different ontologies can lead to diverse kinds of conceptualizations, there exists little research that
profoundly investigates the impact of applying these different kinds of ontologies on the resulting

models. Furthermore, while ontologies were introduced to increase the overall quality of conceptual

models, past research has mainly emphasized the semantic quality of models, and has spent little
effort in examining the comprehension of models (Moody, 2005; Verdonck et al., 2015).

Therefore, this paper will perform a rigorous investigation of the effects of applying different

kinds of foundational ontologies on the comprehension of their resulting models – also known as
the pragmatic quality (Lindland et al., 1994). To properly measure these effects, we conduct an

empirical study. As the foundation for the further development of this paper, we formulate our
research question as follows: To what degree is the pragmatic quality of ontology-driven models

influenced by the choice of a particular ontology, given a certain understanding of the ontology? In

other words, we are going to investigate the influence of ontology on the interpretation and
understanding of the resulting conceptual models, taking into account the pre-existing knowledge a

person has of the respective ontology. In section 2 of this paper, we will explain the design and
methodology that forms the backbone of our empirical study. In section 3, we will formulate our

hypotheses, where we will perform a thorough investigation and discussion of related research.

Next, we will draft our experimental design to test these hypotheses in section 4. We will then
present the results of our experiment in section 5 and discuss their outcome on the hypotheses. In

order to better understand these results, we will perform a protocol analysis, of which the design

and the results will be discussed in section 6. Next, in section 7, we will discuss the consequences
and implications of the results from both the experiment and the protocol analysis and provide an

answer to our research question. Finally, we will present our conclusion and future research

opportunities in section 8 of this paper.

4.2. Methodology

This empirical research is part of a research project that has been in development for several years.

In a first research effort (Verdonck, Gailly, & Poels, 2014), we theoretically examined the model

variations that resulted from constructing different enterprise models with a 3D and a 4D ontology.
Since the resulting representations differed quite substantially from one another, we decided to

further investigate these differences in an exploratory study (Verdonck & Gailly, 2016a). More
specifically, the exploratory analysis focused on the comprehension and understandability of

ontology-driven models that were developed by either the 3D or 4D ontology. Our results confirmed

that the conceptualizations that were realized by the different ontologies have a considerable impact on the understanding and comprehension of their users. Furthermore, the findings suggested that, depending on the metaphysical characteristics of an ontology, some ontology-driven models are perceived as easier or more difficult to comprehend. Our exploratory study thus indicates that there are differences in interpretation between the ontology-driven models. However, we are still left with the question to what degree the pragmatic quality of these ontology-driven models is influenced by the choice of a particular ontology. Does the choice of an ontology have a significant effect on the

interpretation and comprehension of the resulting models, or is this effect only marginal?

Hence, our previous studies led to the formation of our research question formulated in the
introduction above. Now, in order to formulate a proper answer to this research question, an empirical study is performed based upon the experimental design described in Wohlin et al. (2012). The empirical study is structured as follows: first, we will define our hypotheses that will serve as the

basis for our empirical study. These hypotheses will be based upon related research and previous

research efforts. Next, we will perform our experiment to test these hypotheses. The sole purpose
of the experiment is to collect data to either accept or reject the hypotheses. Finally, in order to

provide additional insights into the results of our experiment, we perform a protocol analysis. Hence,

contrary to the experiment, the purpose of the protocol analysis is not to collect data to either accept

or reject the hypotheses, but instead the analysis aims to collect data to interpret why the hypotheses
were rejected or accepted. This method of conducting our empirical study is illustrated below in Figure 13.

Figure 13: Method of performing the empirical study

4.3. Hypotheses Development

To properly formulate our hypotheses, we will first investigate the different kinds of foundational

ontologies that can be adopted for our empirical comparison and examine how we can accurately

distinguish between these ontologies. Next, we will consider prior research that has already been
conducted to assess the impact of applying different ontologies on the resulting conceptual models.

Finally, based upon these findings, we will formulate our hypotheses, which will serve as the basis for the experimental design of our empirical study.

We can distinguish between different kinds of foundational ontologies by regarding their

metaphysical characteristics. The metaphysical characteristics of an ontology define its

philosophical concepts and structures such as space, time, matter, object, event, action, etc. and how
these concepts interrelate with one another (Herre & Loebe, 2005; Poli, Healy, & Kameas, 2010).

Every ontology has their own metaphysical characteristics and represents real world phenomena in
their specific way. For instance, we can make a distinction between 3D and 4D ontologies. The main

differences between 3D and 4D ontologies can be translated according to the ontological

interpretation of the following metaphysical characteristics:

• The notion of identity and essence defining properties: this characteristic defines how the

ontology assigns a principle of identity to its entities and how the principle of identity deals with
temporary conditions such as roles, states and phases of an element. For example, in a 3D

ontology, a person's childhood and adulthood will be represented as existentially dependent states of the person's entity, while in a 4D ontology, childhood and adulthood are elements that become (temporarily) a part of the person's entity. We would like to remark that the use of the term 'properties' in this section is actually already misplaced, since 4D ontologies do not encompass properties. We apply the term here merely to designate the characteristics or features of a certain entity.

• The perception and endurance of time: defines how entities begin and cease to exist over time,

and how they perceive events and changes over time. In a 4D ontology, objects and relationships

are represented immutably in space-time while 3D ontologies represent these objects and
relationships in the present, with their current traits and characteristics.

• The formation of relations: describes how elements form relationships between different entities

and how entities can become part of each other or separate from one another. In a 3D ontology,

relationships can be distinguished based upon a certain meaning that is derived from the kinds of
entities they link together while in 4D ontologies relationships are defined as tuples that aggregate

any kinds of entities.

We would like to note that the metaphysical characteristics between these two ontologies will only

be explored to a certain degree in this chapter – due to the design of this experiment, and the

inexperience of the subjects concerning ontologies. For example, the notion of identity and essence
defining properties will only be examined in terms of how both ontologies deal with the states,

phases, roles and types that a certain entity can adopt or is derived from. We will not deal with the distinctions concerning how identity principles adapt to change, and which properties are to be considered essential or non-essential. We believe that a more thorough comparison and discussion of the metaphysical characteristics of 3D and 4D ontologies provides a captivating opportunity for future research efforts – both theoretical and empirical.
research efforts – both theoretical as well as empirical.

Both kinds of ontologies have their respective advantages and disadvantages. The advantage of

4D ontologies is their simplicity, since everything is treated as a space-time worm. Further,
since 4D ontologies emphasize the continuity of objects over space-time, they are more suitable to

express time-related concepts (Hales & Johnson, 2003). Their disadvantage is that this space-time
continuity feels rather counterintuitive, since objects and processes are not distinguished and thus

things that are typically regarded as objects have temporal parts (Pease & Niles, 2002). Conversely, the advantage of 3D ontologies is that they capture the intuitive distinction between objects and processes. Characteristically, they view objects only from the present, and assume that the same

object can exist over time and thus may be fully identified at different points in time. While this
view is more intuitive than the 4D ontological view in which objects exist immutably in space-time,

it poses several difficulties concerning the principle of identity of 3D objects. As mentioned by

Krieger et al. (2008), the diachronic identity aspect of 3D ontologies reduces the identification of
essential properties that hold over some period of time. For instance, hair color, weight or height is

obviously not an essential property of a person if we consider extended periods of time. Furthermore,

Pease & Niles (2002) point out that the 3D approach can also generate situations where this approach
contradicts itself. For example, a person John loses an arm at a particular event in time (Ω). Since

an object is wholly present at any moment of its existence, we know that John is identical with John

before Ω, which in turn, is identical to John after Ω. However, according to the indiscernibility of

identicals, an object A is identical with object B only if every property that can be ascribed to A can

be ascribed to B and vice versa. Thus, we arrive at the contradictory situation that, since John has
the property of having an arm before Ω and does not have this property after Ω, it follows that John

after Ω is not identical with John before Ω. Note that 3D ontologies can actually overcome this
contradiction by applying Leibniz's law through the use of sortals. Under such a sortal-based reading of
Leibniz's law, individuals only need to share a special, identity-defining (essential) property in order
to be the same. As mentioned above, however, we will not deal with the distinction between essential
properties and principles of identity between the two ontologies in this chapter.
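To make the argument explicit, the indiscernibility of identicals and the apparent contradiction in the example of John can be stated in first-order terms, as in the following minimal formalization (added for clarity; the notation is ours and does not stem from either ontology):

  a = b \;\rightarrow\; \forall F\,\big(F(a) \leftrightarrow F(b)\big)
      \qquad\text{(indiscernibility of identicals)}

  H(\mathit{John}_{\mathrm{before}\,\Omega}) \;\wedge\; \neg H(\mathit{John}_{\mathrm{after}\,\Omega})
      \qquad\text{(with } H(x) \text{ read as ``$x$ has an arm'')}

Instantiating F with H, these two statements together entail that John before Ω and John after Ω cannot be identical, which contradicts the assumption that the very same object is wholly present at both moments. A sortal-based reading restricts the quantification over F to essential, identity-defining properties, so that losing an arm no longer threatens identity.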

These examples above demonstrate that the metaphysical characteristics of an ontology can be

applied to distinguish between different ontologies. Moreover, depending on these characteristics,


an ontology can emphasize certain elements or structures such as time or identity, which can then

influence the final representation of a conceptualization. The relevance of these influences has also
been demonstrated in the domain of conceptual modeling. For example, in the theoretical research

of Al Debei (2012), the 3D object-role modeling (ORM) paradigm was analytically compared to the

4D object paradigm (OP). The conducted comparison reveals that the OP paradigm can provide
semantically richer representations of phenomena than the ORM paradigm. De Cesare et al.
(2015) and our initial research effort (Verdonck et al., 2014) also theoretically examined the way in
which a 3D ontology and a 4D ontology represent temporal changes, concluding that each of the

ontologies can lead to different representations and interpretations.

Hence, as this prior research demonstrates, it appears that a model will differ depending on the
ontology that has been applied. However, since few research efforts have yet been performed in

this area, limited knowledge exists on the fundamental differences between applying different

ontologies to such models. Therefore, we will perform an empirical comparison, more specifically
between a 3D and a 4D ontology. Our choice for these two kinds of ontologies is twofold. First,

although there exists much theoretical work on both types of ontologies, there has not yet been an

empirical comparison between them. While it is clear that both kinds have their advantages and

disadvantages, and consequently influence the conceptualizations that are realized by these

ontologies, there has not yet been a study to test if these conceptualizations actually lead to
significant differences in pragmatic quality. Second, to

perform a comparison between ontologies, it is rather desirable that these ontologies are
considerably different from one another. Since 3D and 4D ontologies originate from rather different

paradigms, this will result in different kinds of models. In order to clearly distinguish between these

ontologies, we will focus on their metaphysical characteristics, and the influence of these particular
characteristics on the comprehension of the resulting ontology-driven models. Furthermore, as the

knowledge of the respective ontology and its associated paradigm influences the way a person

interprets and understands a model that originates from this ontology, we have to also incorporate
this effect into the comparison of our models.

As such, based upon the related research described in this section and our own previous research
efforts (Verdonck & Gailly, 2016a; Verdonck et al., 2014) we formulate the following three

hypotheses:

• H1: The notion of identity and essence defining properties is more difficult to comprehend with

3D ontology-driven models than with 4D ontology-driven models, given a certain understanding

of the respective ontology. As pointed out by Krieger et al. (2008) and Pease & Niles (2002), 3D

ontologies have more difficulty with the identification of essential properties that hold over some

period of time than 4D ontologies.

• H2: The perception of time is more difficult to comprehend with 3D ontology-driven models than

with 4D ontology-driven models, given a certain understanding of the respective ontology. As

mentioned by Hales & Johnson (2003), 4D ontologies emphasize the continuity of objects over

space-time and should thus be more suitable to represent time-related concepts. Also De Cesare
et al. (2015) illustrated in their research how a 4D ontology is more appropriate to represent

temporality and modality in the form of roles.

• H3: The formation of relations between entities is more difficult to comprehend with 4D

ontology-driven models than with 3D ontology-driven models, given a certain understanding of

the respective ontology. In our exploratory analysis (Verdonck & Gailly, 2016a), feedback from

subjects indicated that the representation of relationships in the 4D ontology-driven models was
difficult to comprehend and felt unnatural to several of them. Similar remarks about the

counterintuitive feeling of the space-time continuity were also mentioned in Pease & Niles

(2002).

As a final remark, we would like to emphasize that these hypotheses are the result of a generalization

of the existing literature concerning the metaphysical characteristics of an ontology. More explicit

and in-depth hypotheses can be developed – regarding for instance the distinction between essential
properties and principles of identity.

4.4. Experimental Design

In this section we will outline our experimental design (Wohlin et al., 2012) in order to test the

hypotheses above. We first define our variables that will be tested. Next, we specify the selection of

our subjects. Further, we explain the choice of our experimental design type, and the instruments
that will be applied in this experiment. Finally, we discuss the internal validity of our experiment.

4.4.1. Variable development

Before designing the experiment, the dependent, independent and control variables should be
selected. Both the independent and dependent variables are derived from our

hypotheses, and consequently from our research question.

Independent Variable

In our study, the independent or affecting variables consist of the ontologies that were chosen

to construct the ontology-driven models. In other words, in our experimental setting, we can control
whether we assign our test subjects 3D or 4D ontology-driven models. More specifically, we

decided to work with UFO (3D ontology) and BORO (4D ontology). Our choice for these two

specific ontologies is driven by various reasons. First, they are both foundational ontologies that are

repeatedly applied in ODCM. Secondly, both ontologies can be differentiated based on their purpose
and their intended use, making them interesting to compare. UFO was developed for analyzing

modeling languages and to improve them. More specifically, the aim of UFO is to improve the
truthfulness to reality (domain appropriateness) and conceptual clarity (comprehensibility

appropriateness) of a modeling language (Guizzardi & Wagner, 2011). BORO on the other hand,

was developed for re-engineering purposes and to integrate systems in a transparent and
straightforward manner (Partridge, 2005). By utilizing business objects, its purpose is to make

systems simpler and functionally richer so that in practice, they would be cheaper to build and

maintain. We will not cover all the concepts of both ontologies in this paper. Instead, for a more
detailed reading of these ontologies, we refer the reader for BORO to (Partridge, 2005; De Cesare

& Partridge, 2016) and for UFO to (Guizzardi, 2005; Guizzardi, Wagner, Almeida, & Guizzardi,
2015).

Dependent variable

As formulated in our hypotheses and research question, we are interested in measuring the
pragmatic quality or the model comprehension of these 3D and 4D ontology-driven models. The

pragmatic quality of a conceptual model can be determined through the understandability or


comprehension of a model by its users. As such, we will focus on the model comprehension of our

conceptualizations to assess their pragmatic quality. Model comprehension can be measured with

several different approaches. In the work of (Moody, 2003), a distinction is made between efficiency
and effectiveness. While effectiveness of a modeling technique is defined by how well it achieves

its objective – in our case model comprehension – efficiency is defined by the effort required to

apply the modeling technique. The former can be measured by output measures evaluating the
quantity and/or quality of the results; the latter can be measured by a variety of input measures such

as time, cost or effort. In our paper, the effectiveness will thus directly measure the model

comprehension, while the efficiency will measure the effort required to comprehend the models.

More specifically, we will measure the effectiveness of the ontology-driven models
with comprehension and problem-solving questions. These output measures are similar to the

research studies of (Burkhardt, Détienne, & Wiedenbeck, 2002; Gemino & Wand, 2005; Vessey &
Conger, 1994), where they also compared the comprehension and understandability of different

kinds of models that were constructed with different development techniques. While the

comprehension questions assess a basic level of model comprehension, the problem-solving


questions are more challenging and target a deeper level of model comprehension from the subjects.

More specifically, in our experiment the comprehension questions serve two purposes: first they

aim to evaluate if a subject fully understood what real-world situation the ontology-driven model is
representing. Second, they assess if the subject correctly interprets the underlying structure and

meaning of the ontology. The problem-solving questions on the other hand assess if a subject did
not only understand the model but can also apply the ontology for defining new concepts, new

relations and by framing new modifications to the ontology-driven models. Since both types of
questions hold only one correct answer, they can be corrected objectively. Each correct answer
corresponds to one positive point. At the end, the total number of points can be compared to assess

the number of correct answers.
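As an illustration of this scoring procedure, the following minimal sketch shows how a per-subject effectiveness score could be computed from multiple-choice answers; the answer keys, responses and variable names are hypothetical and do not correspond to the actual grading script used in the experiment.

# Minimal sketch of the effectiveness scoring: one point per correct
# multiple-choice answer, with separate totals per question type.
# All answer keys and responses below are hypothetical placeholders.

ANSWER_KEY = {
    "comprehension": ["b", "d", "a", "c"],
    "problem_solving": ["a", "c", "c"],
}

def score(responses, key):
    """Return the number of correct answers (one point each)."""
    return sum(given == correct for given, correct in zip(responses, key))

subject_responses = {
    "comprehension": ["b", "d", "c", "c"],
    "problem_solving": ["a", "c", "b"],
}

totals = {qtype: score(subject_responses[qtype], ANSWER_KEY[qtype])
          for qtype in ANSWER_KEY}
print(totals)  # e.g. {'comprehension': 3, 'problem_solving': 2}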

The efficiency of the ontology-driven models will be measured by: (1) assessing the amount of

time needed to understand the models, and (2) the amount of effort a subject had to spend in order

to fulfill the tasks related to the ontology-driven models, here expressed as the ease of interpretation
(EOI). Our EOI questions are based on the perceived ease of understanding as applied in several

research efforts (Evermann & Wand, 2006b; Maes & Poels, 2007). The EOI questions are divided

in such a way that they measure different aspects of perceived effort during the experiment. More
specifically, they assess: (i.) the effort in comprehending a specific assignment; (ii.) the effort spent

to complete the comprehension questions or the problem-solving questions; and (iii.) which

assignment required the most effort to solve.

Control Variable

Since we will be testing users’ comprehension of 3D and 4D ontology-driven models, we need

to be certain that all subjects have an equal understanding of the 3D or 4D models they are dealing
with. Therefore, we need to ensure that the interpretation of a certain model can be linked to the

ontology that was applied to construct the model, and not to a limitation of the subject’s knowledge

of the ontology. As such, we apply a control variable to test every subject’s knowledge and
understanding of the ontology, before the start of the experiment. The results from the subjects that

failed the knowledge test will not be incorporated into the results of the experiment.

4.4.2. Subject Selection

The subjects in our study all had prior education in the domain of conceptual modeling – more

specifically with conceptual modeling in EER, UML and BPMN – and were completing their
Master’s in Business Engineering at the University of Ghent. As stated by (Falessi et al., 2017),

using students as participants remains a valid simplification of reality needed in laboratory contexts.

It is an effective way to advance software engineering theories and technologies but, like any other
aspect of study settings, should be carefully considered during the design, execution, interpretation,

and reporting of an experiment. We decided to select students as our test subjects since they have
no prior knowledge of ontologies and can thus be seen as a ‘tabula rasa’. Consequently, we can train

them with an ontology and a new paradigm without the interference of any pre-used paradigm of

another ontology. This allows us to measure the full impact of the comprehension of the ontology-
driven models. Furthermore, all subjects are of a similar age (i.e., in their mid-twenties).

This specific selection thus leads to a controlled sample of subjects with the same level of

experience in conceptual modeling and with no prior knowledge about any of the ontologies that
were applied in the empirical study. In order to determine the number of subjects for our empirical

study, we base ourselves on the differences in the average model comprehension scores from

the study of (Verdonck & Gailly, 2016a). Based upon the sample size formula below (Shao et al.,

2008), assuming a Type I error (α) of 5% and a Power (1−β, where β is Type II error) of 0.8, we
require a total number of 43 subjects per treatment group. In total, 156 subjects participated in the

study, of which 78 in each treatment. Hence, the number of participants in our experiment is ample
in regard to the required statistical minimum.
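The formula referred to above takes, under standard assumptions, the following form (a reconstruction consistent with, but not necessarily identical to, the exact expression in Shao et al., 2008):

  n \;\ge\; \frac{2\,\sigma^{2}\,\big(z_{1-\alpha/2} + z_{1-\beta}\big)^{2}}{\delta^{2}},
  \qquad \alpha = 0.05,\quad 1-\beta = 0.8,

where \delta denotes the expected difference in mean comprehension scores (taken from Verdonck & Gailly, 2016a) and \sigma their common standard deviation. With z_{0.975} \approx 1.96 and z_{0.80} \approx 0.84, the constant factor equals 2(1.96 + 0.84)^{2} \approx 15.7, which yields the 43 subjects per treatment group reported above for the values of \sigma and \delta observed in that study.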

4.4.3. Experimental Design Type

An experiment consists of a series of tests of different treatments (Wohlin et al., 2012). To get
the desired results to answer our research question, the series of tests must be carefully planned and

designed. Based on our hypotheses, we can derive two treatments: a BORO treatment and a UFO

treatment. The series of tests in each treatment consists of the different models that each emphasize
a specific metaphysical characteristic. Our subjects are thus divided into two different treatments,

where each treatment submits the subjects to similar tests where the comprehension of the models
is measured. We have assigned the subjects randomly to these treatments, and according to the

balancing design principle. By randomizing we mean that subjects will be allocated randomly to

either one of the treatments. By balancing the treatments, we assign an equal number of subjects to
each separate treatment, to arrive at a balanced design. Balancing is desirable since it both simplifies

and strengthens the statistical analysis of the data (Wohlin et al., 2012). The design type of our
experiment is a one-factor, two-treatment design, meaning that we compare the two treatments
against each other on one dependent variable – model comprehension. Each subject also takes

part in only one treatment. Most commonly, the means of the dependent variable for each treatment
are compared. We will thus assign scores to the different measures of the dependent variable – the

comprehension questions, the problem-solving questions, the amount of time required to solve the

task and the ease of interpretation questions – in order to compare our two different treatments
objectively. This aspect will be discussed in more detail in the section below ‘Instrumentation’. As

a summary of this section, Figure 14 gives a more comprehensive overview of the different aspects

of our experimental design.

Figure 14: Overview of Experimental Design
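To make the randomization and balancing principles described above concrete, the following sketch illustrates one way a random, balanced allocation of subjects to the two treatments could be carried out; the subject identifiers and the fixed seed are illustrative and do not reflect the actual assignment procedure used in the experiment.

import random

# Illustrative randomized, balanced allocation of 156 subjects to two
# treatments (78 per group). Identifiers and seed are hypothetical.
subjects = [f"S{i:03d}" for i in range(1, 157)]

rng = random.Random(42)   # fixed seed so the allocation can be reproduced
rng.shuffle(subjects)

half = len(subjects) // 2
treatments = {
    "UFO (3D)": subjects[:half],
    "BORO (4D)": subjects[half:],
}

for name, group in treatments.items():
    print(name, len(group))   # each treatment receives 78 subjects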

4.4.4. Instrumentation

Below, we will describe in detail the different phases a subject goes through when participating
in our experiment, and the kinds of instruments we apply in each of these phases. We would like to

note that all materials – the ontology-driven models, knowledge assessments, comprehension

questions etc. – for the empirical study that have been applied in this experiment can be found at
our online repository at Open Science Framework (OSF)5.

5 https://osf.io/ahfjc/

Training of the ontology and its modeling approach

Each subject is trained in either UFO or BORO, depending on the group or treatment they belong
to. The ontology and its modeling technique are explained by the aid of a description of the ontology.

Both ontologies can be expressed with the UML notation. Each modeling technique consists of a
UML profile that reflects the ontological distinctions prescribed by the respective ontology. For

UFO, this technique is called OntoUML (Guizzardi, 2005), while BORO has the BUML modeling

technique (Partridge, 2005). By expressing both ontologies in the UML notation, we can eliminate
any errors that could occur from applying different modeling notations. Additionally, all our

subjects received previous courses in UML modeling, making the notation quite familiar, and

allowing us to fully measure the comprehension of the models resulting from applying the specific
modeling technique – OntoUML or BUML – without ‘interference’ of the modeling notation. In the

description of the ontologies, there is also a section that briefly describes the different syntax
elements of UML. Each subject could take as much time as they needed to read and understand the

description. The description was drafted together with several small modeling examples in order to

fortify the subjects’ understanding of the respective ontology (see OSF repository).

Control variable: Assessment of subjects’ knowledge

In order to assess if the subjects clearly understood the respective ontology and corresponding

modeling approach, we evaluate each subject's understanding with several written statements

concerning the ontology (see OSF repository). Each of these statements describes a certain
phenomenon or scenario, for which the subject has to choose the correct corresponding element of

the ontology. The subjects can choose from four different multiple-choice answers, with only one

correct answer. In total, ten statements were given for each treatment. All of these statements were
derived from examples from existing literature related to the ontologies. We would like to emphasize

that the statements for both treatments were identical. Every subject was thus submitted to the same
assessment, of course with varying answers depending on the ontology. A student failed the

assessment if they were not able to answer at least 50% of the statements correctly. These subjects could

still participate in the experiment though, so that we can assess how much impact the knowledge of

an ontology has on interpreting and comprehending the ontology-driven models.

Interpretation of the models

After the assessment of the respective ontology, the subjects are submitted to the treatment, where
they received the three assignments and their related models and questions in a sequential manner.

More specifically, we developed three ontology-driven models of both the BORO (Partridge, 2005)

and UFO (Guizzardi, 2005) ontology, which our subjects were tasked to interpret (see OSF
repository). Since we distinguish 3D and 4D ontologies according to their differences in

metaphysical characteristics, we have created each model in such a way that it emphasizes one of

the metaphysical characteristics, as described in section 2. In other words, the difference between
these models or conceptualizations represents how the respective ontologies deal with their

metaphysical characteristics. Table 8 summarizes the metaphysical characteristic each model was
focusing upon and explains the scenario that was being modeled. Each of these models has been

presented to, and approved by, an expert of the respective ontology (Infra: acknowledgements).

Table 8: Metaphysical characteristics of the different models

  Model 1 – The notion of identity and essence-defining properties:
  The first model represents a type of aircraft that is constituted of different kinds of components,
  where one of these components is a type of fuel pump that is part of the aircraft.

  Model 2 – The perception and endurance of time:
  The second model represents a scenario where a company keeps track of its projects and where
  events such as the start and end of a project are recorded.

  Model 3 – The formation of relations between entities:
  The third model represents a layered composition of protocols that are interconnected with each
  other and together form a protocol stack.

The experiment was conducted without any further explanation or description. We would like to
emphasize that for every assignment, the models in UFO and BORO are informationally equivalent.

They thus represent an identical scenario. Next, in order to measure the model comprehension, the

subjects were given a set of comprehension and problem-solving questions that were related to each

model.

Comprehension Question

After the subjects completed the interpretation of a respective model, they had to answer a set of

comprehension questions to assess their interpretation. During the questions, subjects could always
consult the respective model. The comprehension questions are in the form of multiple-choice

questions, with only one correct answer. These questions reviewed a subject’s interpretation of the

model and their comprehension of the concepts and structure of the ontology. All comprehension
questions were the same for both the UFO assignments and the BORO assignments (see OSF

repository). A score was then calculated – independently by an automatic correcting system – depending

on the number of correct answers given by the subject.

Problem-Solving Questions

After the subjects completed the comprehension questions, they had to answer several problem-
solving questions. We would like to note that the subjects received the correct model related to the

respective assignment before answering the problem-solving questions. As such, a subject would not
continue this new task with any wrong assumptions carried over from the preceding comprehension questions. The
problem-solving questions were also in the form of multiple-choice, with only one correct answer.

Again, a score was calculated depending on the number of correct answers given by the subject.

Ease of Interpretation

As a last phase in the experiment, several EOI questions were asked at the end of every
assignment to assess which kinds of questions were perceived as most difficult (see OSF repository).

EOI questions 1 to 4 were asked after completing every assignment. As

mentioned above, our EOI questions measure various aspects of perceived effort during the
experiment. While EOI questions 1-3 assess the overall perceived difficulty of the assignment, EOI

question 4 measures the difference in perceived effort between solving the comprehension questions

and the problem-solving questions per assignment. After all three assignments were completed, two

final EOI questions (5 and 6) were asked to assess which assignment was perceived as the most

cumbersome to solve and how confident the subjects felt in solving the tasks related to the interpretation
of the ontology-driven models.

Internal Validity

In order to avoid any threats to the internal validity, we have carefully designed and monitored

the conduct of this experiment. Several experimental standards were also implemented to strengthen
the validity of the experiment: (1) subjects were selected on a random basis, (2) we applied the

balancing design principle in order to balance our treatments; (3) subjects were selected from a

‘controlled’ environment, meaning that they all share the same background and share similar
experiences with conceptual models; (4) neither of the subjects had any prior knowledge of either

of the ontologies that were applied in the treatments; (5) we inserted a control variable in the
experiment to assess that subjects had a similar understanding of the ontologies before commencing

the experiment; and finally (6) our experimental design has been evaluated by several subjects

before the actual experiment took place, in order to test for any ambiguities or unclear wording in the
assignments, models or questions.

4.5. Results of experimental study

Below we will first discuss the descriptive results related to the knowledge assessment, the
effectiveness and the efficiency of each treatment. By regarding and discussing the descriptive

results, we can get a first indication of the differences that exist between the treatments. However,
based upon the descriptive statistics we cannot conclude if the treatments are significantly different

from one another. Therefore, we will perform further statistical testing to test the hypotheses as

formulated above and examine if significant differences can be deduced.

4.5.1. Descriptive statistics

Knowledge Assessment

When we examine the results of the knowledge assessment test – which was our control variable –

we can conclude that all subjects gained a reasonable understanding of the ontology’s structure and
concepts, with an average score of 85,58% for the BORO treatment and 91,83% for the UFO

treatment. None of the participating subjects gained a score lower than 50%, which would have

excluded the results of the particular subject in the experiment.

Effectiveness of the treatments

Table 9 displays the average scores of the individual assignments, the total score of the
assignments for each treatment and the scores for both comprehension and problem-solving

questions. As for the total average scores of each assignment, the table demonstrates that for
assignment 1, which deals with the notion of identity and essence defining properties, BORO

scored slightly better than the UFO ontology. For assignment 2, which handles the perception of

time, and assignment 3 that focuses on the formation of relations, UFO scored higher compared to
BORO. Especially in assignment two we notice the most substantial difference between the two

ontologies. Overall, when we calculate the total scores for both treatments, the UFO subjects scored
an average of 74,33%, compared to 65,92% for the BORO subjects. When we take a closer look at the

scores of the comprehension questions, we notice that BORO scores higher at both assignment 1

and 3, where especially assignment 1 differs substantially from UFO. On the other hand, UFO scores
considerably higher at assignment 2. Further it would seem that both treatments scored the least on

assignment 1, which focuses on the principle of identity. Overall, UFO (69,84%) scores slightly
higher than BORO (63,83%) regarding the total average score of the comprehension questions.
Additionally, we would like to note the evolution in score results, where UFO subjects take a ‘leap’

after assignment 1, scoring on average 30% higher on assignments 2 and 3. In BORO,
subjects tend to have the same, rather low scores in assignments 1 and 2, only improving their

comprehension scores in assignment 3. When we regard the problem-solving questions, we notice

that overall, UFO scores higher on every assignment compared to BORO. It would thus seem that

UFO subjects had less difficulty answering the problem-solving questions compared to the BORO
subjects. Regarding the difference in the total average score of the assignments, UFO (84,94%)

scores substantially higher than BORO (70,51%). The difference in total score between UFO and
BORO for the problem-solving questions (14%) is also more profound compared to the difference

in score of the comprehension questions (6%). These results seem to suggest that BORO subjects

required considerably more effort in solving the problem-solving questions compared to the UFO subjects.

Table 9: Average scores of the experiment

  Average scores              Treatment   Assignment 1   Assignment 2   Assignment 3   Total score
  Total average scores        BORO        60,36%         59,34%         77,66%         65,92%
                              UFO         58,55%         81,56%         81,14%         74,33%
  Comprehension questions     BORO        54,70%         56,28%         80,51%         63,83%
                              UFO         46,15%         83,76%         79,62%         69,84%
  Problem-solving questions   BORO        73,08%         66,99%         70,51%         70,51%
                              UFO         83,33%         76,60%         84,94%         84,94%

Efficiency of the treatments

As a first measure of the effort required to comprehend and understand the models of the

respective treatment, we take a look at the average time needed to solve the assignments. As
displayed in Table 10, subjects of the BORO treatment (40:02) required only slightly more time in

solving all the assignments compared to the UFO treatment (39:07). However, this rather small

difference does not suggest that BORO subjects perceived the assignments as more difficult, and
therefore required more time than the UFO treatment. This is again confirmed when looking at the

average time needed to complete the individual assignments, where there exists almost no difference
in time between the treatments to solve these assignments.

Table 10: Average amount of time needed to finish the experiment (mm:ss)

            Assignment 1   Assignment 2   Assignment 3   Total time
  BORO      16:55          11:54          11:13          40:02
  UFO       16:12          11:43          11:12          39:07

Figure 15 gives us an overview of the number of answers given to the ease of interpretation

questions. As for the first EOI question, the highest number of subjects that experienced difficulty

in interpreting the model, both for the UFO (37) and BORO (45) treatment, is related to the first
assignment. For the third assignment as well, a majority of subjects rated the task of
interpreting the model as rather difficult. When observing the answers related to the second EOI

question, we notice that most of the subjects of the UFO treatment perceived the comprehension

questions of the third assignment as most difficult, while the BORO group had the most difficulty

with the comprehension questions of the first assignment. On the contrary, a relatively high number
of subjects of the UFO treatment perceived the comprehension questions of the first assignment as

rather easy or neutral to solve. As for the third EOI question, we can clearly see that for all

assignments, the subjects have indicated that the problem-solving questions are perceived as
difficult. In the third assignment, even 20% of the total number of subjects, again for both

treatments, perceived the problem-solving questions as very difficult. It would thus seem that the
problem-solving questions are perceived as more difficult compared to the comprehension

questions.

Figure 15: Results Ease of Interpretation Questions 1-3

This perception is again confirmed when we consider Figure 16 where the results of the fourth

EOI question are displayed. This question clearly demonstrates that for all three assignments and
for both treatments, more than 60% of the subjects perceived the problem-solving questions as most

difficult to solve when compared to the comprehension questions. Ironically the average scores of

the problem-solving questions are considerably higher compared to the scores of the comprehension
questions. A reason for this could be that because of their more difficult nature, a subject had to

think more thoroughly about the answer to a problem-solving question and about the associated structure of
the ontology, leading to a higher number of correct answers.

Figure 16: Results Ease of Interpretation Question 4

As for the EOI questions which were posed at the end of the experiment – represented in Figure

17 – the first question indicates that 49% of the subjects of the BORO group perceived the first

assignment as the most difficult to solve, while 45% of the subjects of the UFO group experienced

the third assignment as most difficult. These results mark a rather clear difference in perception

between the two treatments. As for the last questions, most of the subjects in both treatments
answered that they felt positive about interpreting more models related to the ontology, and

that they believe they reasonably understand the ontology's structure and concepts. We can also note

that considerably more subjects of the BORO treatment (31%) compared to the UFO treatment (15%)
answered negatively to this question, signaling that the concepts and structure of the ontology are still

vague to them. This perception also corresponds to the total average scores as discussed above,
where the UFO scores are higher than the BORO scores.

Figure 17: Results Ease of Interpretation Questions 5-6

4.5.2. Hypotheses Testing

Effectiveness of the treatments

In order to test our hypotheses, we are going to compare if the scores of each assignment between

the two treatments differ significantly. To determine which kind of test we have to apply, we first
examine the distributions of our data – the total individual scores per subject, for each individual

assignment. In order to identify if our data is normally distributed, we performed the Shapiro-Wilk

test (p-value: 0.000), revealing that both the data of the BORO and UFO treatment follow a non-
normal distribution – indicating that we have to analyze our hypotheses with non-parametric tests.

Additionally, we have performed the Kolmogorov-Smirnov test where we also obtained a


significant p-value of 0,000. To compare the differences between our treatments, we have chosen

the Mann-Whitney U test (McKnight & Najab, 2010). This test imposes the following requirements on the data:

(1) dependent variable should be measured at the ordinal or continuous level; (2) independent
variable should consist of two categorical, independent groups; (3) independence of observations

and (4) non-normally distributed data. Since our data meets these requirements, we can adopt the
Mann-Whitney U test. In Table 11 and Table 12 we have displayed the results related to the Mann-

Whitney U test. While Table 11 expresses the mean ranks and the sum of ranks for each assignment

for both the UFO and BORO treatment, Table 12 displays the outcome of the test and the associated

p-values. We test our hypotheses on the 95% confidence interval. Additionally, since our hypotheses

are directional – we test if one treatment scores higher than the other treatment – we regard the one-

tailed significance level.
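As an illustration of this testing procedure, the sketch below shows how the normality check and the one-tailed Mann-Whitney U comparison could be carried out with SciPy; the score arrays are hypothetical placeholders and not our experimental data.

import numpy as np
from scipy import stats

# Hypothetical per-subject assignment scores (placeholders only).
boro_scores = np.array([0.4, 0.6, 0.5, 0.7, 0.6, 0.5, 0.8, 0.4])
ufo_scores = np.array([0.7, 0.9, 0.8, 0.6, 0.9, 0.7, 0.8, 0.9])

# Normality checks: small p-values indicate non-normal data and thus
# motivate a non-parametric test such as the Mann-Whitney U test.
print(stats.shapiro(boro_scores))
print(stats.shapiro(ufo_scores))

# One-tailed (directional) Mann-Whitney U test of the hypothesis that
# UFO scores are higher than BORO scores.
u_stat, p_one_tailed = stats.mannwhitneyu(ufo_scores, boro_scores,
                                          alternative="greater")
print(f"U = {u_stat}, one-tailed p = {p_one_tailed:.3f}")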

As for the first assignment, which is related to the first hypothesis, we notice a higher mean rank

for the BORO treatment (82,86) compared to the UFO treatment (74,14). This difference in mean
rank seems to be in line with the claim of the first hypothesis. When conducting the Mann-Whitney

U test, we retrieve a p-value of 0,113 – meaning that no significant difference can be acknowledged

on the 95% confidence interval between the scores of the UFO and BORO group. In other words,
we reject H1, and therefore cannot confirm that the notion of identity and essence defining

properties is more difficult to comprehend with 3D ontology-driven models than with 4D ontology-

driven models on the 5% significance level.

The second assignment produces a mean rank of 48,91 for the BORO treatment, compared to

a mean rank of 108,09 for the UFO treatment. These mean ranks are the opposite to the claim of our
second hypothesis. The Mann-Whitney U test produces a p-value of 0,000; meaning that there is a

significant difference between the UFO scores and the BORO scores, however in the direction that

the UFO scores are significantly higher than the BORO scores on a 95% confidence interval. We
thus reject H2 and cannot maintain that the perception of time is more difficult to comprehend with 3D

ontology-driven models than with 4D ontology-driven models. In fact, the opposite turns out to be
the case.

Finally, concerning the third assignment, we notice a mean rank of 71,7 for the BORO treatment,

compared to a mean rank of 85,3 for the UFO treatment. These ranks are in line with the assumption
of the third hypothesis. The Mann-Whitney U test produces a p-value of 0,028, confirming the third

hypothesis on the 95% confidence interval. In other words, we accept H3, and confirm that the

formation of relations between entities is more difficult to comprehend with 4D ontology-driven


models than with 3D ontology-driven models.

Table 11: Mann-Whitney U ranks of scores per assignment

  Ranks          Treatment   Mean rank   Sum of ranks
  Assignment 1   BORO        82,86       6463
                 UFO         74,14       5783
  Assignment 2   BORO        48,91       3815
                 UFO         108,09      8431
  Assignment 3   BORO        71,7        5592,5
                 UFO         85,3        6653,5

Table 12: Mann-Whitney U test of scores per assignment

  Test statistics          Assignment 1   Assignment 2   Assignment 3
  Mann-Whitney U           2702           734            2511,5
  Z                        -1,212         -8,217         -1,916
  Asymp. Sig. (2-tailed)   0,226          0,000          0,055
  Asymp. Sig. (1-tailed)   0,113          0,000          0,028

Efficiency of the treatments

Similar to the hypothesis testing above, we are going to compare if the perceived effort to
comprehend the models of each assignment differs significantly between the two treatments. We will

thus investigate if there exist significant differences in the time needed to complete the assignments,
and the answers with respect to the EOI questions. Similarly, we examined the distribution of our

data with the Shapiro-Wilk test, revealing again that our data – time required to complete the

assignments and the EOI results – is non-normally distributed, meaning that we can apply the non-
parametric Mann-Whitney U test.

In Table 13 we have displayed the results of the Mann-Whitney U test related to the time needed
to complete the assignments per treatment, while Table 14 displays the Mann-Whitney U results for
the EOI answers for each treatment. For the EOI results, we have grouped EOI questions one to three

together per assignment, since these questions aim to assess the difficulty of the respective

assignment. The fourth EOI question and the two final EOI questions (5 and 6) asked at the end of the
experiment have been tested separately since they each measure different aspects of perceived effort
as explained above.

Similar to our observations with the descriptive statistics, we notice no significant differences in
the time required to complete the assignments between the two treatments. As for the results

concerning the EOI questions, we notice a significant difference in answers from the EOI questions

assessing the difficulty of assignment 1, and for the fifth EOI question, which aims to assess which
assignment was considered as most difficult. As we observed in the descriptive statistics from our

EOI questions, subjects from the BORO treatment indicated that the first assignment – related to the

notion of identity and essence defining properties – was considered the most difficult to
comprehend, while the UFO treatment indicated that the third assignment – related to the formation

of relations between entities – was the most difficult to solve. The differences between these
perceptions thus seem to be significant, supporting the rejection of H1 and the acceptance of

H3.

Table 13: Mann-Whitney U Test of Time per treatment

  Test statistics          Assignment 1   Assignment 2   Assignment 3
  Mann-Whitney U           2814           3023           2979
  Z                        -0,808         -0,067         -0,223
  Asymp. Sig. (2-tailed)   0,419          0,946          0,823

Table 14: Mann-Whitney U Test of EOI results

  Test statistics          EOI Assign. 1   EOI Assign. 2   EOI Assign. 3   EOI-4    EOI-5    EOI-6
  Mann-Whitney U           2549,5          2730,5          2716,5          2996     2463,5   2666,5
  Z                        -1,971          -1,202          -1,247          -0,202   -2,609   -1,415
  Asymp. Sig. (2-tailed)   0,049           0,229           0,212           0,84     0,009    0,157

4.6. Protocol Analysis

In the first phase of this empirical study, we have conducted an experiment with the purpose of

generating a sufficient amount of data to test our hypotheses. With this data, we have

accepted or rejected the hypotheses. However, in order to provide additional insights into the nature of

our results, we will now conduct a more in-depth analysis – in the form of a protocol analysis. While

the experiment was performed on a larger scale, the protocol analysis will be performed with a
smaller set of subjects, since the goal of the protocol analysis is not to produce data, but to acquire

knowledge on how subjects perceived the experiment (Bera, 2012; Burton-Jones & Meso,
2006; Voluceau & Chesnay, 2001).

4.6.1. Design of Protocol Analysis

A protocol analysis is a research method that elicits verbal reports from research participants, revealing
the mental processes taking place as individuals work on the interpretation of the models.

Subjects are required to verbalize their thought processes and strategies, as well as their

answers to the comprehension, problem solving and EOI questions. These verbal reports and the
progress of the subjects are closely monitored by a researcher guiding the treatment. Hence, we will

perform the protocol analysis on a new set of subjects, in the exact same way as our experiment, but
with the sole purpose of better understanding the outcome of our results. By performing this protocol

analysis, we can observe in which phases of the experiment subjects experience any breakdowns or

difficulties, which allows us to better comprehend how the subjects in the experiment perceived the
assignments. In line with other protocol analysis studies (Bera, 2012; Shanks et al., 2008), the

number of subjects participating in the protocol study was small: a total of six participants. Similar to
the experiment, the subjects all had prior experience in the domain of conceptual modeling and had

no prior knowledge of ontologies. The subjects were evenly distributed over the two treatments,

since a large volume of data is generated even with a small sample size.

4.6.2. Results of Protocol Analysis

Training of the ontology

After reading the UFO description, most subjects reported difficulty understanding the Moments

aspect of the ontology, where the concepts Relators, Modes and Qualities were thought of as rather
troublesome to comprehend. Furthermore, the structure of the overall ontology, and the

interrelations between the ontological concepts were described as rather confusing. This is in sharp

contrast to the BORO ontology, where subjects reported the outline and concepts as very
comprehensible. This ease of comprehension can probably be attributed to the fact that BORO defines

only a few concepts and relations. A few subjects however did report the part-whole relationship as

peculiar, where they wondered how BORO would differentiate between, for example, essential parts
of a certain entity that define the identity of that entity, such as the brain in a human body. The
usage of part-wholes in the representation of time was also considered unfamiliar.

Assignment 1

As for the first assignment, subjects frequently reported difficulty in differentiating between Kinds,
SubKinds, Collections and Categories. Furthermore, answering the comprehension questions

required a substantial amount of the time needed to solve the first assignment. Especially the
comprehension questions that focused on assigning the ontological concept to a class diagram were

perceived as the most demanding kinds of questions. Additionally, the statements concerning the

actual representation of the model were perceived as difficult, especially question 7 focusing on
types and instantiations, where all subjects required more time to solve the question in comparison

with similar questions in the set. Compared to the comprehension questions, the problem-solving

questions were perceived as more feasible. When the subjects were able to see the actual ontological
concepts that were linked to the class diagrams, it was easier for them to create and instantiate new

ontological concepts. This corresponds to the descriptive results of the experiment, where the total

average score of the problem-solving questions (83,33%) was considerably higher compared to the

total average score of the comprehension questions (46,15%).

As for BORO, subjects clearly perceived the first assignment as more challenging than expected
after going through the description. While the differentiation between Tuples, Types and

IndividualTypes was reported as clear during the reading of the description, applying the concepts
in the assignment was less straightforward. When answering the comprehension questions, subjects

expressed a high sense of doubt on the correctness of their answers. Especially the distinction

between Types and IndividualTypes in the model was hard to make. This perceived difficulty
of the principles of identity and essence-defining properties in BORO explains why the first

hypothesis was rejected in the experiment. Next, similar to UFO, the problem-solving questions

were perceived as more doable. Again, this is in line with the results of the experiment, where the
total average score of the comprehension questions (60,36%) was substantially lower compared to

the problem-solving questions (73,08%). Regarding the EOI questions, we see the opposite, where
substantially more subjects of both the UFO and BORO treatment ranked the problem-solving

questions as more difficult than the comprehension questions.

Assignment 2

When submitted to the second assignment, subjects of the UFO treatment reported both the
comprehension and the problem-solving questions as easier to answer compared to the previous

assignment. While there still existed some doubt in assigning the correct ontological concepts to the

corresponding class diagrams, it clearly required less effort than before. Almost none of the subjects
reported any substantial difficulties when answering the comprehension questions related to the

second assignment. Also interpreting the ontological concepts related to time posed no real

difficulty. As for the problem-solving questions, they were deemed somewhat more challenging.
During those questions, subjects did mention that applying the time related concepts (e.g. Quality)

in new scenarios required more effort to solve. The perception of our subjects in the protocol
analysis concerning this assignment also matches the descriptive results from the EOI questions of the

experiment, where more subjects indicated that they perceived the second assignment as easier to

solve compared to the rankings of the first assignment. Likewise, for the results of this assignment,

the average total scores greatly increased compared to the first assignment, with a total average of
83,76% for the comprehension questions and a 76,60% for the problem-solving questions.

Like the UFO subjects, the participants of the BORO treatment also described the second
assignment as considerably easier compared to the first assignment. Subjects required less time to

solve the assignment and reported fewer difficulties when answering both sets of questions. Several

comments however were made concerning the happensIn TupleType, which is used to represent
time, and its relation to the wholePart TupleType and the Event IndividualType. Their specific

relation was not always clear. Nonetheless, after finishing the comprehension questions, subjects

did describe these questions as more feasible than those of the previous assignment, and they
believed they had assigned the correct ontological concepts to the different class diagrams in the

model. However, when given the correct model in the problem-solving questions, subjects noticed
that many of their answers were not correct, and that they had erroneously assigned the ontological

concepts. This is an important observation since it seems that although subjects are convinced they

correctly interpreted the concepts underlying the BORO ontology, they had not yet truly understood
the structure of the ontology. Most of these incorrect answers were associated with interpreting the

happensIn TupleType and the wholePart TupleType, the elements that are used to represent time in
BORO, and that caused some initial doubt when answering the comprehension questions. This

observation accurately explains the rejection of our second hypothesis. Similar to the experiment,

the protocol analysis also indicates that the perception of time appears to be more difficult to
comprehend with 4D ontology-driven models than with 3D ontology-driven models.

Concerning the problem-solving questions, they were reported as more difficult than the

comprehension questions, the main argument being that applying these concepts in new fictitious
situations related to the model was more difficult. In the experiment, subjects received a total

average of 56,28% for the comprehension questions and 66,99% for the problem-solving questions.

When we interpret these results with the perceptions of the protocol analysis, it would seem that

subjects are under the illusion of properly understanding the concepts and structure of the BORO

ontology, while the actual results indicate otherwise. The results of the EOI questions seem to
confirm this assertion, since the second assignment was rated as the least difficult out of all three

assignments.

Assignment 3

In the last assignment, subjects of the UFO treatment displayed little difficulty in assigning the
correct ontological concepts to the respective class diagrams. The distinction between identity-based

concepts such as Kinds, Roles, SubKinds etc. was reported as clear, and also the Relators

representing the relationships in UFO could be easily recognized. Several subjects mentioned that
the previous assignments have significantly increased their insights in the structure of the UFO

ontology. Instead, subjects struggled more in answering the statements of the comprehension
questions, which assess their interpretation of the model. As for the problem-solving questions,

they were reported as rather more difficult compared to the comprehension questions. Most of the

effort to solve the assignment went into answering the problem-solving questions. Subjects
mentioned that they understood the UFO concepts and relations, but that applying the ontology in

new situations was still challenging. Most doubt originated from assigning the mediation and
material relationships in a Relator. Despite being reported as more difficult, subjects did answer

many of the problem-solving questions correctly. The perceptions in our protocol study align with

the results of the experiment, where both the comprehension (79,62%) and the problem-solving
questions (84,94%) gathered rather high scores. Likewise, the subjects of the BORO group

experienced less trouble in assigning the ontological concepts to the class diagrams compared to the

previous models. Similarly, subjects commented that the practice from the previous models aided in
identifying the correct ontological concepts in the current assignment. When answering the

comprehension questions related to the interpretation of the model, almost all subjects mentioned
experiencing problems in identifying which was the client protocol and which the supplier protocol.

For clarification, we have represented a fragment of both the BORO and UFO protocol model in

Figure 18 to illustrate the difference between both. In the BORO model – panel B of the figure – the

distinction between the client and supplier protocol is rendered through a TupleType, where the
client and supplier each take a place in this Tuple. In contrast, UFO represents these client and

supplier protocols as separate class diagrams instead of relations, probably making it easier for UFO
subjects to identify and distinguish between these kinds of protocols. It is perhaps due to such

differences in representation between both ontologies that the formation of relations between entities

is more difficult to comprehend with 4D ontology-driven models than with 3D ontology-driven


models – leading to the acceptance of our third hypothesis.

Next, the problem-solving questions of this assignment were recognized to be rather difficult.

Subjects especially reported the differentiation in BORO between the place1Type or place2Type
and the tuplePlace1 or tuplePlace2 relations as confusing. These different relations are made to

differentiate between the type level and instantiation level of Tuple relations in BORO. Furthermore,
an astute remark was made that the constraint that BORO TupleTypes can only hold two relations would

lead to a plethora of relations in rather large and complex models, and therefore would probably

lead to rather ambiguous models. When we regard the experiment results, we notice a similar
observation, in the sense that the scores of the problem-solving questions (70,51%) are considerably

lower than those of the comprehension questions (80,51%).

Figure 18: Fragments of UFO and BORO models in representing protocols

4.7. Discussion

Our earlier research efforts (Verdonck & Gailly, 2016a; Verdonck et al., 2014) acknowledged

that the conceptualizations realized by different ontologies can have a considerable impact on their
pragmatic quality, but they still left us with the question – the research question of this article – to

which degree the pragmatic quality of these ontology-driven models is influenced by the choice of

a particular ontology. The results of our empirical study now confirm that the choice of an ontology
can lead to significant differences in subjects correctly interpreting and comprehending the model

(effectiveness), as well as in their perception or their effort required to comprehend these models
(efficiency). In this section, we will sum up several derivations to which we can attribute these

variations in model comprehension.

Derivation 1: The paradigm underlying a 4D ontology is more difficult to comprehend than the
paradigm of a 3D ontology. Our first hypothesis assumed that the notion of identity and essence

defining properties would be more difficult to comprehend with 3D ontology-driven models than

with 4D ontology-driven models. Although the total average scores of the 4D treatment were higher

than those of the 3D treatment, the experimental results were not significant. Even more, the
assignment related to the metaphysical characteristic of identity and essence defining properties was

perceived as the most difficult to comprehend by the 4D treatment group. During the protocol
analysis, it was observed that while initially subjects of the 4D treatment did report the structure,

concepts and identity principles of the 4D ontology as simple and easy to understand, this perception

was rapidly disproven when the subjects were tasked to interpret the models. Even until the last
assignment, several subjects remained confused about correctly identifying and distinguishing the

4D ontology concepts. Further questioning during the protocol analysis indicated that the paradigm

underlying the 4D ontology seemed to be the root of this confusion. Viewing individual objects as four-
dimensional, while being composed of spatial and temporal parts often left subjects disoriented.

Especially part-whole relationships that are formed between such four-dimensional objects were
described as counterintuitive. It would thus seem that the disadvantage of the counterintuitive

paradigm, as also noted by (Pease & Niles, 2002), has a greater impact on comprehending the

resulting models than expected. However, subjects of the 3D treatment also encountered
problems with the identity principles that are related to the ontology. In this case, it was not the

paradigm that caused these problems. Instead, during the protocol analysis, subjects reported the

multitude of concepts, and the distinctions between the principles of identity as troublesome.

Notwithstanding these initial struggles, as soon as subjects grasped the exact distinction between

the different concepts, no further issues were noted in later assignments. The paradigm of the 3D
ontology thus seems easier to understand and comprehend than the space-time paradigm associated

with the 4D ontology.

Derivation 2: The notion of time is easier to comprehend with 3D ontology-driven models than
with 4D ontology-driven models. As for the second hypothesis, related to the metaphysical

characteristic of how an ontology dealt with time, it was expected that its perception would be easier

to comprehend with 4D ontology-driven models than with 3D ontology-driven models. However,

our results proved to be quite the contrary. While the subjects of the 3D treatment achieved their

highest scores on the time-related assignment, subjects of the 4D treatment ranked lowest. Our
results also proved to be significant, in the way that the perception of time is easier to interpret with

3D ontology-driven models compared to 4D ontology-driven models. Again, the protocol analysis


indicated that the immutability of objects in space-time seemed to complicate matters for the

subjects in the 4D treatment. For instance, it was observed during the protocol analysis that subjects

reported the most doubt concerning the elements used to represent time – the happensIn TupleType and the wholePart TupleType. The more ‘presently-focused’ paradigm of the 3D

ontology – where objects are viewed only from the present and with the assumption that the same

object can exist over time – appears to be more comprehensible and intuitive. It seems that the diachronic identity problem of 3D ontologies (Krieger et al., 2008) – the difficulty of identifying essential properties that hold over some period of time – is a less prominent disadvantage when interpreting 3D ontology-driven models. Probably, the overall 3D paradigm corresponds more to

our everyday way of thinking, whereas the 4D paradigm with its distinction between space and time,

immutability and part-whole relations feels less natural.
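To illustrate the paradigm difference the subjects struggled with, the sketch below contrasts, under strongly simplified assumptions, a 3D (endurantist) and a 4D (perdurantist) rendering of the same individual. The class and attribute names are our own and do not stem from the experimental material.

from dataclasses import dataclass, field

# 3D view: the whole object exists at every point in time and can simply change its properties.
@dataclass
class Endurant3D:
    name: str
    role: str                         # mutable: identity is preserved while the state changes

    def change_role(self, new_role: str) -> None:
        self.role = new_role

# 4D view: the object is a space-time extent composed of immutable temporal parts.
@dataclass(frozen=True)
class TemporalPart:
    interval: tuple[int, int]         # e.g. the years during which this part exists
    role: str                         # fixed for the whole temporal part

@dataclass
class Perdurant4D:
    name: str
    parts: list[TemporalPart] = field(default_factory=list)

    def extend(self, part: TemporalPart) -> None:
        # 'change' is expressed by adding a new temporal part, never by mutating an existing one
        self.parts.append(part)

alice_3d = Endurant3D("Alice", role="student")
alice_3d.change_role("employee")      # the same object, viewed from the present, now has a new role

alice_4d = Perdurant4D("Alice")
alice_4d.extend(TemporalPart((2015, 2018), role="student"))
alice_4d.extend(TemporalPart((2018, 2022), role="employee"))

In the 4D view, part-whole relations hold between space-time extents (each TemporalPart is a part of the whole Perdurant4D), which is precisely the aspect that subjects described as counterintuitive.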

Derivation 3: The formation of relationships is easier to comprehend with 3D ontology-driven

models than with 4D ontology-driven models. Our last hypothesis focused on the metaphysical
characteristic of forming relations, which were presumed to be easier to comprehend with 3D

ontology-driven models than with 4D ontology-driven models. Similar to the results of (Verdonck

& Gailly, 2016a), the hypothesis was also confirmed by our results where the subjects of the 3D
treatment scored significantly higher than the subjects of the 4D treatment. During the protocol

analysis, subjects of the 4D treatment commented that the structure of the relationships in the model

made it difficult to relate to certain concepts. Subjects of the 3D ontology often judged the
relationship aspect of the ontology as the most difficult to fully comprehend, especially in the

beginning of the experiment. Despite the reported difficulty, subjects could correctly associate the

meaning of the relationships, and the interacting elements. However, while overall scores on the

assignment were considerably high, subjects of the 3D ontology perceived this assignment as the

most difficult to solve.

Derivation 4: The effort to comprehend an ontology-driven model varies substantially between

3D and 4D ontologies, and additionally, can vary heavily depending on the metaphysical
characteristic of the ontology. Furthermore, the perception of a model often does not match the actual interpretation of this model.

Although the assigned models described exactly the same scenario for every treatment, the effort
required to comprehend these models varied greatly between the ontologies. We observed a

significant difference: subjects of the 3D treatment ranked the first assignment as the most difficult, while subjects of the 4D treatment ranked the third assignment as the most troublesome to
solve. Furthermore, depending on the metaphysical characteristic that the assignment was related

to, we noticed differences in perception of difficulty, also between comprehension and problem-
solving questions. The most surprising element, however, is the misalignment between a subject’s

perception of the model, and the actual correctness of its comprehension. As mentioned in the

derivation above, while overall scores were considerably high, subjects of the 3D treatment
perceived the assignment as the most difficult to solve. This mismatch between the perception of

difficulty or ease of solving an assignment and the actual results that were achieved is a recurrent
observation in our experiment. While there were more subjects of the 3D treatment that ranked the

first assignment as easy compared to the subjects of the 4D ontology, the scores were lower for the

3D treatment than for the 4D treatment. Especially the comprehension questions were found to be
easier by the 3D treatment, while they received substantially lower scores on this aspect of the

assignment. Perhaps the greatest disproportion between perception and results was the assignment

related to the time perspective, where 4D subjects perceived the assignment as the easiest to solve,
while their scores were the lowest of all assignments. Based upon our observations in the protocol

analysis, we believe these variations and misalignments can be attributed to the paradigm and the

number of concepts both ontologies adopt. For instance, subjects reported the BORO ontology easy

to comprehend during the description of the ontology because of its low number of concepts and

relationships. As such, subjects often thought that they had applied the correct BORO elements
during the assignment, which after displaying the solution often turned out wrong. On the other

hand, subjects of the 3D approach reported much more difficulties with understanding the numerous
concepts and relationships that UFO holds. Consequently, subjects often doubted the correct

assignment of UFO elements. Hence, the combination of the number of elements in each ontology and the more counterintuitive paradigm of BORO, compared to the more familiar paradigm of UFO, would explain the differences we observe in the misalignment between

the perceptions of a model and the actual interpretation of this model.

Derivation 5: A deep level understanding is more rapidly attained with 3D ontologies than with
4D ontologies. As our last observation, we noticed that the test results for the problem-solving

questions are consistently higher for the 3D treatment than for the 4D treatment. Since the problem-
solving questions aim to assess a deeper level of understanding, these results suggest that the

subjects of the 3D treatment possessed a deeper understanding of the ontology compared to the

subjects’ knowledge of the 4D ontology. This is contrary to the initial reports when subjects
completed the description of the specific ontology. While subjects of the 3D treatment often

described the ontology’s structure as extensive and somewhat complicated, subjects of the 4D
treatment on the contrary described the ontology as easy and accessible to comprehend. It would

thus seem that the 3D ontology initially intimidates a user with its plethora of concepts and relations,

but that a full understanding of the ontology is rather rapidly achieved. Conversely, the 4D
ontology gives a favorable first impression with its simple structure and few concepts but does not

easily facilitate a deeper level of understanding. In order to validate this observation, we have

performed an additional test, where we have compared the total results for all three assignments of
the problem-solving questions between the BORO treatment and the UFO treatment. The results

can be found in Table 15 and Table 16. The results confirm our observation, where the UFO

treatment has significantly higher scores for the problem-solving questions compared to the BORO

treatment, indicating that UFO subjects attained a deeper understanding compared to the BORO

subjects.

Table 15: Mann-Whitney U Ranks of Total Score Problem-Solving Questions

Ranks   Treatment   Mean Rank   Sum of Ranks
Total   BORO        60,43       4713,5
        UFO         96,57       7532,5

Table 16: Mann-Whitney U Test of Total Score Problem-Solving Questions

Test Statistics             Total Score Problem-Solving
Mann-Whitney U              1632,5
Z                           -5,069
Asymp. Sig. (2-tailed)      0,000
Asymp. Sig. (1-tailed)      0,000
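For readers who wish to reproduce this kind of comparison, the snippet below indicates how such a Mann-Whitney U test can be computed with SciPy. The score vectors are purely hypothetical placeholders for the per-subject totals of both treatments and do not reproduce the experimental data.

from scipy.stats import mannwhitneyu

# Hypothetical total problem-solving scores per subject, one list per treatment.
boro_scores = [55, 60, 62, 58, 70, 65, 61, 59]
ufo_scores = [78, 82, 75, 80, 85, 79, 77, 83]

u_statistic, p_two_sided = mannwhitneyu(boro_scores, ufo_scores, alternative="two-sided")
_, p_one_sided = mannwhitneyu(boro_scores, ufo_scores, alternative="less")  # H1: BORO scores are lower

print(f"U = {u_statistic}, p (2-tailed) = {p_two_sided:.3f}, p (1-tailed) = {p_one_sided:.3f}")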

4.8. Conclusion

This paper conducted an empirical study that investigated the influence of an ontology on the
interpretation and understanding of the resulting conceptual models. More specifically, we asked

ourselves to what degree the pragmatic quality of ontology-driven models is influenced by the choice of a particular ontology, given a certain understanding of this ontology.

This paper contributes to the domain of ODCM by demonstrating that there exist significant

differences in the interpretation of ontology-driven conceptual models that were developed by


applying different foundational ontologies. We have linked the pragmatic quality of these

conceptual models to the metaphysical characteristics that compose an ontology. In other words, the

metaphysical characteristics determine the quality of the conceptualizations. Consequently,


researchers or practitioners should be aware of the impact of adopting a specific ontology when

developing conceptual models, and the influence this choice can have on how users interpret these models. We hope that by providing more insights into the importance of this choice, we will allow researchers

to better motivate why certain ontologies are adopted. Further, the empirical nature of this research
article aims to address the lack of empirical research in ODCM (Moody, 2005; Verdonck et al., 2015). Since this research effort is the first that empirically investigates the differences between applying two ontologies, we hope that it will encourage new studies to further investigate and compare other types of ontologies.

Concerning external validity, the authors would like to acknowledge that by conducting our
experiment on students, we limit the overall generalizability of our results. However, as mentioned

by Siau & Rossi (2007), much depends on the type and nature of the experiments. Since it was the

purpose of this study to compare a 3D and a 4D ontology in a ‘tabula rasa’ environment – meaning
that we did not want any of our subjects to possess previous knowledge concerning the ontology – the profile of students is well suited to the nature of such an experiment. Furthermore, we carefully avoided making the tasks or assignments in the experiments unrealistically simple. Each of our assignments was designed to represent real-world scenarios and systems (e.g. protocol

representation). Finally, we would like to note that we have compared ontologies on the basis of their metaphysical characteristics. Another point of view would be to relate them to their specific

purpose or usage. In other words, although the models composed with the BORO ontology were
found to be more difficult to interpret in certain cases compared to the UFO models, we should take

into account that perhaps the BORO models were not meant to be easily interpreted – in order to

facilitate for instance complex re-engineering purposes.

Acknowledgements

We would like to express our sincere gratitude to Chris Partridge, of the BORO Solutions Group,
and Maria das Graças da Silva Teixeira, of the Ontology and Conceptual Modeling Research Group

(NEMO), for their detailed reading and revisions of the respective BORO and UFO ontology-driven
models applied in this research.

5. An Ontological Analysis Framework for Domain-Specific Modeling Languages
5.1. Introduction

Domain-specific modeling languages (DSMLs) are developed for creating models within specific

domains by means of a strongly cohesive set of domain concepts (Henderson-Sellers, 2012). On the

contrary, general-purpose modeling languages (GPML) consist of domain-independent concepts

(e.g. UML, EER or BPMN). As a result, DSMLs enable the rapid modeling of the behavior and/or

structure of applications in well-defined domains (Sprinkle & Karsai, 2004). Different types of
DSMLs have been proposed. Executable DSMLs allow the creation of domain models that can be

transformed into executable code. Visual DSMLs on the other hand describe aspects of the physical
and social world for purposes of human understanding and communication (Mernik, Heering, &

Sloane, 2005). These languages have been developed, for instance, to model different aspects related

to economic reality such as the Architecture for Integrated Information Systems (ARIS) framework
(Scheer, 1998) and value creation processes (Gailly & Poels, 2007a). In this paper, we will focus on

visual DSMLs and henceforth refer to them as DSML.

In order to be effective, a DSML should be sufficiently expressive to represent the domain


concepts that are captured by the intended models. To better fulfill these requirements, ontologies

have been introduced as a theoretical foundation (Wand, Monarchi, Parsons, & Woo, 1995). To keep a broad interpretation, we adopt the characterization of ontologies as described by

Honderich (2006), which defines ontology as “the set of things whose existence is acknowledged

by a particular theory or system of thought”. Ontologies support the construction of explicit models
of conceptualizations in the form of concrete guidelines for selecting which concepts should be

represented as language constructs and how they should be applied (Guizzardi et al., 2002).
Moreover, ontologies can be applied to evaluate the quality of a modeling language and its ability

to describe a certain domain by performing an ontological analysis. An ontological analysis

improves a DSML by: (i) providing a rigorous definition of the constructs of a modeling language
in terms of real-world semantics, (ii) identifying inappropriately defined constructs, and (iii)

recommending language improvements which reduce lack of expressivity, ambiguity, and vagueness

(Almeida & Guizzardi, 2013). We refer to the ontology that analyzes a DSML as the reference

ontology.

Over the last 15 years, a growing number of DSMLs have been analyzed using different types of

reference ontologies. For instance, the integrated process modeling grammar within the ARIS
framework has been evaluated using the Bunge Wand Weber (BWW) ontology by Green &

Rosemann (2000), and the ArchiMate enterprise architecture language has been evaluated using the

Unified Foundational Ontology (UFO) (Azevedo et al., 2015). Other ontological analyses of
DSMLs were also performed on, for example, the RM-ODP language (Almeida, Guizzardi, &

Santos, 2009) and the REA enterprise modeling language (Geerts & McCarthy, 2003).

Notwithstanding the frequent application of ontologies, the overall process of an ontological


analysis remains problematic (Rosemann, Green, & Indulska, 2004), perhaps even more for DSMLs

than for GPMLs. While different kinds of techniques exist to analyze a GPML, only a few consider
DSMLs. Furthermore, an ontological analysis serves multiple purposes. However, there exists no

clear differentiation between these kinds of analyses. Moreover, an ontological analysis can target

different aspects of a DSML. For instance, a DSML can be ontologically analyzed by comparing
the constructs of the language to an ontology, which can induce changes to its syntax and/or

semantics. On the other hand, the domain ontology of a DSML could be analyzed and mapped to a
reference ontology, in order to increase the interoperability with, for example, another DSML.

Clearly, these two analyses serve entirely different purposes and require different kinds of means

in order to achieve the respective purpose. As such, the term ‘ontological analysis’ encompasses a
great variety of different types of purposes, techniques or methods, and can thus be performed in

many different ways, currently without maintaining a clear distinction.

In this paper, we therefore aim to construct a framework that will distinguish the different kinds
of ontological analyses that exist. The benefit of this framework will lie in its ability to differentiate

between the different purposes for analyzing a DSML, and to determine which aspects of a DSML

should be addressed and which kind of method can be implemented, depending on this particular

purpose. In other words, we intend to structure the process of conducting an ontological analysis,

and offer guidelines when analyzing a DSML. In section 2, we will describe the methodology that
is applied in this paper. Section 3 will then formulate the problem definition and the research

objectives. In Section 4, we construct our framework as an answer to the problem definition. Section
5 serves as an assessment of our framework and identifies any shortcomings that still exist. Next,

section 6 addresses these shortcomings and aims to refine or enhance our framework. In section 7

we then provide a discussion of the framework, its application and discuss any limitations. Finally,
in section 8, we present our conclusion and future research.

5.2. Methodology

To construct our framework, we adopt the design science methodology of (Gregor & Hevner, 2013)
and (Hevner et al., 2004). Their research offers a structured approach to conduct and present design-

science research. Gregor and Hevner (2013) differentiate between two main knowledge bases, i.e.

descriptive and prescriptive knowledge. Descriptive knowledge is the “what” knowledge about
natural phenomena and the laws and regularities among phenomena. The researcher draws

appropriately relevant descriptive and propositional knowledge from this base. Prescriptive
knowledge is the “how” knowledge of human-built artifacts. This base allows the researcher to

examine known artifacts and design theories that have been used to solve the same or similar

research problems in the past. Both knowledge bases are investigated for their contributions to the
grounding of the research project.

In this paper, we will extract knowledge from both bases to design, assess and refine our design
artifact (i.e. the framework) in a rigorous manner. First, we will describe various relevant human-

built artifacts, i.e. previously proposed methods to perform an ontological analysis, in order to

identify and formulate our problem definition and research objectives. We thus draw knowledge of
the prescriptive knowledge base to specify the purpose and scope of our design artifact. Next, we

will obtain the required knowledge from the descriptive knowledge base to construct our framework.

Based upon well-accepted theories and classifications, we will design our artifact. Hence, our

framework can be seen as an extension of existing artifacts that are well accepted by the research
community. As a next step, we will assess the applicability of our framework by describing and

classifying existing research from previously performed ontological analyses. As such, we again
apply the prescriptive knowledge base to assess our design artifact. As a last step, based upon this

assessment, we return to the ‘Design/Build’ phase of our framework, and implement the observed

shortcomings in order to further improve our framework. This last step will allow us to refine the
framework, and increase its applicability towards solving the identified problems, and fulfilling our

research objectives. Figure 19 gives a brief summary of the methodology followed in this paper.

Figure 19: Design science methodology followed in this paper

5.3. Formulation of problem definition and research objectives

To clearly describe our problem definition and research objectives, we focus on the prescriptive

knowledge base, in the form of investigating several relevant research articles that describe how to
perform an ontological analysis of a modeling language.

Since the seminal paper of (Wand & Weber, 1993), various ontological analyses have been
conducted on a plethora of modeling languages. They suggest that a theory of representation, based

on the Bunge ontology, can be used to help define and build information systems that contain the

necessary representations of real world constructs. The authors propose three models –

representation, state-tracking and the decomposition model – that make up the representation theory.
It is the representation model that has been most often applied in the analysis of modeling languages.

This model specifies the constructs that are deemed necessary to provide faithful representations of
phenomena and should therefore be included in the modeling language. Specific to the representation model are two mappings to conduct an ontological analysis – a representation

mapping and an interpretation mapping. The representation mapping identifies those ontological
constructs that are directly represented by constructs within the target grammar of the modeling

language. The interpretation mapping begins with each construct in the target grammar of the

modeling language and ‘interprets’ a mapping back to relevant ontological constructs.

Over the years however, criticism on these models increased (March & Allen, 2014; Riemer,

Hovorka, Johnston, & Indulska, 2013), resulting in the proposition of extensions to the BWW
approach or alternative approaches to conduct an ontological analysis. Therefore, to gain a better

overview on these alternative approaches, we have gathered several well-accepted methods for

performing an ontological analysis from the literature study performed by Verdonck et al. (2015).
This literature study has collected various articles that belong to the field of ontology-driven

conceptual modeling. In Table 17 we give an overview of these methods. For example, the research
of Rosemann et al. (2004) introduces a procedural model for ontological analysis to structure its

process and to overcome the individual interpretation that exists during an ontological analysis. The

work of Evermann and Wand (2005) takes a stricter approach, and proposes a method to restrict
the syntax of a modeling language to ensure that only possible configurations of a domain can be

modeled. Their method applies the ontological assumptions of the Bunge ontology and translates

them into constraints on the language metamodel. While both of these articles focus on the BWW
approach, other researchers started to focus on different kinds of methods and ontologies to perform

an ontological analysis. For instance, the framework proposed by (Guizzardi, 2013) describes how

to evaluate and (re)design DSMLs with the use of the foundational ontology UFO. Their approach

systematically evaluates the level of homomorphism between the language of a DSML and that of

a reference ontology. Another alternative is the UEML approach (Harzallah, Berio, & Opdahl,
2012), which provides a mechanism for capturing modeling constructs and their syntax in order to

determine the ontological definitions for each of those modeling constructs. This approach can then
be used to compare or integrate different kinds of DSMLs.

Table 17: Different methods of ontological analysis

Methods
BWW Framework (Y Wand & Weber, 1993)
Design Patterns (R. Falbo et al., 2013)
Method of (Evermann & Wand, 2005b)
Framework of (Guizzardi, 2013)
Method of Conceptual Evaluation & Conceptual Comparison (Milton & Kazmierczak, 2004)
Approach of (R. D. A. Falbo, Guizzardi, & Duarte, 2002)
Reference methodology (Rosemann et al., 2004)
Separation of Reference (Opdahl & Henderson-Sellers, 2004)
Approach of (Tairas, Mernik, & Gray, 2008)
UEML Approach (Harzallah et al., 2012)
Framework of (Walter, Parreiras, & Staab, 2014)

However, many of these methods make no distinction between the different kinds of aspects an

analysis can focus upon. Or in other words, the analysis of a DSML is often treated in the same way
as the analysis of a GPML. Unlike a DSML, the purpose of the modeling constructs of a GPML

is to represent domain-independent concepts. Yet, the principal difference between a DSML and a

GPML is that the former is developed to target a particular domain, and already has a domain
description (or domain ontology) accompanying the language. Hence, when evaluating a DSML,

we must differentiate between ontologically analyzing the modeling language and analyzing the

domain description that accompanies the language. For example, Pereira and Almeida (2014)
analyze the ArchiMate language with the OntoUML Org Ontology (O3), a domain ontology that

describes the organizational domain. In their analysis they identify various deficiencies in the
domain description of ArchiMate and introduce several new organizational concepts to the

metamodel. Consequently, the ontological analysis has led to the reference ontology (1) modifying

the domain description of ArchiMate, and (2) adapting the metamodel of ArchiMate to these

changes. This example demonstrates that during an ontological analysis, the reference ontology can
target different aspects of a DSML. In summary, we can identify the following shortcomings:

• An ontological analysis of a DSML can be performed with various methods and ontologies, for

different purposes and can target different aspects of a DSML. However, there exists no clear

distinction between these different kinds of ontological analyses.

• When performing the ontological analysis of a DSML, there is no differentiation between

analyzing the modeling language or analyzing the domain description or domain ontology that

accompanies the modeling language.

Therefore, we define the objectives of this research as follows: (1) structure the process of ontological
analysis of a DSML; (2) offer a comprehensive view on how the analysis can be performed; and (3)

identify which aspects of a DSML should be targeted depending on the purpose of the analysis.

More specifically, we can translate these objectives to the following research questions:

• RQ1: How have ontological analyses of DSMLs been performed in the past and by the use of

which methods?

• RQ2: Depending on the purpose, how should the ontological analysis of a DSML be performed

and structured?

To fulfill these objectives and answer our research questions, we will construct the framework
based upon the identified problems of ontological analyses of DSMLs. Below, we will explain in

more detail the construction of our framework and how it can be distinguished from existing

research contributions.

5.4. Design of the Ontological Analysis Framework

To develop the framework, we combine several theories from the descriptive knowledge base that

will act as the foundations of our framework. These theories are well recognized and accepted in

the field of ontology and ontological analysis.

As we mentioned above, different types of ontologies are being used to analyze DSMLs, leading

to diverse recommendations and adjustments. Despite these differences, no distinction is being made
between these miscellaneous types of ontological analyses. Therefore, we will distinguish between

the kinds of reference ontologies that can be used to perform an ontological analysis. In the work of
(Guarino, 1998; Scherp et al., 2009), a distinction is made between several types of ontologies based

upon their level of dependence of a particular application or point of view:

• Foundational ontologies describe very general concepts like space, time, matter, object, event,

action, etc., which are independent of a particular problem or domain. Examples are the BWW
ontology, the General Formal Ontology (GFO) or the UFO ontology;

• Core Ontologies provide a precise definition of structural knowledge in a specific field that spans

across different application domains in this field. For example, UFO-S is a core ontology that is

designed to account for a conceptualization of services that is independent of a particular

application domain (Nardi et al., 2015);

• Domain ontologies describe a hierarchical structure of concepts within a specific domain (like

medicine or automobiles) by specializing the concepts introduced in a foundational ontology

(Henderson-Sellers, 2012). Here we can think of ontologies such as the e-Business Model

Ontology in the domain of e-business modeling or the Gene Ontology in the field of biology.

In order to keep a comprehensive overview in the framework, we bundle the different kinds of

ontologies into the term ‘Reference Ontology’. Therefore, in the framework, a reference ontology
can represent a foundational, a core or a domain ontology.

Next, we make a distinction between the different kinds of features of a DSML, based upon the

work of (Aßmann et al., 2006; Guizzardi, 2007). Aßmann et al. (2006) consider models as a

description of a system and its environment for some certain purpose. A metamodel represents and
specifies models, i.e. it describes the valid ingredients of a model. Further, a distinction is made

between meta-meta-models and ontology. While meta-meta-models further represent and specify
meta-models, ontologies are defined as shared and descriptive models, which represent reality by a

set of concepts, their interrelations, and their constraints. Thus, models describe systems,

meta(meta)-models describe models and ontologies describe a domain. Guizzardi (2007) offers a
different kind of view, where a distinction is made between ontology, conceptualizations and

abstractions. While an ontology captures the knowledge of a specific domain, a conceptualization

expresses relevant concepts of this domain. The elements constituting a conceptualization of the
domain are used to articulate abstractions of certain states of affairs in reality. Conceptualizations

and abstractions are intangible entities that only exist in the mind of the user or a community of
users of a language. A modeling language represents the domain concepts of a certain

conceptualization, while the models that originate from this modeling language represent the

domain abstractions.

Based upon these theories we define our framework, as demonstrated in Figure 20. The principal

concepts in our framework are a domain ontology (OD), a reference ontology (OR), a domain
conceptualization (CD) and a metamodel (MM). The OD captures the knowledge of a specific domain.

It has two properties: a vocabulary on entities in the domain and a body of knowledge about the

domain. The CD, similarly to (Guizzardi, 2007), can be seen as a specific selection of concepts and
knowledge from the OD to construct the modeling language of the DSML. The difference between

OD and CD is that while OD captures the knowledge of a whole domain (e.g. business domain), CD

captures the relevant domain concepts required to form the DSML (e.g. business services).
Analogous to (Aßmann et al., 2006), we define the MM of a modeling language as the architectural

semantics of the constructs of the modeling language. As such, MM identifies which elements of CD

must be incorporated in the syntax of the modeling language and defines the rules of behavior

between the syntactical elements. To exemplify these concepts, we can consider the REA modeling

language. The REA domain ontology, as in (Geerts & McCarthy, 2000), can be seen as the OD of the
modeling language. Over the years, several specifications have been added to the OD of REA, such

as the behavioral or policy-level specifications (Geerts & McCarthy, 2006). However, the modeling
language does not necessarily represent all of these concepts and specifications of OD. Instead, the

CD forms the selection of concepts that are represented by the modeling language and that are thus

also represented by the metamodel of REA.

In the framework, we can distinguish three types of analysis activities that can be performed

during an ontological analysis: (1) Integration, (2) Derivation and (3) Projection. The Integration

Activity describes how a reference ontology can integrate or incorporate its own elements and
semantics to the OD, CD or MM of a DSML. For example, an OR could expand the OD with certain

elements from the OR that were not yet present in the OD. It thus ‘lends’ its semantics to the domain
ontology of the DSML. The Derivation Activity does rather the opposite and derives or extracts

certain elements and semantics from the OD, CD or MM to the OR. For instance, certain elements or

structures could be derived from OD to OR. Thus, in this activity OD ‘lends’ its semantics to OR,
which can then be used for example to construct a new domain ontology. Finally, the Projection

Activity distinguishes between the ontological and the conceptual projection. The Ontological
Projection represents the translation of the semantics and domain knowledge from OD to CD. The

Conceptual Projection represents the conceptualization of MM based upon the semantics and

knowledge in CD. For example, an ontological analysis could question the correct translation of the
semantics from CD to MM by closely examining the semantics of CD and comparing them with the

elements in MM. Therefore, the framework also allows an ontological analysis to be performed without OR,

and can thus solely focus on the ‘projections’ of OD to CD or from CD to MM.
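To make the structure of the framework more tangible, the sketch below encodes the three types of activities and their directions between OD, CD, MM and OR as a small Python enumeration. This encoding is our own illustration, not part of the framework itself, and is only meant to show how a concrete ontological analysis can be recorded as an ordered sequence of activities.

from enum import Enum

class Activity(Enum):
    # Integration: the reference ontology (OR) lends its semantics to an aspect of the DSML
    OD_INTEGRATION = "OR -> OD"
    CD_INTEGRATION = "OR -> CD"
    MM_INTEGRATION = "OR -> MM"
    # Derivation: semantics are extracted from an aspect of the DSML into OR
    OD_DERIVATION = "OD -> OR"
    CD_DERIVATION = "CD -> OR"
    MM_DERIVATION = "MM -> OR"
    # Projection: translations within the DSML itself; no reference ontology is required
    ONTOLOGICAL_PROJECTION = "OD -> CD"
    CONCEPTUAL_PROJECTION = "CD -> MM"

# An ontological analysis can then be recorded as an ordered sequence of such activities.
AnalysisSequence = list[Activity]

example_analysis: AnalysisSequence = [Activity.CD_DERIVATION, Activity.CD_INTEGRATION]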

Figure 20: Reference ontological analysis framework

5.5. Assessment of the Ontological Analysis framework

In this section we will apply the prescriptive knowledge base to assess our framework by describing

and classifying existing research from previously performed ontological analyses. As a result, we
wish to (1) identify any possible enhancements to our framework and (2) answer our first research

question, i.e. to gain a better understanding on how previous ontological analyses have been

performed in the past. To the knowledge of the authors, no previous research has yet classified and structured the ontological analyses of DSMLs.

To collect these papers, we relied on the literature set of Verdonck et al. (2015), and skimmed all
articles that performed an ontological analysis on a DSML. This resulted in a collection of 13 research

articles. Next, we also applied a limited form of snowballing, where we searched the references of

the selected papers for any references that were not captured by the original literature set. This led
to the identification of four more articles, leaving us with a total of 17 articles to classify. A list of

these papers can be found in Appendix C. In our collection of data, an ontological analysis of a DSML
was included if the DSML fulfilled the following criteria: (1) the language has a vocabulary on

entities in the domain and (2) encompasses a body of knowledge about the domain, which can be

expressed for example through the relationships between the entities. ArchiMate for instance is a

DSML that fulfills these properties. It holds concepts such as Business Actor that are specifically
defined for the business domain and whose underlying interactions and behavior are described

through their relationships. On the other hand, the i-star language for instance was not incorporated
in our collection of papers since the language holds concepts such as actor, resource or task that can

be applied over various kinds of domains. In order to describe how an ontological analysis of a

DSML has been performed, we first conducted a general classification of these papers according to
the following facets:

• Type of Reference Ontology: to discover which ontology was applied the most;

• Type of DSML: to describe which DSMLs have been most focused upon;

• Type of Methodology: to identify the method that was being implemented.

Next, we performed a second classification that aims to provide a more thorough assessment of
the papers. During this classification, we mapped the different articles onto our framework to

determine which kinds of activities were being performed during an ontological analysis, and more
specifically which aspects of a DSML were most often targeted. An overview of the classification

can be found at http://www.mis.ugent.be/JDM2016.

5.5.1. General classification of selected research articles

In Table 18, we have identified the different reference ontologies that have been used in our

papers. In total, we identified six ontologies, of which the UFO ontology has been applied most
often (8), followed by the BWW ontology (4). Furthermore, we recognized one core ontology and

two domain ontologies. Hence, it seems that foundational ontologies, and more specifically the UFO

and BWW ontologies, are the most popular means to perform an ontological analysis of a DSML.

Table 18: Classification of the reference ontologies

Reference Ontology Number of Papers


Foundational ontology
UFO 8
BWW 4
Sowa Ontology 1
Core Ontology
UFO-S 1
Domain Ontology
Reference Ontology of Business Models 1
OntoUML Org Ontology (O3) 1

Next, in Table 19, we categorized the different kinds of modeling languages that have been

investigated. The three most analyzed languages are the REA business modeling language (6), the
enterprise architecture language ArchiMate (6) and the business organizational language ARIS (4).

It is clear that most of the modeling languages that have been analyzed belong to the business or
enterprise domain.

Table 19: Classification of the DSMLs

DSML Freq.
Resource-Event-Agent (REA) 6
ArchiMate 6
ARIS 4
Reference Model of Open Distributed Processing (RM-ODP) 2
e³value 2
Business Object Model 1
Multiagent-based Integrative Business Modeling Language (MibML) 1

Finally, regarding Table 20 and the applied methods, we observe that the majority of papers

performed an ontological analysis without the support of an actual method or framework. Most

often, a paper would discuss possible interpretations or perform a classification in terms of the
reference ontology, and then discuss the various interpretations to the usage of the language. This

tendency is rather disturbing. As mentioned by Rosemann et al. (2004), these kinds of approaches

have to be considered with great care since they allow an individual interpretation of the researcher

to exists during an analysis. The most frequently applied type of method is that of the BWW

Framework. More specifically, we identified two papers that performed a representation mapping,
and one article that performed an interpretation mapping. An interesting observation is that the

METHONTOLOGY framework (Gómez-Pérez & Rojas-Amaya, 1999) – which is originally


intended as a technique to construct ontologies – was also adopted to analyze and re-engineer the

OD of a DSML.

Table 20: Classification of the type of methods applied in an ontological analysis

Type of Methodology Number of Papers


No Method Specified 12
BWW Framework 3
METHONTOLOGY framework 2

5.5.2. Classification to the Framework

To give an example of how the framework was used to classify these papers, we demonstrate in Figure 21 the classification of the ontological analysis of ARIS by Santos et al. (2013). First, they give an interpretation of the semantics of ARIS in terms of the concepts of UFO, where they identify
any existing ambiguities. We can classify this interpretation as a CD Derivation Activity (1). The

semantics from the CD of ARIS are being extracted to the reference ontology to interpret these

elements according to the UFO elements. Next, based upon these ambiguities, suggested ontological
interpretations in UFO and language recommendations are given. We can identify this as the CD

Integration Activity (2), where the semantics of UFO are integrated into the CD of ARIS in order
to overcome the identified ambiguities. As a last step, these new recommendations are being

introduced in a revised metamodel of ARIS, where the changes in CD are being incorporated.

According to our framework, this corresponds to the Conceptual Projection Activity (3), where the
conceptualization of MM is being adapted based upon the changed semantics and knowledge in CD.

As a remark, since the OD is not part of the ontological analysis, we have surrounded this element

with a dotted line in the figure to indicate its absence. The same holds for activities that were not

applied in the analysis.
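Expressed as an ordered sequence of activities, in the spirit of the sketch given in Section 5.4, this classification of the analysis by Santos et al. (2013) can be written down as follows; plain strings are used to keep the fragment self-contained.

# Classification of the Santos et al. (2013) analysis of ARIS as an activity sequence.
santos_2013_aris_analysis = [
    "CD Derivation",          # (1) ARIS semantics are interpreted in terms of UFO concepts
    "CD Integration",         # (2) UFO-based interpretations and recommendations flow back into the CD of ARIS
    "Conceptual Projection",  # (3) the revised CD is incorporated into a revised ARIS metamodel
]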

Figure 21: Classification example of the analysis of (Santos et al., 2013) to the framework

In Table 21, we have displayed the analysis activities that occur most frequently in our literature

set. The results indicate that most of the ontological analyses tend to focus on the CD of a DSML,
most often in the form of the CD Integration and the CD Derivation Activity. We also observed a

high number of articles conducting the Conceptual Projection Activity — indicating that several
papers investigated the correct ‘translation’ of semantics from the CD to the MM of a DSML. Another

option is that these papers first applied changes to CD and then incorporated these changes into MM.

Further, since not every DSML has an OD, we observed only a limited number of ontological analyses of the OD.

Table 21: Classification of research articles to the framework

Type of Activity Frequency


CD Integration Activity 11
CD Derivation Activity 8
Conceptual Projection 8
Ontological Projection 2
OD Derivation Activity 2
OD Integration Activity 2
MM Integration Activity 1

To truly understand how ontological analyses were performed, we take a closer look at the order
in which these activities took place. In other words, we identify the sequences in which the different

activities were carried out. The majority of the articles in our literature set adhere to similar
sequences in analyzing a DSML. These sequences are displayed below in Figure 22, where each

sequence was performed at least twice or more in an ontological analysis. The first sequence consists

of just a CD Integration activity and examines which kinds of concepts in OR match the concepts of
CD. Sequence two performs the opposite and conducts a CD Derivation activity, matching the

concepts of OR to CD, and examining their similarities and their differences. Contrary to sequences
one and two, the third sequence goes further than only matching concepts of the DSML with the

reference ontology. Instead this sequence first compares the semantics of the CD through the

Derivation activity and then offers recommendations to CD in the form of the Integration activity.
Improvements in the form of recommendations are thus exported from OR to CD. Finally, the fourth

and most complex sequence is even more elaborate. Similarly to the third sequence, a comparison

is first made between the concepts of a DSML and a reference ontology, leading again to
improvements in CD. However, the fourth sequence actually implements these recommendations of

CD to the MM of a DSML through the Conceptual Projection Activity.

Figure 22: Different patterns of ontological analysis

When we map the methods of Table 20 to our four observed sequences, as displayed in Figure
23, we can observe that especially sequence three and four were executed without an actual method.

The BWW framework has been applied twice in the first sequence, in the form of a representation

mapping, and once in the second sequence, in the form of an interpretation mapping.

Figure 23: Comparing types of methodology and patterns of ontological analysis

In summary, our general classification revealed that most ontological analyses were performed

with mainly two kinds of ontologies, i.e. the UFO and BWW ontology. We observed more variety
in the modeling languages that were analyzed, which consisted mostly of business-related

languages. Further we noticed that the majority of the ontological analyses that we have identified

perform rather similar sequences of activities when analyzing a DSML. However, a rather alarming

observation in Table 20 revealed that most of these analyses were conducted without the support of

an actual method. This cannot be attributed to a lack of methodologies — of which we have


described several in Table 17. Neither do we believe that it is the explicit intention of authors to

dismiss a method. Instead, our impression from examining these articles is that authors often mistake
the application of an ontology (and the rendering of an interpretation or recommendation of the

concepts of a DSML) as a method or approach itself. This however – as also mentioned in

(Rosemann et al., 2004) – can only be seen as a single aspect of an ontological analysis, and does
not compose the whole structure of an ontological analysis.

Hence, based upon our observations, we have identified which aspects are still essential to

structure the process of an ontological analysis, and which should therefore be incorporated into our
framework. Based upon these sequences, we will enhance our framework with prescriptive patterns,

which should enable a researcher to identify the required sequence(s) to perform the intended
analysis. More importantly, our framework should be able to suggest one or more methods that can

be applied to execute such a prescriptive pattern and successfully conduct the ontological analysis.

As such, we intend to link these prescriptive patterns and methods to the different purposes a
researcher can have to perform an ontological analysis.

5.6. Refinement of the ontological analysis framework

In this section, we aim to answer our second research question, and describe how the analysis of
the DSML should be performed depending on its purpose. Therefore, we will first investigate the

different purposes of an ontological analysis, and then provide clear guidelines in the form of
prescriptive patterns to perform future ontological analyses.

When we regard the definition of an ontological analysis as given above by (Almeida &

Guizzardi, 2013), and the different kinds of purposes that were expressed in the articles from our
literature set, we can group these purposes into four principal categories: DSML re-engineering,

Ontological Mapping, Ontological Recommendation and DSML Interoperability. Based upon these

purposes, we have then clustered the articles pursuing the same purpose and investigated the different kinds of patterns that were followed in conducting the ontological analysis. As a result,
we determined for every purpose several re-occurring patterns. This enabled us to ‘prescribe’ the

pattern that should be followed, depending on the purpose of the analysis. Figure 24 displays the
different purposes and their corresponding prescriptive patterns, which we will discuss in more

detail below.

1. DSML Re-engineering. First, we can distinguish articles such as (Gailly & Poels, 2007a) that
improve or enhance a DSML by re-engineering the language, to represent new kinds of concepts or

resolve ambiguities that existed in the original concepts. The pattern that corresponds to this kind

of purpose first focuses on the translation of semantics from OD to CD — by performing an


Ontological Projection — and then from CD to MM by conducting a Conceptual Projection. In other

words, the research aims to identify any mistranslations that occurred in transferring or interpreting
the semantics of OD to the MM of a DSML. Note that there is no reference ontology applied during

this analysis.

2. Ontological Mapping. For this objective, articles mainly focused on performing a mapping
to/from the OR from/to either the CD or OD of a DSML, depending on whether the DSML has an OD or not.

Many of these mappings have been performed to interpret or classify a DSML. Examples are the
representation and interpretation mappings of the BWW framework. Here, we can identify for

instance the ontological analyses of (zur Muehlen, Indulska, & Kamp, 2007) and (Zhang, Kishore,

& Ramesh, 2004) that performed such patterns. Most often, the purpose of an Ontological Mapping
is to investigate the semantics of either OD or CD of the DSML and recognize several ambiguities in

the semantics. We would like to note that the second pattern – which focuses on the OD of a DSML

– is most likely to be performed for re-engineering a domain ontology or for more specific purposes
such as ontology merging (Storey, 2017).

3. Ontological Recommendations. The purpose of an ontological recommendation is to first

identify any ambiguities that exist in the concepts of a DSML and then recommend language

improvements to overcome these ambiguities. For this purpose, two kinds of patterns are possible,
one that first targets the OD of a DSML, and then translates the recommendations into MM through

the Ontological and the Conceptual Projection. The other pattern directly targets the CD, often
because no OD of the DSML exists. Especially the last action in these two patterns, implementing

the recommendations into MM, is a decisive step of an ontological recommendation. If an article only analyzes the semantics of the CD or OD of a DSML, without translating these
recommendations into MM, these improvements remain rather abstract, and as a result offer less

added value to the research community applying the respective DSML. Examples of articles

performing these patterns are (Azevedo et al., 2013; Nardi, Falbo, & Almeida, 2014). Finally, we
would like to remark that most of the articles that had no method specified were in fact papers that

ultimately aimed at providing ontological recommendations to a DSML. Often, these articles


performed a combination of the patterns that correspond to this objective.

4. DSML Interoperability. As a last purpose, the ontological interoperability aims to identify the

weaknesses of different DSMLs and compare their similarities in representing phenomena. In
the pattern belonging to ontological interoperability, the semantics are ‘withdrawn’ from the DSML

through the MM, CD or OD Derivation Activity. This can occur with one or multiple of these
Derivation Activities at the same time. An example of an article performing such a pattern is

(Andersson et al., 2006).
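The fragment below summarizes one possible encoding of these four purposes and their prescriptive patterns; it reflects our reading of the prose above and of Figure 24, and is meant as an illustration rather than a definitive specification of the patterns.

# One plausible encoding (our own reading; the authoritative arrows are shown in Figure 24)
# of the prescriptive patterns that correspond to each purpose of an ontological analysis.
prescriptive_patterns = {
    "DSML Re-engineering": [
        ["Ontological Projection", "Conceptual Projection"],            # no reference ontology involved
    ],
    "Ontological Mapping": [
        ["CD Integration"], ["CD Derivation"],                          # mapping via the conceptualization
        ["OD Integration"], ["OD Derivation"],                          # or via the domain ontology, if present
    ],
    "Ontological Recommendation": [
        ["OD Derivation", "OD Integration",
         "Ontological Projection", "Conceptual Projection"],            # recommendations fed in via the OD
        ["CD Derivation", "CD Integration", "Conceptual Projection"],   # or directly via the CD
    ],
    "DSML Interoperability": [
        ["MM Derivation"], ["CD Derivation"], ["OD Derivation"],        # one or more derivations, possibly combined
    ],
}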

Figure 24: Prescriptive patterns for conducting an ontological analysis

We have thus identified several prescriptive patterns in our framework that can serve as

guidelines or instructions to fulfill a certain purpose when analyzing a DSML. However, as we

emphasized in the previous section, many of these patterns were executed without adhering to a
particular method. Therefore, we have associated the methods listed in Table 17 with our four

purposes and their respective patterns. The overview of methods with their corresponding objectives
of the framework can be found below in Table 22. Thus, our framework connects a certain purpose

with one or more prescriptive patterns, which can then be translated into one or more specific

methods. We would like to emphasize that these patterns should not be considered as actual methods
or alternative approaches on how to perform an ontological analysis. Instead, a prescriptive pattern

gives a general description of how an ontological analysis should be conducted, by structuring the

process of the analysis. A method on the other hand explicitly gives a step-by-step approach on how
to actually conduct the analysis.

Table 22: Overview of methods with their corresponding purposes

Type of Analysis                                                                        DSML Re-engineering   Ontological Mapping   Ontological Recommendation   DSML Interoperability
BWW Framework (Y Wand & Weber, 1993)                                                    X
Design Patterns (R. Falbo et al., 2013)                                                 X X
Method of (Evermann & Wand, 2005b)                                                      X X
Framework of (Guizzardi, 2013)                                                          X
Method of Conceptual Evaluation & Conceptual Comparison (Milton & Kazmierczak, 2004)    X X
METHONTOLOGY framework (Gómez-Pérez & Rojas-Amaya, 1999)                                X
Approach of (R. D. A. Falbo et al., 2002)                                               X X
Reference methodology (Rosemann et al., 2004)                                           X X X
Separation of Reference (Opdahl & Henderson-Sellers, 2004)                              X X
Approach of (Tairas et al., 2008)                                                       X
UEML Approach (Harzallah et al., 2012)                                                  X
Framework of (Walter et al., 2014)                                                      X X


5.7. Discussion

The contributions of this paper are twofold. First, we have classified how ontological analyses

of DSMLs have been performed in the past and by the use of which methods (RQ1). Our

classification revealed that most of these analyses were conducted without the support of an actual

method or methodology. As mentioned by Siau and Rossi (2007), different methods fit different

purposes and therefore should be carefully evaluated against one another. Hence, we developed a
framework that aims to facilitate this choice of method by associating these purposes and methods,

allowing a researcher or practitioner to perform an ontological analysis with a corresponding


methodology that was designed to aid in fulfilling this objective (RQ2).

We can think of several applications and real-world situations to which this framework could be

applied. For example, consider a researcher who would like to rigorously investigate and correct
any semantic deficiencies that could exist in a newly developed metamodel for a specific business

domain. By applying our framework, the researcher would identify the purpose of his ontological

analysis with the Ontological Recommendation pattern. As described above, this pattern first
prescribes an ontological mapping of either the CD or the OD, where the identified ambiguities would

then have to be improved in the corresponding metamodel. As a next step, the researcher can pick
his or her preferred method of analysis to conduct this prescriptive pattern. For instance, the

researcher could opt to perform the ontological mapping of the CD with the BWW framework. As

such, an interpretation and representation mapping would interpret the semantics of CD and identify
any possible ontological deficiencies (e.g. construct deficit). Consequently, the researcher can

decide to define several recommendations to overcome these identified deficiencies. These


recommendations could for example be formulated as constraints in the metamodel, limiting the

statements that can be made about the domain. These constraints can be generated by implementing

the approach of (Evermann & Wand, 2005b). We would like to emphasize that a single purpose, or
even a single pattern, can be achieved with different kinds of methods, and these methods should therefore be seen as complementary. Thus, in the case of the researcher, he or she could for instance prefer not to

modify the existing metamodel, and decide to incorporate design patterns into his metamodel, as

proposed in (Falbo et al., 2013).

Finally, we would also like to address the limitations of the framework. First, we acknowledge

that the prescriptive patterns for conducting an ontological analysis still lack a rigorous validation
concerning completeness. Applying these prescriptive patterns to new ontological analyses could

therefore lead to the modification of existing patterns or to the introduction of new patterns or

applications. Second, this paper also lacks an empirical validation of the prescriptive patterns. New
research efforts could therefore focus on investigating the effectiveness and efficiency of applying

these patterns compared to ontological analyses that do not apply them. Finally, we would like to remark that

these prescriptive patterns should not be perceived as methods to conduct an ontological analysis.
Instead, their function is to aid practitioners in finding the correct methods for conducting such an

analysis, and as a result achieve their intended purpose.

5.8. Conclusion

This paper developed a framework to structure the process of conducting an ontological analysis

and offer instructions in the form of prescriptive patterns on how to analyze a DSML. We
constructed our framework based on well-accepted theories and techniques from both the

descriptive and prescriptive knowledge bases. We then classified and described 17 ontological

analyses of DSMLs according to our framework, in order to gain more insights into how an ontological analysis
has been performed in the past, and which techniques were utilized to structure the analysis. We

discovered that only a few analyses actually implement a method for conducting an ontological
analysis. As a result, we identified several patterns that are being executed to perform an analysis

depending on a particular purpose. We then related these purposes and patterns to various

methods and techniques for conducting an ontological analysis. As a result, our framework can aid
researchers with future ontological analyses. The framework allows a researcher with a specific

purpose to recognize the required patterns and types of methods that can be followed in order to

successfully conduct an ontological analysis and consequently achieve his or her intended purpose.

As for future research, we concur with (Siau & Rossi, 2007; Wand & Weber, 2017) that
insights about the strengths and weaknesses of conceptual modeling methods could be obtained

through further methodological contributions and better empirical testing. It is therefore the aim of
the authors to evaluate this framework in a new ontological analysis of several DSMLs – more

specifically the REA and the ArchiMate modeling languages. Additionally, it is our intention to

also apply our framework to the analysis of a newly constructed metamodel for the Business
Architecture domain. With these new evaluations, we hope to further strengthen and refine our

framework.

6. Conclusion

6.1. Research Contributions

In the introduction of this dissertation, we identified several research gaps and shortcomings in the

field of ODCM, and accordingly translated these shortcomings into four principal knowledge

questions. In order to provide a proper answer to these knowledge questions – the research

contributions of this dissertation – we performed four research studies with the aim to generate

knowledge and address the research gaps and shortcomings that were identified. Below, we
summarize the answers to the knowledge questions that were rendered from these studies:

Knowledge Question 1: Which kind of research has been performed over the years in the domain
of ODCM, what is the nature of their research contributions and which is the current state of the

art?

To answer our first knowledge question, we conducted an extensive literature study in the field
of ODCM in the second chapter of this dissertation, which was composed of a systematic mapping

review (SMR) and a systematic literature review (SLR). The results of the SMR identified certain

gaps and trends in the domain of ODCM. For instance, the SMR clearly identified a gap in the kind
of research that is performed in ODCM, where theoretical developments take a much larger share

compared to empirical developments. Additionally, we observed that many articles in our literature
study do not – explicitly – give a specific purpose for their intended conceptual model. Moreover,

we discovered that many researchers are rather ambiguous in defining the specific application of the

ontology and in motivating their choice of ontological theories for the intended purpose. Based upon
these results, we conducted the SLR to gather more evidence on these results. This led to the

identification of five principal research gaps that need more attention and five research opportunities
that could be future areas for improvement in the field of ODCM. The research gaps were: (1) a

shortage of empirical developments compared to the theoretical developments, (2) a lack of

experimental, observational and testing evaluation methods, (3) many articles do not clearly mention
a specific or intended purpose of their model or ontology, (4) similar to the purpose of conceptual

models, many researchers are also vague in defining the specific application of the ontology and in

motivating their choice of ontological theories for the intended purpose, and (5) certain areas in

ODCM still need more research, such as studies that measure how well the learning, interpretation
and understanding of a conceptual representation take place.

Knowledge Question 2: Which are the effects and the principal differences of applying an
ontology-driven conceptual modeling technique compared to a traditional conceptual modeling

technique?

While many ontology-driven techniques have been demonstrated to be beneficial compared to the


traditional conceptual modeling practices, the added value of their application is not always

straightforward. Consequently, there is no clear distinction when it is actually desirable to adopt

these techniques. Therefore, in order to answer our second knowledge question, we conducted an
empirical study in the form of a quasi-experiment that investigated the differences between adopting

a TCM technique and an ODCM technique with the objective to understand and identify in which
modeling situations an ODCM technique can prove beneficial compared to a TCM technique. The

findings of our empirical study can now confirm that there do exist meaningful differences. We

observed that novice modelers applying the ODCM technique arrived at higher quality models
compared to novice modelers applying the TCM technique. More specifically, the results of the

empirical study demonstrated that it is advantageous to apply an ODCM technique over a TCM technique
when having to model the more challenging and advanced facets of a domain or scenario. This

additional benefit can most probably be explained by the way modelers adopt an ontological

way of thinking when learning and applying an ODCM technique. The patterns and rules
corresponding to this ontological mindset aid modelers in tackling the more challenging aspects of

modeling a certain domain. Moreover, we also did not find any significant difference in effort

between applying these two techniques. Presumably, this can be attributed to the fact that both our
subject groups were trained in each respective technique over a period of several months with

regular practice; consequently, the difference in effort between learning a TCM technique and
an ODCM technique fades.
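As an illustration of the kind of comparison underlying these findings, the sketch below assumes that model quality was scored per subject and that the two treatment groups form independent samples; the scores are hypothetical and the non-parametric test shown is only one possible analysis choice, not a reproduction of the study's actual analysis.

```python
# Hedged sketch: hypothetical per-subject quality scores for the advanced modeling
# facets; a non-parametric Mann-Whitney U test compares the two independent groups.
from scipy.stats import mannwhitneyu

tcm_scores = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]    # hypothetical TCM group scores
odcm_scores = [7, 8, 6, 7, 9, 8, 7, 6, 8, 7]   # hypothetical ODCM group scores

statistic, p_value = mannwhitneyu(odcm_scores, tcm_scores, alternative="greater")
print(f"U = {statistic}, p = {p_value:.4f}")
if p_value < 0.05:
    print("ODCM scores are significantly higher on this hypothetical sample.")
```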

Knowledge Question 3: What is the influence of applying different kinds of ontologies on the model
comprehension of the resulting ontology-driven conceptual models?

To answer our third knowledge question, we performed an empirical study that investigated the
influence of an ontology on the interpretation and understanding of the resulting conceptual models.

While earlier research efforts (Verdonck & Gailly, 2016a; Verdonck et al., 2014) acknowledged that

the conceptualizations realized by different ontologies can have a considerable impact on their
model comprehension – i.e. their pragmatic quality – it was still uncertain to which degree the

pragmatic quality of these ontology-driven models is influenced by the choice of a particular

ontology. As a result, the objective of the empirical study was to determine if the influence of the
choice of a particular ontology on the pragmatic quality of the resulting ontology-driven models was

significantly relevant. After performing the study, our findings answer this knowledge
question: the selection of an ontology can lead to significant differences in how correctly subjects
interpret and comprehend the model, as well as in their perceptions and the effort

required to comprehend these models. More specifically, since we linked the comprehension of
these conceptual models to the metaphysical characteristics that compose an ontology, we can state

that it is the metaphysical characteristics of ontologies that determine the quality of the
conceptualizations.

Knowledge Question 4: Depending on the purpose of an ontological analysis, how should this

analysis be performed and structured?

We addressed our last knowledge question by developing a framework to structure the process

of conducting an ontological analysis and offer instructions in the form of prescriptive patterns on

how to analyze a domain specific modeling language (DSML). More specifically, we constructed
our framework based on well-accepted theories and techniques from both the descriptive and

prescriptive knowledge bases. In total, we classified and described 17 ontological analyses that were

performed on DSMLs according to our framework, in order to gain more insight into how an ontological

analysis has been performed in the past, and which techniques were utilized to structure the analysis.
Our classification revealed that most of these analyses were conducted without the support of an

actual method. This cannot be attributed to a lack of methodologies, of which several have been
identified in our study. Neither do we believe that it is the explicit intention of authors to dismiss a

method. Instead, our impression from examining the various articles is that authors often mistake

the application of an ontology – and the rendering of an interpretation or recommendation of the


concepts of a DSML – as a method or approach itself. This however, as also mentioned by

Rosemann et al. (2004), can only be seen as a single aspect of an ontological analysis, and does not

compose the whole structure of an ontological analysis. Thus, based upon the observations from the
classification, we identified which aspects are still essential to structure the process of an ontological

analysis, and which should therefore be incorporated into our framework. Based upon these
aspects, we enhanced our framework with prescriptive patterns in order to identify the required

sequence(s) to perform the intended analysis. More importantly, we then related these purposes and

patterns to various methods and techniques for conducting an ontological analysis.

6.2. Relevance for researchers and practitioners

The relevance of this dissertation concerns both researchers and practitioners in the fields of ODCM

and conceptual modeling. First, the literature study revealed several research gaps in ODCM that
current researchers can take into account. For instance, the study observed that most of the research

which has been performed in the field of ODCM is of a more theoretical nature, therefore raising
the need for more empirical research that tests and validates the theoretical assumptions and efforts

that have been made. Additionally, we observed that many articles in our literature study do not –

explicitly – give a specific purpose for their intended conceptual model. Moreover, we discovered
that many researchers are rather ambiguous in defining the specific application of the ontology and

in motivating their choice of ontological theories for the intended purpose. Concerning research

contributions, the study revealed that these articles mostly consist of improvements focusing on (1)

the semantics and ontological deficiencies of modeling constructs and grammars and (2) assessing
the perceptions of users and modelers of these modeling constructs and grammars, with an overall

shortage of research on the way learning, interpretation and understanding of a conceptual
representation take place. These findings are relevant for researchers in ODCM since they can

focus future research efforts on addressing several of these shortcomings, and since they point to
certain research opportunities within the field that still require further investigation.

Second, the empirical study that compared ODCM with TCM can be rather relevant for both

researchers and practitioners. While the advantages of applying ODCM have already been

demonstrated in various research efforts, it has never been compared before with TCM in a full
empirical study. Moreover, subjects were taught in each of the respective techniques for a long

period of time. The results of this study revealed that novice modelers applying an ODCM technique
have a significant benefit over a TCM technique when modeling the advanced aspects of a domain

and that no additional effort was observed in applying the ODCM technique. While we fully

acknowledge that more empirical evidence is required in order to fully confirm these findings, these
first observations can already be persuasive for practitioners to commence the adoption of ODCM

techniques. Especially for more complicated and elaborated modeling tasks, ODCM can prove to
be beneficial compared to TCM, arriving at higher quality conceptual models. Also, this study – to the
knowledge of the authors – is the first that has taught an ODCM technique for a long period of

time to a large number of subjects. We believe that the observation – where no additional effort was
recorded in applying the ODCM technique compared to the TCM technique – can at least be

partially attributed to the longer period of training. This is a rather relevant observation to take into

account for future empirical studies that are to take place with ODCM techniques.

Third, the findings of the empirical study comparing a 3D and a 4D foundational ontology

revealed that there exist significant differences between adopting these ontologies, based upon their

metaphysical characteristics. More specifically, the study demonstrated that there exist significant

differences in the interpretation of ontology-driven conceptual models that were developed by

applying different foundational ontologies. In other words, the pragmatic quality of these conceptual
models is directly influenced by the metaphysical characteristics that compose an ontology. As such,

the metaphysical characteristics determine the quality of the conceptualizations. Again, we


acknowledge that more empirical research is required in order to further investigate the influence of

ontologies on the resulting conceptual models. However, these first observations already
require researchers and practitioners to be aware of the impact of adopting a specific ontology when
developing conceptual models, and of the influence it can have on the interpretation by its users. In

line with the findings of the literature study – where it was found that researchers are often rather

ambiguous in describing the application and choice of a selected ontology – we hope that by
providing more insight into the importance of the selection of an ontology, we will encourage

researchers to better motivate why certain ontologies are adopted.

Finally, in our last research effort, we developed a framework for performing an ontological

analysis of a DSML in a structured way, with prescriptive patterns in order to identify the required

sequence(s) to perform the intended analysis. More specifically, we related these purposes and
patterns to various different methods and techniques for conducting an ontological analysis. As a

result, our framework can aid researchers with future ontological analyses. Specifically, the
framework enables a researcher with a specific purpose to recognize the required patterns and types

of methods that can be followed in order to successfully conduct an ontological analysis and

consequently achieve his or her intended purpose. Another approach to apply this framework is to
structure previously performed ontological analyses in order to identify which aspects of a DSML

have already been analyzed and with which types of reference ontologies. This approach gives
researchers a clear overview of what kind of research has already been executed, and which facets of
the DSML have yet to be analyzed.

6.3. Research Limitations

For each of our research studies, we adopted a structured and rigorous methodology in order to

arrive at trustworthy results. However, several limitations do exist. While we have discussed the

limitations and validity of each study in depth in the respective chapters of this dissertation, we

would like to emphasize the most profound limitations, more specifically those related to our

empirical studies.

First, we fully acknowledge that the findings of both our empirical studies require additional

empirical evidence before they can be entirely proven and generalized. While we constructed our
experimental design in such a way as to allow a certain degree of generalizability, certain factors such

as the profile of subjects participating, or the nature of the modeling task are experiment-specific.

By performing additional empirical studies with different types of assignments, subjects and
independent/dependent variables, we can increase the diversity and variety of the experimental

results. As such, if future empirical findings prove to be in line with the findings of the studies

performed in this dissertation, we can confirm and generalize our hypotheses with more plausibility.

Perhaps one of the major limitations of both empirical studies is that they were executed with

novice modelers – i.e. students – as subjects. Our motivation for selecting students as subjects is
twofold. First, in our experimental setting we required subjects that were a “tabula rasa”,

meaning that our subjects could have no previous modeling experience or should not be acquainted

with any ontology or ontology-driven modeling technique. Second, the field of ODCM is still rather
immature. As such, it is simply not feasible to select, for instance, 150 practitioners in ODCM with

several years of experience in a certain ontology-driven technique.

More specifically for the empirical study comparing ODCM with TCM, we would like to emphasize

that due to practical limitations we could not balance the students of the two different universities

between the two treatments, e.g. half of the students of Ghent University being trained in TCM and
half in ODCM, and likewise for the students at the University of Prague. Consequently, one group may

substantially differ from the other – for instance due to the students’ specific profile or the teaching

method of the respective professor. Hence, the type of experiment performed in this empirical study

is that of a quasi-experiment. The most important consequence of this design is that our study may suffer
from increased selection bias, meaning that factors other than our independent variable may have

influenced the outcome of our results. As a result, this type of experimental design impacts the
internal validity of our study.

Finally, concerning the empirical study comparing a 3D and a 4D foundational ontology, we

recognize that further empirical studies with additional ontologies are required.
In our study, we compared the BORO and the UFO ontology. Additional research is required that

further investigates the difference between (1) the BORO and UFO ontology; (2) 3D and 4D

ontologies and (3) other types of ontologies such as core and domain ontologies. Finally, we have
also compared these two ontologies according to their metaphysical characteristics. However, due

to the nature of our experiment – where we selected subjects that had no previous knowledge of the
respective ontology – we did not explore the metaphysical differences between both ontologies in

great depth. Therefore, additional experimental settings with preferably more expert subjects are

required to fully generalize the findings of our empirical study.

6.4. Future Research

This dissertation has been structured around four knowledge questions, where we performed several

research efforts to answer these questions as a contribution to the field of ODCM. Based upon these
answers to the knowledge questions, we can now pose three new design problems. These design

problems call for a change, in the form of designing an artifact to solve or improve a problem context
(Wieringa, 2014). Below, we discuss the design problems that are derived from our findings. In the

last paragraph of this section, we also address several future research opportunities that do not

invoke new design problems, but can rather be seen as extensions to the studies that have been
performed in this dissertation.

Design Problem 1.

Our first design problem can be derived from the results of our third knowledge question, where it

was determined that the choice of an ontology can influence the resulting conceptualization and
consequently can have a substantial impact on model comprehension. However, since ontologies

can be applied for several purposes, the next logical step – and herewith the design problem – would
be to design an artifact that (1) fundamentally compares ontologies for their strengths and

weaknesses against certain modeling purposes and relates the characteristics of the ontologies to
specific purposes; and (2) evaluates how the fit of an ontology and the purpose of a conceptual
model impacts the quality of the resulting conceptual model. This artifact could be rendered in the

form of an ontology-driven conceptual modeling framework that incorporates ontologies by means

of metaphysical characteristics and conceptual model purpose by means of conceptual model


purpose requirements.

The main challenges for generating this framework would be to first develop a strong theoretical
foundation that clearly describes what the different modeling requirements are, how the

metaphysical characteristics of an ontology correspond with these modeling requirements, and

which combinations of requirements and metaphysical characteristics are expected to result in


higher quality models. In other words, a first set of relations or connections have to be formed

between ontologies and certain purposes, and the expected positive/negative impact they can have
upon model quality. Secondly, while the first step would form the framework’s foundation, the

proposed relationships between purposes and ontologies are based on current practices and have not

been evaluated. As a consequence, a second step would be to empirically validate if the fit between
metaphysical characteristics and conceptual model purpose requirements indeed has a positive
impact on the quality and, more specifically, on the fitness for purpose of the resulting conceptual model.

Finally, after the relations between ontologies and certain purposes have been validated and
established, the normative aspect of the framework can be developed. More specifically, the

framework should be further enhanced as a means to structure the process of selecting an ontology

according to the requirements of a purpose. By developing a clear set of rules, instructions and best

practices, a practitioner or researcher would be able to apply the framework by selecting for instance

a series of requirements, which the framework would then relate to the best fitting
metaphysical characteristics, and accordingly suggest one or more ontologies that would best suit

these requirements in order to arrive at the highest quality models.
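The sketch below gives a first, purely illustrative impression of what the normative part of such a framework could look like: purpose requirements are mapped to metaphysical characteristics, and candidate ontologies are ranked by how many of the required characteristics they cover. Neither the requirement names nor the ontology profiles are validated; they are assumptions for illustration only.

```python
# Illustrative sketch only: ontology profiles, requirements and their mapping to
# metaphysical characteristics are assumptions and have not been validated.

ONTOLOGY_PROFILES = {
    "UFO":  {"endurantist view", "rich taxonomy of types", "modal distinctions"},
    "BORO": {"perdurantist view", "extensional identity", "re-engineering focus"},
}

REQUIREMENT_TO_CHARACTERISTICS = {
    "track objects through change": {"perdurantist view", "extensional identity"},
    "distinguish kinds and roles":  {"rich taxonomy of types", "modal distinctions"},
}

def suggest(requirements):
    """Rank candidate ontologies by how many required characteristics they cover."""
    needed = set()
    for requirement in requirements:
        needed |= REQUIREMENT_TO_CHARACTERISTICS[requirement]
    scores = {name: len(needed & chars) for name, chars in ONTOLOGY_PROFILES.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(suggest(["distinguish kinds and roles"]))   # e.g. [('UFO', 2), ('BORO', 0)]
```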

Design Problem 2.

The second design problem originates from the results from the second knowledge question and

actually aligns with the design problem addressed above. The research study related to our second
knowledge question found that the quality of a resulting model can be significantly influenced by

the choice of the modeling technique. More specifically, the study demonstrated that applying an

ODCM technique can lead to higher quality models compared to applying a traditional conceptual
modeling technique, especially for the more advanced and challenging aspects of a domain.

Furthermore, no significant differences in effort to create such models were observed between the
use of these techniques. Thus, while it has been demonstrated that ODCM can support a modeler in

constructing higher quality models than a more traditional conceptual modeling approach, a next

step would be to actually implement the resulting model. In other words, while research in ODCM
has been gradually demonstrating the additional benefits of applying ontological theories in the

domain of conceptual modeling, the next step could be to implement the ontology-driven models
into ontology-driven information systems (ODIS), and establish that the additional benefits of

ontology-driven models are also translated to their implemented systems. For instance, one of the

principal purposes of utilizing a conceptual model is for database design and management (Fettke,
2009).

As such, the design problem arises of how an ODCM technique could be applied in order to

facilitate database design and management, eventually resulting in a performant ontology-driven


system. This would require the development of an artifact, for example in the form of a systematic

method, that would structure and aid a designer in developing an ontology-driven system. The first

phase in this method would be to collect the required knowledge related to the domain that has to

be modeled, specify the exact requirements or purpose of the to-be developed model, and select an

appropriate ontology to assist in the process of creating the ontology-driven conceptual model. As such, this phase would
actually apply the framework as described in the design problem above. Next, a foundational

ontology-driven model would be constructed that represents the domain in which the system would
operate. It is important to emphasize that this model would not represent the design of the

information system or the database. Instead, this foundational model aims to gain more insight and

knowledge concerning the environment – or domain – in which the system would operate.
Consequently, a modeler can gain a better overview of which aspects of this domain directly or

indirectly affect the system and moreover anticipate any future changes that could occur and would

alter the operations of the future system. In fact, this step allows a designer to incorporate a certain
degree of evolvability in his or her design, allowing the future system to more easily adapt to

changing requirements. As a third phase, a system-specific conceptual model is created by distilling


the necessary elements from the foundational conceptual model. It is during this phase, that the

model will be created to which the design of the database or system will correspond. Here, the

designer can focus on the operability and simplicity of the system, by making sure that the design
will result in reliable and smooth operations, and by removing as much unnecessary complexity

where possible. Finally, based upon the system-specific model, the design can be implemented in

order to construct the ontology-driven system. It would be highly interesting then to measure the

resulting performance of such an ontology-driven system in comparison with for example a more

conventionally developed system – which could actually be the beginning of new knowledge
questions. Different types of comparisons could take place. For example, it could be measured

how easily human users can work with the system and comprehend it, by for example letting them

construct database queries to access the contained information. Other comparisons could measure
the performance of how efficiently these queries are executed by the respective system, or how the

system would correspond to changing requirements or increasing loads.
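As a rough illustration of the phased method outlined above, the sketch below strings the four phases together with toy data structures; the phase names follow the text, while the ontology selection rule, the ontological categories and the derived schema are assumptions for illustration only.

```python
# Illustrative sketch of the envisioned method; all concrete choices below
# (ontology selection rule, categories, schema derivation) are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class FoundationalModel:
    ontology: str
    concepts: Dict[str, str]   # domain concept -> ontological category

@dataclass
class SystemModel:
    entities: List[str]        # elements distilled for implementation

def phase1_select_ontology(purpose: str) -> str:
    """Select an ontology fit for the stated purpose (cf. Design Problem 1)."""
    return "UFO" if "taxonomy" in purpose else "BORO"   # placeholder rule

def phase2_foundational_model(domain_knowledge: Dict[str, str], ontology: str) -> FoundationalModel:
    """Represent the whole operating environment, not yet the system design."""
    return FoundationalModel(ontology, dict(domain_knowledge))

def phase3_system_model(fm: FoundationalModel, in_scope: Set[str]) -> SystemModel:
    """Distill only the elements the future system actually needs."""
    return SystemModel([c for c in fm.concepts if c in in_scope])

def phase4_schema(sm: SystemModel) -> str:
    """Derive a (trivial) relational schema from the system-specific model."""
    return "\n".join(f"CREATE TABLE {e} (id INTEGER PRIMARY KEY);" for e in sm.entities)

if __name__ == "__main__":
    knowledge = {"Customer": "kind", "Order": "relator", "Discount": "mode"}
    ontology = phase1_select_ontology("taxonomy of customer types")
    foundational = phase2_foundational_model(knowledge, ontology)
    system = phase3_system_model(foundational, in_scope={"Customer", "Order"})
    print(phase4_schema(system))
```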

Design Problem 3.

Our last design problem continues the research efforts that have been conducted concerning our

fourth knowledge question. To methodologically structure the process of conducting an ontological


analysis, we developed a framework that offers instructions in the form of prescriptive patterns on

how to analyze a domain-specific modeling language (DSML). These purposes and patterns were
related to various methods and techniques with the aim of aiding researchers in
selecting an adequate method for their specific ontological analysis. This framework was developed

through the iteration of a rigor cycle and a first design cycle, where we respectively founded the
framework on the existing knowledge base of ODCM and generated an artifact – the framework –

to address the identified problem. Now, the design problem arises to evaluate the framework – as

such initiating a second iteration of the design cycle. More specifically, the next iteration could aim
to ‘populate’ the framework, meaning that new ontological analyses would be performed with the

assistance of the framework, in order to evaluate if the prescriptive patterns and methods suggested
offer the expected added-value in the process of conducting an ontological analysis. In other words,

by applying the framework in the environment and application domain of ODCM, we can identify

any existing weaknesses or misconceptions in the framework. For instance, it is possible that certain
patterns during the application of the framework appear to not match certain methods for conducting

an ontological analysis with a certain purpose. This would require a re-evaluation and re-assignment
of prescriptive patterns, methods and purposes. Another option could be that through its application,

the framework gets populated with new patterns, purposes or methods that were not previously

recognized in our theoretical review of previously performed ontological analyses.
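A very small sketch of how such 'population' of the framework could be supported is given below: each new analysis is registered against the pattern and method it used, and combinations not yet anticipated by the framework are flagged for re-evaluation. The pattern and method names are illustrative assumptions, not the framework's actual catalogue.

```python
# Illustrative sketch; pattern and method names are assumptions, not the
# framework's actual catalogue.

FRAMEWORK = {  # pattern -> methods currently associated with it
    "Ontological Recommendation": {"BWW representation mapping", "UFO-based redesign"},
    "Ontological Interpretation": {"BWW interpretation mapping"},
}

registry = []  # populated as the framework is applied to new analyses

def register(dsml: str, pattern: str, method: str) -> None:
    """Record a new ontological analysis and flag unanticipated combinations."""
    registry.append({"dsml": dsml, "pattern": pattern, "method": method})
    if method not in FRAMEWORK.get(pattern, set()):
        print(f"Re-evaluate: '{method}' is not yet associated with '{pattern}'.")

register("ArchiMate", "Ontological Recommendation", "UFO-based redesign")
register("REA", "Ontological Recommendation", "ontology design patterns")
```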

Additional Future Research Opportunities.

As the closing paragraph of this section, we would like to address several last research

opportunities that originate from the studies performed in this dissertation. First, concerning the
empirical study comparing the TCM and ODCM techniques, future research could perform

additional testing, for instance by focusing on different modeling techniques or by adopting different

modeling assignments. In our case, we specifically compared the EER modeling technique with the

OntoUML technique. Future research efforts could involve different traditional modeling
techniques – e.g. UML, BPMN, ArchiMate – or techniques based upon other ontologies
– e.g. BWW, BORO, DOLCE, etc. Furthermore, additional studies could be performed where the
experimental design is a full experiment – i.e. where subjects are also balanced between treatments.

Finally, regarding the empirical study comparing 3D and 4D ontologies, additional studies could

focus on further empirically evaluating the observed differences between various types of
ontologies, or on exploring the distinctions in metaphysical characteristics between ontologies. For

instance, instead of evaluating between the BORO and UFO ontology, future research efforts could

investigate the observed differences between other foundational ontologies such as for example
BWW and UFO. Moreover, it could also be interesting to test the differences between other types

of ontologies such as core or domain ontologies. Lastly, a great deal of research can still be
performed – both theoretical and empirical – on how ontologies differ from each other with
respect to their metaphysical characteristics. While our empirical study did not investigate the

differences in metaphysical characteristics in great depth due to the inexperience of our subjects, it
would be particularly interesting to explore the metaphysical characteristics of various ontologies

with more expert subjects.

References
Al Debei, M. M. (2012). Conceptual Modelling and the Quality of Ontologies: Endurantism Vs
Perdurantism. International Journal of Database Management Systems, 4(3), 1–19.
https://doi.org/10.5121/ijdms.2012.4301
Almeida, J. P. a., & Guizzardi, G. (2013). An ontological analysis of the notion of community in the RM-
ODP enterprise language. Computer Standards & Interfaces, 35(3), 257–268.
Almeida, J. P. a., Guizzardi, G., & Santos, P. S. (2009). Applying and extending a semantic foundation for
role-related concepts in enterprise modelling. Enterprise Information Systems, 3(3), 253–277.
https://doi.org/10.1080/17517570903046292
Andersson, B., Bergholtz, M., Edirisuriya, A., Ilayperuma, T., Abels, S., Hahn, A., … Weigand, H. (2006).
Towards a Reference Ontology for Business Models. In International Conference on Conceptual
Modeling (Vol. 4215, pp. 482–496). https://doi.org/10.1007/11901181
Ashenhurst, R. (1996). Ontological aspects of information modeling. Minds and Machines, 6(3), 287–394.
https://doi.org/10.1007/BF00729802
Aßmann, U., Zschaler, S., & Wagner, G. (2006). Ontologies, meta-models, and the model-driven paradigm.
In Ontologies for software engineering and software technology (pp. 249–273). Springer.
Azevedo, C., Almeida, J. P. a, Van Sinderen, M., Quartel, D., & Guizzardi, G. (2011). An ontology-based
semantics for the motivation extension to archimate. In EDOC (pp. 25–34).
https://doi.org/10.1109/EDOC.2011.29
Azevedo, C., Iacob, M.-E. E., Almeida, J. P. A. J. P. a, Van Sinderen, M., Pires, L. F. L. F., & Guizzardi,
G. (2015). Modeling resources and capabilities in enterprise architecture: A well-founded ontology-
based proposal for ArchiMate. Information Systems, 54, 235–262.
https://doi.org/10.1109/EDOC.2013.14
Azevedo, Iacob, M.-E., Almeida, J. P. A., van Sinderen, M., Ferreira Pires, L., & Guizzardi, G. (2013). An
Ontology-Based Well-Founded Proposal for Modeling Resources and Capabilities in ArchiMate. In
EDOC (pp. 39–48). https://doi.org/10.1109/EDOC.2013.14
Bandara, W., Miskon, S., & Fielt, E. (2011). A systematic, tool-supported method for conducting literature
reviews in information systems. In Proceedings of the19th European Conference on Information
Systems (ECIS 2011) (p. 221).
Baskerville, R. L., Kaul, M., & Storey, V. C. (2015). Genres of Inquiry in Design-Science Research:
Justification and Evaluation of Knowledge Production. MIS Quarterly Quarterly, 39(3), 541–564.
Batra, D., & Marakas, G. M. (1995). Conceptual data modelling in theory and practice. European Journal
of Information Systems, 4(3), 185–193. https://doi.org/10.1057/ejis.1995.21
Benbasat, I., & Zmud, R. W. (1999). Empirical Research in Information Systems : the Practice of
Relevance. MIS Quarterly, 23(1), 3–16.
Bera, P. (2012). Analyzing the Cognitive Difficulties for Developing and Using UML Class Diagrams for
Domain Understanding. Journal of Database Management, 23(3), 1–29.
https://doi.org/10.4018/jdm.2012070101
Bera, P., Burton-Jones, A., & Wand, Y. (2009). the Effect of Domain Familiarity on Modelling Roles: an
Empirical Study. PACIS 2009 Proceedings, 110.

Bera, P., & Evermann, J. (2012). Guidelines for using UML association classes and their effect on domain
understanding in requirements engineering. Requirements Engineering, 19(1), 63–80.
https://doi.org/10.1007/s00766-012-0159-y
Bittner, T., Donnelly, M., & Winter, S. (2005). Ontology and semantic interoperability. Large-Scale 3D
Data Integration: Challenges and Opportunities, 139–160.
Bourgeois, D. (2014). Information Systems for business and beyond. The Saylor Foundation.
Brereton, P., Kitchenham, B. a., Budgen, D., Turner, M., & Khalil, M. (2007). Lessons from applying the
systematic literature review process within the software engineering domain. Journal of Systems and
Software, 80(4), 571–583. https://doi.org/10.1016/j.jss.2006.07.009
Buder, J., & Felden, C. (2011). Ontological Analysis of Value Models. In ECIS 2011 Proceedings.
Bunge, M. (1977). Treatise on basic philosophy: Volume 3: Ontology I: The furniture of the World Reidel.
Dordrecht, Holland.
Burkhardt, J. M., Détienne, F., & Wiedenbeck, S. (2002). Object-oriented program comprehension: Effect
of expertise, task and phase. Empirical Software Engineering, 7(2), 115–156.
https://doi.org/10.1023/A:1015297914742
Burton-Jones, A., Clarke, R., Lazarenko, K., & Weber, R. (2012). Is use of optional attributes and
associations in conceptual modeling always problematic? Theory and empirical tests. In International
Conference on Information Systems, ICIS 2012 (Vol. 4, pp. 3041–3056).
Burton-Jones, A., & Meso, P. P. N. (2006). Conceptualizing systems for understanding: An empirical test
of decomposition principles in object-oriented analysis. Information Systems Research, 17(1), 38–60.
https://doi.org/10.1287/isre.1050.0079
Burton-Jones, A., & Weber, R. (1999). Understanding relationships with attributes in entity-relationship
diagrams. ICIS ’99 Proceedings. Retrieved from http://dl.acm.org/citation.cfm?id=352946
Chen, P. P.-C. (1976). The Entity-Relationship Unified View of Data Model - Toward a Unified View of
Data. ACM Transactions on Database Systems, 1(1), 9–36.
https://doi.org/http://doi.acm.org/10.1145/320434.320440
Chisholm, R. M. (1989). On metaphysics (Vol. 115). U of Minnesota Press.
Chisholm, R. M. (1996). A realistic theory of categories: An essay on ontology. Cambridge University
Press.
Clarke, R., Burton-jones, A., & Weber, R. (2013). Improving the semantics of conceptual modelling
grammars : a new perspective on an old problem. Thirty-Fourth International Conference on
Information Systems, 1–17.
Cockcroft, S. (2005). Ontological Clarity and Comprehension in Health Data Models. Eleventh Americas
Conference on Information Systems, 2811–2822.
Daga, A., Cesare, S. De, Lycett, M., Partridge, C., Zhang, Z., Wang, S., … Partridge, C. (2005). An
Ontological Approach for Recovering Legacy Business Content. Proceedings of the 38th Annual
Hawaii International Conference on System Sciences, 0(C), 1–9.
https://doi.org/10.1109/HICSS.2005.94
Davies, I., Green, P., Rosemann, M., Indulska, M., & Gallo, S. (2006). How do practitioners use conceptual
modeling in practice? Data & Knowledge Engineering, 58(3), 358–380.
https://doi.org/10.1016/j.datak.2005.07.007
Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319–340.
De Cesare, S., & Geerts, G. L. (2012). Toward a perdurantist ontology of contracts. Lecture Notes in
Business Information Processing, 112 LNBIP, 85–96. https://doi.org/10.1007/978-3-642-31069-0_7
De Cesare, S., Henderson-Sellers, B., Partridge, C., & Lycett, M. (2015). Improving Model Quality
Through Foundational Ontologies: Two Contrasting Approaches to the Representation of Roles. In
ER 2015 (Vol. 1, pp. 304–314). https://doi.org/10.1007/978-3-319-25747-1_30
De Cesare, S., & Partridge, C. (2016). BORO as a Foundation to Enterprise Ontology. Journal of
Information Systems, 30(2), 83–112. https://doi.org/10.2308/isys-51428
Dybå, T., Dingsøyr, T., & Hanssen, G. K. (2007). Applying systematic reviews to diverse study types: An
experience report. Proceedings - 1st International Symposium on Empirical Software Engineering and
Measurement, ESEM 2007, (7465), 225–234. https://doi.org/10.1109/ESEM.2007.21
Elmasri, R., & Navathe, S. B. (2015). Fundamentals of database systems. Pearson.
Endres, A., & Rombach, H. D. (2003). A handbook of software and systems engineering: Empirical
observations, laws, and theories. Pearson Education.
Evermann. (2005). The Association Construct in Conceptual Modelling – An Analysis Using the Bunge
Ontological Model. In Advanced Information Systems Engineering (Vol. 3520, pp. 33–47).
https://doi.org/10.1007/11431855_4
Evermann, J., & Wand, Y. (2005a). Ontology based object-oriented domain modelling: fundamental
concepts. Requirements Engineering, 10(2), 146–160. https://doi.org/10.1007/s00766-004-0208-2
Evermann, J., & Wand, Y. (2005b). Toward formalizing domain modeling semantics in language syntax.
IEEE Transactions on Software Engineering, 31(1), 21–37.
Evermann, J., & Wand, Y. (2006a). Ontological modeling rules for UML: An empirical assessment.
Journal of Computer Information Systems, 46(SI), 14–29.
Evermann, J., & Wand, Y. (2006b). Ontological Modeling Rules for Uml: an Empirical Assessment.
Journal of Computer Information Systems, 47, 14–29.
Evermann, & Fang, J. (2010). Evaluating ontologies: Towards a cognitive measure of quality. Information
Systems, 35(4), 391–403. https://doi.org/10.1016/j.is.2008.09.001
Evermann, & Halimi, H. (2008). Associations and mutual properties - an experimental assessment. In 14th
Americas Conference on Information Systems, AMCIS 2008 (Vol. 2, pp. 1231–1241).
Evermann, & Wand, Y. (2011). Ontology based object-oriented Domain Modeling: Representing behavior.
Journal of Database Management, 20(March), 48–77.
Falbo, R., Barcellos, M., Nardi, J. C., & Guizzardi, G. (2013). Organizing ontology design patterns as
ontology pattern languages. In The Semantic Web: Semantics and Big Data (pp. 61–75). Springer.
Falbo, R. D. A., Barcellos, M. P., Nardi, J. C., & Guizzardi, G. (2013). Organizing Ontology Design
Patterns as Ontology. 10th International Conference, ESWC 2013, 61–75.
Falbo, R. D. A., Guizzardi, G., & Duarte, K. C. (2002). An ontological approach to domain engineering. In
International conference on Software engineering and knowledge engineering (p. 351).
https://doi.org/10.1145/568820.568822
Falessi, D., Juristo, N., Wohlin, C., Turhan, B., Münch, J., Jedlitschka, A., & Oivo, M. (2017). Empirical
software engineering experts on the use of students and professionals in experiments. Empirical
Software Engineering, 1–38. https://doi.org/10.1007/s10664-017-9523-3

Fettke, P. (2009). How conceptual modeling is used. Communications of the Association for Information
…, 25. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=3494&context=cais
Gailly, F., Geerts, G., & Poels, G. (2009). Ontological Reengineering of the REA-EO Using UFO. In
OOPSLA Workshop on Ontology-Driven Software Engineering.
Gailly, F., & Poels, G. (2007a). Ontology-driven Business Modelling: Improving the Conceptual
Representation of the REA Ontology. In International Conference on Conceptual Modeling (pp. 407–
422). Berlin, Heidelberg: Springer-Verlag.
Gailly, F., & Poels, G. (2007b). Towards Ontology-Driven Information Systems: Redesign and
Formalization of the REA Ontology. Business Information Systems, 4439, 245–259.
https://doi.org/10.1007/978-3-540-72035-5_19
Gangemi, A. (2005). Ontology Design Patterns for Semantic Web Content. The Semantic Web - {ISWC}
2005, 4th International Semantic Web Conference, {ISWC} 2005, Galway, Ireland, November 6-10,
2005, Proceedings, 262–276.
Geerts, G. L. (2011). A design science research methodology and its application to accounting information
systems research. International Journal of Accounting Information Systems, 12(2), 142–151.
https://doi.org/10.1016/j.accinf.2011.02.004
Geerts, G. L., & McCarthy, W. E. (2003). An ontological analysis of the economic primitives of the
extended-rea enterprise information architecture. International Journal of Accounting Information
Systems, 3(1), 1–16. https://doi.org/10.1016/S1467-0895(01)00021-5
Geerts, G. L., & McCarthy, W. E. (2006). Policy-Level Specifications in REA Enterprise Information
Systems. Journal of Information Systems, 20(2), 37–63. https://doi.org/10.2308/jis.2006.20.2.37
Geerts, & McCarthy. (2000). The ontological foundation of REA enterprise information systems. In Annual
Meeting of the American Accounting Association (pp. 1–34).
Gehlert, A., & Esswein, W. (2007). Toward a formal research framework for ontological analyses.
Advanced Engineering Informatics, 21(2), 119–131. https://doi.org/10.1016/j.aei.2006.11.004
Gemino, A., & Wand, Y. (2005). Complexity and clarity in conceptual modeling: Comparison of
mandatory and optional properties. Data & Knowledge Engineering, 55(3), 301–326.
https://doi.org/10.1016/j.datak.2004.12.009
Gómez-Pérez, a., & Rojas-Amaya, M. (1999). Ontological reengineering for reuse. In International
Conference on Knowledge Engineering and Knowledge Management (pp. 139–156).
https://doi.org/10.1007/3-540-48775-1_9
Gornik, D., & IBM. (2003). Entity Relationship Modeling with UML. IBM. Retrieved from
http://www.ibm.com/developerworks/rational/library/content/03July/2500/2785/2785_uml.pdf
Green, P., & Rosemann, M. (2000). Integrated process modeling: An ontological evaluation. Information
Systems, 25(2), 73–87. https://doi.org/10.1016/S0306-4379(00)00010-7
Green, P., Rosemann, M., Indulska, M., & Manning, C. (2007). Candidate interoperability standards: An
ontological overlap analysis. Data & Knowledge Engineering, 62(2), 274–291.
https://doi.org/10.1016/j.datak.2006.08.004
Green, P., Rosemann, M., Indulska, M., & Recker, J. (2011). Complementary use of modeling grammars.
Scandinavian Journal of Information Systems, 23(1), 59–86.
Gregor, S., & Hevner, A. R. (2013). Positioning And Presenting Design Science Research For Maximum
Impact. MIS Quarterly, 37(2), 337-A6.

Grüninger, M., Atefi, K., & Fox, M. M. S. (2000). Ontologies to support process integration in enterprise
engineering. Computational & Mathematical Organization, 6(4), 381–394.
https://doi.org/10.1023/A:1009610430261
Gruninger, M., Bodenreider, O., Olken, F., Obrst, L., & Yim, P. (2008). Ontology Summit 2007 –
Ontology, taxonomy, folksonomy: Understanding the distinctions. Applied Ontology, 3(3), 191–200.
https://doi.org/10.3233/AO-2008-0052
Grüninger, M., & Fox, M. S. (1995). The role of competency questions in enterprise engineering. In
Benchmarking—Theory and practice (pp. 22–31). Springer.
Guarino, N. (1995). Formal ontology, conceptual analysis and knowledge representation. International
Journal of Human-Computer Studies, 43(5), 625–640. https://doi.org/10.1006/ijhc.1995.1066
Guarino, N. (1998). Formal ontology and information systems. In FOIS conference (pp. 3–15).
Guarino, N., & Welty, C. (2000a). Identity, Unity, and Individuality: Towards a Formal Toolkit for
Ontological Analysis. In ECAI-2000: The European Conference on Artificial Intelligence.
Guarino, N., & Welty, C. (2000b). Ontological analysis of taxonomic relationships. Data & Knowledge
Engineering, 39(1), 51–74. https://doi.org/10.1016/S0169-023X(01)00030-1
Guizzardi. (2012). Ontological Foundations for Conceptual Modeling with Applications. In J. Ralyté, X.
Franch, S. Brinkkemper, & S. Wrycza (Eds.), Advanced Information Systems Engineering (Vol.
7328, pp. 695–696). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-31095-9_45
Guizzardi, G. (2005). Ontological Foundations for Structural Conceptual Models. CTIT, Centre for
Telematics and Information Technology. https://doi.org/10.1007/978-3-642-31095-9_45
Guizzardi, G. (2007). On Ontology, ontologies, Conceptualizations, Modeling Languages, and
(Meta)Models. In Conference on Databases and Information Systems (pp. 18–39).
Guizzardi, G. (2013). Ontology-Based Evaluation and Design of Visual Conceptual Modeling Languages.
In Domain Engineering: Product Lines, Languages, and Conceptual Models (pp. 317–347).
https://doi.org/10.1007/978-3-642-36654-3
Guizzardi, G., & Wagner, G. (2005). Towards Ontological Foundations for Agent Modelling Concepts
Using the Unified Fundational Ontology (UFO). Agent-Oriented Information Systems II, 3508, 110–
124. https://doi.org/10.1007/11426714_8
Guizzardi, G., & Wagner, G. (2011). Can BPMN Be Used for Making Simulation Models ? In J. Barjis, T.
Eldabi, & A. Gupta (Eds.), International Conference on Advanced Information Systems Engineering
(CAiSE 2011) LNBIP vol. 88 (pp. 100–115). https://doi.org/10.1007/978-3-642-24175-8_8
Guizzardi, G., Wagner, G., Almeida, J. P. A., & Guizzardi, R. S. S. (2015). Towards ontological
foundations for conceptual modeling: The unified foundational ontology (UFO) story. Applied
Ontology, 10(3–4), 259–271. https://doi.org/10.3233/AO-150157
Guizzardi, G., & Zamborlini, V. (2014). Using a trope-based foundational ontology for bridging different
areas of concern in ontology-driven conceptual modeling. Science of Computer Programming, 96,
417–443. https://doi.org/10.1016/j.scico.2014.02.022
Guizzardi, Das Graças, A. P., & Guizzardi, R. S. S. (2011). Design patterns and inductive modeling rules to
support the construction of ontologically well-founded conceptual models in OntoUML. In Advanced
Information Systems Engineering Workshops (Vol. 83 LNBIP, pp. 402–413).
https://doi.org/10.1007/978-3-642-22056-2_44
Guizzardi, & Halpin, T. (2008). Ontological foundations for conceptual modelling. Applied Ontology, 3,
1–12. https://doi.org/10.3233/AO-2008-0049

Guizzardi, Pires, L. F., & Sinderen, M. Van. (2002). On the role of domain ontologies in the design of
domain-specific visual modeling langages. In OOPSLA Workshop on Domain-Specific Modeling
Languages (pp. 25–38).
Guizzardi, & Zamborlini, V. (2013). A Common Foundational Theory for Bridging Two Levels in
Ontology-Driven Conceptual Modeling. Software Language Engineering. Retrieved from
http://link.springer.com/chapter/10.1007/978-3-642-36089-3_17
Hadar, I., & Soffer, P. (2006). Variations in Conceptual Modeling. Journal of the Association for
Information Systems, 7(8), 568–592.
Hales, S. D. S., & Johnson, T. T. A. (2003). Endurantism, perdurantism and special relativity. The
Philosophical Quarterly, 53(213), 524–539. https://doi.org/10.1111/1467-9213.00329
Harzallah, M., Berio, G., & Opdahl, A. L. (2012). New perspectives in ontological analysis: Guidelines and
rules for incorporating modelling languages into UEML. Information Systems, 37(5), 484–507.
https://doi.org/10.1016/j.is.2011.11.001
Heller, B., & Herre, H. (2004). Ontological categories in GOL. Axiomathes, 14(1), 57–76.
https://doi.org/10.1023/B:AXIO.0000006788.44025.49
Henderson-Sellers, B. (2012). On the mathematics of modelling, metamodelling, ontologies and modelling
languages. Springer Science & Business Media.
Herre, H., & Loebe, F. (2005). A meta-ontological architecture for foundational ontologies. Lecture Notes
in Computer Science, 3761 LNCS, 1398–1415. https://doi.org/10.1007/11575801_29
Hevner, A. (2007). A Three Cycle View of Design Science Research. Scandinavian Journal of Information
Systems, 19(2).
Hevner, March, S. T., Park, J., & Ram, S. (2004). Design science in Information Systems research. MIS
Quarterly, 28(1), 75–105.
Honderich, T. (2006). The Oxford Companion to Philosophy. Oxford University Press.
ISO/IEC. (2001). Software Engineering-Product Quality: Quality model (Vol. 1). ISO/IEC.
Karimi, J. (1988). Strategic Planning for Information Systems: Requirements and Information Engineering
Methods. Journal of Management Information Systems, 4(4), 5–24.
Khan, Z., & Keet, C. (2013). The foundational ontology library ROMULUS. Model and Data Engineering,
200–211. Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-41366-7_17
Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software
engineering. Technical report, Ver. 2.3 EBSE.
Krieger, H., Kiefer, B., & Declerck, T. (2008). A Framework for Temporal Representation and Reasoning
in Business Intelligence Applications ∗. In AAAI Spring Symposium: AI Meets Business Rules and
Process Management (pp. 59–70).
Krogstie, J. (2012). Model-based development and evolution of information systems: A Quality Approach.
Springer Science & Business Media.
Lindland, O. I., Sindre, G., & Solvberg, A. (1994). Understanding Quality in Conceptual Modeling. Ieee
Software, 11(2), 42–49.
Lucas, F. J., Molina, F., & Toval, A. (2009). A systematic review of UML model consistency management.
Information and Software Technology, 51(12), 1631–1645.
https://doi.org/10.1016/j.infsof.2009.04.009

Maes, A., & Poels, G. (2007). Evaluating quality of conceptual modelling scripts based on user
perceptions. Data & Knowledge Engineering, 63(3), 701–724.
https://doi.org/10.1016/j.datak.2007.04.008
March, S. T., & Allen, G. N. (2014). Toward a social ontology for conceptual modeling. Communications
of the Association for Information Systems, 34(1), 1347–1358.
McKnight, P. E., & Najab, J. (2010). Mann-Whitney U Test. Corsini Encyclopedia of Psychology.
Mernik, M., Heering, J., & Sloane, A. M. (2005). When and how to develop domain-specific languages.
ACM Computing Surveys, 37(4), 316–344. https://doi.org/10.1145/1118890.1118892
Milton, S. K., & Kazmierczak, E. (2004). An Ontology of Data Modelling Languages : A Study Using a
Common-Sense Realistic Ontology Relating Ontology and data Modelling Languages. Journal of
Database Management (JDM), 15(2), 19–38. https://doi.org/10.4018
Milton, S. K., Kazmierczak, E., & Keen, C. (2001). Data Modelling Languages: An Ontological Study.
Ecis 2001, 304–318.
Milton, S. K., Rajapakse, J., & Weber, R. (2012). Ontological Clarity , Cognitive Engagement , and
Conceptual Model Quality Evaluation : An Experimental Investigation. Journal of the Association for
Information System, 13(9), 657–693.
Moody, D. L. (2003). The Method Evaluation Model : A Theoretical Model for Validating Information
Systems Design Methods. Information Systems Journal, 1327–1336.
Moody, D. L. (2005). Theoretical and practical issues in evaluating the quality of conceptual models:
current state and future directions. Data & Knowledge Engineering, 55(3), 243–276.
https://doi.org/10.1016/j.datak.2004.12.005
Mylopoulos, J. (1992). Conceptual modeling and telos. In P. Loucopoulos & R. Zicari (Eds.), Conceptual
Modelling, Databases and CASE: An Integrated View of Information Systems Development. Wiley.
Nardi, J. C., Falbo, R. a., & Almeida, J. P. a. (2014). An Ontological Analysis of Service Modeling at
ArchiMate’s Business Layer. In 18th IEEE International Enterprise Distributed Object Computing
Conference (EDOC). https://doi.org/10.1109/EDOC.2014.22
Nardi, J. C., Falbo, R. D. A., Almeida, J. P. A., Guizzardi, G., Pires, L. F., Van Sinderen, M. J., … Fonseca,
C. M. (2015). A commitment-based reference ontology for services. Information Systems, 54, 263–
288. https://doi.org/10.1016/j.is.2015.01.012
Nelson, H. J., Poels, G., Genero, M., & Piattini, M. (2012). A conceptual modeling quality framework.
Software Quality Journal, 20(1), 201–228. https://doi.org/10.1007/s11219-011-9136-9
Opdahl, A. L., Berio, G., Harzallah, M., & Matulevičius, R. (2012). An ontology for enterprise and
information systems modelling. Applied Ontology, 7(1), 49–92. https://doi.org/10.3233/AO-2011-
0101
Opdahl, A. L., & Henderson-Sellers, B. (2001). Grounding the OML metamodel in ontology. Journal of
Systems and Software, 57(2), 119–143. https://doi.org/10.1016/S0164-1212(00)00123-0
Opdahl, A. L., & Henderson-Sellers, B. (2002). Ontological Evaluation of the UML Using the Bunge-
Wand-Weber Model. Software and Systems Modeling, 1, 43–67. https://doi.org/10.1007/s10270-002-
8209-4
Opdahl, A. L., & Henderson-Sellers, B. (2004). A template for defining enterprise modelling constructs.
Journal of Database Management (JDM), 15(2), 39–73.
Parsons, J. (2011). An Experimental Study of the Effects of Representing Property Precedence on the
Comprehension of Conceptual Schemas. Journal of the Association for Information Systems, 12(6),
441–462.
Partridge, C. (2005). Business Objects: Re-engineering for Reuse. Butterworth-Heinemann.
Pastor, O., & Molina, J. C. (2007). Model-driven architecture in practice: a software production
environment based on conceptual modeling. Springer Science & Business Media.
Pease, A., & Niles, I. (2002). IEEE standard upper ontology : a progress report. The Knowledge
Engineering Review, 17, 65–70.
Pereira, D. C., & Almeida, J. P. A. (2014). Representing Organizational Structures in an Enterprise
Architecture Language. In Workshop on Formal Ontologies meet Industry.
Petersen, K. (2011). Measuring and predicting software productivity: A systematic map and review.
Information and Software Technology, 53(4), 317–343. https://doi.org/10.1016/j.infsof.2010.12.001
Petersen, K., Feldt, R., Mujtaba, S., & Mattsson, M. (2008). Systematic mapping studies in software
engineering. EASE’08 Proceedings of the 12th International Conference on Evaluation and
Assessment in Software Engineering, 68–77. https://doi.org/10.1142/S0218194007003112
Poels, G., Gailly, F., Maes, A., & Paemeleire, R. (2005). Object Class or Association Class? Testing the
User Effect on Cardinality Interpretation. In Proceedings of the 24th International Conference on
Perspectives in Conceptual Modeling (pp. 33–42). Berlin, Heidelberg: Springer-Verlag.
https://doi.org/10.1007/11568346_5
Poli, R., Healy, M., & Kameas, A. (2010). Theory and applications of ontology: Computer applications.
Springer.
Recker, J. (2010). Continued use of process modeling grammars: the impact of individual difference
factors. European Journal of Information Systems, 19(1), 76–92. https://doi.org/10.1057/ejis.2010.5
Recker, J. (2013). Scientific Research in Information Systems. Berlin, Heidelberg: Springer Berlin
Heidelberg. https://doi.org/10.1007/978-3-642-30048-6
Recker, J., Indulska, M., Rosemann, M., & Green, P. (2005). Do Process Modelling Techniques Get
Better? A Comparative Ontological Analysis of BPMN. Information Systems Journal, 1–10.
Recker, J., Indulska, M., Rosemann, M., & Green, P. (2006). How Good is BPMN Really? Insights from
Theory and Practice. In Proceedings 14th European Conference on Information Systems, Goeteborg,
Sweden.
Recker, J., Indulska, M., Rosemann, M., & Green, P. (2010). The ontological deficiencies of process
modeling in practice. European Journal of Information Systems, 19(5), 501–525.
https://doi.org/10.1057/ejis.2010.38
Recker, J., & Niehaves, B. (2008). Epistemological Perspectives on Ontology-based Theories for
Conceptual Modeling. Applied Ontology, 3(1–2), 111–130.
Recker, J., & Rosemann, M. (2010). The measurement of perceived ontological deficiencies of conceptual
modeling grammars. Data & Knowledge Engineering, 69(5), 516–532.
https://doi.org/10.1016/j.datak.2010.01.003
Recker, J., Rosemann, M., Boland, R. J., Limayem, M., & Pentland, B. T. (2008a). Measuring Perceived
Representational Deficiencies in Conceptual Modeling: Instrument Development and Test. In
Proceedings of the 29th International Conference on Information Systems (pp. 12–14).
Recker, J., Rosemann, M., Boland, R. J., Limayem, M., & Pentland, B. T. (2008b). Measuring Perceived
Representational Deficiencies in Conceptual Modeling: Instrument Development and Test.
Proceedings of the 29th International Conference on Information Systems, (December), 12–14.
Retrieved from http://eprints.qut.edu.au/17119/1/c17119.pdf
Recker, J., Rosemann, M., Green, P. F., & Indulska, M. (2011a). Do ontological deficiencies in modeling
grammars matter? MIS Quarterly, 35(1), 57–79. Retrieved from
http://aisel.aisnet.org/cgi/viewcontent.cgi?article=2942&context=misq
Recker, J., Rosemann, M., Green, P., & Indulska, M. (2011b). Do ontological deficiencies in modeling
grammars matter? MIS Quarterly, 35(1), 57–79.
Riemer, K., Hovorka, D., Johnston, R. B., & Indulska, M. (2013). Challenging the Philosophical
Foundations of Modeling Organizational Reality: The Case of Process Modeling. In Proceedings of
the International Conference on Information Systems (ICIS) (pp. 1–18).
Rosemann, M., & Green, P. (2002). Developing a meta model for the Bunge–Wand–Weber ontological
constructs. Information Systems, 27(2), 75–91. https://doi.org/10.1016/S0306-4379(01)00048-5
Rosemann, M., Green, P., & Indulska, M. (2004). A reference methodology for conducting ontological
analyses. In International Conference on Conceptual Modelling (pp. 110–121).
https://doi.org/10.1007/978-3-540-30464-7_10
Rowe, F. (2014). What Literature Review is Not: Diversity, Boundaries and Recommendations. European
Journal of Information Systems, 23(3), 241–255. https://doi.org/10.1057/ejis.2014.7
Saghafi, A., & Wand, Y. (2014). Do Ontological Guidelines Improve Understandability of Conceptual
Models? A Meta-analysis of Empirical Work. In System Sciences (HICSS), 2014 47th Hawaii
International Conference on (pp. 4609–4618). https://doi.org/10.1109/HICSS.2014.567
Santos, P., Almeida, J. P. A., & Guizzardi, G. (2013). An ontology-based analysis and semantics for
organizational structure modeling in the ARIS method. Information Systems, 38(5), 690–708.
Santos Jr, P. S., Almeida, J. P. A., & Guizzardi, G. (2010). An Ontology-based Semantic Foundation for
ARIS EPCs. In ACM Symposium on Applied Computing (pp. 124–130). New York, NY, USA:
ACM. https://doi.org/10.1145/1774088.1774114
Scheer, A.-W. (1998). ARIS - Business Process Modeling (2nd ed.). Secaucus, NJ, USA: Springer-Verlag
New York, Inc.
Scherp, A., Saathoff, C., Franz, T., & Staab, S. (2009). Designing Core Ontologies. Applied Ontology, 3,
1–3.
Shanks, G. G., Tansley, E., Nuredini, J., Tobin, D., & Weber, R. (2008). Representing Part-Whole
Relations in Conceptual Modeling: An Empirical Evaluation. MIS Quarterly, 32(3), 553–573.
Shanks, G., Tansley, E., Nuredini, J., & Tobin, D. (2008). Representing part-whole relationships in
conceptual modeling: An empirical evaluation. MIS Quarterly, 32(3), 553–573.
Shao, J., Wang, H., & Chow, S.-C. (2008). Sample size calculations in clinical research. Chapman &
Hall/CRC.
Siau, K., & Rossi, M. (2007). Evaluation techniques for systems analysis and design modelling methods - a
review and comparative analysis. Information Systems Journal, 21(3), 249–268.
https://doi.org/10.1111/j.1365-2575.2007.00255.x
Sjøberg, D. I. K., Hannay, J. E., Hansen, O., Kampenes, V. B., Karahasanović, A., Liborg, N. K., &
Rekdal, A. C. (2005). A survey of controlled experiments in software engineering. IEEE Transactions
on Software Engineering, 31(9), 733–753. https://doi.org/10.1109/TSE.2005.97
Soffer, P., & Hadar, I. (2007). Applying ontology-based rules to conceptual modeling: a reflection on
modeling decision making. European Journal of Information Systems, 16(5), 599–611.
https://doi.org/10.1057/palgrave.ejis.3000683
Sowa, J. F. (1999). Knowledge Representation: Logical, Philosophical, and Computational Foundations
(1st ed.). Course Technology.
Sprinkle, J., & Karsai, G. (2004). A domain-specific visual language for domain model evolution. Journal
of Visual Languages and Computing, 15(3–4), 291–307. https://doi.org/10.1016/j.jvlc.2004.01.006
Stachowiak, H. (1973). Allgemeine Modelltheorie. Springer-Verlag, Wien.
Storey, V. C. (2017). Conceptual Modeling Meets Domain Ontology Development: A Reconciliation.
Journal of Database Management (JDM), 28(1), 18–30.
Tairas, R., Mernik, M., & Gray, J. (2008). Using Ontologies in the Domain Analysis of Domain-Specific
Languages. In International Conference on Model Driven Engineering Languages and Systems.
Springer.
Uschold, M., & Gruninger, M. (1996). Ontologies: Principles, methods and applications. The Knowledge
Engineering Review, 11(2), 93–116.
Uschold, M., & Jasper, R. (1999). A Framework for Understanding and Classifying Ontology Applications.
In Proceedings 12th Int. Workshop on Knowledge Acquisition, Modelling, and Management KAW
(Vol. 99, pp. 16–21).
Venable, J., Pries-Heje, J., & Baskerville, R. (2012). A Comprehensive Framework for Evaluation in
Design Science Research. Design Science Research in Information Systems. Advances in Theory and
Practice, 423–438. https://doi.org/10.1007/978-3-642-29863-9_31
Verdonck, M., & Gailly, F. (2016a). An Exploratory Analysis on the Comprehension of 3D and 4D Ontology-
Driven Conceptual Models. In Conceptual Modeling - ER 2016, Lecture Notes in Computer Science,
vol. 9975 (pp. 163–172). https://doi.org/10.1007/978-3-642-33999-8
Verdonck, M., & Gailly, F. (2016b). Insights on the Use and Application of Ontology and Conceptual
Modeling Languages in Ontology-Driven Conceptual Modeling. In Conceptual Modeling - ER 2016,
Lecture Notes in Computer Science, vol. 9974 (pp. 83–97). https://doi.org/10.1007/978-3-319-46397-1_7
Verdonck, M., & Gailly, F. (2018). An Ontological Analysis Framework for Domain-Specific Modeling
Languages. Journal of Database Management, 29(1).
Verdonck, M., Gailly, F., De Cesare, S., & Poels, G. (2015). Ontology-driven conceptual modeling: A
systematic literature mapping and review. Applied Ontology, 10(3,4), 197–227.
https://doi.org/10.3233/AO-150154
Verdonck, M., Gailly, F., & Poels, G. (2014). 3D vs. 4D Ontologies in Enterprise Modeling. In Advances in
Conceptual Modeling (Vol. 8823, pp. 13–22).
Vessey, I., & Conger, S. A. (1994). Requirements specification: learning object, process, and data
methodologies. Communications of the ACM, 37(5), 102–113.
https://doi.org/10.1145/175290.175305
d’Astous, P., & Robillard, P. N. (2001). Quantitative Measurements of the Influence of Participant Roles
during Peer Review Meetings. Empirical Software Engineering, 6(2), 143–159.
Walter, T., Parreiras, F. S., & Staab, S. (2014). An ontology-based framework for domain-specific
modeling. Software and Systems Modeling, 13, 83–108. https://doi.org/10.1007/s10270-012-0249-9
Wand, Y. (1996). Ontology as a foundation for meta-modelling and method engineering. Information and
Software Technology, 38(4), 281–287. https://doi.org/10.1016/0950-5849(95)01052-1
Wand, Y., Monarchi, D. E., Parsons, J., & Woo, C. C. (1995). Theoretical foundations for conceptual
modelling in information systems development. Decision Support Systems, 15(4), 285–304.
https://doi.org/10.1016/0167-9236(94)00043-6
Wand, Y., Storey, V. C., & Weber, R. (1999). An ontological analysis of the relationship construct in
conceptual modeling. ACM Transactions on Database Systems, 24(4), 494–528.
https://doi.org/10.1145/331983.331989
Wand, Y., & Weber, R. (1990). An Ontological Model of an Information System. IEEE Transactions on
Software Engineering, 16(11), 1282–1292.
Wand, Y., & Weber, R. (1993). On the ontological expressiveness. Journal of Information Systems, (3),
217–237.
Wand, Y., & Weber, R. (2017). Thirty Years Later: Some Reflections on Ontological Analysis in
Conceptual Modeling. Journal of Database Management (JDM), 28(1), 1–17.
Wand, Y., & Weber, R. (2002). Research commentary: information systems and conceptual modeling—a research
agenda. Information Systems Research, 13(4), 363–376.
Wand, Y., & Weber, R. (1993). On the ontological expressiveness of information systems analysis and design
grammars. Information Systems Journal, 3(4), 217–237. https://doi.org/10.1111/j.1365-2575.1993.tb00127.x
Wand, Y., & Weber, R. (1995). On the deep structure of information systems. Information Systems Journal,
5(3), 203–223. https://doi.org/10.1111/j.1365-2575.1995.tb00108.x
Welty, C., & Guarino, N. (2001). Supporting ontological analysis of taxonomic relationships. Data &
Knowledge Engineering, 39(1), 51–74. https://doi.org/10.1016/S0169-023X(01)00030-1
Wieringa, R. (2014). Design Science Methodology for Information Systems and Software Engineering.
Springer Berlin Heidelberg. https://doi.org/10.1145/1810295.1810446
Winograd, T. (1996). Bringing design to software. Addison-Wesley Professional.
Wohlin, C., Runeson, P., Host, M., Ohlsson, M. C., Regnell, B., & Wesslen, A. (2012). Experimentation in
Software Engineering. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29044-2
Zhang, H., Kishore, R., & Ramesh, R. (2004). Ontological Analysis of the MibML Grammar using the
Bunge-Wand-Weber Model. Americas Conference on Information Systems, 4286–4294.
Zhang, H., Kishore, R., & Ramesh, R. (2007). Semantics of the MibML Conceptual Modeling Grammar.
Journal of Database Management, 18(1), 1–19. https://doi.org/10.4018/jdm.2007010101
zur Muehlen, M., & Indulska, M. (2010). Modeling languages for business processes and business rules: A
representational analysis. Information Systems, 35(4), 379–390.
https://doi.org/10.1016/j.is.2009.02.006
zur Muehlen, M., Indulska, M., & Kamp, G. (2007). Business Process and Business Rule Modeling
Languages: A Representational Analysis. In EDOC (pp. 127–132). Darlinghurst, Australia:
Australian Computer Society, Inc. https://doi.org/10.1109/EDOCW.2007.8
Appendix A
The Conceptual Modeling Quality Framework (CMQF) is composed of eight cornerstones. Each cornerstone can be thought of as an aspect that is involved in the conceptual modeling process and that is needed to arrive at a conceptual model and its representation. The cornerstones are: physical domain, domain knowledge, physical model, model knowledge, physical language, language knowledge, physical representation and representation knowledge. Each cornerstone is either a set of statements that constitutes a physical artifact or a set of statements that represents a cognitive artifact.

Quality dimensions are relations between two of the eight cornerstones. They can be grouped into four layers, which roughly follow the conceptual modeling process and cover all the aspects that can be linked to a conceptual model: the physical layer, the knowledge layer, the learning layer, and the development layer. The physical layer contains the physical, observable elements of the quality framework. The knowledge layer parallels the physical layer, as it represents its cognitive counterpart. The learning layer measures how well learning, interpretation and/or understanding takes place. Finally, the development layer measures how well a modeler’s knowledge is used to create the physical elements.

Further, a Quality Type is defined as a relationship between a Quality Reference and an Object of Interest. The Object of Interest is the cornerstone that is being examined (i.e., the cornerstone where the arrow arrives). The Quality Reference is the cornerstone against which the Object of Interest is compared for completeness and validity (i.e., the cornerstone where the arrow departs). Figure 25 displays the different cornerstones, quality dimensions and quality types that are included in each layer. Table 23 lists all the quality types defined in the CMQF. Finally, in order to reduce the overhead for the reader, Table 24 summarizes and defines only those quality types that occur in this literature study. (An illustrative sketch of the structure of these quality types follows Table 24.)
Figure 25: The CMQF quality layers and their Quality Types, figure obtained from (Nelson et al., 2012)

Table 23: Overview of all Quality Types, as described in (Nelson et al., 2012).

P1 Model-domain appropriateness
P2 Ontological Quality
P3 Syntactic quality
P4 Semantic quality
P5 Language-domain appropriateness
P6 Intentional quality
P7 Empirical quality
K1 Perceived model-domain appropriateness
K2 Perceived Ontological Quality
K3 Perceived syntactic quality
K4 Perceived semantic quality
K5 Perceived language-domain appropriateness
K6 Perceived intentional quality
K7 Perceived empirical quality
L1 View quality
L2 Pedagogical quality
L3 Linguistic quality
L4 Pragmatic quality
D1 Applied domain—model appropriateness
D2 Applied domain—language appropriateness
D3 Applied domain knowledge quality
D4 Applied model—language appropriateness
D5 Applied model knowledge quality
D6 Applied language knowledge quality
Table 24: Quality Types discussed in this literature review, described in (Nelson et al., 2012).

P2 Ontological Quality: The appropriateness of a physical language to express the concepts of the physical model and physical representation.
P5 Language-domain appropriateness: The ability of a language to express anything in the physical domain in order for the user to create a faithful representation.
P6 Intentional quality: The intentional quality aims at keeping the physical representation true to the mindset and the meanings defined by the physical model.
P7 Empirical quality: The empirical quality measures the readability of a conceptual representation.
K2 Perceived Ontological Quality: The perceived ontological quality can be described as how a stakeholder perceives the validity and completeness of a physical, external language (the grammar and the vocabulary of the language) for expressing the concepts of a physical model.
K6 Perceived Intentional quality: Measures how the user of a model perceives the mindset and the meanings defined by the physical model.
K7 Perceived empirical quality: Measures how the user perceives the readability of a conceptual representation.
L4 Pragmatic quality: Addresses the comprehension and understanding of the final physical representation by the stakeholders who use the model.
D2 Applied domain—language appropriateness: The appropriateness of a modeling language that is being developed to the modeler’s knowledge of the real-world domain.
D4 Applied model—language appropriateness: The appropriateness of the modeling language being developed to the developer’s knowledge of the particular mindset or ontology it will be based upon.
D5 Applied model knowledge quality: Measures the knowledge of the model that underlies the language and the domain.
D6 Applied language knowledge quality: Addresses the knowledge of the modeler using the modeling language, the vocabulary and the grammar to create the physical representation.
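
To make the structure of the framework more tangible, the following minimal sketch models cornerstones, layers and quality types as simple data structures. It is purely illustrative: the Python names are our own, and the quality reference and object of interest chosen for the P2 example reflect an assumed reading of its definition in Table 24 rather than the authoritative assignment in (Nelson et al., 2012).

```python
from dataclasses import dataclass
from enum import Enum, auto


class Cornerstone(Enum):
    """The eight CMQF cornerstones."""
    PHYSICAL_DOMAIN = auto()
    DOMAIN_KNOWLEDGE = auto()
    PHYSICAL_MODEL = auto()
    MODEL_KNOWLEDGE = auto()
    PHYSICAL_LANGUAGE = auto()
    LANGUAGE_KNOWLEDGE = auto()
    PHYSICAL_REPRESENTATION = auto()
    REPRESENTATION_KNOWLEDGE = auto()


class Layer(Enum):
    """The four CMQF quality layers."""
    PHYSICAL = "physical"
    KNOWLEDGE = "knowledge"
    LEARNING = "learning"
    DEVELOPMENT = "development"


@dataclass(frozen=True)
class QualityType:
    """A quality type: a directed relation in which the object of interest
    (where the arrow arrives) is examined against the quality reference
    (where the arrow departs)."""
    code: str                        # e.g. "P2"
    name: str                        # e.g. "Ontological quality"
    layer: Layer
    quality_reference: Cornerstone   # cornerstone the arrow departs from
    object_of_interest: Cornerstone  # cornerstone the arrow arrives at


# Hypothetical instance: one possible reading of P2 based on its definition
# in Table 24; the authoritative reference/object assignment is given in
# Nelson et al. (2012), not here.
P2 = QualityType(
    code="P2",
    name="Ontological quality",
    layer=Layer.PHYSICAL,
    quality_reference=Cornerstone.PHYSICAL_MODEL,
    object_of_interest=Cornerstone.PHYSICAL_LANGUAGE,
)

if __name__ == "__main__":
    print(f"{P2.code} ({P2.name}): {P2.object_of_interest.name} examined "
          f"against {P2.quality_reference.name} in the {P2.layer.value} layer")
```

Modeled this way, each arrow in Figure 25 corresponds to one QualityType instance, and a layer simply groups the quality types defined between its cornerstones.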
Appendix B

Table 25: List of articles included in the literature review.

Paper  Number of Quality Types  D2 D4 D5 D6 K2 K6 K7 L4 P2 P5 P6 P7
(Bera et al., 2009) 2 X X
(Bera, 2012) 2 X X
(Bera & Evermann, 2012) 2 X X
(Burton-Jones, Clarke, Lazarenko, & Weber, 2012) 1 X
(Clarke, Burton-Jones, & Weber, 2013) 1 X
(Evermann & Halimi, 2008) 1 X
(Evermann & Wand, 2011) 1 X
(Evermann, 2005) 1 X
(Evermann & Fang, 2010) 2 X X
(Evermann & Wand, 2006a) 1 X
(Gehlert & Esswein, 2007) 1 X
(Gemino & Wand, 2005) 2 X X
(Green, Rosemann, Indulska, & Recker, 2011) 2 X X
(Guarino, 1995) 1 X
(Guarino & Welty, 2000a) 1 X
(Guarino & Welty, 2000b) 1 X
(Guizzardi et al., 2011) 1 X
(Guizzardi & Zamborlini, 2014) 2 X X
(Hadar & Soffer, 2006) 2 X X
(Milton, Rajapakse, & Weber, 2012) 2 X X
(Milton et al., 2001) 2 X X
(Opdahl & Henderson-Sellers, 2001) 1 X
(Opdahl & Henderson-Sellers, 2002) 2 X X
(Parsons, 2011) 2 X X
(Recker et al., 2005) 3 X X X
(Recker et al., 2006) 3 X X X
(Recker et al., 2010) 3 X X X
(Recker & Rosemann, 2010) 1 X
(Recker, Rosemann, Boland, Limayem, &
Pentland, 2008b) 2 X X
(Recker et al., 2011b) 2 X X
(Rosemann & Green, 2002) 1 X
(G. Shanks et al., 2008) 2 X X
(Wand & Weber, 1993) 1 X
(Wand, 1996) 1 X
(Wand et al., 1995) 2 X X
(Wand, Storey, & Weber, 1999) 2 X X
(Welty & Guarino, 2001) 1 X
(zur Muehlen & Indulska, 2010) 2 X X
Appendix C

Literature set - Chapter Five

(Almeida & Guizzardi, 2013)
(Almeida et al., 2009)
(Andersson et al., 2006)
(Azevedo et al., 2013)
(Azevedo, Almeida, Van Sinderen, Quartel, & Guizzardi, 2011)
(Azevedo et al., 2015)
(Buder & Felden, 2011)
(Gailly, Geerts, & Poels, 2009)
(Gailly & Poels, 2007a)
(Gailly & Poels, 2007b)
(G. L. Geerts & McCarthy, 2003)
(Green & Rosemann, 2000)
(Nardi et al., 2014)
(Santos Jr, Almeida, & Guizzardi, 2010)
(Pereira & Almeida, 2014)
(Santos et al., 2013)
(Zhang, Kishore, & Ramesh, 2007)