Applied Ontology Engineering in Cloud Services, Networks and Management Systems
ISBN 978-1-4614-2235-8
e-ISBN 978-1-4614-2236-5
DOI 10.1007/978-1-4614-2236-5
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011944830
© Springer Science+Business Media, LLC 2012
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are
not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject
to proprietary rights.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
most promising and at the same time most adaptive computing services, generating a multiplicity of middleware approaches and infrastructure developments.
Cloud computing concentrates on tackling not only the complex problems associated with demands for service cost reduction but also those related to network performance, efficiency and technological flexibility [Greenberg09]. There is certainly a market boom around cloud computing solutions, and it grows exponentially when the economic climate drives world economies towards more service revenue with less investment in technology. However, beyond the service advantages and marketing revenue that cloud-based solutions can provide, it is absolutely necessary to have well-identified service challenges, technology requirements and clear customer and service provider needs in order to solve root problems specifically and correctly.
When diverse systems interact, linked data and the exchange of information are crucial, common requirements. Within this diverse and complex set of service and technological requirements, concerning information exchange and systems integration respectively, the role of management systems and of other next-generation applications and networking services supporting such integration is best understood as a set of alternatives.
A traditional example scenario that illustrates this complexity is autonomic systems management [see work from the Autonomic Communications Forum (ACF)]. At the root of the problem, autonomic networks are not able to handle the broad diversity of information from resources, devices, networks, systems and applications. Particular interest focuses on exchanging information between different stakeholders (autonomic components or autonomic layers) when necessary; however, as described, there is no capability to exchange pieces of such information between the different systems participating in the autonomic solution.
The convergence of software and networking solutions can address some of the complex management problems present in current and future Information and Communications Technologies (ICT) systems. Current ICT research is focused on the integrated management of resources, networks, systems and services. This can be generalized as providing seamless mobility to, for example, personalize services automatically. This type of scenario requires increased interoperability in service management operations.
Integrated management and cross-layer interactions involve both the transmission capabilities from network devices and the context-aware management services
of the middleware environment. Transmission capabilities influence the performance of the network, while middleware impacts the design of interfaces for achieving data and command interoperability.
Integrated management refers to the systematization of the operation and control of services in networks and systems. Cross-layer refers to the joint operation of the physical, link, management and service layers, and context-awareness refers to the properties that make a system aware of its user's state, the goals of the user and operator, and the state of the network environment. This awareness helps systems to adapt their behaviour according to applicable business rules, all the while offering interoperable and scalable personalized services. To do this, different data
models are required in NGN and Internet solutions, due to the inherent heterogeneity of vendor devices and variability in the functionality that each device offers.
Typical solutions have attempted to provide middleware to mediate between a
(small) subset of vendor-based solutions, while research has investigated the use of
a single information model that can harmonize the information present in each of
these different management data models. Industry has not yet embraced the
approach, since this research typically does not map vendor-specific functionality to
a common information model.
To alleviate this problem in ICT systems, the convergence of software solutions and of the management systems controlling networking infrastructures must provide alternative solutions within short incubation periods and under high scalability demands. Additionally, and as a result of the increasing demand to implement cloud solutions, it is not difficult to understand why ICT research is focusing on the integrated management of resources, networks, systems and cloud solutions or cloud services, and on ways to exchange data standards that facilitate this information interoperability and systems integration labour. In terms of realistic services and applications, these challenges can be generalized as aiming to provide seamless mobility services in order to, for example, personalize services automatically.
This book focuses on ontology engineering and its applications in service and network management systems. It aims to act as a reference defining application design principles and methodological modelling procedures to create alternative solutions to the scientific and technological challenge of enabling information interoperability in cross-domain applications and systems; examples in managing cloud services and computer network systems are included.
In today's ICT systems, the enormous amount of information and the increasing demands for managing it optimally generate the necessity of rethinking whether current information management systems can cope with these demands, and what the best practices are for making telecommunications services in computer networks more efficient, particularly in times when everything is migrating to cloud-based systems. Thus, this book is expressly an invitation to explore and understand the basic concepts, the applications, and the consequences and results of applying ontology engineering in cloud services, networks and management systems.
J. Martín Serrano Orozco
approaches rooted in the ICT area, creating solution(s) for information interoperability problems between network and service management domains in the era of
cloud computing.
This book is aimed at the wide ICT sector: engineers in general, software developers, students, technology architects, people with knowledge of semantic principles and semantic Web formal languages, and people with knowledge rooted in Internet science and telecommunications.
This book is addressed to those who realistically see the interaction of network infrastructure and software platforms as a unified environment, where services and applications exchange information synergistically to offer cognitive applications, commonly called smartness or intelligence in computing and cognition or awareness in telecommunications.
This book is suitable to be read and/or studied by students with a strong basis in communications, software engineering, computer science or any other related discipline (a level of engineering studies, or its equivalent in other knowledge areas, is required). It is not intended to be a textbook, but if studied well it can provide engineering methodologies and good software practices that help guide students to understand principles in information and data modelling, integrated management and cloud services.
This book is a scientific tool for active professionals interested in emerging technology solutions that bring Internet science and the semantic Web into the communications domain, a combination very difficult to find in the current literature, where a high degree of focus and specialization is required. Unlike other literature references, the underlying idea in this book is to focus on enabling inter-domain and intra-domain interactions by augmenting information and data models with semantic descriptions (ontology engineering). There are realistic scenarios in which the techniques described in this book have been applied.
Finally, this book is not aimed at students of disciplines beyond those considered in the framework of IT and Communications (ICT), but it is suitable for students and people with a general interest in service applications, communications management, the future Internet and cloud computing principles.
This book concentrates on describing and explaining clearly the role that ontology engineering can play in providing solutions that tackle the problems of information interoperability and linked data. Thus, it expressly introduces basic concepts of ontology engineering and discusses methodological approaches to the formal representation of data and information models, facilitating information interoperability between heterogeneous, complex and distributed communication systems. In other words, this book will guide you to understand the advantages of using ontology engineering in telecommunications systems.
This book discusses the fundamentals of today's ICT market need for convergence between software solutions and computer network infrastructures. It introduces basic concepts and illustrates how to enable interoperability of information using a methodological approach to formalize and represent data by means of information models. It offers guidance and good practices for applying ontology engineering in cloud services, computer networks and management systems.
Acknowledgements
Giannakopoulos, Algonet, S.A. (Athens, Greece); VTT Arto Tapani Juhola, Kimmo
Ahola, Titta Ahola (Espoo, Finland); Technion, Danny Raz, Ramic Cohen (Haifa,
Israel); UCL, Cris Todd, Alex Galis, Kun Yang, Kerry Jean, Nikolaos Vardalachos
(London, UK); NTUA, Stavros Vrontis, Stavros Xynogalas, Irene Sygkouna, Maria
Schanchara and UPC, Joan Serrat Fernandez, Javier Justo Castaño and Ricardo
Marin Vinuesa (Barcelona, Spain) for their direct and indirect contributions when we
were together discussing ideas and for sharing their always valuable point of view.
J. Martín Serrano Orozco
Contents
Abbreviations
3GPP
ACF
ADB
ADL
AI
AIN
AN
ANDROID
ANEP
ANSI
API
ASP
BSS
CAS  Context-aware service
CCPP  Composite capabilities/preference profiles
CD  Code distributor
CDI  Context distribution interworking
CEC  Code execution controller
CIDS  Context information data system
CIM  Common information model
CLI  Command line interface
CMIP  Common management information protocol
CMIS  Common management information service
COPS  Common open policy service
CORBA  Common object request broker architecture
CPU  Central processing unit
DAML
DAML-L
DAML+OIL
DCOM
DB
DiffServ
DEN
DEN-ng
DMC
DMI
DMTF
DNS
DNSS
DTD
EAV  Entity-attribute value
EE  Execution environment
eTOM  Enhanced telecommunication operations map
EU  European Union
FPX
GDMO
GIS
GPRS
GSM
GUI
HTML
HTTP
IDL
IEC
IEEE
IETF
IFIP
IM
IMO
ISL
IN
IntServ
IP
IRTF
ISO
ISP
IST
IT
ITC
ITU-X
JavaSE
JavaEE
JavaME
JIDM
JMF
JVM
KIF
KQML
LBS  Location-based services
LAN  Local area network
LDAP  Lightweight directory access protocol
MAC
MDA
MIB
MPLS
MOF
NGI
NGN
NGOSS
OCL
ODL
OIL
OKBC
OMG
OS
OSA
OSI
OSM
OSS
OWL
P2P  Peer-to-peer
PAL  Protégé axiom language
PBM  Policy-based management
PBNM  Policy-based network management
PBSM  Policy-based service management
PC  Personal computer
PCC  Policy conflict check
PCIM  Policy core information model
PCM  Policy consumer manager
PC  Policy consumer
PDA  Personal digital assistant
PDP  Policy decision point
PE  Policy editor
PEP  Policy enforcement point
PPC  Policy conflict check
PGES
PM
PPIM
PSTN
QoS  Quality of service
RDF
RDFS
RFC
RMI
RSVP
RM-ODP
RM-OSI
SAM
SDL
SDK
SGML
SID
SLA
SLS
SLO
SMTP
SNMP
SOA
SOAP
SSL
SP
TCP
TEManager
TMF
TM Forum
TMN
TOM
UDDI
UI
UML
UMTS
URI
URL
VAN
VE
VoIP
VM
VPN
W3C
WBEM
WAN
WWW
XDD
XMI
XML
XSD
XSL
XSLT
Chapter 1
1.1 Introduction
Fig. 1.1 Convergence in software and communications towards an ICT integration model
Finally, rooted in ICT, rather than defining terminology and posing semantic interoperability problems as an alternative for network and service management in cloud systems, this chapter discusses service-oriented architectures and the role that management information, contained as service descriptions and described as policy information (data models), can play in providing an extensible, reusable and commonly manageable knowledge layer for better management operations in cloud computing.
The organization of the rest of this chapter is as follows. Section 1.2 describes
the trends in the management of communication services and semantic Web areas.
It describes the convergence between management and middleware, explaining the
need for better services management systems, and introduces ontology engineering
as a means to semantically support service management in autonomic communications, which can be used to integrate cloud computing solutions to create and deploy
new embedded services in virtual environments.
Section 1.3 introduces Internet design trends and how the future of the Internet is being tracked and re-shaped by emerging technologies in telecommunications. This section describes the stages of convergence between software services and telecommunications. The stages described constitute a research activity and can be considered contributions to the state of the art on the convergence of IT and telecommunications. Those stages are depicted and described, providing a detailed understanding of the path taken from the start of the research activity to the delivery of this book.
Section 1.4 introduces concepts related to SOA and the basis of cloud computing. Interoperability is an inherent feature of SOA for addressing heterogeneous, complex and distributed issues, where management operations are also required. Interoperability commonly plays a leading role in SOA design; however, common development practice shows that implemented approaches traditionally espouse strict inter-functionality, and cross-layered interactions have been left aside. This section therefore considers current management systems and the broad diversity of resources, devices, services and systems of converged networks, so as to be applicable to NGNs and pervasive service applications.
Finally, Sect. 1.5 presents the conclusions, summarizing the fundamentals and trends in the semantic Web, telecommunications systems and cloud computing introduced and discussed in this chapter. The aim of this section is to establish a general understanding of the interoperability and interaction between the areas described.
1.2
The process of integrating computing solutions to create and deploy new embedded services in pervasive environments results in the design and development of complex systems supporting large numbers of sensors, devices, systems and networks, each of which can use multiple and heterogeneous technologies. In the area of the Internet and the semantic Web, Web sensors bring the concept of formal descriptions associated with features on the Web to generate intelligent content. Intelligent content is structured information, usually represented in a formal language, that makes the information easily accessible and usable by different applications, whether end-user or infrastructure related. In the area of communications management, the concept of seamless mobility covers scenarios where people configure their personalized services using displays, smart posters and other end-user interaction facilities, as well as their own personal devices, as a result of information exchange.
The inherent necessity to increase the functionality of Web services on the Internet is motivated both by the need to support the requirements of pervasive services and by the need to meet the challenges of self-operation dictated by communications systems [Kephart03]. Web services requirements are headed by the interoperability of data, voice and multimedia over the same (converged) network. This requirement defines a new challenge: the necessity to link data and integrate information. Specific scenarios become evident when management instructions are used to express the state of users and to define service performance, and such instructions are considered pieces of information that can be exchanged between different management systems. Ideally, as a result of this interaction, i.e. semantic Web services, service management can dynamically adapt the services and resources it provides to meet the changing needs of users and/or respond to changing environmental conditions. This adaptation is essential: each day, consumers require more complex services, and the main driver of those services is the Web, which in turn requires more complex support systems that must harmonize multiple technologies in each network and semantic information from the Internet.
A more complete, visionary approach to service management promises new, user-centric applications and services. NGNs and their services require information and communications systems able to support information services and, especially, applications able to process pieces of information. Information plays the important role of enabling a management plane in which data is used in multiple applications without restriction and can adapt according to the services and resources it is designed to offer. The process of linking data and information management services, by using information data models, pursues the common objective of meeting the changing demands of the user, as well as adapting to changing environmental conditions, in the form of interoperable information, thus helping to manage business, system and behavioural complexity [Strassner06a].
Information is dynamic; therefore, the efficient handling and distribution of data and information to support context-awareness is not a trivial problem, and it has been studied since the first time a simple unit of information, known as a context model, was proposed [Chen76]. The multiple advantages derived from modelling context have attracted much attention to the development of context-aware applications, generating diverse approaches and turning ubiquitous computing into what is currently known as pervasive computing. Nevertheless, most research has focused on realizing application-specific services using such information.
1.3
the physical, link, management and service layers, and context-awareness to the properties that make a system aware of its user's state and the state of the network environment. This awareness helps the system to adapt its behaviour (e.g. the services and resources that it offers at any one particular time) according to desired business rules, all the while offering interoperable and scalable personalized services.
Integrated management and cross-layer interactions involve both the transmission capabilities from network devices and the information management services of
the middleware environment (software supporting network and service operations
and tasks). Transmission capabilities influence the performance of the network.
Hence, their impact on the design of new protocols, and the adaptation of existing
protocols to suit these capabilities, need to be studied by modelling and/or simulation of systems. Middleware development impacts the design of interfaces for
achieving the interoperability necessary in services.
There are currently many initiatives breaking with the models of how fixed and mobile communication networks operate and provide services for consumers. Those initiatives are founded on existing bases and often act as standards regarding NGN; however, a multiplicity of standards is not a good sign for providing services. Examples of such initiatives form part of the background of this book and are available in [Brown96b], [Brown97] and [Chen00], as well as in industry applications [Brown96c], [Brumitt00] and [Kanter02], to cite just a few examples. These efforts try to move from a world where networks are designed and optimized around a specific technology, service and/or device towards a world in which the user is at the centre of his or her communications universe. In this new communications world, resources and services become network- and device-agnostic. NGN services are thus no longer device-, network- and/or vendor-centric as current services are; they are now user-centric.
The new user-centric vision, in which users define preferences and personalize services, brings with it inherently complex problems of scalability and management capacity. As a particular requirement for new design approaches, formal and extensible information model(s) must be used according to specific service management requirements, in order to enable scalability in the systems.
Internet services focus on the information management part to contribute to the support of these kinds of applications. Even standards such as SNMP (Simple Network Management Protocol) [IETF-RFC1157] and its updated versions [IETF-RFC2578] have failed to standardize most of the key information required for management interoperability. This is because there is no fundamental interest for device vendors in following a standard describing how their devices must work; likewise, no standard satisfies every vendor's information requirements. Hence, the Internet and its services will be harder to manage than current applications and services, since Internet applications and services are built from, and supported by, more diverse networks and technologies. This causes the control plane to be made up of different types of dissimilar control functions. Therefore, a management plane supported by complex management systems is needed to coordinate the different types of control planes, ensuring that each application, service, network and device plays its role in delivering its functionality and that, all together, they constitute the so-called end-to-end service.
1.4
Cloud computing offers one of the largest and most powerful concepts for service provisioning in the ICT domain: using a shared infrastructure [MicrosoftPress11]. Shared in the sense that multiple service providers interact to support a common service goal, and distributed in the sense of multiple computers on which to run applications providing a service, mostly Web services. Allocating services in the cloud facilitates and simplifies computing tasks and reduces the cost of operations, promoting the pay-as-you-go usage of computing services and infrastructure [Bearden96].
In particular, the cloud computing paradigm relies on the business objectives of secure and reliable outsourcing of operations and services, on the local or remote infrastructure and on the infrastructure type to define its ideal model of best revenue with least technological investment. The three dimensions of cloud computing are shown in Fig. 1.2. Unlike conventional proprietary server solutions, cloud computing facilitates, in terms of time and resources, flexible configuration and elastic on-demand expansion or contraction of resources [Domingues03].
Enterprises interested in utilizing services in the cloud have come to realize that cloud computing features can help them expand their services efficiently and improve the overall performance of their current systems by building an overlay support system that increases service availability, task prioritization and service load distribution, all based on users' particular interests and priorities.
1.5 Conclusions
This chapter discusses alternatives for facilitating the interoperability of information in management systems by semantically enriching the information models to contain additional references, in the form of semantic relationships, to necessary network or device concepts defined in one or more linked data descriptions. Systems using the information contained in the model can then access it and perform the operations and functions for which they were designed.
This chapter discusses augmenting management information, described in information and data models, with ontological data to provide an extensible, reusable, common manageability layer that offers new tools to better manage resources, devices, networks, systems and services. Using a single information model prevents different data models from defining the same concept in conflicting ways.
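To make this idea concrete, the following minimal Python sketch (not from the book; the vendor field names, unit conventions and the shared concept label are all hypothetical) shows how one shared concept can harmonize two vendor data models that describe the same notion in conflicting ways:

```python
# Illustrative sketch: two vendor data models name the same concept
# differently; a small shared information model maps both onto one
# common concept, so management code queries a single vocabulary.
# All names (vendor keys, concept label) are hypothetical.

# Vendor-specific records for the same interface capability.
VENDOR_A_RECORD = {"ifSpeedMbps": 1000, "ifName": "eth0"}
VENDOR_B_RECORD = {"bandwidth_kbit": 1_000_000, "port": "ge-0/0/0"}

# Shared information model: one concept, plus per-vendor mappings
# (field rename and unit normalization) expressed as simple rules.
COMMON_CONCEPT = "net:InterfaceBandwidthMbps"

MAPPINGS = {
    "vendorA": ("ifSpeedMbps", lambda v: v),            # already Mbps
    "vendorB": ("bandwidth_kbit", lambda v: v / 1000),  # kbit -> Mbps
}

def to_common(vendor: str, record: dict) -> dict:
    """Translate a vendor record into the shared vocabulary."""
    field, convert = MAPPINGS[vendor]
    return {COMMON_CONCEPT: convert(record[field])}

# Both vendor views now answer the same management query identically.
a = to_common("vendorA", VENDOR_A_RECORD)
b = to_common("vendorB", VENDOR_B_RECORD)
print(a[COMMON_CONCEPT], b[COMMON_CONCEPT])  # both 1000 Mbps
```

In an ontology-based setting the mappings would be semantic relationships in a formal language rather than Python lambdas, but the harmonizing role of the single shared model is the same.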
This chapter introduced cloud computing, presenting a brief discussion of the concepts, services, limitations, management aspects and some of the misconceptions associated with the cloud computing paradigm. It also addressed advances in the state of the art in the cloud computing area.
Chapter 2
2.1 Introduction
2.1.1
2.2
The set of definitions, as they are used in this book, is presented in this section.
2.2.1
Context Information
2.2.2
Context-Awareness/Ubiquitous Computing
2.2.3
Policy-Based Management
In the field of network management, a policy has been defined as a rule directive that manages and contains the guidelines for how different network and resource elements should behave when certain conditions are met [IETF-RFC3198]. In other words, a policy is a directive specified to manage certain aspects of desirable or needed behaviour resulting from the interactions of users, applications and existing resources or services [Verma00].
In the framework of this book, an initial definition to consider is: "Policy is a set of rules that are used to manage and control the changing and/or maintaining of the state of one or more managed objects" [Strassner04], since this definition is more applicable to managing pervasive services and applications. The inclusion of state is important for pervasive systems, as state is the means by which the management system knows whether its goals have been achieved and whether the changes being made are helping or not.
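A minimal Python sketch of this definition, assuming a hypothetical rule format (a condition and an action over a managed object's state), might look as follows; real policy systems use far richer rule languages and conflict handling:

```python
# Minimal sketch of the policy definition quoted above: a policy is a
# set of rules used to change and/or maintain the state of one or more
# managed objects. Names and the rule format are hypothetical.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ManagedObject:
    name: str
    state: dict = field(default_factory=dict)

@dataclass
class PolicyRule:
    condition: Callable[[ManagedObject], bool]  # when does the rule apply?
    action: Callable[[ManagedObject], None]     # how does it change state?

def apply_policy(rules, obj: ManagedObject) -> None:
    """Evaluate each rule; fire its action when the condition holds."""
    for rule in rules:
        if rule.condition(obj):
            rule.action(obj)

# Example: demote a link's QoS class once its load exceeds 80%.
link = ManagedObject("link-1", {"load": 0.9, "qos": "gold"})
demote_on_overload = PolicyRule(
    condition=lambda o: o.state["load"] > 0.8,
    action=lambda o: o.state.update(qos="best-effort"),
)
apply_policy([demote_on_overload], link)
print(link.state["qos"])  # best-effort
```

Note how the rule reads and writes only the managed object's state, which is exactly why state matters: it is how the management system can check whether firing the rule achieved its goal.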
2.2.4
Pervasive Services
A pervasive service has been defined as a service that takes into account part of the information related to context in order to be offered [Dey00a]. However, as a result of the advent of new devices that are faster, more efficient and possess greater processing capabilities, pervasive services have been conceptualized as more device-oriented, and the applications designed for them consider not only single-user benefits but, more importantly, the interaction between users and systems. This increases the effect that mobility has on a pervasive service. In the framework of this book, a pervasive service is one that makes use of advanced ICT mechanisms to facilitate service management operations and manifests itself as always available.
2.2.5
Ontology Engineering
2.2.6
Autonomic Communications
This section discusses the use of autonomic communications, highlighting the management of information and resources, service re-configurability and deployment,
and the self-management requirements inherent in autonomic systems. The purpose
of autonomic systems is to solve problems of managing complex service and communications systems [Strassner06c], [IBM05].
Autonomic systems are the result of information technologies interacting and cooperating with each other to support service management operations (e.g. creation, authoring, customization, deployment and execution).
The interaction is supported by the extra semantics available from context information using ontologies and other information technologies, such as the policy-based paradigm for managing services and networks [Serrano06b]. The interactions are shown in Fig. 2.1.
Autonomic computing is the next step towards increasing self-management in
systems and coping with the complexity, heterogeneity, dynamicity and adaptability
Fig. 2.1 Combining autonomic computing with communication systems and technologies related
system, as defined by IBM [IBM01b]. These eight features have guided the development
of autonomic systems. The following subsections review these critical autonomic
communications features [IBM01b], [Strassner06c].
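These self-* features are typically realized by a closed control loop. The following Python sketch, loosely in the spirit of IBM's monitor-analyze-plan-execute cycle, illustrates one pass of such a loop; the metric names, threshold and corrective action are hypothetical:

```python
# Sketch of the control loop behind the self-* features reviewed below,
# in the spirit of a monitor-analyze-plan-execute (MAPE) cycle.
# Thresholds, metric names and actions are hypothetical.

def monitor(resource: dict) -> dict:
    """Collect current metrics from the managed resource."""
    return {"cpu": resource["cpu"]}

def analyze(metrics: dict) -> list:
    """Detect symptoms: here, a simple overload threshold."""
    return ["overload"] if metrics["cpu"] > 0.8 else []

def plan(symptoms: list) -> list:
    """Choose corrective actions for each symptom."""
    return ["scale_out"] if "overload" in symptoms else []

def execute(resource: dict, actions: list) -> None:
    """Apply actions; scaling out halves the per-node CPU load here."""
    if "scale_out" in actions:
        resource["nodes"] += 1
        resource["cpu"] /= 2

resource = {"cpu": 0.9, "nodes": 2}
# One pass of the loop: 0.9 CPU load triggers a scale-out to 3 nodes.
execute(resource, plan(analyze(monitor(resource))))
print(resource)  # {'cpu': 0.45, 'nodes': 3}
```

Each self-* capability below can be read as a specialization of this loop: self-optimization tunes resource utilization, self-healing reconfigures around faults, and self-protection reacts to detected intrusions.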
2.2.6.1
Self-Awareness
2.2.6.2
Self-Configuring
2.2.6.3
Self-Optimization
This is the capability of a system to improve its resource utilization and workload according to the requirements of different services and users. Resource utilization and workload depend on time and on the service lifecycle of each service and user. Performance monitoring and resource control and optimization are management operations inherent in this process.
2.2.6.4
Self-Healing
This is the capability of a system to detect and prevent problems or potential problems and then, as a result, to find alternative ways of using resources or reconfiguring the system to avoid system or service interruptions. Local data processing is needed to perform this capability.
2.2.6.5
Self-Protection
This is the capability that defines a system's ability to anticipate, detect and protect against intrusions or attacks from anywhere. It depends on the system's ability to identify failures, and it enables the system to consistently enforce privacy rules and security policies.
2.2.6.6
Context-Awareness
This is the capability of a system to process pieces of information from its environment, including its surroundings and activity, with the objective of reacting to changes in that information in an autonomous manner, making use of elements and processes in its environment.
2.2.6.7 Open
2.2.6.8 Anticipatory
2.2.7
It is anticipated that cloud computing should reduce the cost and time of computing and operations processing [IBM08]. However, while the cost benefit is visible mainly to the end user, from a cloud service provider's perspective cloud computing is more than a simple arrangement of mostly virtual servers: it offers the potential of tailored services and theoretically infinite expansion [Head10]. Such a potentially large number of tailored resources, interacting to facilitate the deployment, adaptation and support of services, represents significant management challenges.
Fig. 2.2 Relationships between autonomic systems features and service operation requirements
In management terms, there is a trend to adopt, refine and test traditional management methods in order to exploit, optimize and automate the management operations of cloud computing infrastructures [Waller11]; however, this is difficult to implement, so management designs based on new methodologies, techniques and paradigms, mainly those related to security, need to be investigated.
The evolution of cloud computing has been benchmarked against a well-known evolution in distributed computing systems [IFIP-MNDSWG]. Figure 2.3 depicts this evolution and shows the cloud computing trends passing from physical to virtual infrastructure usage and from local to remote computing operations. This evolution towards the cloud computing services era is briefly explained in the following sections.
2.2.7.1 Virtualization
Cloud computing is leading the proliferation of new services in the ICT market, where its major success has been to facilitate on-demand service provisioning and to enable the so-called pay-as-you-go provision and usage of resources and server time over the Internet. The user of cloud services does not own and does not maintain the underlying physical hardware (i.e. servers, network devices or software), thereby avoiding additional costs for configuration labour, and pays for permission to use a virtual slice of those shared resources.
Cloud computing as it is known today has its origins in distributed computing systems [Andrews00], [Elmasri00], [Lynch96]. An important feature of distributed systems is the role management systems play in controlling processes remotely and in a coordinated manner. After a decade of management development and evolution in computing systems, grid computing emerged: a combination of remote computer resources executing common goals on remotely located physical infrastructures [Foster99]. In grid computing, no matter where the resources are allocated, tasks are executed in distributed places by grid managers pursuing multiple tasks, even across different administrative domains [Catlett92], [Maozhen05], [Plazczak06], [Buyya09].
In generic terms, grid computing can be seen as a distributed system handling non-interactive workloads that involve large numbers of files. It is important to mention what makes grid computing different from cluster computing: grids are more loosely coupled, definitively heterogeneous and geographically dispersed. Grid computing is traditionally dedicated to a specialized application, whereas a cluster is more commonly used for a variety of different purposes. Likewise, grids are more often built for long-term applications and involve more physical infrastructure resources, together with specialized or ad hoc software libraries known as middleware. Examples of middleware are GridWay [GRIDWAY], TeraGrid [TERAGRID], gLite [GLITE], UNICORE [UNICORE] and Globus Toolkit [GLOBUS].
2.2.7.2 Cloud Service
A main feature of cloud computing systems is that, while labour costs are still present, users pay reduced prices because the infrastructure is offered to and shared by multiple users [Head10], [Urgaonkar10]. Under this model, the processing time cost for each user is reduced, since it is spread across multiple users, in contrast with the traditional pay-for-server-time model [Greenberg09].
It is well accepted by ICT professionals that cloud computing is a revolution in service provisioning and marketing, giving an opportunity to bring technological experience and revenue into a new area by exploiting the Internet infrastructure. However, the concept behind this trend is the full exploitation of service multitenancy, where multiple users make use of the same infrastructure, via intermediate middleware known as virtual infrastructure, to use the same information service [Greenberg09].
services more extensible, useful and often simpler. To make this more attractive, this trend will continue as new mobile and wireless technologies are integrated. It is not difficult to imagine that this mixture of technologies increases the complexity of systems and solutions, with many systems and devices using different mechanisms for generating, sharing and transferring information to each other. Furthermore, existing data models are application-specific and designed independently of each other. In this respect, a method, or a set of methods, is needed to enable the efficient and clear exchange and reuse of information between systems.
This method must be inherently extensible; as systems and networks become
more pervasive, the nature of the services provided should be easy to change according to changing context. Services must also become more flexible in order to respond
to highly dynamic computing environments and become more autonomous to satisfy the growing and changing requirements from users. In other words, the services
must become more adaptive and context-aware.
Simple advances in resources and services have been feasible as a result of the ever-increasing power and growth of associated technologies. However, this drive for more functionality has dramatically increased the complexity of systems, so much so that it is now impossible for a human to visualize, much less manage, all of the different operational scenarios that are possible in today's complex systems. The stovepipe systems that are currently common in OSS (operations support system) and BSS (business support system) designs exemplify this: their desire to incorporate best-of-breed functionality prohibits the sharing and reuse of common data, and points out the inability of current management systems to address the increase in operational, system, and business complexity [Strassner06b].
Operational and system complexity are induced by the exploitation and introduction of technology to build functionality. The price that has been paid is the increased
complexity of system installation, maintenance, (re)configuration and tuning, complicating the administration and usage of the system. Business complexity is also
increasing, with end users wanting more functionality and more simplicity.
This requires an increase in intelligence in the system, which defines the need for pervasive applications to incorporate autonomic characteristics and behaviour [Horn01]. Throughout this book, it is assumed that pervasive computing focuses on building applications using real-time information from different domains; thus the requirement is that business must be able to drive the resources that the network(s) can provide.
On the other hand, the aim of this book is to discuss the role of ontology engineering in the cloud era, which is evident from the differences between pervasive applications on one side and autonomic communications on the other. Autonomic systems have been conceived to manage the increasing complexity of systems [IBM01a] as well as to respond dynamically to changes in the managed environment [Strassner06a]. While autonomic communications has proposed some variants of automatic service generation, pervasive applications require a more detailed model of the system being reconfigured as well as of the surrounding environment. This is because autonomic communications is harmonized by a standard language, in which sharing information is easier than when formal but different languages are used, as occurs in pervasive applications.
2.3.1
The nature of the information requires a format to represent and express the concepts related to the information. Ontology is an explicit and formal way to capture
and integrate information, without ambiguity, so that the information can be reused
and shared to achieve interoperability. Explicit means that the types of concepts
used, and the constraints on their use, are unambiguously defined. Formal indicates
that the specification should be machine readable and computable.
A specific definition of ontology, with respect to system and network management, is a database describing the concepts in a domain, their properties, and how the concepts relate to each other. This is slightly different from its definition in philosophy, in which ontology is a systematic explanation of the existence of a concept. System and network management is less concerned with proving the existence of something than with understanding what that entity is and how it interacts with other entities in the domain [Guarino95].
2.3.2
any device, from personal accessories to everyday things, such as clothing, can have
embedded computers that can create connections with other devices and networks.
The goal of pervasive computing, which combines current advanced electronics
with network technologies, wireless computing, voice recognition, Internet capabilities and artificial intelligence, is to create an environment where the connectivity
of devices and the information that they provide is always available.
However, in this complex environment where systems exchange information transparently using diverse technologies and mechanisms, management becomes increasingly difficult. The increasing multiplicity of computer systems, with the inclusion of mobile computing devices and the combination of different networking technologies such as WLAN, cellular phone networks and mobile ad hoc networks, makes even typical management activities difficult and almost impossible for human beings to perform. Thus, new management techniques and mechanisms must be applied to manage pervasive services.
One of the most important characteristics of knowledge is the ability to share and
reuse it. In this context, ontology is used for making ontological commitments.
An ontological commitment is an agreement to use a vocabulary (i.e. ask queries and
make assertions) in a way that is consistent. In other words, it represents the best
mapping between the terms in ontology and their meanings. Hence, ontologies can
be combined and/or related to each other by defining a set of mappings that define
precisely and unambiguously how concepts in one ontology are related to concepts
in another ontology. Thus, ontologies are a powerful means to provide the semantic
structures necessary to define and represent context information.
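As a toy illustration of such a mapping, consider two application-specific vocabularies related by an explicit concept mapping; all terms below are invented for the example.

```python
# Two hypothetical vocabularies and an explicit mapping between them.
network_ontology = {"Router": {"ifLoad": 0.4}, "Link": {"bandwidth": 100}}
service_ontology = {"NetworkDevice": {}, "Connection": {}}

# The mapping states precisely and unambiguously which concept in one
# ontology corresponds to which concept in the other -- the commitment.
mapping = {"Router": "NetworkDevice", "Link": "Connection"}

def translate(concept):
    """Rewrite a network-vocabulary concept into the service vocabulary."""
    return mapping.get(concept, concept)

print(translate("Router"))  # NetworkDevice
```

Because the mapping is explicit, two systems that each commit to their own vocabulary can still exchange assertions consistently, which is the essence of an ontological commitment.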
Ontologies were created to share and reuse knowledge in an interoperable manner [Guarino95] and to avoid the handicaps found when different systems try to exchange application-specific, heterogeneous representations of knowledge. Such cases are complicated by differences between languages and dialects, missing communication conventions and mismatched information models.
As a data or information model represents the structure and organization of the
data elements, the management activity can obtain benefits from using the data from
such elements in its operations. In principle, a data or information model is specific
to the application(s) for which it has been used or created. Therefore, the conceptualization and the vocabulary of a data model are not intended a priori to be shared
by other applications [DeBruijn03]. Data models, such as databases or XML schemas, typically specify the structure and the integrity of data sets. The semantics of
data models often constitute an informal agreement between the developers and the
users of such data, and that finds its way into applications that use the data model.
By contrast, in the area of knowledge engineering, the semantics of data needs to be
standardized in a formal way in order to exchange the data in an interoperable manner [Genesereth91].
Ontologies not only provide enrichment to the information model, but also
semantic expressiveness, allowing information exchange between management
applications and different management levels. It is this characteristic by which
ontologies are emerging into engineering areas, providing advantages to better specify the behaviour of services and management operations. One potential drawback of
using ontologies for management purposes is that they require significant computational resources. However, these disadvantages are outweighed by the associated benefits. For instance, the time needed to reach agreements is reduced when a system uses ontologies, and even more so when information inside one system needs to be shared with other systems: the time spent seeking and mapping information is reduced.
2.3.3
operations such as customization, definition and deployment and, even more importantly, service maintenance. In the depicted autonomic environment, the possibility of uploading or transferring context information from its various sources to the management system increases the pervasiveness of the applications and the services using such information (shown on the left side of Fig. 2.4).
This level is depicted as a bar in order to indicate that context information must be translated between each level. The translation is simplified and automated when formal languages are used. One of the objectives of this book is to explain the need for making network context information available to different service abstraction layers, such as those shown in Fig. 2.4, for triggering appropriate management operations in autonomic systems, and also to provide guidance in finding the best possible approaches to achieve this goal.
2.3.4
Enterprise Panel, clients themselves are concerned with Security, Availability and Integration today; over the next 3 years, customers will be moving to Cloud Services [DEVCENTRAL].
2.3.4.1
Cloud services must reflect the current total service load to address time-varying demand. Thus, the application must actively participate in the process of producing changes in the underlying virtual or physical infrastructure. In other words, individual services need not adapt to system load themselves; however, they generate information that can be used as input for applications that modify infrastructure performance. Therefore, the application is no longer static; in the cloud, it can evolve over time, migrating or expanding to use more or fewer computing resources. Cloud services are thus agnostic of the underlying infrastructure, where possible, and it is the job of the physical infrastructure to host the services in a self-adaptive manner.
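A minimal sketch of this division of labour, with invented names and thresholds: the service only reports its load, and a separate infrastructure-side function decides how many instances to run.

```python
import math

def adapt_capacity(reported_load, target_per_instance=0.6):
    """Return an instance count so average load approaches the target.

    target_per_instance is an invented threshold: the desired normalized
    load per instance. The service itself never calls this; it only
    produces reported_load.
    """
    return max(1, math.ceil(reported_load / target_per_instance))

# The application only generates load information ...
reported_load = 2.7  # e.g. total normalized demand across the service
# ... and the (virtual or physical) infrastructure reacts to it.
print(adapt_capacity(reported_load))  # 5
```

The design choice mirrored here is that adaptivity lives in the infrastructure, not in each service: the service stays infrastructure-agnostic and merely exposes metrics.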
2.3.4.2
In cloud computing, adding more resources does not necessarily mean that the performance of the application providing the service will increase accordingly. In some situations, the performance may even decrease because of management/signalling overhead or bottlenecks. A challenge is to design properly scalable application architectures, as well as to make performance predictions about data and processing load balancing, in order to satisfy current service load demands and anticipate future ones.
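The non-linear, and eventually negative, effect of adding resources can be illustrated with a simple throughput model. The functional form below follows Gunther's Universal Scalability Law; the coefficient values are invented for the example and would be fitted from measurements in practice.

```python
# Relative throughput of n servers with contention and coherency overhead.
# Coefficients are illustrative assumptions, not measured values.
def throughput(n, contention=0.05, coherency=0.02):
    return n / (1 + contention * (n - 1) + coherency * n * (n - 1))

best = max(range(1, 50), key=throughput)
print(best, round(throughput(best), 2))  # 7 3.27 -- beyond this, more servers hurt
```

With these coefficients, throughput peaks at a finite server count and then declines, which is exactly the "adding resources may decrease performance" situation described above.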
2.3.4.3
While the well-known pay-as-you-go model makes the cloud very attractive, there are many other possible models for application architectures where revenue is the main benefit. A challenge is to design an optimal architecture that fits a cloud environment and at the same time uses all resources optimally with respect to pricing and billing. Note that the pricing model may conflict with the optimally scalable architecture; a carefully designed trade-off between application and infrastructure design priorities is therefore the challenge.
2.3.4.4
Cloud services are commonly referred to as infinitely scalable. However, the technology is far from reaching this ideal. A concept of these dimensions is not generally possible to realize physically by just expanding the server farm; many
2.3.4.5
2.3.4.6 Security
With the participation of multiple users in the cloud environment, the privacy and security of the information being stored, transmitted or operated on is crucial. In public clouds, this issue is still driving research activity in terms of efficiency, data protection algorithms and protocols [Mace11]. This is motivated mainly by legal issues concerning security requirements for keeping data and applications operating within a particular legal domain (e.g. Ireland, the EU, etc.) and not by technological constraints.
2.3.4.7
2.3.4.8
Today the cloud and its underlying infrastructure are managed in a form in which service capabilities are not exposed [Srikanth09]. The cloud computing paradigm establishes that computing operations and service applications can be performed at unlimited scalable levels with on-demand service attention, where (at least from a service end-user perspective) the infrastructure is the least important thing to worry about. However, at the PaaS and IaaS levels, a key feature for optimizing the cloud and at the same time increasing the performance of the cloud infrastructure is that the low-level management of underlying resources and services remains hidden from the application, with the platform and management layer relieving the end user, when needed, of managing them. However, the end user still requires some flexibility in terms of the resources and services they pay for [Sedaghat11]. A key challenge remains in the mapping of high-level user requirements to low-level configuration actions. It is also vitally important that these management actions can be performed in a scalable manner, since the cloud service will not be able to individually configure a large number of shared resources and services for individual users.
2.3.4.9
2.4
The identification of service requirements, which is, among others, one of the main research tasks of this book, is presented in this section. The section is divided into four parts: (1) information requirements, (2) end-user requirements, (3) technology requirements and (4) cloud computing challenges. Rather than presenting an exhaustive list of cloud computing challenges and service requirements, it concentrates on the most important up-to-date challenges and on the tools and technologies this emerging area has developed so far.
The considerations presented in this section are based on exhaustive study, analysis and discussions about the state of the art in IT and cloud systems and on technical experience from research activities. Basic conceptual frameworks applied
2.4.1
One of the most difficult aspects of context information is its dynamism. Changes in the environment must be detected in real time, and the applications must quickly adapt to such changes [Dey01]. The nature of the information is the most important feature of context-aware applications and systems; if pervasive applications can fully exploit the richness of context information, service management will be dramatically simplified [Brown98]. Other important challenges of pervasive services include how to represent and standardize the context information: if the information is correctly represented, and if it can be translated into a standard format, then different applications can all use it. Finally, some types of context also depend on user interfaces (which can make retrieving context information easier) or on the type of technologies used to generate the context information. This sets the stage for discussing the information requirements in this section.
2.4.1.1
Modelling context information is one of the major challenges in service deployment design [Krause05]. Without a well-defined, clear and at the same time flexible information model, applications will not be able to use such information efficiently and take advantage of all the benefits that context information can provide for the service, as well as for the provisioning of that service. The context information model must be rich and flexible enough to accommodate not only the current facets of context information, but also future requirements and/or changes [Dey01]. It has to be based on standards as much as possible and, moreover, the model should scale well with respect to the network and the applications.
This introduces a great challenge: managing this context information in a consistent and coherent manner. Storage and retrieval of this information is also important. A well-established approach is to model context information using an object-oriented, entity-centred representation; this is the framework proposed here to represent context information.
Figure 2.7 shows an entity-centred model representation. The model is based on simple concepts and the relationships, as syntactical descriptions, between those concepts. An entity is composed of a set of intrinsic characteristics or attributes that define the entity itself, plus a set of relationships with other entities that partially describe how it interacts with those entities.
The entities can represent anything that is relevant to the management domain [Chen76]. Moreover, the relations that can exist between the different model entities can represent many different types of influence, dependence, links and so on, depending mainly on the type of entities that these relationships connect. The model's objective is to describe the entity and its interaction with other entities by describing the data and relationships that are used, in as much detail as is required. This abstraction enables the model to be made more comprehensible by different applications. Since this format is machine readable, the information can be processed by applications much more easily than an equivalent free-form textual description.
The entity model can be thought of as a general-purpose way of representing and storing context information throughout the network. Modelling context is a complex task, so using an extensible model provides a template that both standardizes the information and enables its contents to scale for future applications. This entity-centred model can be used to characterize context information.
This model is inherently extensible: new attributes or relationships can be added to entities without having to modify the structure of the model. The model can be reused, as any new entities that are required can simply be added to the model and linked to the existing entities by defining appropriate (new) relationships, without having to modify any of the existing entities. The entity-centred model is thus easily scalable. Throughout this book, multiple references that use the entity-relationship paradigm for modelling context information can be found; the reason is simple: this model results in an easy and extensible management mechanism to represent, handle and operate on information.
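A minimal sketch of such an entity-centred model (the entity names and attributes are invented) shows how new attributes and relationships extend the model without changing its structure:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An entity: intrinsic attributes plus typed relationships to others."""
    name: str
    attributes: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)  # (type, target name)

    def relate(self, relation_type, other):
        """Extend the model with a new relationship; structure unchanged."""
        self.relationships.append((relation_type, other.name))

person = Entity("Person1", {"role": "operator"})
printer = Entity("Printer1", {"status": "idle"})
person.relate("uses", printer)          # new relationship added
person.attributes["location"] = "lab"   # new attribute added
print(person.relationships)  # [('uses', 'Printer1')]
```

Note that both extensions are additions to data, not changes to the `Entity` definition itself, which is what makes the model scalable in the sense described above.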
2.4.1.2
information throughout the network, and for this reason, this concept is used to
model context information for mapping purposes.
An abstract model that can be adopted and, when necessary, extended with new information is the most appealing solution. As a guideline for generating this type of model, it is not necessary to modify the existing entities; all that is required is to create a new entity and establish suitable relationships with existing entities. This ensures the scalability of the information model.
classes of relationships that express the different types of interaction between the different types of entities are defined. Figure 2.9 represents an example set of initial relationships that could be established between the entities of the model. These are high-level types of association between the four main types of entities. Note that certain types of relationship are only meaningful between certain entities; for example, social relations only have meaning between entities of the Person class and not, for example, between places.
2.4.1.3
The classification of context information is not an easy task, due to its extreme heterogeneity. This taxonomy could therefore be defined in multiple ways and from multiple perspectives. To identify the information that could be relevant to pervasive applications, the different management operations required by pervasive services were classified, based on an extensive literature survey conducted to understand the current state of the art in context-aware service provisioning. In this section, a review of different kinds of classifications (most of them orthogonal and compatible) for pervasive services and management operations is presented; a more detailed description can be found in previous work [Serrano05]. As a first approximation, context information can be classified by the following characteristics:
By its persistence:
Permanent (no updating needed): Context which does not evolve in time and remains constant for the length of its existence (e.g. name, ID card), or
Temporary (needs updating): Context information that does not remain constant (e.g. position, health, router interface load).
By its medium:
Physical (measurable): Context information that is tangible, such as geographical position, network resources and temperature (it is likely that this kind of information will be measured by sensors spread throughout the network), or
Immaterial (non-measurable by means of physical magnitudes): Other context information, such as name or hobbies (it is likely that this kind of information will be introduced by the user or customer themselves).
By its relevance to the service:
Necessary: Context information that must be retrieved for a specific service to run
properly, or
Optional: Context information which, although it is not necessary, could be useful
for better service performance or completeness.
By its temporal characteristics:
Static: Context information that does not change very quickly, such as the temperature over a day, or
Dynamic: Context information that changes quickly, such as the position of a person who is driving.
By its temporal situation:
Past: Context information that took place in the past, such as an appointment for
yesterday, which can be thought of as a context history, or
Present: Context information that describes where an entity is at this particular
moment, or
Future: Context information that has been scheduled and stored previously for future actions, such as a meeting that has not yet occurred.
Based on the above context information taxonomy, the state-of-the-art survey, operators' expectations, typical scenario/application descriptions and the fundamental requirements for provisioning context-aware network services, Fig. 2.10 shows a representation of the context information taxonomy described above.
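The taxonomy above can be applied directly by tagging each context item along the five dimensions; the sample items and their tags below are illustrative, not prescriptive.

```python
# Tagging context items along the classification dimensions listed above:
# persistence, medium, relevance, temporal characteristics and situation.
taxonomy_dims = ("persistence", "medium", "relevance", "dynamics", "time")

context_items = {
    "ID card":        ("permanent", "immaterial", "necessary", "static",  "present"),
    "router if-load": ("temporary", "physical",   "optional",  "dynamic", "present"),
    "past meeting":   ("temporary", "immaterial", "optional",  "static",  "past"),
}

def classify(item):
    """Return the item's tags keyed by taxonomy dimension."""
    return dict(zip(taxonomy_dims, context_items[item]))

print(classify("router if-load")["dynamics"])  # dynamic
```

Because the dimensions are orthogonal, each item receives exactly one tag per dimension, which is what makes the classifications compatible with one another.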
2.4.1.4
Context modelling depends on the point of view of the context definition and scope. The model is a first approximation of how to structure, express and organize context information as defined in [Dey00a] and [Dey01]. As mentioned before, the model is based on the concepts of entity and relationship, derived from the definition of entity in [Chen76].
An entity is composed of a set of intrinsic characteristics or attributes that define
the entity itself, plus a set of relationships that are each instances of a standard set
of relationship types. The concept of the local context of an entity can be defined as
the information that characterizes the status of the entity. This status is made up of
its attributes and its relationships. Moreover, the relationships that can exist between
the different entities inside the model, as well as the entities themselves, can represent many different types of influences, dependencies, and so on, depending on the
type of entities that these relationships connect.
With this type of model, one can construct a net of entities and relationships
representing the world surrounding the activity of a context-aware service and thus
the models can influence the development of the activity or service. This enables a
scenario made up of many different types of information, and the influences or
nexus that links one with the others. The local context enables the service to select
and use context information from this scenario that is considered relevant in order
to perform its task and deploy its service.
Figure 2.11 shows a simple, high-level example of modelling context by means of this entity-relationship technique [Serrano05]. In this example, two possible entities inside a hypothetical scenario are defined as a reference (an entity that represents a person and an entity that represents a printer device). The local context of these two entities is defined as the sum of all of their specific attributes, plus the relationships established with other entities inside the model. Hence, this figure synthesizes the concept of the local context of Person 1 or Printer 1, neither of which includes entities such as Person 2 or Person 3. An entity becomes part of the local context of another entity when a relationship is defined, so Person 1 can be part of the local context of Printer 1 (right-hand circle in the figure) and, in the same way, Printer 1 can be part of the local context of Person 1 (left-hand circle in the figure).
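The local context in this example can be computed mechanically from such a model; a sketch with the example entities (attribute values are invented):

```python
# Local context of an entity = its own attributes plus every relationship
# touching it. Attribute values are invented for the example.
entities = {
    "Person1":  {"name": "Alice", "position": "office"},
    "Person2":  {"name": "Bob"},
    "Printer1": {"status": "idle", "location": "hallway"},
}
relationships = [("Person1", "uses", "Printer1")]  # Person2 is unrelated

def local_context(entity):
    """Return the attributes of the entity and the relationships it is in."""
    rels = [r for r in relationships if entity in (r[0], r[2])]
    return {"attributes": entities[entity], "relationships": rels}

# Person1 enters Printer1's local context via the "uses" relationship;
# Person2, having no relationship, does not.
print(local_context("Printer1")["relationships"])
```

This reproduces the figure's point: membership in a local context is created purely by defining a relationship, not by any change to the entities themselves.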
2.4.1.5
The tools that could be used to represent and implement the model, and the way to integrate this information model into the general context-aware system architecture, need to be identified and tested as potential tools for representing context information. XML is a flexible and platform-independent tool that can be used at different stages of information representation, which makes implementation consistent and much easier. The use of XML is increasing every day; however, it is by definition generic. Therefore, new languages based on XML have been developed that add application-specific features as part of the language definition. For example, to customize services, languages must have concepts that are related to the operational mechanisms of that service. It is in this context that XML has been successfully proposed and used to represent context information models. XML has the following advantages:
XML is a mark-up language for documents containing structured information.
The use of XSD (XML schema definition) facilitates the validation of the documents created; the more basic, but still functional, DTD (document type definition) is also an alternative for validation. This validation can be implemented in a Java program, which can be the same one used for creating and maintaining these XML schemas and/or documents.
XQuery can be used as a powerful search engine to find specific context information inside the XML documents that contain all the information related to a specific entity. These queries can select whole documents or sub-trees that match conditions defined on document content and structure.
An example of using XML to describe context information is shown in Fig. 2.12, which represents the context information model (CTXIM). Person, Place and Task entities are contained, with specific descriptions as part of each entity.
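A minimal CTXIM-style document in this spirit, queried here with Python's standard library in place of XQuery; all element and attribute names are invented for the example, not taken from the actual CTXIM schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical CTXIM-style document with Person, Place and Task entities.
doc = """
<ctxim>
  <entity type="Person" id="Person1"><attr name="role">operator</attr></entity>
  <entity type="Place"  id="Lab1"><attr name="building">B2</attr></entity>
  <entity type="Task"   id="Print"><attr name="state">queued</attr></entity>
</ctxim>
"""

root = ET.fromstring(doc)
# Select the sub-tree for a specific entity, as an XQuery expression would.
person = root.find(".//entity[@type='Person']")
print(person.get("id"), person.find("attr").text)  # Person1 operator
```

The same structural query idea carries over to XQuery in a real deployment: conditions on element content and structure select whole sub-trees describing one entity.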
2.4.1.6
Context-awareness requires the following question to be answered: how can context information be gathered and shared among the applications that use it? The answer requires an extensible and expressive information model. The format containing the information is part of the modelling process and of the methodology used to create the model. The most important challenge is to define the structure of the context information so as to collect, gather and store information. Context information can be used not just to model information in services, but also to manage the services provided. The model must be rich in semantic expressiveness and flexible enough to accommodate variations in the current status of the object being managed [McCarthy93]. The model should scale well with the network or the application domain.
A model considered a reference in this book, concerning context modelling in pervasive computing environments, can be found and explained in [McCarthy97]. In this book, the modelling result of that previous analysis and modelling activity for formalizing information is referred to as an excellent example. In other words, and aligned with the objective of demonstrative facts pursued in this book, if the information models are expressive enough, pervasive systems can use that information to provide better management service operations. In order to formalize the information contained in the information model, ontologies appear to be a suitable alternative. However, this does not mean that other approaches are unsuitable for different applications. In this section, the idea to be discussed is that ontologies, in the field of management services, appear as a suitable alternative to solve the problem of formal modelling, identified as providing the required semantics to augment the data contained in the information model in order to support service management operations. Ontology engineering can act as the mechanism for formalizing the information and provide it with the required semantic and format features, which is the main scope of this section.
2.4.1.7
Hierarchical Model
This organizes the data in a tree structure. This structure implies that a record can
have repeating information, as each parent can have multiple children, but each child
can only have one parent, and it collects all the instances of a specific record together
as a record type. It is a clear example of dependency in higher hierarchical levels.
Network Model
This model organizes data as a lattice, where each element can have multiple parent
and child records. This structure enables a more natural model for certain types of
relationships, such as many-to-many relationships.
Relational Model
This model organizes data as a collection of predicates. The content is described by
a set of relations, one per predicate variable, and forms a logic model such that all
predicates are satisfied. The main difference between this organization and the
above two is that it provides a declarative interface for querying the contents of the
database.
Object/Relational Model
This model adds object-oriented concepts, such as classes and inheritance, to the relational model, and both the content and the query language directly support these object-oriented features. This model offers new object storage capabilities to the relational systems at the core of modern information systems, which integrate management of traditional fielded data, complex objects such as time-series and geospatial data, and diverse binary media such as audio, video, images and applets. It also enables custom datatypes and methods to be defined.
Object-Oriented Model
This model adds database functionality to object programming languages.
Information is represented as objects, and the model extends object-oriented programming languages with persistent data, concurrency control and other database features. A major
benefit of this approach is the unification of the application and database development into a seamless language environment. This model is beneficial when objects
are used to represent complex business objects that must be processed as atomic
objects. As an example, object-oriented models extend the semantics of the C++,
Smalltalk and Java object programming languages to provide full-featured database
programming capability, while retaining native language compatibility.
Semi-structured Model
In this data model, the information that is normally associated with a schema is
contained within the data, which is sometimes called self-describing. In such a
system, there is no clear separation between the data and the schema, and the degree
to which it is structured depends on the application. In some forms of semi-structured models there is no separate schema; the schema itself can be derived if the data model is previously defined. In other models, the schema exists but only places loose constraints on the data. Semi-structured data is naturally modelled in terms of graphs.
Associative Model
The associative model uses two types of objects, entities and associations. Entities
are things that have discrete, independent existence. Associations are things whose
existence depends on one or more other things, such that if any of those things
ceases to exist, then the thing itself ceases to exist or becomes meaningless.
Entity-Attribute-Value Model
The best way to understand the rationale of entity-attribute-value (EAV) design is to
understand row modelling, of which EAV is a generalized form. Consider a supermarket database that must manage thousands of products and brands, many of which
have a transitory existence. Here, it is intuitively obvious that product names should
not be hard-coded as names of columns in tables. Instead, one stores product descriptions in a products table: purchases/sales of individual items are recorded in other
tables as separate rows with a product ID referencing this table. Conceptually, an
EAV design involves a single table with three columns, an entity, an attribute, and a
value for the attribute. In EAV design, one row stores a single fact. In a conventional
table that has one column per attribute, by contrast, one row stores a set of facts.
EAV design is appropriate when the number of parameters that potentially apply to
an entity is vastly more than those that actually apply to an individual entity.
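The row-per-fact layout described above can be sketched in Java; the class and attribute names here are illustrative, not part of the supermarket example's actual schema:

```java
import java.util.*;

// Minimal EAV store: one row per fact (entity, attribute, value),
// in contrast to a conventional table with one column per attribute.
public class EavStore {
    private final List<String[]> rows = new ArrayList<>();

    public void put(String entity, String attribute, String value) {
        rows.add(new String[] { entity, attribute, value });
    }

    // Reassemble the sparse set of attributes that apply to one entity.
    public Map<String, String> attributesOf(String entity) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String[] row : rows) {
            if (row[0].equals(entity)) result.put(row[1], row[2]);
        }
        return result;
    }

    public static void main(String[] args) {
        EavStore store = new EavStore();
        // Only the attributes that actually apply to each product are stored.
        store.put("milk-1L", "brand", "DairyCo");
        store.put("milk-1L", "fatContent", "3.5%");
        store.put("battery-AA", "brand", "PowerCell");
        store.put("battery-AA", "voltage", "1.5V");
        System.out.println(store.attributesOf("milk-1L"));
        // {brand=DairyCo, fatContent=3.5%}
    }
}
```

Note how each entity carries a different attribute set without any schema change, which is exactly the situation EAV is suited to.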
In this section, different types of information databases have been studied. The context model is an alternative that seeks to combine the best features of different models; however, because of this combination, the context model is a challenge to implement. Thus, the possibility of using other data models remains open. An object-oriented model, as shown in Fig. 2.13, has been implemented in Java; an extended explanation can be found in [Serrano06d]. The fundamental unit of information storage of the context model implemented in this book is, as described above, a Class, which contains Methods and describes Objects.
A consolidated context data model must combine features of some of the above models. It can be considered a collection of object-oriented, network and semi-structured models. To create a more flexible model, the fundamental unit of information storage of the context model is a Class. A Class contains Methods and describes an Object. The Object contains Fields and Properties.
A Field may be composite; in this case, the Field contains Sub-Fields. A Property
is a set of Fields that belongs to a particular Object, similar to an EAV database.
In other words, Fields are a permanent part of the Object, and Properties define
its variable part. The header of the Class contains the definition of the internal
structure of the Object, which includes the description of each Field, such as
their type, length, attributes and name. The context data model has a set of predefined types, but can also support user-defined types. The pre-defined types
include not only character strings, texts and digits but also pointers (references)
and aggregate types (structures).
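The Class/Object/Field/Property structure described above can be sketched as follows; all class and field names are illustrative, not the book's actual Java implementation:

```java
import java.util.*;

// Sketch of the consolidated context data model: a Field may be composite
// (holding sub-fields), and a Property groups the variable, EAV-like part
// of an Object, while Fields form its permanent part.
class Field {
    final String name, type;
    final List<Field> subFields = new ArrayList<>(); // composite fields
    Field(String name, String type) { this.name = name; this.type = type; }
}

class Property {                                     // variable part of the Object
    final Map<String, String> values = new LinkedHashMap<>();
}

class ContextObject {
    final List<Field> fields = new ArrayList<>();            // permanent part
    final Map<String, Property> properties = new LinkedHashMap<>();
}

class ContextClass {             // header: defines the Object's internal structure
    final String name;
    final ContextObject describes = new ContextObject();
    ContextClass(String name) { this.name = name; }
    int fieldCount() { return describes.fields.size(); }
}

public class ContextModelDemo {
    public static void main(String[] args) {
        ContextClass person = new ContextClass("Person");
        Field address = new Field("address", "struct");      // composite field
        address.subFields.add(new Field("city", "string"));
        person.describes.fields.add(new Field("name", "string"));
        person.describes.fields.add(address);
        Property location = new Property();                  // variable part
        location.values.put("room", "B-214");
        person.describes.properties.put("location", location);
        System.out.println(person.fieldCount()); // 2
    }
}
```

The split between `fields` and `properties` mirrors the text: the header fixes the permanent structure, while properties can vary per Object, as in an EAV database.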
2.4.1.8
In this book, the use of policies is fundamental to understanding the interaction and convergence between different information in different domains. The form of a policy is expressed as follows:
IF < conditions or events > THEN < x-actions > ELSE < y-actions>
Policies are used to manage the service logic at a higher level. A policy is used to define a choice in the behaviour of a pervasive service, and a pervasive service itself comprises a policy-based management system (PBMS). The adaptability of the PBM paradigm comes from its awareness of the operation environment, that is, the context in which the management system and its components are being used.
A requirement for the management of information is the capability of the system to offer service adaptation. When service adaptation occurs, there is a corresponding change in the context. Thus a pervasive service management information model, which must in reality be an open, vendor-neutral approach to the challenge of technology change, takes the form of a policy information model that also requires a language.
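The IF/THEN/ELSE policy form above can be sketched as a small evaluator over a context snapshot; the condition, action names and context keys are invented examples, not part of any PBMS standard:

```java
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of IF <conditions> THEN <x-actions> ELSE <y-actions>,
// evaluated against a key-value snapshot of the current context.
public class PolicyDemo {
    record Policy(Predicate<Map<String, Object>> condition,
                  String thenAction, String elseAction) {
        String evaluate(Map<String, Object> context) {
            return condition.test(context) ? thenAction : elseAction;
        }
    }

    public static void main(String[] args) {
        // IF bandwidth < 2 Mbps THEN degrade video ELSE keep HD stream
        Policy p = new Policy(
            ctx -> (Integer) ctx.get("bandwidthKbps") < 2000,
            "switch-to-low-quality",
            "keep-hd-stream");
        System.out.println(p.evaluate(Map.of("bandwidthKbps", 900)));
        // switch-to-low-quality
    }
}
```

A real PBMS would of course attach events, conflict resolution and action execution to this skeleton; the sketch only shows how a context change selects between the two action branches.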
An interesting and extended proposal is to implement this using the model-driven architecture (MDA) initiative of the Object Management Group (OMG) [OMG-MDA]. As a requirement, a policy-based pervasive service specification language will also need to be used. The PBM methodology is to be implemented and
demonstrated as a service composition platform and a service execution environment,
both based on open source software and open standards. A set of service-centric
network application programming interfaces (APIs) are developed by reusing existing user interfaces (UIs) to hide the heterogeneity of multiple types of end-user
access networks and the core networks.
2.4.1.9
Implementation Tools
As far as PBM is concerned, this book does not try to develop new management techniques; rather, the research work applies existing PBM techniques to the network aspect of managing pervasive services in cloud environments. In this respect, it is wise to use the background of other standards-based information models and extend
them to satisfy the service requirement needs (e.g. the Policy Core Information Model (PCIM) [IETF-RFC3060] and [IETF-RFC3460] by the IETF, the Common Information Model (CIM) by the DMTF [DMTF-DSP0005], and the Parlay Policy Information Management (PPIM) APIs by the Parlay Group [Hull04]), among others.
In this book, specific emphasis is given to policy-based descriptions for managing pervasive services, and a practically functioning pervasive service information model using ontologies to do so is pursued. The fact that there is no reference implementation for the IETF PCIM [IETF-RFC3060], [IETF-RFC3460] or the DMTF CIM [DMTF-CIM] makes PBM hard to implement using these specifications. Hence, a step further in this section is to support the idea of enhancing and controlling the full service life cycle by means of policies. In addition, this approach takes into account the variation in context information, and relates those variations to changes in service operation and performance, inspired by autonomic management principles and their application in cloud environments.
2.4.2
This section describes the envisaged requirements for services from each of the parties involved in the service value chain. Benefits can be obtained for each of these stakeholders with the introduction of context information models that represent the effective functionality of pervasive services in a vendor- and technology-neutral format, as summarized. This section refers to work undertaken by the author as collaborative research activities in the EU IST-CONTEXT project and the EMANICS Research Network; the public documentation that defines the state of the art of context-aware services in the pervasive computing knowledge area can be found on the Web sites [IST-CONTEXT] and [IST-EMANICS]. Only a summary of the main user requirements is presented here, since they are necessary for any pervasive service being managed by policy-based management systems.
2.4.2.1
End-User Requirements
2.4.2.2
2.4.2.3
2.4.2.4
2.4.3
The tendency in modern ICT systems is for all control and management operations to be automated. This can be effectively and efficiently driven by using context information to adapt, modify and change the service operation and management offered by organizations. The context-awareness property necessarily implies the definition of a variety of information required to operate services in next-generation networks.
This section relates and describes the technological requirements to support such ICT context-aware features and identifies the properties of pervasive applications for supporting service management operations in cloud environments. The pervasive properties describe how management systems use context information for cross-layer environments (interoperability) in NGNs to facilitate information interoperability between different service stakeholders (federation). Analyzing the use of information according to such technological requirements is a task this book undertakes in order to serve as an example and reference for satisfying one or more cloud service management aspects.
The priority of this section is the support of multiple and diverse services running
in NGNs. Such scenario(s) are typified by complex and distributed applications,
which in terms of implementation and resource deployment, represent a high management cost due to the technology-specific dependencies that are used by each
different application. This in turn makes integration very difficult and complex.
Before providing a definition of technology requirements, Fig. 2.12 shows the information requirements for supporting services. This shows the perspective of the technology requirements, their relationships and the level of influence between them.
2.4.3.1
Scalability
Information systems solutions need to scale with the number of users, services and
complexity in acquiring and filtering the context information used by the pervasive
application. Information needs to be transparently distributed among and along the
applications within heterogeneous service environments. The scalability requirements are:
High levels of scalability due to the inherent necessity to represent diverse types
of information within multiple services.
The necessity to extend the definition and representation of information to other
platforms supporting different services.
Solutions that scale according to the number of users and services in acquiring
and filtering information.
2.4.3.2
Extensibility
Extensibility determines the possibility that an information model could be applicable, with the appropriate adaptation, to future or different coexisting applications
for pervasive services. The extensibility requirements are:
Extensibility is required to represent context information and context-aware services using different technologies, platforms and languages.
Extensibility means that the fundamental structure of the model can accommodate new data and relationships without requiring extensive redesign.
2.4.3.3
Automation
Automating services aims to provide tools and mechanisms for automatic service creation, operation and management. This enables the system to reduce or suppress the need for any human intervention. Information models, supported by data-updating mechanisms, can significantly contribute to reaching this goal. This implies:
Mechanisms that use information for automatic creation, operation and management of information and services must be supported by the information model.
Reduce as much as possible the need for any human intervention to manage services.
Software solutions for supporting self-* operations in terms of diverse user-centred
services.
2.4.3.4
Flexibility
Flexibility is the characteristic or capability of the systems for adapting the service
creation, deployment, customization and operation to market and user needs. The
information model can contribute to this service characteristic by providing the necessary abstractions of the underlying network services and network infrastructure to
the entities and stakeholders involved in the service lifecycle. The requirements of
this feature are:
The capability for adapting the service lifecycle management operations to market
and user needs must be able to be represented in the information model.
The necessary abstraction mechanisms to represent the underlying network
resources and services must be able to be represented in the information model.
Middleware solutions that enable business goals to determine information-specific
network services and resources must be able to easily use data in the information
model.
2.4.3.5
Integration
2.4.3.6
Independence
It is well known that vendor independence is a feature desired for all systems.
Hence, the requirements for this feature are:
Service providers need independence from equipment manufacturers, which
promotes the interoperability of the information that they supply for supporting
services.
The information model must be able to model functionality from different vendors that operate on different platforms and use different technologies.
2.4.3.7
Management Cost
Telecommunication services are only viable if they do not incur excessive recurring capital and operational expenditures. The additional traffic and data heterogeneity caused by application-specific information has, up to now, increased operational expenditure, since it requires custom middleware or mediation software to be built to integrate and harmonize disparate information from different applications. Hence, the requirements are:
Reduce management cost by lowering the number of skilled resources for managing heterogeneous pieces of information.
Reduction of side effects when implementing information models that use and
integrate heterogeneous pieces of information.
2.4.4
As premises of cloud computing, the reduction of payment costs for services and the revenue benefit over investment in proprietary infrastructure are the key factors behind the belief that the cloud is the solution to many of the under-usage and technology-waste problems the IT sector is facing. The particular interest in cloud computing services, and in the virtual infrastructure supporting such services, is also a result of the business model that cloud computing offers, where bigger revenue and more efficient exploitation are envisaged [IBM08]. Likewise, there exists a particular interest from the industry sector, where most of the implementations are taking place, in developing more management tools and solutions in the cloud; in this way the pioneers are offered full control of the services and the cloud infrastructures. On the other hand, far from revenue benefits, academic communities point towards finding solutions that are more powerful in terms of computing processing and, at the same time, more efficient, reducing the enormous headaches that arise when different technologies need to interact to exchange even a minimal piece of information. Thus, general problems of manageability and control of the cloud, among many other research challenges, are being investigated.
Cloud computing management is a complex task [Rochwerger09], for example
clouds must support appropriate levels of tailored service performance to large
groups of diverse users. A sector of services, named private clouds, coexists with and is provisioned through a bigger public cloud, where the services associated with those private clouds are accessed through (virtualized) wide area networks. In this scenario, management systems are essential for the provisioning and access of resources, and such systems must be able to address the fundamental scalability and reliability issues that are inherent when integrating diverse cloud computing systems.
2.4.4.1
However, as of the publication of this book, the DR monitoring tool lacks scalability and fails to address security concerns. From the industrial viewpoint, HP OpenView [HPOPENVIEW] and IBM Tivoli [IBMTIVOLISIC] have been developed to ease system monitoring and primarily target the enterprise application environment. Although these commercial products are relatively limited in portability across different operating systems, they are usually highly integrated with vendor-specific applications (subject to new versions released after the edition of this book).
In the cloud environment, heterogeneity is one of the fundamental requirements for monitoring tools. Therefore, the industrial tools are unsuitable for more general-purpose monitoring of the cloud. Google App Engine [GOOGLEAPP] and Hyperic [HYPERIC] both provide monitoring tools for system status such as CPU, memory and process resource allocations. Such system usage data can be useful for general-purpose cloud monitoring, but it may not be sufficient for an application-level manager to make appropriate decisions.
To address the limitations of existing tools, a new monitoring tool named the run-time correlation engine (RTCE) [Holub09] has been developed at the UCD Performance Engineering Laboratory jointly with IBM Software Verification Test teams. RTCE takes into account the important concerns of heterogeneity, high performance, low overhead and the need for reasonable scalability. In the near future, RTCE can be substantially improved with greater scalability based on several proposed architectures [Wang10]. RTCE will primarily focus on stream data correlation and provide flexible data results for other system components to consume. The output data produced should be generic and scalable, so that any type of component can easily adopt the content of the data. In the case of changing signatures on the output, the existing applications should still be able to consume the new output data without any code changes.
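The design principle that consumers should tolerate changes in the output signature can be illustrated with a hypothetical consumer of generic key-value monitoring events; this is a sketch of the idea, not RTCE code:

```java
import java.util.Map;

// Hypothetical consumer of a generic, self-describing monitoring event.
// Because it reads only the keys it knows, new fields added to the
// producer's output signature require no code changes here.
public class MetricConsumer {
    static String summarize(Map<String, String> event) {
        return event.getOrDefault("host", "unknown")
             + " cpu=" + event.getOrDefault("cpuPct", "n/a");
    }

    public static void main(String[] args) {
        Map<String, String> v1 = Map.of("host", "node-1", "cpuPct", "42");
        // A later producer version adds a field; the consumer is unaffected.
        Map<String, String> v2 = Map.of("host", "node-1", "cpuPct", "42",
                                        "gpuPct", "80");
        System.out.println(summarize(v1)); // node-1 cpu=42
        System.out.println(summarize(v2)); // node-1 cpu=42
    }
}
```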
2.4.4.2
The need for end users to become involved in cloud infrastructure, service monitoring and management is driven by two key features of cloud computing. As the
number of virtual resource instances and individual features of resources and services continue to grow to provide flexibility, it will become increasingly unrealistic
to expect the cloud provider to manage resources and services for end users.
In addition, as users will pay for resources in a very fine-grained manner, end
users may want to monitor their own resources so they can decide when to request
more/less resources and ensure they are getting an optimal value for the resources
they pay for. A model where user management and monitoring preferences and
requirements are mapped to the underlying resources and services in a constrained
yet extensible manner, thereby giving users to control over the resources they used
to pay.
Harmonization of monitoring data requires mechanisms for mapping of the large
volume of low-level data produced by resource-level and service-level monitoring
2.4.4.3
Federation-Cloud Interconnection
interworking/sharing of resources and services for users, and how to support operators to securely monitor, manage and share each other's heterogeneous resources to achieve this.
Federation represents an approach for a solution supporting the increasingly important requirement to orchestrate multiple vendor, operator and end-user interactions [Bakker99], [Serrano10], and the applicability of this concept in the cloud computing area is clear.
Cloud computing offers an end-user perspective where the use of one or another infrastructure is transparent; in the best case the infrastructure is ignored by the cloud user [Allee03]. However, from the cloud operator perspective, there are
heterogeneous shared network devices as part of diverse infrastructures that must be
self-coordinated for offering distributed management or alternatively centrally
managed in order to provide the services for which they have been configured.
Furthermore, there must be support to facilitate composition of new services which
requires a total overview of available resources [Kobielus06].
In such a federated system, the number of conflicts or problems that may arise when using diverse information referring to the same service or individuals, with the objective of providing an end-to-end service across federated resources, must be analyzed by methodologies that can detect conflicts. In this sense, semantic annotation and semantic interoperability tools appear as a tentative solution approach that is currently being investigated.
2.4.4.4
The need to control multiple computers running applications and likewise the
interaction of multiple service providers supporting a common service exacerbates
the challenge of finding management alternatives for orchestrating between the
different cloud-based systems and services. Even though having full control of the
management operations when a service is being executed is necessary, distributing
this decision control is still an open issue. In cloud computing, a management system supporting such complex management operations must address the difficult problem of coordinating the management operations of multiple running applications, while prioritizing tasks for service interoperability between different cloud systems.
An emerging alternative to solve cloud computing decision control, from a
management perspective, is the use of formal languages as a tool for information
exchange between the diverse data and information systems participating in cloud
service provisioning. These formal languages rely on an inference plane
[Strassner07b], [Serrano09]. By using semantically enriched monitoring information management, decision support is enabled and facilitated.
As a result of using semantics, a more complete control of service management
operations can be offered, hence a more integrated management, which responds
to business objectives. This semantically enabled decision support gives better
control in the management of resources, devices, networks, systems and services,
thereby promoting the management of the cloud with formal information models
[Blumenthal01].
This section addressed the need to manage the cloud when policies are being used as the mechanism to represent and contain description logic (DL) expressing operational rules. For example, the SWRL language [Bijan06], [Mei06] can be used to formalize a policy language to build up a collection of model representations with the necessary semantic richness and formalisms to represent and integrate the heterogeneous information present in cloud management operations. This approach
relies on the fact that high level infrastructure representations do not use resources
when they are not being required to support or deploy services [Neiger06],
[VMWARE]. Thus, with high-level instructions, the cloud infrastructure can be
managed in a more dynamic and optimal way.
2.4.4.5
Several cloud usage patterns can be identified based on bandwidth, storage, and
server instances over time [Barr10]. Constant usage over time is typical for internal
applications with small variations in usage. Cyclic internal loads are typical for
batch and data processing of internal data. Highly predictable cyclic external loads
are characteristic of Web servers such as news and sports sites, whereas spiked external loads
are seen on Web pages with suddenly popular content (cf. slashdotted). Spiked
internal loads are characteristic of internal one-time data processing and analysis,
while steady growth over time is seen on startup Web pages.
The cloud paradigm enables applications to scale up and scale down on demand, and to more easily adapt to the usage patterns outlined above. Depending on the number or type of requests, an application can change its configuration to satisfy given service criteria and at the same time optimize resource utilization and reduce costs. Similarly, clients, which can run on a cloud as well, can re-configure themselves based on application availability and the service levels required. On-demand scalability, and scalability prediction of a service by computing a performance model of the architecture as a composition of performance models of individual components, are also features to be considered when designing a cloud solution. Exact performance modelling of components is very difficult to achieve since it depends on various variables such as available memory, CPU, system bus speed and caches.
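As a rough illustration of on-demand scaling driven by request load, the following sketch maps an observed request rate to an instance count; the per-instance capacity and bounds are arbitrary assumptions, not figures from any cloud provider:

```java
// Illustrative sketch of an on-demand scaling decision driven by request
// load, as in the usage patterns above. Thresholds are example values.
public class AutoScaler {
    static final int REQS_PER_INSTANCE = 500;   // assumed capacity per instance
    static final int MIN = 1, MAX = 20;         // assumed fleet bounds

    // Desired instance count for the observed request rate, clamped to bounds.
    static int desiredInstances(int requestsPerSecond) {
        int needed = (requestsPerSecond + REQS_PER_INSTANCE - 1) / REQS_PER_INSTANCE;
        return Math.max(MIN, Math.min(MAX, needed));
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(120));   // steady internal load -> 1
        System.out.println(desiredInstances(4200));  // spiked external load -> 9
    }
}
```

A constant internal workload keeps the fleet at its floor, while a spiked external load (the "slashdotted" case above) drives it toward the ceiling; a real controller would also smooth the input to avoid oscillation.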
2.5
Conclusions
In this chapter:
The requirements of information and pervasive service, based on both NGN
demands on context information and the demands of context-awareness according
to a set of pervasive service requirements, have been studied and discussed.
Chapter 3
3.1
Introduction
This chapter reviews basic concepts of ontology engineering, with the objective of providing a better understanding of building semantic frameworks using ontologies in the area of telecommunications, and particularly for managing communication services. This background enables the reader to better understand how ontologies and ontology engineering can be applied in communications, since both have only recently been applied to this field. Ontologies are used to represent knowledge, and ontology engineering comprises a set of formal mechanisms that manage and manipulate knowledge about the subject domain in a formal way, endowing the information with semantic richness.
Ontology engineering has been proposed as a mechanism to formalize knowledge. This chapter presents those basic concepts referenced here as inherent features for supporting network and service management. It likewise defines the basic lexical conventions for the process of creating ontologies, helping the reader to understand how ontology engineering can augment the knowledge and support the decision making of current management systems.
The organization of this chapter is as follows. Section 3.2 introduces the basic elements for building up ontologies and how ontologies are structured. It provides ontology engineering definitions for concepts and relationships, and describes the representation tools and functions. Instances, as well as the axioms used to build ontologies within this chapter, are also explained.
Section 3.3 provides a general understanding of pervasive services and semantics, in the form of introductory definitions for related concepts, and describes their usage implications in different areas to situate this concept in the state of the art.
Section 3.4 reviews various semantic operations that can be performed with ontologies. The basic ontology operations, as tools to support management systems, can execute semantic control of context information. This section briefly describes those operations that management systems can perform using context information when
J.M. Serrano Orozco, Applied Ontology Engineering in Cloud Services, Networks
and Management Systems, DOI 10.1007/978-1-4614-2236-5_3,
Springer Science+Business Media, LLC 2012
ontologies are used for the formal representation and modelling of knowledge in
cloud systems.
Section 3.5 presents a review of two different types of functions that computing
systems can perform with ontologies. The first group consists of ontology mapping, merging and reasoning tools, for defining and even combining multiple
ontologies. The second group uses ontologies as development tools, for creating,
editing and managing concepts that can be queried using one or more inference
engines.
3.2
The research presented in this section has been driven by seeking how to satisfy the requirements dictated by pervasive services regarding information interoperability. Strang and Linnhoff-Popien [Strang04] have described and presented a classification of approaches according to how much they satisfy or contribute to semantic enrichment requirements. This classification is done in terms of information modelling capabilities, particularly where context information models are used for supporting service interoperability. The idea that ontologies offer more capabilities for satisfying different information requirements in terms of semantic richness, as discussed in [Strang03b], is supported here. While ontologies do have some shortcomings, mainly the consumption of more computing resources, in the final analysis the advantages overcome the drawbacks and restrictions.
The section about ontology structures acts as an introduction to using ontologies as the formal mechanism for integrating and increasing the semantics of facts represented in information models; in this book, such facts refer particularly to the management of services and networks.
A state of the art on ontology categorization, as well as a hierarchical description of how ontologies can be applied to integrate concepts or information models, was formerly presented in [López03a] and [López03c]. In this chapter, the research efforts are directed towards the correct application of ontologies for representing knowledge in pervasive applications for service and network management, as well as for the control of service life cycle management operations and the functional architecture supporting such applications. However, to date, no other approaches have deeply analyzed the meaning and significance of management data using ontology-based context information, or their implications when ontologies are used in other engineering disciplines.
As defined in [Gruber93b] and [Guarino95], ontologies have been used to represent information that needs to be converted into knowledge. In this cognitive conversion process, the information is formalized as a set of components that represent knowledge about the subject domain in a general and non-specific manner [Gómez99]. This representation then leads to a formal coding approach. In
this section, these components are described. Ontologies can be used to support
service and network management goals, as well as management operations and processes.
Ontologies can be extended to provide application- and domain-specific ontologies
that augment information and data models to meet the needs of next-generation
network (NGN) and service management.
Ontologies provide the necessary formal features to define a syntax that captures and translates data into ontological concepts; otherwise, syntax can only be weakly matched (i.e. using patterns). In the domain of communication systems, the syntax must be formalized in a functional way with the objective of supporting NGN management operations [both operational support systems (OSS) and their associated business support systems (BSS)] and, as a consequence, of supporting the creation, deployment and management of new pervasive services.
Ontologies are formally extensible. Systems can take advantage of this extensibility and semantic interoperability to support the management system. In addition, when context information is integrated into pervasive services, context will help specify management services more completely, as well as formally adjust and manage resources and services in response to changes in autonomic environments.
This chapter pays special attention to pervasive service applications in the framework of autonomic communications. The use of ontologies offers significant benefits in terms of representing semantic knowledge and reasoning about the information being managed. This helps to promote the information interoperability that underlies integrated management.
3.2.1
Concepts
Concepts are the abstract ideas that represent the entities, behaviour and notions that describe a particular managed domain. Concepts can represent material entities such as things, actions and objects, or any element whose properties and/or behaviour need to be expressed by defining its features, properties and relationships with other concepts. Such concepts can be represented and formalized as object classes.
The classes are used and managed by computing systems for performing operations
or simply for sharing information.
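Although the information models of this book are not reproduced here, the idea of formalizing a concept as an object class can be illustrated with a minimal sketch (all names are hypothetical and chosen only for illustration): a concept carries a name, a set of features and explicit relationships to other concepts.

```python
# Hypothetical sketch: a managed-domain concept formalized as an object class.
from dataclasses import dataclass, field


@dataclass
class Concept:
    """A concept: an abstract idea identified by a name and its features."""
    name: str
    properties: dict = field(default_factory=dict)    # features of the concept
    related_to: list = field(default_factory=list)    # relationships to other concepts


# Two concepts of a (hypothetical) network-management domain,
# linked by an explicit named relationship.
router = Concept("Router", {"role": "forwarding"})
interface = Concept("Interface", {"speed_mbps": 1000})
router.related_to.append(("hasInterface", interface))

print(router.name, "->", router.related_to[0][0], "->", interface.name)
```

Such classes can then be used and shared by computing systems exactly as the text describes, either to perform operations over the domain or simply to exchange information.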
3.2.2
Representation
A representation is a formal or informal way to understand and situate an idea with reference to certain properties or features of the domain in which the idea is created. The representation can be created using formal tools or mechanisms for depicting the ideas or concepts, or it can be informal, such as a simple graph or a set of symbols depicting the ideas behind the concept.
3.2.3
Relationships
3.2.4
Functions
3.2.5
Instances
Instances are used to create specific objects of a concept that has already been defined, and they can represent different objects of the same class (e.g. person 1 instance-of Person and person 2 instance-of Person). Instances enable objects that share the same properties, but represent different individuals, to be realized; they can be described as sub-components of concepts that have already been modelled.
3.2.6
Axioms
Axioms are the logic rules that the ontology follows. Axioms are theorems that contain the logic descriptions that the elements of the ontology must fulfil. The axioms act as the semantic connectors between the concepts that integrate the ontology, and they support the logic operations that create a dynamic interaction between the concepts. In pervasive computing, the axioms act as conditions for linking the concepts and create the functions between concepts in an ontology.
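This role of axioms can be sketched, under strong simplification and with hypothetical names, in plain Python rather than an ontology language: an axiom is encoded as a logical condition that the elements of the ontology must satisfy, and evaluating it connects the concepts involved.

```python
# Hypothetical sketch: an axiom as a logic rule checked over ontology instances.

# Instances of two concepts, represented here as simple dictionaries.
instances = [
    {"type": "Service", "name": "voip", "requires_bandwidth_kbps": 64},
    {"type": "Link", "name": "link-1", "capacity_kbps": 128},
]

def axiom_service_fits_some_link(instances):
    """Axiom: every Service must fit on at least one Link's capacity."""
    links = [i for i in instances if i["type"] == "Link"]
    services = [i for i in instances if i["type"] == "Service"]
    return all(
        any(s["requires_bandwidth_kbps"] <= l["capacity_kbps"] for l in links)
        for s in services
    )

print(axiom_service_fits_some_link(instances))  # True: the axiom holds
```

In a real ontology language, such a constraint would be expressed declaratively (e.g. as a description-logic restriction) rather than as imperative code; the sketch only conveys the linking role that axioms play between concepts.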
3.3 Pervasive Services and Semantics
Pervasive computing and the semantic Web are acquiring more and more importance as a result of the need to integrate context information in service provisioning, application deployment and network management. In today's systems, making context information available in an application-independent manner is a necessity.
The expansion of the semantic Web and the consolidation of pervasive computing systems have resulted in an increasing demand for standards for service-oriented architectures (SOA) and for Web services. The target in current pervasive systems is to use the environment itself, in a form in which the information can be easily accessed to support services, systems and networks. In these scenarios, systems need to be prepared to use diverse information from multiple information models and, most importantly, to model the interaction between the information contained in business models and network models.
Service management platforms must be able to support the dynamic integration of context information, and thus take advantage of changing context information for controlling and managing service and network management operations in service provisioning. To achieve this kind of service management, flexible architectures are necessary, based on the knowledge and construction of relational links between principal concepts, using relationships to build extensible semantic planes founded on a formal definition of the information.
The creation of a semantic plane following SOA demands the combination of multiple and diverse technologies, principally agent technologies. Programmable networks and distributed systems provide the background necessary for implementing pervasive services in NGNs.
The state of the art presented in this section concentrates on management failures. Diverse reasons can make a management system inefficient; some of them are listed as trends in service management and computing and referred to in the state-of-the-art sections.
In the framework of this chapter, and with integrated management as the main objective, three different types of failures have been identified and act as guidance: (1) technological failures as a result of hardware problems; (2) hardware limitations creating management errors as a result of limited capability for processing information (e.g. overload of the systems when multiple and diverse systems are being managed as a result of different data models); and (3) middleware limitations for exchanging information between systems as a result of diverse technologies using different information and data models. Currently, management systems are able to detect these failures and follow pre-defined procedures to re-establish the communications infrastructure.
However, a more important type of failure exists that is related to content and semantic issues. These failures occur when management systems operate with wrong information, when data is changed erroneously after a translation or conversion process, or when data is misinterpreted or not fully understood. Such problems are still largely unsolved. However, the introduction of ontology engineering is seen as a tool or mechanism to help solve these problems.
In Fig. 3.1, the three areas of concern (pervasive management, context data and
communications systems domain) are identified. The domain interactions clarify
the actions that this section addresses. Pervasive management provides the interfaces and mechanisms to users and system applications, enabling them to utilize
services as a result of variations in the context information. The communications
systems domain refers to software and hardware components needed in the
operation of management services, and for their deployment and execution.
The context data domain provides all the formal data mechanisms, business-oriented as
well as network-oriented, to represent and handle the information related to users and
networks that is used in management operations to support pervasive applications.
3.3.1
3.3.1.1
Context has an important role in the design of systems and/or applications that are
context-aware, since any change in context will change the functionality provided.
In the same manner that a gesture or a word can have different meanings depending
on the context or the situation in which they are expressed, a context-aware service
can also act differently. Since context can be made up of logical or physical characteristics, and expressed at different levels of abstraction, it can be perceived as
either a physical or virtual effect that alters the functionality and/or performance of
context-aware applications.
Context information must be collected and properly managed to make pervasive
services a reality; due to its inherent dynamic nature, this poses extreme demands
on the network and the service layers that most current technologies are unable to
fulfil. Work in this field has concentrated on creating, collecting and delivering context-sensitive information to the user, as described in [Dey98] and [Schmidt01].
Context information is described as the knowledge about the user's and/or device's state, including its surroundings, location and, to a lesser extent, situation.
Previous context-aware network services have been mostly focused on mobile
users that may see their context changing due to changes in location (also termed
location-based services (LBS)). For example, [Finkelstein01] describes efforts to
design, specify and implement an integrated platform that will cater to a full range
of issues concerning the provisioning of LBS. This solution is made up of a kernel
and some support components that are in charge of locating the user and making
services accessible. Within the scope of this solution, there is a service creation
environment that will enable the specification, creation and deployment of such
services within the premises of service operators; however, those solutions do not
consider the control of operations as a crucial activity.
Other approaches have been developed that aim to design and implement technology that is personalized to users and sensitive to their physical situation [Long96]. The idea is to develop a new system architecture that enables ambient information services to be delivered to mobile citizens [Schilit95]. For user location purposes, a set of sensors is deployed. Moreover, these sensors detect and send contextual information about the surroundings of the mobile users. These sensors can be networked and integrated within existing computers and wireless network infrastructures. In these approaches, the importance of using context information for controlling the services is unfortunately not emphasized, and such information remains unusable for the control of management operations.
There are research efforts that relate context information and use it for purposes other than merely representing information [Schilit94b]. Other examples of
research activities for end-devices are the composite capabilities/preference profiles
framework [CCPP] and applications with agent and profile specifications [Salber99],
[Gribble00], [Gruia02]. Example research also exists in the IETF for networks,
including the open pluggable edge services [OPES], content distribution interworking [IETF-CDI] and Web intermediaries [IETF-WI] working groups, which develop
frameworks and recommendations for network communications, especially for
content peering and for adaptation purposes by using and processing context
information to change the performance of the applications.
One implementation, provided in existing projects, follows the client-server principle, incorporating a server with numerous clients to provision and deliver multimedia information concerning the user's location and orientation. This system is able to determine the client's position in the overall network, to manage the personal data of each user and each task by associating geographic information system (GIS) data with multimedia objects, and to provide ubiquitous services. Another important characteristic of this project is the design and implementation of a new-generation mobile terminal device for the user [LOVEUS].
However, location is not the only context information that is being used. For
example, some of these initiatives have focused on designing a toolkit that provides
general and modular solutions for mobile applications by adapting the content to the
device that will use it [Gellersen00], [Samann03]. With that purpose, they propose a software toolkit that is hosted on the user's terminal and adapts the format of the content, usually multimedia, to the specific capabilities of the terminal device. Finally, it is important to mention initiatives oriented to locating the user by establishing a network of beacons in the building where the user can move around. From these beacons, the user's device is able to determine its location [UAPS].
There are some other projects, in which the main goal is to introduce ubiquitous
computing in everyday environments and objects [Fritz99], [Hong01].
Other attempts to use programmable network technologies for context-aware services based on mobile agents are presented in [Winograd01], [Hightower01], [Hunt98], [Helin03a] and [Wei03]. All of them represent context-aware research and implementation work that considers context location, context identity, roles or context objects as the way to contain all context information in a model necessary to efficiently provide services, but the management of pervasive services is not well described.
3.3.1.2
Pervasive Services
Context-awareness is enabling the next generation of mobile networks and communication services to cope with the complexity, heterogeneity, dynamicity and
adaptability required in pervasive applications. Context-awareness, as has been
described in this chapter, refers to the capability of a system to sense and react to the user and network environment, thus helping services dynamically adapt to context
changes. However, context information is as complex and heterogeneous as the services that it intends to support. Furthermore, it is difficult to imagine context-aware
systems that are not supported by management systems that can define, manage and
distribute context efficiently. In fact, this is one of the main problems to face in the
ubiquitous computing area and in pervasive applications.
It is assumed that the services operating over NGNs are context-aware.
This means that the functionality of the service is driven by the user context, so that
it can automatically adapt to changes in context. Moreover, in pervasive computing
environments, the service examines both user and network context, as Dey describes
in [Dey01]. Different scenarios can be envisaged that highlight the impact of context-awareness. For instance, in an emergency scenario, context-aware services will
manage the incoming calls to a Voice Server, permitting only privileged calls within
the emergency area. Many other examples could be offered, all of them exhibiting, as a common denominator, the use of context information to provide improved multimedia services to users by adapting to different conditions and needs.
Early proposals for context-aware network services have resulted in the design
and implementation of integrated platforms that cater to the full range of issues
concerning the provision of LBS, such as [Henricksen02], [Komblum00]. Most
initiatives propose models that refer to the situation that surrounds the user of the
service as a physical person; in this user-centric model, definition of context and
the context model are mainly derived from the fact that the context information is
going to be used and stored only on small mobile devices used by specific users. In
those proposals, the idea of using context is not flexible or extensible enough for
pervasive applications, since the scope of pervasive services can cope with many
different types of context-aware applications that are supported by mobile and
non-mobile devices, and used by both people and virtual users (machines, other
applications, etc.).
This larger scope requires the use of a more standard context format. Other initiatives aim to locate users by establishing a network of beacons within the buildings in which the users move [CRICKET]. Another approach, aimed at designing services personalized to users and sensitive to their physical surroundings, is [Schilit95]. Yet another approach is used in [LOVEUS]; this follows the client-server principle in order to adapt the delivery of sensitive multimedia information according to the user's location, as described in [INMOVE].
The CONTEXT system [IST-CONTEXT] and its entire information model representation have been designed without any preconception about the nature or type of context information that the services are going to manage. Other projects attempt objectives similar to those of CONTEXT; some of them propose the use of programmable network technologies in context-aware applications for providing services [Karmouch04], [Kanter00], [Wei03], [Klemke00], [Yang03a], [Kantar03], but not for controlling the management of services. All of these attempts develop frameworks and recommendations for communications between intermediaries and the network, especially for content peering and for adaptation purposes. Similar initiatives could be taken to adapt the execution environment to other technologies or devices, but in this chapter the aim is to add the functionality to manage the service operations as well.
3.3.1.3
Context Model
When humans talk with humans, they are able to use implicit situational information, or context, to increase the understanding of the conversation. This ability does not transfer well to humans interacting with computers, and it works especially poorly for computers communicating with computers. In the same way, the ability to communicate context across different levels of abstraction (e.g. business vs. network concepts) between applications is also limited; in fact, the flexibility of human language becomes restricted in computer applications. Consequently, computer applications are not currently able to take full advantage of the context of human-computer dialogue, or of interactions between applications at different levels of abstraction.
By improving the computer representation and understanding of context, the
richness of communication in human-computer interaction is increased. This makes
it possible to produce more useful computational services.
In order to use context effectively, it has to be clear what information is context and what is not. This will enable application designers to choose what context information to use in their applications and how it can be used, which will determine what context-aware behaviours to support. Nowadays, the literature on pervasive computing has outlined the benefits of using context-awareness and has proposed quite similar context-awareness definitions. It is common to find definitions giving conceptual descriptions, which emphasize the influence of the environment on a process, while others, with a more practical orientation, try to identify the context information necessary for a broad range of application types. Choosing the right definition depends on the application area. Hereafter, the concepts of context and context-awareness from different sources are discussed.
The first definition of context-awareness, which gave origin to pervasive applications, was given by Schilit et al. [Schilit95], who restricted the definition from applications that are simply informed about context to applications that adapt themselves to context. Definitions of context-awareness fall into two categories: using
context and adapting to context. Further discussion on these two categories can be
found in [Dey01]. In this chapter, the aim is not to re-define context or contextawareness, but rather to follow a consistent context-awareness definition that is sensitive to external variations of the information around end users, applications and
networks.
Schilit and Theimer [Schilit94a] refer to context as location, the identities of nearby people and objects, and changes to those objects. Brown et al. [Brown96a] define context as location, the identities of the people around the user, the time of day, season, temperature, etc. Ryan et al. [Ryan97] define context as the user's location, environment, identity and time. Dey [Dey00a] enumerates context as the user's emotional state, focus of attention, location and orientation, date and time, and the objects and people in the user's environment. However, these definitions are difficult to apply due to their semantic diversity and unclear applicability.
Other definitions of context have simply provided synonyms for context; for example, referring to context as the environment or situation. Some consider context to be the user's environment, while others consider it to be the application's environment. Brown [Brown96b] defined context to be the elements of the user's environment that the user's computer knows about. Franklin and Flaschbart [Franklin98] observe it as the situation of the user. Ward et al. [Ward97] view context as the state of the application's setting. Hull et al. [Hull97] include the entire environment by defining context to be aspects of the current situation. These definitions are also very difficult to put into practice.
Pascoe defines context as the subset of physical and conceptual states of interest to a particular entity [Pascoe98]. All the above definitions, however, are too specific. Finally, the definition given by Dey and Abowd is: "Context is any information that can be used to characterize the situation of an entity; an entity is a person, place or object that is considered relevant to the interaction between a user and an application, including the user and application themselves" [Dey00a].
From these definitions, the important aspects of context are as follows: where you are, who you are with and what resources are nearby. Applied analogously to a network device and the devices around it, the authors converge on defining context as the constantly changing environment, which includes the computing environment, the user environment and the physical environment; in network terms, these are equivalent to the network environment, the device environment and the element environment.
If a piece of information can be used to characterize the situation of a participant
in an interaction, then that information is context. For example, the user's location
can be used to determine the type of services that the user receives; this interaction
is context information that can be used by an application. A general assumption is
that context consists only of explicit information (i.e. the environment around a
person or object).
Perhaps the most important requirement of a framework that supports a design
process is a mechanism that allows application builders to specify the context
required by an application. In the framework of context information handling processes, there are two main steps: (1) specify what context an application needs and (2) decide what action to take when that context is acquired. In these two steps from [Dey00a], the specification mechanism and, in particular, the specification language used must allow application builders to indicate their interest along a number of context dimensions, or the nature of the context being addressed. However, to achieve the goal of defining an efficient specification mechanism, special care must be taken to identify when context information is being handled with respect to other context information and other management data. Possible scenarios in which context is handled are described as follows:
Single piece of context vs. multiple pieces of context
In this scenario, the nature of the single piece of context is the same as the nature of the multiple pieces of context to which it is related. For example, a single piece of context could be the location of a user, and multiple pieces of context could be the locations of other users.
Multiple, related context information vs. unrelated context information
Related context means different types of contextual data that apply to the same
single entity. For example, related context about a user could include the location
and the amount of free time. By contrast, unrelated context could be the date and
time of the user and the price of a particular product.
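The two steps above can be sketched as follows (a purely hypothetical illustration in Python, not the specification language of [Dey00a]): an application first declares the context dimensions it needs, and then decides what action to take once matching context is acquired.

```python
# Hypothetical sketch: (1) specify needed context, (2) act when it is acquired.

# Step 1: the application declares its context dimensions of interest
# (here, two related pieces of context about the same entity: the user).
needed = {"location", "free_time_minutes"}

def on_context(acquired):
    """Step 2: decide what action to take once the context is acquired."""
    if not needed.issubset(acquired):
        return "waiting for more context"
    if acquired["free_time_minutes"] > 30:
        return f"offer guided tour near {acquired['location']}"
    return "offer short notification only"

# The acquired context satisfies the specification, so an action is chosen.
print(on_context({"location": "museum", "free_time_minutes": 45}))
```

The sketch also shows why the related/unrelated distinction matters: only the declared, related dimensions participate in the decision, while unrelated context (e.g. the price of a product) would simply be ignored by this application.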
3.4 Semantic Operations with Ontologies
Due to the proliferation of multiple services and of vendor-specific devices and technologies, ontologies offer a scalable set of mechanisms to interrelate and interchange information. Basic types of ontology tools are therefore required to support pervasive system management operations, using knowledge as their inherent nature. This section reviews ontology operations and describes the features of ontologies that allow service and network systems to accomplish knowledge representation.
3.4.1
Ontology Engineering
different knowledge representations and languages which interact with each other.
Ontologies not only provide enrichment to the information model and semantic expressiveness to the information, as described in [Gruber93b], but also allow information to be exchanged between applications and between different levels of abstraction, which is an important goal of pervasive computing.
This section discusses the fact that ontologies are used to provide semantic augmentation, addressing the cited weaknesses of current management information models [López03a]; beyond this, the integration of context through ontologies for managing operation control is proposed, resulting in improved system management.
The cognitive relationships are shown in Fig. 3.2, where the ontologies are used
for making ontological commitments in the form of cognitive relationships (i.e. an
ontological commitment is an agreement to use a vocabulary in a way that is consistent across different domains of application).
The commitments are very complex [Uschold96] and can be thought of as a set
of mappings between the terms in an ontology and their meanings. Hence, ontologies can be combined and/or related to each other by defining a set of mappings that
define precisely and unambiguously how one concept in one ontology is related to
another concept in another ontology.
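Such a set of mappings can be sketched, under strong simplification and with entirely hypothetical vocabularies, as an explicit table relating terms of one ontology to terms of another:

```python
# Hypothetical sketch: an ontological commitment as a set of term mappings
# relating concepts of a business-level ontology to a network-level one.
mapping = {
    "Customer": "SubscriberAccount",
    "ServiceOutage": "LinkFailure",
    "QualityLevel": "BandwidthClass",
}

def translate(term, mapping):
    """Translate a term across ontologies, or report it as unmapped."""
    return mapping.get(term, f"<unmapped:{term}>")

print(translate("ServiceOutage", mapping))   # LinkFailure
print(translate("Invoice", mapping))         # <unmapped:Invoice>
```

Real ontology mapping involves far richer correspondences (subsumption, partial overlap, transformation functions); the table merely illustrates that each mapping must state precisely and unambiguously how one concept relates to another.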
In most current management applications, different data models are embedded in
each application, and as a result complex systems need to be developed to translate
between data defined by different applications. This is due to many reasons; perhaps the most important is that different management applications require different management data to accomplish different tasks or to represent information from different points of view.
Very often, each application uses different tools, since the use and manipulation
of those data require different functions. For example, the simple text-based functionality of LDAP (for directories) is not sufficient for more complex tasks that
require (for example) SQL. This is the trap that developers fall into when they
use an application-specific data model instead of an application-independent information model. Furthermore, the complexity increases when end-user applications
use information models that need to interact with information models from devices
in the networks, as the difference between user and network data is significant.
3.4.1.1
3.4.1.2 Ontology Languages
Not all ontologies are built using the same language and structure. For example,
Ontolingua uses the knowledge interchange format (KIF) language and provides an
integrated environment to create and manage ontologies (more details about KIF
can be found in [Genesereth91]). KL-ONE [Brackman85], CLASSIC [Borgida89]
and LOOM [Swartout96] each use their own ontology language. The open knowledge base connectivity (OKBC) language, KIF and CL (common logic) have all
been used to represent knowledge interchange, and have all become the bases of
other ontology languages. There are also languages based on a restricted form of first-order logic (which makes the logic more easily computable), known as description logics, such as DAML+OIL [Horrocks02].
With the advent of Web services, a new family of languages appeared. The
resource description framework (RDF) [Brickley03a] and RDF-Schema
[Brickley03b] have provided basic ontological modelling primitives, like classes,
properties, ranges and domains. RDF influenced the DARPA agent markup language (DAML) from the USA [DAML]; DAML and OIL (the ontology inference layer, a separate but parallel European effort) [Horrocks02] were eventually merged within the World Wide Web Consortium (W3C), which created the Web ontology language (OWL) standard [OWL], introduced in [Dean02]. OWL is an integral part
of the semantic Web [Berners-Lee01] and a W3C recommendation [W3C]. OWL
comes with three variations: OWL Full, OWL DL and OWL Lite [DeBruijn03].
OWL Lite has been extended to create OWL-Flight, which focuses on a logic programming framework [DeBruijn04]. Other activities are inspired by, first, integrating semantic rules into an ontology (an effort motivated by some weaknesses of OWL in modelling certain restrictions) and, second, building new languages on top of OWL for specific applications. The best example of this is OWL-S, which was designed to be used with semantic Web applications [OWL-S].
Another approach is SWRL (semantic Web rule language), which combines sublanguages of OWL (OWL DL and OWL Lite) with those of the rule markup language (unary/binary datalog) [Horrocks04].
Regarding information integration, some initiatives are based on a single global ontology, such as TSIMMIS, described in [Garcia97]. Another example is the Information Manifold [Kirk95]. Others use multiple domain ontologies, such as InfoSleuth [Bayardo97] and Picsel [Reynaud03]; any of them could be adapted for integrating and gathering context information for various service applications in autonomic environments.
The work above is focused on using ontologies for knowledge engineering representation. Ontologies have also been used for representing context information in pervasive applications. In particular, CoOL (context ontology language) is an initiative for enabling context-awareness and contextual interoperability, as described in [Strang03c]. CoOL allows context to be expressed, which enables context-awareness and contextual interoperability, but it does not describe how to manage context or context-aware services.
In these complementary scenarios, the use of ontologies is more than just the
simple representation of knowledge; rather, ontologies are also used to integrate
knowledge. Finally, the CONTEXT project [IST-CONTEXT], which acts as the
base for extending the information model and formalizing it with ontologies, defines
an XML Policy Model that supports the complete service life cycle. The policy
model is extensible and contains parts defined as context information. This approach
follows the business-oriented scope based on context information that the networks
require to operate. The service life cycle is managed by a set of policies that contain
such context information, and it is used to trigger events. However, this proposal
does not use appropriate formalisms for sharing context information for supporting
the reuse of context information contained in the policies.
Ontologies are used for expressing the different types of meaning of a concept that
need to be interpreted by computers. There are ontologies aiming not only to define
a vocabulary to enable interoperability, but also to define one or more definitions and
relationships for a concept. This feature enables different applications to use different meanings for the same object, which helps integrate the
cross-layers in NGN systems. Due to the inherent influence of the Internet, most
initiatives for representing context information adopt schema extensions that
support Web services and other initiatives specified by the [W3C].
3.4.1.3
An important characteristic of ontologies is their capability to share and reuse information. This reusability is the feature that attracts the attention of many developers
of information systems, and this feature is clearly applicable to communications
systems, particularly in management domains. Sharing and reusing information
depends on the level of formalism of the language used to represent information in
the ontology.
One way to share and reuse network knowledge is to use models and structures which are extensible enough to enable such information to be captured
[IBM01b], [Kephart03]. Initiatives for using ontologies in the networking domain include [Keeney06], [López03a] and [Guerrero07]. More specifically, in the pervasive
services area, context information is essential and could be used for managing services and operations [Strassner06a]. However, such an environment is highly distributed, which poses a great challenge to managing, sharing and exchanging information in a consistent and coherent manner. Since current networking scenarios use different networks, technologies and business rules, and the diverse
interaction of domains increases the complexity of the associated management activities, emerging autonomic solutions are acquiring importance.
In autonomic environments, mechanisms are necessary for managing problems
in an automated way, minimizing human interaction, with the objective of handling
problems locally. In autonomic environments, every technology uses its own protocols, and most of the time proprietary languages and management data structures,
so the interoperability for exchanging information is impaired. Autonomic environments
seek to unite these isolated stovepipes of data and knowledge using semantic mechanisms to share and reuse information. This often takes the form of middleware that
understands and translates information, commands and protocols.
Figure 3.3 depicts autonomic environments and the diversity of technologies involved
in exchanging information, which results in increased complexity [Serrano07b].
Each management system or station corresponds to a technology domain; in autonomic communications, the exchange of information implies
the collection and processing of the information by using the same information
model. The systems must therefore have the mechanisms needed to translate the information into the format that the model defines.
3.4.2
A policy has been defined as a rule or a set of rules that manage and provide guidelines for how the different network and service elements should behave when certain conditions are met [IETF-RFC3198]. Verma defines a policy as a directive that
is specified to manage certain aspects of desirable or needed behaviour resulting
from the interactions of user, applications and existing resources [Verma00].
However, as stated earlier in this chapter, the reference definition used here is that a
policy is a set of rules that are used to manage and control the changing and/or
maintaining of the state of one or more managed objects [Strassner04].
The main benefits from using policies are improved scalability and flexibility for
managing services. Flexibility is achieved by separating the policy from the implementation of the managed service, while scalability is improved by uniformly
applying the same policy to different sets of devices and services. Policies can be
changed dynamically, thus changing the behaviour and strategy of a service.
Policy management is expressed using a language. Since there are many constituencies having their own concepts, terminology and skill sets that are involved
in managing a system (e.g. business people, architects, programmers and technicians), one language will not be expressive enough to accommodate the needs of
each constituency. Figure 3.4 shows the approach used in [Strassner06a] and termed
as policy continuum, defined in [Strassner04] and extended in [Davy07a]. While
most of these constituencies would like to use some form of restricted natural language, this desire becomes much more important for the business and end users.
In the framework of this book, the definition of a language, following principles
from policy continuum, supports the idea of an initial representation by using XML to
ensure platform independence. In the same way, implemented dialects are also easy to
understand and manage, and the large variety of off-the-shelf tools and freely available
software provide powerful and cost-effective editing and processing capabilities.
Each of the implementation dialects shown in Fig. 3.4 is derived by successively
removing vocabulary and grammar from the full policy language to make the dialect
suitable for the appropriate level in the policy continuum. XML representations and
vocabulary substitution using ontologies are applied for some view levels in the
policy continuum.
3.4.2.1
The main objective of using policies for service management is the same as that of
managing networks with policies: to automate management using as high a level of
abstraction as possible. The philosophy of managing a resource, a network or a service with a policy-based approach is that IF something
specific happens THEN the management system takes an action. The
main idea is to use generic policies that can be customized to the needs of different
applications; the parameters of the conditions and actions in the policies differ for each user, reflecting their personal characteristics and desired context information. The central idea is the use of the policy-based paradigm to express the service
life cycle and subsequently manage its configuration in a dynamic manner. It is this
characteristic which provides the necessary support and operations for pervasive
systems.
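As a minimal sketch of this IF-THEN philosophy, the fragment below models a generic condition-action policy that is customized with per-user parameters. All names here (Policy, make_bandwidth_policy, the threshold value) are illustrative assumptions, not part of any framework discussed in this book.

```python
# Hypothetical sketch: a generic condition-action (IF-THEN) policy whose
# parameters are customized per user.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Policy:
    condition: Callable[[Dict[str, Any]], bool]  # the IF part
    action: Callable[[Dict[str, Any]], str]      # the THEN part

def evaluate(policy: Policy, context: Dict[str, Any]) -> str:
    # IF the condition holds on the observed context THEN take the action
    if policy.condition(context):
        return policy.action(context)
    return "no-op"

def make_bandwidth_policy(threshold_mbps: float) -> Policy:
    # A generic policy template customized with a per-user threshold
    return Policy(
        condition=lambda ctx: ctx["bandwidth_mbps"] < threshold_mbps,
        action=lambda ctx: f"reconfigure link to {threshold_mbps} Mbps",
    )

alice_policy = make_bandwidth_policy(10.0)  # Alice's personalized threshold
print(evaluate(alice_policy, {"bandwidth_mbps": 4.0}))
```

The same template yields different behaviour for each user simply by instantiating it with different condition and action parameters.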
The policies are used in the management of various aspects of the life cycle of
services. An important aspect of policy-based service management (PBSM) is the
deployment of services throughout the programmable elements. For instance, when
a service is going to be deployed over any type of network, decisions have to be
taken to determine which network elements the service will be
installed on and/or supported by. This is most effectively done through the use of policies that map the user and his or her desired context to the capabilities of the set of
networks that are going to support the service. Moreover, service invocation and
execution can also be controlled by policies, which enable a flexible approach for
customizing one or more service templates to multiple users. Furthermore, the
maintenance of the code realizing the service, as well as the assurance of the service, can all be managed using policies. A final reason for using policy management
is that when some variations in the service are sensed by the system, one or more
policies can define what actions need to be taken to solve the problem.
The promises of PBM are varied, though it is often conceptualized in terms of network management
tasks, since the network is the means of controlling the services it offers.
However, policies can potentially do much more than just manage network services. In particular, this section emphasizes PBM as the application of a set of
abstract condition-action rules. This ability to manage abstract objects is a feature
that provides the extensibility necessary for applicability to pervasive applications.
Without this ability, a common interface for programming the same function in different network devices cannot be achieved.
The proposed use of the PBM paradigm for service management does not assume
a static information model (i.e. a particular, well-defined vocabulary that does not
change) for expressing policies. Instead, this chapter supports the idea of a framework
of pre-defined policies that can be processed dynamically (e.g. new variable classes
can be substituted at runtime), as this offers more advantages when policies
are pre-defined and known.
Associations between the information expressing the policy structure, conditions
and actions with information coming from the external environment are crucial to
achieve the goals of management systems. Specifically, the externally provided
information can either match pre-defined schema elements or, more importantly,
can extend these schema elements. The extension requires machine-based reasoning to determine the semantics and relationships between the new data and the previously modelled data. This is new work that augments previous PBM systems, and
is assumed to reside outside the proposed framework (the service creation and customization systems in the context system).
By supporting dynamically pre-defined policies, the flexibility of pervasive management can be achieved and context interactions can be more completely realized
using policy-based control. This feature is a requirement of the design of the overall
pervasive system (for achieving rapid context-aware service introduction and automated provisioning).
3.4.2.2
PBM has been proven to be a useful paradigm in the area of network management.
In the last few years, initiatives have appeared that use policies or rule-based decision approaches to tackle the problem of fast, customizable and efficient service
delivery. Among the most representative are OPES by Piccinelli [Piccinelli01] and
Tomlinson [Tomlinson00].
This analysis goes a step further to analyze solutions intended to control the full
service life cycle by means of policies. To accomplish this goal, there are solutions
making use of programmable network technology, for example as the technology
infrastructure supporting pervasive services and applications. Programmable technology, as described in [Raz99], plays the role of the infrastructure supporting context-aware applications and services while, at the same time, supporting networking
operations to guarantee the correct operation of the network. The IST-CONTEXT
project approach [IST-CONTEXT] and the ANDROID project [ANDROID] aim
3.4.2.3
The promises of PBM are varied, and today it has been demonstrated as suitable for supporting
network operations and services. Most approaches to management and network
configuration lack the capacity for exchanging and reusing information from business goals and technical objectives, as they cannot relate network services to business operations. Furthermore, these conditions prevent new business roles from
modifying the systems to adapt services to the demands of changing
user needs and environmental conditions.
A typical example arises when traditional management protocols (e.g.
SNMP or CLI) are unable to express business rules, policies and processes in a standard form.
They have no concept of a customer, and hence, when they report a
fault, it is impossible to determine which customers, if any, are affected from the data
retrieved by the protocol or even from its commands. This makes it nearly impossible to
of the system. The selected working set of policies defines the appropriate roles of the
ManagedEntities that form context; this enables context to manage system functionality
(through roles) at a higher level of abstraction. In particular, this means that policy determines the set of roles that can be assumed for a given context. This is represented by the
GovernsManagedEntityRoles aggregation. When these ManagedEntityRoles are
defined, they are then linked to context using the ContextDependsOnManagedEntityRoles
association; the ManagedEntityRoleAltersContext and the ManagedEntityRoleUsesPolicy
associations are used to feed back information from ManagedEntityRoles to context and
policy, respectively.
Context also defines and depends on the management data collected from a
ManagedEntity. First, policy is used to define which management information will
be collected and examined (via the GovernsManagementInfo aggregation); this
management information affects policy using the ManagementInfoUsesPolicy association. Once the management information is defined, then the two associations
ContextDependsOnMgmtInfo and MgmtInfoAltersContext codify these dependencies (e.g. context defines the management information to monitor, and the values of
these management data affect context).
Given the above definitions, the relationship between policy and context becomes
clearer. When a context is established, it can select a set of policies that are used to
govern the system. The governance is done by selecting an appropriate set of
ManagedEntityRoles that provide access to the functionality of the ManagedEntity.
These ManagedEntityRoles provide control points for functionality that needs to be
governed.
Similarly, the result of executing a policy may alter context (e.g. an action did not
succeed, and new corrective action must be taken; or a set of configuration changes
did succeed, and the system is back in its desired state) such that a new context is
established, which in turn may load a different set of policies.
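The selection loop described above can be illustrated with a small sketch: an established context selects its working set of policies, and those policies determine the ManagedEntityRoles that may be assumed. The mapping names echo the associations discussed here, but the contexts, policies and role names are invented for illustration, not taken from the actual model.

```python
# Hypothetical sketch of the context -> policy set -> role loop.
CONTEXT_POLICIES = {                 # an established context selects policies
    "peak-hours": ["ThrottleBulkTraffic", "PrioritizeVoIP"],
    "off-hours":  ["RunBackups"],
}
POLICY_ROLES = {                     # GovernsManagedEntityRoles, simplified
    "ThrottleBulkTraffic": ["EdgeRouterRole"],
    "PrioritizeVoIP":      ["EdgeRouterRole", "SIPGatewayRole"],
    "RunBackups":          ["StorageNodeRole"],
}

def roles_for_context(context: str) -> set:
    """Roles that can be assumed for a given context, via its policy set."""
    roles = set()
    for policy in CONTEXT_POLICIES.get(context, []):
        roles.update(POLICY_ROLES.get(policy, []))
    return roles

print(sorted(roles_for_context("peak-hours")))
```

Establishing a different context loads a different policy set and hence exposes a different set of control points, mirroring the feedback loop described in the text.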
3.5
Ontology engineering, as a mechanism for helping computing systems to integrate knowledge, has a large number of example applications, ranging from the simplest (representing information) to the most complex and robust, such as a complete information management system in a communication network, where diverse mechanisms
and reasoning processes are present.
Ontology engineering is increasingly accepted as a suitable alternative for coping with one of the main problems in the communications area
(to endow the communications networks with the necessary semantic enrichment to
support applications and services).
The following sections in this chapter show some of the most important applications that ontologies offer to computing systems, divided into two
basic types of ontology tools. The first group contains ontology mapping and
merging tools for combining multiple ontologies. An application example in
this group is when it is necessary to identify nodes in a network defined by a specific ontology (ontology A) that are semantically similar to nodes in another network defined by a different ontology (ontology B); in this example, mapping
and merging operations for combining concepts are necessary. The second
group comprises mechanisms for creating, editing and specifying ontologies
that can be queried using one or more inference engines.
Ontologies are used to define a lexicon that all other system components must
follow, together with the relationships that exist between its terms. Once the
ontology has been created, it can be handled using ontology merging
tools, such as PROMPT [PROMPT] and Chimaera [CHIMAERA]. These approaches
provide a common set of definitions and relationships for data used in the system
based on ontologies, and are a basis for the semantic rules that are used in cognitive
systems. On the other hand, common ontology development tools, such as Protégé
[PROTÉGÉ] and Ontolingua [ONTOLINGUA], are used to define the queries, commands and assertions used in this kind of system.
3.5.1
Ontologies were created to share and reuse knowledge, and a formal application of
these concepts can be studied in [Genesereth91], where the information is transformed using the Knowledge Interchange Format (KIF), even if the input data uses multiple heterogeneous
representations. This is why many knowledge engineering efforts are using ontologies
to specify and share knowledge. Network management problems need this capability,
especially since there is a lack of a standard mapping between different languages
used to represent network management data (e.g. CLI and SNMP).
The challenge is to create the links between different structural representations
of the same information. The lack of standard information models and the
resulting mismatch of data models used to represent network management and context data are the motivations to use the set of ontology operational mechanisms that
enable such information exchange and the interactions between different application- and domain-specific data models.
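As an illustration of such ontology-mediated exchange, the sketch below links terms from a CLI-style data model and an SNMP-style data model through shared ontology concepts, so that a term in one model can be translated into the corresponding term in the other. The specific terms, concept names and mappings are invented for illustration only.

```python
# Hypothetical sketch: an ontology as a shared lexicon bridging two
# management data models (CLI-style and SNMP-style terms).
CLI_TO_CONCEPT = {
    "interface GigabitEthernet0/1": "NetworkInterface",
    "shutdown": "AdminStatusDown",
}
SNMP_TO_CONCEPT = {
    "ifDescr": "NetworkInterface",
    "ifAdminStatus=2": "AdminStatusDown",   # down(2) in the interfaces MIB
}

def translate(cli_term: str) -> list:
    """Map a CLI term to SNMP terms via the shared ontology concept."""
    concept = CLI_TO_CONCEPT.get(cli_term)
    return [snmp for snmp, c in SNMP_TO_CONCEPT.items() if c == concept]

print(translate("shutdown"))
```

The point is that neither data model needs to know the other's vocabulary: each maps only to the shared concepts, and the ontology supplies the link between them.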
Following the premise that ontologies can be used as a mechanism for reusing
and exchanging knowledge in pervasive systems for context-aware information, as
explained in [Giunchiglia93], ontologies are used as operational mechanisms to
provide such features to the service management systems by using autonomic-like
behaviour and functionality.
Autonomic systems benefit from the features that ontologies provide when the knowledge in these application- and domain-specific data models is
standardized by using ontologies as a formal representation together with a set of
mapping mechanisms. Information is thereby formalized and transformed into
knowledge; autonomic systems must thus be able to understand and use the semantics of these data, as explained in [Kitamura01].
Once the information has been gathered, the next step is for each component to
make decisions based on and/or following a set of ontology-based reasoning procedures [Keeney05], which allow it to create and execute suitable inferences and/or
deductions from the knowledge expressed in the knowledge base, and/or simply
transfer the necessary information to different abstraction layers in autonomic systems, for example with a certain level of pragmatism as the result of a decision. The
ontology-based procedures can be categorized into three processes, described as
follows.
3.5.1.1
Ontologies are used to describe and establish semantic commitments about a specific domain for a set of agents, with the objective that they can communicate without complicated translation operations into a global group. Examples of those
commitments are presented in [Crowcrof03]. The idea of semantic commitment can
be thought of as a function that links terms of the ontology vocabulary with a conceptualization. Those agreements can represent links between concepts from different domains or concepts from the same domain, as is exemplified in [Khedr03].
In particular, ontologies enable the system to describe concepts involved in the
applications, process or tasks (a domain of discourse) without necessarily operating
on a globally shared theory. Knowledge is attributed to agents that do not need to
know where the commitments were done; all they need to know is what those commitments are, and how to use them.
3.5.1.2
Research activity on merging ontologies is broad. An algebra for ontology
commitments has been defined [Mitra00], which uses a graph-oriented model with
values for defining areas of interest and logic operations between the concepts in the
ontologies such as unions, intersections and differences. The fusion of ontologies
will result in the creation of a new ontology based on the set of ontologies that are
being fused, as has been presented in [López03b]. In this kind of ontology merging
process, all of the concepts and relationships are replaced by a new set of concepts
and relationships that are equivalent to the original ontologies [López03a].
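In the spirit of the algebra above, the concept-level part of these operations can be sketched with plain set operations over the concept names of two ontologies. This is only an illustration under simplifying assumptions: real merging must also reconcile relationships between concepts, and the concept names here are invented.

```python
# Minimal sketch of set-style operations over two ontologies' concepts,
# in the spirit of an ontology algebra with unions, intersections and
# differences. Relationships are deliberately omitted.
ontology_a = {"Node", "Link", "Router", "Service"}
ontology_b = {"Node", "Link", "Switch", "Policy"}

union        = ontology_a | ontology_b   # merged vocabulary
intersection = ontology_a & ontology_b   # concepts shared by both
difference   = ontology_a - ontology_b   # concepts only in ontology A

print(sorted(intersection))
```

A fused ontology built this way would then need new relationships equivalent to those of the originals, as described in the text.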
3.5.1.3
Ontology-Based Reasoning
3.5.2
3.5.2.1
Ontology Editors
Ontologies define the lexicon that a language uses to define the set of queries, commands and assertions that are available when the ontology language is being used.
Pragmatically, the language represents an agreement to use the shared vocabulary in
a coherent and consistent manner. Hence, the first and most basic activity that is
done with ontologies is the definition of knowledge that can be retrieved. This
includes things, objects, activities and other entities of interest, including events that
have occurred in the environment of the system, as well as relationships between
these entities. This enables different sensor elements, such as agents, to all use the
same formal language to describe contextual data in a common way.
Today, the most common exemplar for a service definition language is without
any doubt the semantic Web. The huge quantity of information on the Web emphasizes the need to have a common lexicon, which in turn raises interest in using
ontologies.
The semantic Web gave rise to a new family of languages, including the RDF and
the OWL standards. Both are integral parts of the semantic Web; the latter depends
on the former and is also a W3C recommendation. OWL comes with three variations
(OWL Full, OWL DL and OWL Lite), as described in detail in the state-of-the-art
section in Chap. 2. Each of these different dialects has its own strengths and weaknesses, and provides different levels of expressiveness for sharing knowledge.
On the other hand, when designing an ontology, it is very common to see ontology
languages providing their own GUI for editing the ontology. Many of these tools are very
powerful; however, each usually reflects the intent of the designer of the ontology language, and hence may or may not be applicable to a particular application domain.
Hence, open-source alternatives exist that work with standard languages (such as the
OKBC [OKBC] or KIF standards [KIF]). In order to maximize reuse, the research
activity described here has used open-source tools that do not depend on any one
specific commercial product to edit the ontologies.
3.5.2.2
Ontology Reasoners
Technical aspects of the OWL language, and particularly OWL DL, have their foundations in description logics, which are fundamentally a subset of first-order logic. An
inherent property of first-order logic is that its inference procedures can be described as
algorithms with a finite number of steps, but there is no guarantee
that those steps will complete in finite time.
A first internal evaluation of using ontologies with reasoners is based on the
flexibility of the various inference services offered, which can be used to determine the
consistency of the ontology. A class is inconsistent when it cannot possibly have any
instances. The main inference services can be listed as follows:
1. Inferred superclasses of a class.
2. Determining whether or not a class is consistent.
3. Deciding whether or not one class is subsumed by another.
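A toy sketch of these three services over an explicit class hierarchy is given below. A real DL reasoner derives such results from logical axioms rather than an explicit parent map; the class names and the disjointness declaration here are invented for illustration.

```python
# Hypothetical sketch of the three inference services over a toy hierarchy.
PARENT = {                          # direct superclass of each class
    "Router": "NetworkDevice",
    "Switch": "NetworkDevice",
    "NetworkDevice": "ManagedEntity",
}
DISJOINT = {("Router", "Switch")}   # classes declared disjoint

def superclasses(cls: str) -> list:
    """1. All inferred (transitive) superclasses of a class."""
    out = []
    while cls in PARENT:
        cls = PARENT[cls]
        out.append(cls)
    return out

def consistent(parents: tuple) -> bool:
    """2. A class defined under two disjoint parents can have no instances."""
    a, b = parents
    return (a, b) not in DISJOINT and (b, a) not in DISJOINT

def subsumes(general: str, specific: str) -> bool:
    """3. Whether `general` subsumes `specific` in the hierarchy."""
    return general in superclasses(specific)

print(superclasses("Router"))
print(consistent(("Router", "Switch")))
print(subsumes("ManagedEntity", "Switch"))
```

The consistency check illustrates the definition in the text: a class asserted under two disjoint parents cannot possibly have any instances.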
An ontology reasoner is normally used for verification of the ontology; however,
other more extensive uses focus on generating solutions and supporting
decision-making processes. Hence the reasoner plays an important role when using
ontologies, and it is necessary to identify the correct one to increase efficiency,
according to its compatibility with the ontology editor.
3.6
Conclusions
In this chapter, former research challenges in ontology-based principles for network management have been introduced and discussed, in order to build a clear framework describing how ontologies
can be used to represent context information in network management operations.
The main benefits from using policies for managing services are improved scalability and flexibility for the management of systems; this feature also simplifies the
management tasks that need to be performed. These scalability and simplification
improvements are obtained by providing higher level abstractions to the administrators, and using policies to coordinate and automate tasks.
Chapter 4
4.1
Introduction
Ontology engineering has been proposed as a formal mechanism for both reducing
the complexity of managing the information needed in network management and
autonomic systems and for increasing the portability of the services across homogeneous and heterogeneous networks. This section describes a formal mechanism to
integrate context information into management operations for pervasive services.
Reuse of existing ontologies is a task that ontology development must anticipate.
In fact, it is this feature that will speed up the development of new extensible and
powerful ontologies in the future. The integration of ontologies is an important task
in the ontology development area. Hence, it is important to emphasize that the perspective and analysis of the ontology are crucial to its level of acceptance. In this
sense, this chapter devotes a section to presenting a practical example of representing an ontology and integrating concepts from
different domains; the ontology is not implemented in full, but the formal representation and the conceptual background needed to build it are fully explained.
In this chapter, an ontology model definition and representation is introduced and
developed; following the basic principles of the formal methodology described in this
book, the ontology is explained in conceptual form. It is a demonstrative application for integrating users' context information into service management operations. This
ontology provides the semantics, using a certain level of formalism, to capture concepts from the context information for helping to define data required by various
service management operations. It also augments the expressiveness of the policy
information model by adding domain-specific context data.
The diversity of languages used in management creates a corresponding diversity in management knowledge. However, semantic information can be managed by
reasoners and semantic discovery tools that are capable of identifying the cognitive
similarities between multiple concepts. As depicted in Fig. 4.1, if Data Model A
has some cognitive similarities with Data Model B, the process of finding such
similarities is very complex, and in general fails when ontologies are not used due
J.M. Serrano Orozco, Applied Ontology Engineering in Cloud Services, Networks
and Management Systems, DOI 10.1007/978-1-4614-2236-5_4,
Springer Science+Business Media, LLC 2012
to the lack of an underlying lexicon that enables different concepts to be semantically related to each other. However, when using ontologies, the terms in these two
data models can be related to each other by using the formal linguistic relationships
defined in the ontologies. In this chapter, the use of OWL as a formal language is
explained and related to its capability to realize schema-based ontology matching
and integration.
As a quick survey, and as mentioned before, the use of standard
languages, such as OWL, promotes the easy and flexible integration between ontologies and models. The characteristics of the ontology language define the clarity
and quality of the knowledge that the ontology specifies. However, not all ontologies are built using the same set of tools, and a number of possible languages can be
used. A popular language is Ontolingua [ONTOLINGUA], which provides an integrated environment to create and manage ontologies using KIF [Genesereth91].
Other languages, such as KL-ONE [Brackman85], CLASSIC [Borgida89] and
LOOM [Swartout96], were defined according to domain-specific requirements.
Another standards-based approach is to follow the conventions defined in Open
Knowledge Base Connectivity (OKBC) [OKBC] model, KIF or CL-Common
Logic. Each of these languages is an example that has become the foundation of
other ontology languages, and each specifies a language that enables semantics to
be exchanged.
This chapter describes the ontology construction process. The phases for building an ontology are not detailed but the objective is to demonstrate the formal mechanism to represent information and most importantly the interactions between the
different information domains. Formal concepts present in the information models
are used and formally represented in this chapter. The relationships between the
concepts from information models are then defined as part of the formalization
process. This provides enhanced semantic descriptions for the concepts present in
the information models.
The organization of this chapter is as follows. Section 4.2 provides a general
description about the data and information model used as basis in our approach (i.e. the
Context Information) with the objective of clearly identifying the elements that
exist in the information model and represent the objects definitions.
Section 4.3 introduces the policy model structures, first to define the structure of
a policy and second to demonstrate the information links between different domains
by using ontologies, that is Policy Information, and the Service Lifecycle Management
Operations.
Section 4.4 presents the model interactions and formal representations: the
information models are first studied via their XML schemas to understand their components, and then formally represented and integrated within the process of building
an exemplar ontology, that is, an ontology representation that illustrates the practical side of using ontologies to integrate data and information models,
in which each model is separately augmented semantically and then finally integrated
by using ontologies.
Section 4.5 presents the conclusions of this chapter.
4.2
even the data models that are derived from the information model). The language
used to represent the data is usually informal. However, unless a single common
information model (CIM) exists, there is no way to harmonize and integrate these
diverse data models without a formal language, since informal languages may or
may not be able to be unambiguously parsed. Hence, various initiatives have been
proposed, with the objective to standardize and integrate such information.
These include information models such as the CIM [DMTF-CIM], authored by
the Distributed Management Task Force (DMTF) and the Shared Information and
Data model (SID) [TMF-SID], authored by the TeleManagement Forum (TMF).
These two models are arguably driving the modelling task for the computer industry. However, both of these models represent vendor-independent data, as opposed
to vendor-specific data, and hence are not adopted by many vendors. The documentation about CIM and SID information models can be found in [DMTF-DSP0201],
[DMTF-DSP0005] and [TMF-SID], respectively.
However, from the management perspective, those initiatives do not provide
enough tools to integrate context information for management operations in communications systems (or vice versa: the modelling of commands issued by management systems that (re)configure devices and services). In addition, neither the CIM
nor the SID provides any specific type of context model definition.
The level of formalism in both is compromised; the CIM does not use a standard
language such as UML (it has invented its own proprietary language), and while the SID
is UML-based, other modelling efforts of the TMF are making compromises that
are turning the information model into a set of data models. Indeed, the CIM is in
reality a data model, as it is not technology-independent (it uses database concepts
to represent its structure, which classifies it as a data model) [DMTF-CIM].
Thus, the real challenge is to promote information interoperability in heterogeneous systems combining network technologies, middleware and Internet facilities, creating an environment where information is always available between devices, applications and their services. In this sense, the integration of information models is a non-trivial task, and it requires special care when diverse information models from different domains need to be integrated. Ontology engineering has proven to be a formal mechanism for solving problems of meaning and understanding; hence, it appears to be one of the best candidates for reconciling the vendor- and technology-specific differences present in information and data models. While ontologies have been previously used for
representing context information, currently most proposals for context representation ignore the importance of the relationships between the context data and
communication networks.
The context information modelling activity, using ontologies, relies on knowing
what elements of context are actually relevant for managing pervasive services; this
in turn drives the selection and use of the proper ontology for discovering the
meaning(s) and/or helping with reasoning about the context information in the networks [López03b]. Moreover, context information has not yet been represented or considered in management ontologies as a relevant part of managing services. This section concentrates on describing the advantages of
using ontologies to represent and integrate different types of context information from
different information and data models into service management operations.
In the process of representing information from multiple data sources, many different mechanisms can be used. For example, XML [XML-RPC] has emerged as a
widely accepted way of representing and exchanging structured information. XML
allows the definition of multiple markup tags and constraints that can describe the
relationship between information structures. In particular, an XML Schema (XSD)
[XML-XSD] is a schema language, which means that it contains a set of grammatical
rules which define how to build an XML document. In addition, an XSD can validate
an XML document by ensuring that the information in the XML document adheres to
a set of specific datatypes (i.e. it implies the existence of a data model that validates
the content of the XML document). In this way, XSDs allow more control over the
way XML documents are specified. Certain common datatypes are supported, and
there is the ability to specify relationships and constraints between different elements
of a document. However, an XSD can contain a complex list of rules, causing the
XML document to be turned into a much more complex document for describing
information. In addition to a non-user-friendly list of markup tags, XSDs define a
limited set of datatypes, which can impede the natural representation of information.
The principal technical strength of an XSD is its text-based representation, which makes it easy to build tooling to process XSD documents. XSDs impose a strict syntax to permit the automated validation and processing of information in an unambiguous way, which is required by pervasive applications. The XSD/XML editor used in this work is XMLSpy [XMLSPY].
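The core of what an XSD validator does, checking that element content adheres to declared datatypes, can be sketched in Python; the element names and the type mapping below are invented for illustration, not taken from any model in this chapter:

```python
import xml.etree.ElementTree as ET

# A toy "schema": element name -> expected datatype, mimicking what an
# XSD declares with xs:int, xs:string, etc.
SCHEMA = {"deviceId": int, "status": str, "port": int}

def validate(xml_text, schema):
    """Return True if every child element is declared and its text parses
    as the declared datatype; reject undeclared elements."""
    root = ET.fromstring(xml_text)
    for child in root:
        expected = schema.get(child.tag)
        if expected is None:
            return False          # undeclared element
        try:
            expected(child.text)  # datatype check, e.g. int("80")
        except (TypeError, ValueError):
            return False
    return True

doc = "<device><deviceId>42</deviceId><status>up</status><port>80</port></device>"
bad = "<device><deviceId>router</deviceId></device>"
```

Here `validate(doc, SCHEMA)` succeeds, while `validate(bad, SCHEMA)` fails because "router" does not parse as the declared integer type; a real XSD validator applies the same principle over a far richer rule set.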
The idea of creating ontologies to support the integration of diverse information
models is the result of extensive research activity to find a solution to the problem of information interoperability, which is necessary for enabling integrated management. Integrated management is one of the most complex problems in ICT systems.
The use of ontologies is based on the assumption that formal representations can enable computing systems to use information that is relevant for one particular domain in another domain, to mutual or individual benefit. This section builds on this idea in order to improve and simplify the control of the various management operations required in the service lifecycle.
This section explains how to use information models to build more formal, ontology-based information models that structure, express and organize interoperable context information. The most important challenges for the integrated model are that context information is dynamic (it can change very quickly) and that it is naturally distributed across many layers of the systems. Thus, the models need to be much more robust and, at the same time, semantically rich and flexible enough to be used on multiple platforms and systems.
An ontology-based model considers the current status of the managed object, as
well as current and future aspects of the context information describing the managed
object, in order to determine if any actions are required (e.g. to govern the transitioning of the state of the managed object to a new state). This also applies to other managed objects that affect the state of the object being managed. Most current
applications are narrowly adapted to specific uses, and do not provide sufficiently
rich expressiveness to support such a generic context information model.
In the following sections, the concept of using ontologies for integrating context into service management operations is developed, together with a novel vision of the functional components that an ontology-based middleware solution for context integration must contain. These results are presented as part of the construction process for the integrated model, which culminates in an ontology-based model.
4.2.1
An additional challenge is the sharing and exchange of information between different levels of abstraction.
Figure 4.2 shows the context information model defined to capture and represent the explicit context information required to support pervasive services.
The description of the context information model, with its set of general classes
and sub-classes, is described in the following sub-sections.
4.2.2
A simplified version of the context model is shown in Fig. 5.1. Descriptions of the objects contained in this model are presented below. The model has a small set of high-level classes of entities and relationships, in order to keep it conceptually simple. The model contains four main types of entities: person, place, task and object.
These classes are defined by taking into account what they need to represent in the
service provisioning process, and their relationships with various service lifecycle
operations, and are described in the following sub-sections.
4.2.2.1
This object represents a human, and can be anyone, including end users as well as
people responsible for different stages of the service provisioning and deployment
process. If the person himself (e.g. service operator or service manager) or his characteristics are relevant to the delivery of a service, then an object representing that
person should appear in the model. For example, a person's attributes could include the professional role or position in the company that this person occupies.
4.2.2.2
This object represents the location for whatever entity it is representing, and includes
(for example) positional references where persons, applications (services) or objects
(network devices) could be or are actually placed. For example, a place could
describe a country, a city, a street, a floor, a coverage area or a combination of these.
Basic attributes could further specify the location of the place as an address, phone
number, GPS coordinates, position or a combination of these and other entities.
4.2.2.3
This object represents activities that are or could be performed by one or more
applications, people or devices, and may depend on other tasks or be a smaller part
of a larger task. Examples of the attributes of a task entity could include the start
time, end time, due time and status (both in terms of success/failure as well as the
percentage finished); clearly, additional attributes can be defined as well.
4.2.2.4
This object can represent any physical or virtual entity or device, such as a server, a
router, a printer or an application. Examples of basic attributes of an object entity
include the status (on, off, standby) of the entity and its description (technical or
social). It is important to highlight that a classification of context types must be used to help examine and organize any additional pieces of context that can be useful in managing pervasive service applications.
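As a rough sketch, the four entity types and their basic attributes described above could be modelled as plain classes; the attribute names are illustrative choices, not definitions taken from the model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Person:
    name: str
    role: Optional[str] = None          # e.g. "service operator", "service manager"

@dataclass
class Place:
    description: str                    # country, city, street, coverage area, ...
    gps: Optional[tuple] = None         # optional positional reference

@dataclass
class Task:
    name: str
    status: str = "pending"             # success/failure or percentage finished
    depends_on: List[str] = field(default_factory=list)

@dataclass
class Object:
    description: str
    status: str = "off"                 # on, off, standby

@dataclass
class Context:                          # ties the four entity types together
    person: Person
    place: Place
    task: Task
    obj: Object

ctx = Context(Person("Ana", role="service manager"),
              Place("WiFi coverage area 3"),
              Task("deploy service code"),
              Object("edge router", status="on"))
```

The point of the sketch is the containment: a single context instance relates one person, one place, one task and one object, which is the minimal structure the model above prescribes.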
4.3
4.3.1
this book. There are four main policy models that are based on information models:
(1) the IETF policy model, (2) the DMTF CIM, (3) the TMF SID and (4) the latest
version of DEN-ng, being standardized in the ACF.
The IETF policy model is specified in RFC 3060 [IETF-RFC3060] and RFC
3460 [IETF-RFC3460]; the DMTF CIM is based on and extends these two models.
Both the CIM and the IETF standard share the same basic approach, which specifies
a set of conditions that, if TRUE, results in a set of actions being executed. In
pseudo-code, this is:
IF a condition_clause evaluates to TRUE, subject to the evaluation strategy
THEN execute one or more actions, subject to the action execution strategy
In contrast, the SID, which is based on an old version of DEN-ng (version 3.5),
adds the concept of an event to the policy rule. Hence, its semantics are:
WHEN an event_clause is received
IF a condition_clause evaluates to TRUE, subject to the evaluation strategy
THEN execute one or more actions, subject to the rule execution strategy
The latest version of DEN-ng (version 6.6.2) enhances this by respecting these rules and adding alternative actions, reflected in a new pseudo-code, as explained in detail in [Strassner07a]:
WHEN an event_clause is received
IF a condition_clause evaluates to TRUE, subject to the evaluation strategy
THEN execute one or more actions, subject to the rule execution strategy
ELSE execute alternative actions, subject to the rule execution strategy
DEN-ng was still under revision at the time of writing, but it already offered some excellent improvements over the existing state of the art. In this book, as part of the demonstrative example, a compromise between the existing IETF standards and the newly emerging DEN-ng architecture is therefore included.
The high-level description of policies follows the format:
WHEN an event_clause is received that triggers a condition_clause evaluation
IF a condition_clause evaluates to TRUE, subject to the evaluation strategy
THEN execute one or more actions, subject to the rule execution strategy
ELSE execute one or more alternative actions, subject to the rule execution strategy
The above pseudo-code makes an innovative compromise: it defines an event
as a type of condition. The problem with the IETF standard is that no events whatsoever are mentioned. This means that there is no way to synchronize or even
debug when a policy condition is evaluated, since there is no way to trigger the
evaluation.
Hence, as part of the demonstrative example, which is a part of this section, the
IETF model has been extended to add specific triggering semantics. This is indicated by the phrase that triggers a condition_clause evaluation; for reference,
refer back to the SID pseudo-code, which only stated: WHEN an event_clause is
received. The purpose of this addition is to explicitly indicate that an event triggers
the evaluation of the condition clause.
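A minimal sketch of the extended WHEN/IF/THEN/ELSE semantics, with the event treated as the trigger for condition evaluation; the rule structure and the names are hypothetical simplifications of the pseudo-code above:

```python
def evaluate_policy(event, policy):
    """Evaluate one rule under event-triggered semantics: the event is a
    type of condition, so without a matching event nothing is evaluated
    at all (this trigger is the gap in the original IETF model)."""
    if event != policy["trigger_event"]:
        return None                          # not triggered: no evaluation
    if policy["condition"](event):
        return [action() for action in policy["actions"]]      # THEN branch
    return [action() for action in policy["alt_actions"]]      # ELSE branch

policy = {
    "trigger_event": "linkDown",
    "condition": lambda e: True,             # evaluation strategy elided
    "actions": [lambda: "reroute"],
    "alt_actions": [lambda: "log"],
}
```

Calling `evaluate_policy("linkUp", policy)` returns `None` rather than a failed condition: the rule was never evaluated, which is exactly the distinction the triggering event adds.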
Fig. 4.3 The policy hierarchy pattern applied to the definition of PolicySets representation
The design improvement of the pseudo-code came after the work for the IST-CONTEXT architecture had been finished. Hence, to incorporate this work into the
improved onto-CONTEXT architecture [Serrano07c], the model and ontology were
redesigned to define events as types of conditions. In this way, the onto-CONTEXT
architecture could remain compliant with the IETF standard but implement enhanced
semantics.
The above description follows a simple syntax structure. Policies are not evaluated
until an event that triggers their evaluation is processed. However, the rest of the
policy uses implicit rules, which results in poor semantic understanding. Therefore, in order to enable the exchange of the information contained in policies, together with the semantics describing what a policy may and may not do and when it is executed, more semantic rules are mandatory.
4.3.2
Policy Hierarchy
The policies are structured hierarchically, in terms of Policy Sets, which can be either
PolicyRules or PolicyGroups. The PolicyGroups can contain PolicyRules and/or
other PolicyGroups. This is enabled through the use of the composite pattern for defining
a PolicySet, and is shown in Fig. 4.3. That is, a PolicySet is defined as either a PolicyGroup
or a PolicyRule. The aggregation HasPolicySets means that a PolicyGroup can contain
zero or more PolicySets, which in turn means that a PolicyGroup can contain a PolicyGroup
and/or a PolicyRule. In this way, hierarchies of PolicyGroups can be defined.
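The HasPolicySets aggregation is an instance of the classic composite pattern; a minimal sketch, with class bodies reduced to the containment logic:

```python
class PolicySet:                      # abstract base: a PolicySet is either
    def rules(self):                  # a PolicyGroup or a PolicyRule
        raise NotImplementedError

class PolicyRule(PolicySet):          # leaf
    def __init__(self, name):
        self.name = name
    def rules(self):
        return [self.name]

class PolicyGroup(PolicySet):         # composite: zero or more PolicySets
    def __init__(self, *members):
        self.members = list(members)  # PolicyRules and/or PolicyGroups
    def rules(self):
        # flatten the hierarchy in traversal order
        return [r for m in self.members for r in m.rules()]

# Hierarchies of PolicyGroups can be defined:
top = PolicyGroup(PolicyRule("r1"),
                  PolicyGroup(PolicyRule("r2"), PolicyRule("r3")))
```

Because `PolicyGroup` holds `PolicySet` members, a group can contain groups and/or rules to arbitrary depth, which is precisely what the HasPolicySets aggregation permits.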
The order of execution of PolicyRules and PolicyGroups depends on the structure of the hierarchy (e.g. grouped and/or nested), and is controlled by the set of metadata attributes contained in the Policy_Aim element. The service management
policies control just the service lifecycle operations, and never the logic of the
service. In this way, service management policies are used by the components of
policy-based management systems to define the deployment of the service as a
result of the management. For exemplification purposes, five types of policies covering
the service lifecycle have been defined; these policy types are structured around an
information model whose most representative part is shown in Fig. 4.4.
4.3.3
Policy Model
4.3.3.1
The Policy_Set_Id is the first of the identifier elements. The Policy_Set_Id element
contains the identifier of the policy set to which the instance of this policy belongs.
4.3.3.2
The next field among the identifier elements is the Policy_Group_Id element. It
contains the identifier of the group to which this instance of this policy belongs.
4.3.3.3
The third field among the identifier elements is the Policy_Id element. This element
contains information to uniquely distinguish this policy instance from other instances
of the same policy rule.
4.3.3.4
This element defines information used to manage the policy when received from the
Policy Definition System. This field specifies if this policy is a new policy to be
loaded in the system, or if the policy identified by the above three-tuple (Policy_
Set_Id, Policy_Group_Id, Policy_Id) has been already defined and, if so, has already
been loaded into the system. In the latter case, this element defines whether this
instance should replace the existing instance or not.
4.3.3.5
This field, dedicated to the management of this policy instance, defines the way to
enforce the policy. The IsAtomic element is a Boolean value that defines whether
concurrent execution is allowed. If concurrent execution is allowed, then multiple
policies can be executed before their results are verified; otherwise, this policy must
be enforced before starting the evaluation of the next policy in the sequence; that is, the sequence in which the policies must be enforced cannot be interrupted.
4.3.3.6
This field, dedicated to the management of this policy instance, defines when this
policy is evaluated with respect to other policy instances that are contained in this
particular policy group.
4.3.3.7
This element is used to express the policy expiration date. Usually, the expiration
date is given as the time that the policy starts and finishes. Filters that specify further
granularity can also be introduced. In Fig. 4.6, the structure of the Validity_Period
element is shown. Note that the only mandatory element is the Time_Period element, which includes the start and stop times.
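A sketch of evaluating the mandatory Time_Period sub-element; the dictionary layout is an assumption for illustration:

```python
from datetime import datetime

def policy_is_valid(now, time_period):
    """Time_Period is the only mandatory sub-element of Validity_Period:
    the policy is valid between its start and stop times."""
    return time_period["start"] <= now <= time_period["stop"]

period = {"start": datetime(2011, 1, 1), "stop": datetime(2011, 12, 31)}
```

For instance, `policy_is_valid(datetime(2011, 6, 1), period)` holds, whereas a date after the stop time does not; optional filters would simply add further predicates to this check.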
4.3.3.8
Condition Element
The Condition element includes all data objects, data requirements and evaluation parameters needed to specify and evaluate it. The Condition element contains three basic sub-elements: Condition_Object, Condition_Requirement and Evaluation_Parameters.
4.3.3.9
4.3.3.10
Action Element
The Action element contains all the information needed for the enforcement of the specific action. It contains four basic sub-elements: Action_Parameter, Enforcer_Component, Enforcement_TimeOut and Success_Output_Parameters. The structure of the Action element is shown in Fig. 5.6.
4.4
modelling and applicability of the information is the main goal of using ontologies
to support the integration of the context information to help define the context-awareness of the services provided; second, the policy model, where the ontologies
help represent the terms, elements and components of a policy and the relationships
between them, and are also used to establish the relationships or links with the context information; third, communications networks, where the ontologies represent
the elements, operations and components that manage the lifecycle operations of
pervasive services.
The following section describes these three domains, along with the associated
class definitions. It is important to highlight how classes from different domains interact with each other.
4.4.1
This section does not propose a new information model for context, although it does extend existing definitions from earlier works, compiled in the state-of-the-art review in Chap. 3 (see Sect. 3.3.1.1 for more details). The model used is composed of the four concepts person, place, task and object, which have been found to be the most fundamental data required for representing and capturing the notion of context information.
The information requirements and the revision of the state-of-the-art can be
reviewed in [Bauer03], [Debaty01], [Eisenhauer01], [Gray01], [Henricksen04],
[Korpi03b], [Schmidt02], [Starner98] and other research works.
Ontologies are used to formalize the enhancements made to the extension of the context model; in this section, the interaction between domains is specified and represented. This reflects one of the objectives of this book, which is to model the entities defined in the information model using a formal language based on ontologies, and to represent the context information. Specifically, in this section UML class diagrams are used to define basic context and management information, which is then enhanced to create the formal ontology.
The context representation using UML classes consists of the definition of classes
and their relationships. Relationships are defined as a list of elements that have a
relationship, such as a dependency or an aggregation, to another set of elements.
The Context class can be related to a set of classes that represent the specific context
information that can be shared.
Figure 4.7 shows the context information model upper level ontology. The context representation is structured as a set of abstract classes describing a physical or
virtual (i.e. logical) object in the service domain. Attributes and relationships can be
optionally specified to further define the characteristics of and interaction between
different aspects of the context. The Context class is related to the Object, Person,
User, Place, and Task classes. Each of these is implemented as a class container.
This enables the definition of each to be inherently extensible; it also enables the
information in each of the components that is placed in a class container to be
stored as a Data Model. These five relationships collectively constitute the bridge
to formalize the concept of context. Similarly, the relationships between Person and
the Physical, Place, Position, Task and Object concepts with the Inmaterial concept
define these as semantic relationships.
For example, assume that a pervasive context-aware service is being defined.
When an end user appears in a specific WiFi coverage area, the first operation that
is required is to determine how those data are related to the context of that end
user. Hence, these data need to be related to a Context Entity that includes that
end user.
Using the ontology representation shown in Fig. 4.8, it can be seen that the Context
Entity is related to a Data Model, which is in turn related to the location of the end
user, both through the isLocatedAt relationship as well as the isPlacedOn relationship.
This latter relationship associates the position of the end user with the context, which
means that static as well as dynamic locations are automatically accounted for. Note the
difference between these relationships and, for example, different types of Locations (i.e. indoor or outdoor): the former require instance data (and hence are defined only in the Data Model), while the latter can be statically defined, since the indoor and outdoor concepts are predefined. This means that when the Context Entity is being used
to trigger the service, the location of the user can be used to do so.
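The relationships just described can be sketched as subject-predicate-object triples queried at service-trigger time; the tiny in-memory store below stands in for an OWL-based reasoner, the isLocatedAt and isPlacedOn predicates come from the text, and the hasDataModel predicate and all instance names are invented:

```python
# Tiny in-memory triple store: (subject, predicate, object)
triples = set()

def relate(subj, pred, obj):
    triples.add((subj, pred, obj))

def objects_of(subj, pred):
    """Return all objects reachable from subj via pred."""
    return {o for (s, p, o) in triples if s == subj and p == pred}

# An end user appears in a WiFi coverage area: the data are related
# to a Context Entity through its Data Model.
relate("ContextEntity1", "hasDataModel", "DataModel1")
relate("DataModel1", "isLocatedAt", "WiFiArea3")       # static location
relate("DataModel1", "isPlacedOn", "Position(10,42)")  # dynamic position
```

A service trigger can then follow `hasDataModel` from the Context Entity and query either relationship, so both static and dynamic locations are reachable from the same entity.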
4.4.2
The second domain is the policy-based management domain. Chapter 2 reviews the
state of the art of policy information models, where ontology-based policy approaches have been studied (please refer to Sect. 3.4.1.2 for more details).
4.4.3
The third domain is the set of management operations for service management. This domain has been defined in the framework of business-oriented technologies, but to date has never been related to network operations or, especially, to management applications. The proposal in this section is hence novel, since context
information is used to control management operations of pervasive services.
Figure 4.11 shows a set of interactions between various policy-based management operations. This class diagram depicts some of the important service lifecycle
operations that must be controlled by the integrated context information. Here, integration explicitly means the definition of relationships in the information model and
the formalization of those relationships by the ontology. The operations are structured
as relationships between basic service management components (abstract classes
with attributes) in a service management system.
The service management operations are the result of policy tasks that execute in
response to the evaluation of certain conditions related with the service lifecycle. For
example, a policy could request certain information from a service listener and, based
on the data received, distribute and/or execute one or more sets of service code.
The information to be delivered by the listener, in this example, can then be a
value that makes the policy evaluation true (i.e. equal to its expected value), which
then results in executing one or more actions. In the class diagram of Fig. 4.11, the
evaluation of the value, coming from a context variable, is managed by the
PolicyApplication, and such information triggers a serviceInvocation. One of these
policy actions can be the policyDistribution to certain Storage points. The
PolicyDecision decides what actions must be executed and then passes control to
PolicyExecution.
A serviceInvocation can be signalled by a ManagedEntity containing the context
values to be evaluated by the PolicyApplication, or from the Service Listener. The
PolicyExecution is responsible for the distribution of service code and service policies as well as their deployment (as a result of codeDistribution and the codeMaintenance operations).
The PolicyEvaluation helps the PolicyManager to make decisions based on the
values of the relevant context information, which can be measured or computed.
Finally, the ConflictCheck is responsible for ensuring that the set of current policies
do not conflict in any way (e.g. the conditions of two or more policies are simultaneously satisfied, but the actions of these policies do different operations to the same
ManagedEntity).
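A minimal sketch of the ConflictCheck idea: two policies conflict when their conditions can be satisfied simultaneously but their actions perform different operations on the same ManagedEntity. The policy structure and the field names here are assumed for illustration:

```python
def conflicts(p1, p2, context):
    """Both conditions satisfied in this context, different operations,
    same ManagedEntity -> potential conflict."""
    both_fire = p1["condition"](context) and p2["condition"](context)
    same_target = p1["target"] == p2["target"]
    different_ops = p1["operation"] != p2["operation"]
    return both_fire and same_target and different_ops

p_up = {"condition": lambda c: c["load"] > 0.8,
        "target": "router1", "operation": "scaleUp"}
p_down = {"condition": lambda c: c["cost"] > 100,
          "target": "router1", "operation": "shutDown"}
```

With a context of high load and high cost, both conditions fire against the same target with different operations, so the pair is flagged; a real conflict checker would reason over condition satisfiability rather than a single concrete context.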
Figure 4.12 shows the service management components and the associated management operations involved in the service lifecycle process. Those operations are
typically represented as policy management concepts when UML class diagrams
are being used. Here, however, a novel vision uses ontology class interaction maps to integrate information between different domains; this is an important interaction, and it is explained in what follows.
The novel use of this interaction map enables the visualization of the semantic
relationships necessary when different classes are being related to each other. This
is especially important when these classes are from different domains. One example
is the ability to view semantic descriptions of the context data to identify the source
of the information. For example, this can be used to determine if the data is being
produced from end users (e.g. personal profile, service description or variables) or
network devices (e.g. server properties or traffic engineering values).
The main operations that a policy-based system can execute on a ManagedEntity
are shown in the Ontology class interactions map, where the PolicyManager works
to satisfy the policyConditionsRequestedBy relationship between the PolicyManager
and the PolicyApplication.
The PolicyApplication is related to the ManagedEntity in several ways, including
directing which management and/or context information is required at any given time.
4.4.4
To date, initiatives have used policy management approaches to tackle the problem
of fast and customizable service delivery. These include OPES [OPES] and
E-Services [Piccinelli01]. In this book, the use of the policy-based management paradigm to control the full service lifecycle using ontologies is a step further in this area, and it is presented as a novel example of data integration using formal mechanisms such as ontologies. In addition, to complement the use of ontologies to formally represent information and integrate data, a functional architecture is described which takes into account variations in context information and relates those variations to changes in service operation and performance for service control.
The managing concepts for service lifecycle operations are contained in the
ontology for integrated management. The integration of concepts from context
information models and policy-based management constitutes the foundations of
the semantic framework, which is based on the construction of a novel ontology
model for service management operations. A demonstration of the concepts of this
approach is one of the most important contributions in the area of knowledge engineering and telecommunications [Serrano07a].
Policy-based management is best expressed using a restricted form of natural language rather than a technical or highly specialized language that uses domain- and
technology-specific terms for a particular knowledge area. A language is the preferred way to express instructions and share data, since it provides a formal rigor
(through its syntax and grammatical rules) that governs what makes up proper input.
An ideal language is both human- and machine-readable, which enables systems to
automate the control of management operations. However, in network management,
multiple constituencies are involved (e.g. business people, architects, programmers,
technicians and more) and all must work together to manage various aspects of the
system. Each one of these constituencies has different backgrounds, knowledge and
skill sets; hence, they are represented using different abstraction levels.
The language and the interactions within these different abstraction levels are
shown in Fig. 4.13. While most of these constituencies would like to use some form
of restricted natural language, this desire becomes much more important for the
business and end users (it may even be undesirable for some constituencies, such as programmers, who are used to formal programming languages). This notion was
described previously as the Policy Continuum in [Strassner04], where each constituency is assigned a dialect of the language to use.
The introduced and depicted ontology is a global domain ontology that captures the consensual knowledge about context information, and includes a vocabulary of terms with a precise and formal specification of their associated meanings
that can be used in heterogeneous information systems. The ontology was designed
to enable policy-based management operations to more easily share and reuse
data and commands. The basic approach is to use the Policy Continuum to connect a service creation view to a service execution view using multiple dialects of
a common language.
This ensures that the different needs and requirements of each view are
accommodated. For example, there is a distinct difference between languages used
to express creating services from languages used to express the execution and monitoring of services. This also enables this approach to accommodate previously
incompatible languages, due to their different structures, formats, protocols or other
proprietary features. While there are some existing policy languages that have been
designed (e.g. Ilog Rules [ILOGRULES] and Ponder [Damianou01]), each of these
(and others) use different commands, which have different semantics, for executing
the same instruction when a policy is being evaluated.
The approach illustrated in this section uses a set of ontologies that capture the
syntax and semantics from different areas, but at the same time provide a level of
consistency by mapping those terms and phrases to the same expressive language.
This is done by using a set of languages based on OWL to ensure platform independence. Since OWL is based on W3C standards [W3C], this approach takes advantage of a popular existing standard, and hence makes it more appealing for adoption.
By augmenting this with formal graphical representations using UML [OMG-UML]
and describing the OWL syntax using XSDs, this approach takes advantage of the
large variety of off-the-shelf tools and freely available software providing powerful
and cost-effective editing and processing capabilities.
Each one of the implementation dialects shown in Fig. 4.14 is derived by successively restricting the vocabulary and grammar of the full policy language to make a particular dialect suitable for each level of the Policy Continuum. For some levels, vocabulary substitution using ontologies enables more intuitive GUIs to be built.
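Vocabulary substitution between Policy Continuum levels can be sketched as a per-dialect term mapping; the dialect names and all terms below are invented for illustration:

```python
# One mapping per Policy Continuum level: canonical concept -> dialect term.
DIALECTS = {
    "business": {"customer": "customer"},
    "system": {"customer": "subscriberProfile"},
    "implementation": {"customer": "subscriber_id"},
}

def translate(term, frm, to):
    """Map a term from one dialect to another via the shared canonical concept."""
    for concept, dialect_term in DIALECTS[frm].items():
        if dialect_term == term:
            return DIALECTS[to][concept]
    raise KeyError(term)
```

So a business-level policy mentioning "customer" can be rewritten at the implementation level as "subscriber_id" without changing the policy's meaning, which is the role the ontology plays between continuum levels.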
This ontology modelling work is based on the definition of a policy information
model that uses an ontology to help match a set of dialects to each different constituency. This follows the methodology described in the Policy Continuum. Specifically,
a context model and a policy model are combined to define management operations
for pervasive services in an overall framework, which provides a formal mechanism
for management and information systems to share and reuse knowledge to create a
truly integrated management approach. This exemplar and instructive work is aligned with the DEN-ng activity, and it constitutes a future research line for converging with that model to build DENON-ng, an ontology for the DEN-ng information model.
The ontology is driven by a set of pervasive service management use cases, each of which requires a policy-based management architecture as represented in this section. The ontology is founded on using information models for context information
and policy management to promote an approach to integrated management, which
is required by both pervasive as well as autonomic applications. The combination of
context-awareness, ontologies and policy-driven services motivates the definition of
a new, extensible and scalable semantic framework for the integration of these three
diverse sources of knowledge to realize a scalable autonomic networking platform.
Policies are used to manage various aspects of the service lifecycle. Thus, the
scope of this example addresses the various service management operations identified from the research and experience acquired by the active participation in the
IST-AUTOI [AUTOI] and IST-CONTEXT Project [IST-CONTEXT]. These service management operations include code distribution, service code maintenance,
service invocation, service code execution and service assurance, and are common
operations that constitute the basic management capabilities of any management
system. In spite of the valuable experience gained from the IST-CONTEXT project,
additional activities are necessary to support the service lifecycle, which is addressed
in this section.
The management operations model represents the service lifecycle operations, as shown in Fig. 4.14. In this figure, service management operations, as well as the relationships involved in the management service lifecycle process, are represented as classes. These classes will then be used, in conjunction with ontologies, to build a language that allows policies to be described in a restricted form of English. To do so, context information underlies these relationships, with one or more corresponding activities, called events, that may be related to context information.
The service lifecycle begins with the creation of a new service. The Service
Editor Service Interface acts as the application that creates the new service. Assume
that the service for deploying and updating the service code in certain network
nodes has been created. This results in the creation of an event named aServiceOn, which instantiates a relationship between the Application and Maintenance
classes. This in turn causes the appropriate policies and service code to be distributed via the Distribution class as defined by the aServiceAt aggregation. The service distribution phase finds the nearest and/or most appropriate servers or nodes to
store the service code and policies, and then deploys them when the task associated
with the eventFor aggregation is instantiated. When a service invocation arrives,
as signalled in the form of one or more application events, the invocation phase
detects these events as indication of a context variation, and then instantiates the
service by instantiating the association aServiceStart. The next phase to be performed is the execution of the service. Any location-specific parameters are defined
by the locatedIn aggregation. The execution phase implies the deployment of
service code, as well as the possible evaluation of new policies to monitor and manage the newly instantiated service.
Monitoring is done using the service consumer manager interface, as it is the
result of associations with execution. If maintenance operations are required,
then these operations are performed using the appropriate applications, as defined
by the aServiceOn aggregation, and completed when the set of events corresponding to the association whenServiceOff is received. Any changes required
to the service code and/or polices for controlling the service lifecycle are defined
by the events that are associated with the whenServiceNew and aServiceChange associations.
The service management operations are related to each other, and provide the necessary infrastructure to guarantee the monitoring and management of services over time. The UML design shown in Fig. 4.14 concisely captures these relationships; in this way, pervasive service provisioning and deployment is assured to supply the service code and policies supporting such services to the service consumers.
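The lifecycle walk-through above can be sketched as a minimal event-driven model. The association names (aServiceOn, aServiceAt, eventFor, aServiceStart, locatedIn, whenServiceOff) follow the text; the dispatcher class and the phase labels are illustrative assumptions, not the book's implementation.

```python
# Minimal sketch of the service lifecycle as event-triggered transitions.
# Association names follow the text; the dispatch mechanics are illustrative.

LIFECYCLE = {
    # event / association  -> lifecycle phase it drives
    "aServiceOn":     "maintenance",   # Application -> Maintenance relationship
    "aServiceAt":     "distribution",  # distribute policies and service code
    "eventFor":       "deployment",    # deploy code at selected storage points
    "aServiceStart":  "invocation",    # instantiate the service
    "locatedIn":      "execution",     # location-specific execution parameters
    "whenServiceOff": "teardown",      # maintenance operations completed
}

class ServiceLifecycle:
    def __init__(self, name):
        self.name = name
        self.history = []

    def on_event(self, association):
        phase = LIFECYCLE.get(association)
        if phase is None:
            raise ValueError(f"unknown association: {association}")
        self.history.append(phase)
        return phase

svc = ServiceLifecycle("codeUpdateService")
for ev in ["aServiceOn", "aServiceAt", "eventFor", "aServiceStart", "locatedIn"]:
    svc.on_event(ev)
# svc.history now traces the phases in the order described in the text
```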
4.5 Conclusions
In this chapter, the main advantage of using an ontology-based approach to integrate and represent context information and management operations has been introduced and discussed. The isolation achieved between explicit and implicit context information enables management systems to interact and exchange information when context information is used for management purposes.
Ontologies have been used for specifying and defining the properties that represent the context information and the management operations that support pervasive services in communications networks. It has been shown that using ontologies to represent context information and management operations for pervasive services enables those services to exhibit adaptive behaviour.
Ontology engineering for supporting the management of services through context integration enables context-awareness to be better implemented, by allowing management systems to use context information together with policy-based management mechanisms. Ontology engineering is versatile and solves some of the problems in the area of pervasive service management.
The ontology used as an example is an ontology-based information model for pervasive applications: it integrates an information model describing different aspects of context with ontologies that augment the model with semantic information, and these augmented data enable context-aware service requirements to be modelled. Structuring this as a formal language using OWL provides extensible and efficient context information for managing services.
According to the ontology representation, pervasive services can be automatically customized, delivered and managed. The ontology is suitable not only for
representing and defining context information concepts but also for defining management operations that motivate and promote the integrated management of pervasive services.
The contributions of this chapter focus on defining the basis for functional middleware technologies: an extensible ontology for the robust support and management of pervasive services. Future work will continue the study of formal ontologies and their interaction with information and data models. One key area to address is how to validate automatically, as far as possible, the relevance and correctness of both information and data models, as well as the corresponding ontologies, in pervasive systems.
Chapter 5
5.1
Introduction
5.2
requires a new understanding, and possibly new ways and tools, to control the emerging kinds of services that result from the evolution of the Internet and cloud computing, the main drivers of such new services. The different service phases exposed in this section describe the foundations of the service lifecycle. The objective is to focus research efforts on understanding the underlying complexity of service management, and to better understand the roles of the technological components that make up the service lifecycle, using interoperable information that is independent of any one specific type of infrastructure used in the deployment of NGNs.
One of the most important benefits of this agreement is the resulting improvement of the management tasks and operations that use such information to control pervasive services. However, the descriptions and rules that coordinate the management operations of a system are not the same as those that govern the data used in each management system. For example, information present in end-user applications is almost exclusively used to control the operation of a service, and usually has nothing to do with managing the service.
5.2.1
Service Creation
The creation of each new service starts with a set of requirements; the service at that
time exists only as an idea [Serrano08]. This idea of the service originates from the
requirements produced by market analysis and other business information. At this
time, technology-specific resources are not considered in the creation of a service.
However, the infrastructure for provisioning this service must be abstracted in order
to implement the business-facing aspects of the service as specified in a service definition process.
The idea of a service must be translated into a technical description of a new service, encompassing all the functionality necessary to fulfil the requirements of that service. A service is conceptualized as the instruction, or set of instructions, providing the mechanisms necessary to deliver the service itself; this is called the service logic, most commonly typified as SLOs.
5.2.2
Service Customization
5.2.3
Service Management
The main service management tasks are service distribution, service maintenance,
service invocation, service execution and service assurance. These tasks are
described in Sect. 5.3.
5.2.4
Service Operation
5.2.5
Service Billing
Service billing is just as important as service management, since without the ability
to bill for delivered services, the organization providing those services cannot make
money. Service billing is often based on using one or more accounting mechanisms
that charge the customer based on the resources used in the network. In the billing
phase, the information required varies during the business lifecycle, and may require
additional resources to support the billing. The makeup of different billing infrastructures is out of the scope of this chapter. However, since the management of service information is just as important as the maintenance of the service, the information and processes defined here can be used for both.
5.2.6
Customer Support
5.3
In this section, the management operations of a pervasive service and its interactions
are identified as distinct management operations from the rest of the service lifecycle
phases. Figure 5.2 depicts management operations as part of the management phase
in a pervasive service lifecycle.
In this book, the idea is that context information, together with other types of knowledge, is required in order to provide better management of services. An enhanced understanding of the semantics of a set of service management operations, enabling current and future pervasive service management, is crucial to achieve enhanced semantic management control. The explanations in this section argue that increased semantic control is best realized through the use of ontologies, which can integrate context information with other management information to achieve interoperability across the various management systems used by pervasive systems.
Management systems must be capable of supporting semantic variations of the
information and react to those variations in application-specific ways (e.g. to enable
context-awareness). Management systems must also be flexible enough to manage current as well as future aspects of services, adapting in response to variations in context information.
An important aspect of policy-based service management is the deployment of
services using programmable elements. For instance, when a service is going to be
deployed, decisions have to be taken in order to determine which network elements
are going to be used to support the service. This activity is most effectively done
through the use of policies that map the user and his or her desired context to the
capabilities of the set of networks that are going to support the service. Moreover,
service invocation and execution can also be controlled by policies, which enable a
flexible approach for customizing one or more service templates to multiple users.
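The deployment decision described above, mapping a user's context to the capabilities of the candidate networks, can be sketched as a simple capability-matching policy. All names and the data layout here are hypothetical; the book does not prescribe a concrete structure for network elements or context.

```python
# Illustrative sketch: a deployment policy selects the subset of network
# elements able to support the service, by matching the user's context
# against element capabilities. All field names are hypothetical.

def select_elements(user_context, elements):
    """Return elements whose capabilities cover the service demands."""
    required = set(user_context["required_capabilities"])
    return [e for e in elements
            if required <= set(e["capabilities"])
            and e["region"] == user_context["region"]]

elements = [
    {"id": "node-a", "region": "R1", "capabilities": {"exec", "storage"}},
    {"id": "node-b", "region": "R1", "capabilities": {"exec"}},
    {"id": "node-c", "region": "R2", "capabilities": {"exec", "storage"}},
]
ctx = {"region": "R1", "required_capabilities": ["exec", "storage"]}
chosen = select_elements(ctx, elements)  # only node-a covers both demands
```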
to be satisfied. In the figure, the interactions between the operations of different information in pervasive services are represented by arrows; these interactions can, for example, be limited as a result of mobility requirements that restrict the applicability of the information to different situations. Hereafter, formal representations for the information used by current pervasive services are introduced and explained.
There are two types of logic-based functions, derived from set theory, that are used to impose semantic control. The first type represents management operations, described as service functions f(Xsn)m, where Xs represents the service management operations, n is an index identifying the total number of services, and m is the number of replicas of the same type of service. The second type represents context information, described as context functions f(Ctn)m, where Ct represents the context information, n is the context number, used as a taxonomy indicator to classify the type of context, and m is the number of samples or variations of the same type of context information. The basic logic-based semantic functions are as follows:
f(Xsn)m  service functions, where n > 1 and m > 0,  (5.1)
f(Ctn)m  context functions, where n > 1 and m > 0.  (5.2)
The functions are representations of expressions that identify variations in the semantic values of the information, and can be operated on using logical or mathematical expressions. Logical and mathematical operations are used both for business-oriented solutions and for management information from possibly heterogeneous networks whose data could be distributed in different databases (e.g. from different sensors).
The functions are independent of each other. Context information is related to the services that use it by generating inclusive functions F, which contain sets of service and context functions. In these functions, the constraints n > 1 and m > 0 are rewritten as n ≥ 1 and m ≥ 1, since the service and at least one sample of context must both exist. The expression representing these conditions is as follows:

F[f{(Ctn)m} ∪ f{(Xsn)m}]  context functions related with service functions,  (5.3)

where n ≥ 1 and m ≥ 1.
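The inclusive function F can be read as pairing each context function with each service function it relates to, which is what yields the schema-element counts used later (e.g. one context type and two service functions give 1 × 2 = 2 interactions in Sect. 5.3.2). The sketch below is an illustrative set-theoretic reading, not the book's formal definition; the tuple encoding of functions is an assumption.

```python
# Sketch of the inclusive function F from (5.3): the set of semantic
# interactions between context functions and service functions,
# under the constraints n >= 1 and m >= 1 (both sets non-empty).

def inclusive_F(context_fns, service_fns):
    """Return the set of (context, service) semantic interactions."""
    assert context_fns and service_fns, "both sets must be non-empty"
    return {(ct, xs) for ct in context_fns for xs in service_fns}

# One context type f(Ct1)1 and two service functions f(Ds1), f(Dsn):
pairs = inclusive_F({("Ct", 1, 1)}, {("Ds", 1), ("Ds", "n")})
# len(pairs) is 1 x 2 = 2, the count used for the distribution phase
```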
Integrating the expression by set theory arguments and using summation:
F[{(Ct
n m
(5.4)
123
F[{(Ct
Xs =1
n m
services,
(5.5)
F[{(Ct
n m
Xs =1
policy services,
(5.6)
Xs =1
(5.7)
124
Representing the expression when Ct1 = 1 (no context variations) the function
obtained is:
ps + pn
Xs =1
(5.8)
Xs =1
(5.9)
Xs =1
(5.10)
In this example, the number of schema elements required is 21. To check that this approximation is correct, graphs representing the schema elements of the above example are presented in the following sections. At the same time, those graphs help to depict the service management operations and their interactions. This feature is itself a design requirement for pervasive systems, in order to achieve rapid context-aware service introduction and automated provisioning, and it is supported by this approach. Note that the schema elements of the ontology can themselves serve as a guide to verify that the number of semantic interactions is correct.
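The total of 21 schema elements can be cross-checked by summing the per-phase counts derived in Sects. 5.3.2-5.3.6 (each phase assumes a single context type, so its count is just the number of service functions). The dictionary below only records counts stated in the text; the computation itself is a trivial sanity check, not part of the book's method.

```python
# Schema elements per phase = (context types) x (service functions),
# following (5.6) with one context type per phase. Counts from the text.
phases = {
    "distribution": 2,   # Sect. 5.3.2: (1)(2) = 2
    "maintenance":  6,   # Sect. 5.3.3: (1)(6) = 6
    "invocation":   4,   # Sect. 5.3.4: (1)(4) = 4
    "execution":    4,   # Sect. 5.3.5: (1)(4) = 4
    "assurance":    5,   # Sect. 5.3.6: (1)(5) = 5
}
total = sum(phases.values())  # matches the 21 elements from (5.10)
```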
5.3.1
complex scenarios. Semantic functions are enabled using the concepts that later build up the ontologies, as explained at the beginning of this section (i.e. Sect. 4.3). In this section, UML use cases, as well as other approaches, are used to extract the meaning of the use case(s) and their links, and hence provide meaning to the system representations. The basic management operations supporting the lifecycle of pervasive services are described below.
Another tool for representing service management operations, and one able to identify the associated semantic interactions, is the sequence diagram, which represents the variations and conditions in the service [Kouadri04], as shown in Fig. 5.4. This is a sequence diagram for pervasive services, in which the common operations required to manage pervasive services are described as a communication process between two actors in the service lifecycle. The basic operations are highlighted as common operations present in most pervasive services, a result of the research activity and analysis of the scenarios examined (see Chap. 6 for details).
Different types of models have been proposed to enhance information interoperability and to satisfy the information requirements dictated by pervasive services [Park04], [Fileto03]. However, none of these requirements has been related to service functions or management operations. Knowing the information requirements dictated by pervasive services, and the different models proposed so far for information interoperability, a relationship between those models and the management functions must be established. The purpose of this relationship is to quantify how much each relationship supports or contributes to controlling the service lifecycle phases supporting the management operations.
5.3.2
Service Distribution
This step takes place immediately after the service creation and customization in the
service lifecycle. It consists of storing the service code in specific storage points.
Policies controlling this phase are termed code distribution policies (Distribution).
The mechanism controlling the code distribution will determine the specific set of
storage points that the code should be stored in. The enforcement will be carried out
by the components that are typically called Code Distribution Action Consumers.
A high level example of this type of policy is as follows:
If (customized service event f(Cs1)1 is received)
then (distribute service code f(Ds1) in optimum storage points selected with parameters f(Dsn))
Figure 5.5 represents how context information, as represented by the event in a
function f(Ct1)1, triggers the distribution of the code and the policies in functions
f(Ds1) through f(Dsn) as a result of context variations f(Ctn)m, where n is the index
number to identify the type of context and m is the number of samples or variations
of the same type of context information.
This figure shows two service functions, f(Ds1) and f(Dsn), as well as two context functions, f(Ct1)1 and f(Ctn)m. In this example, no context variations and only one type of information are assumed; thus, from this graph it can be observed that, according to (5.6), two ((1)(2) = 2) schema elements are required to represent the two semantic interactions that are necessary in this ontology representation, and that must be considered when the ontology is being created.
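The distribution policy above can be prototyped as a small condition-action rule. The event and function names mirror the text; the evaluation function and the "optimum" selection (here simply the first n points in sorted order) are illustrative assumptions.

```python
# Sketch: the code-distribution policy as a condition-action pair.
# Receiving the customized-service event f(Cs1)1 triggers distribution
# of the code functions f(Ds1)..f(Dsn) to selected storage points.

def distribution_policy(event, storage_points, n):
    if event == "f(Cs1)1":                      # condition: event received
        chosen = sorted(storage_points)[:n]     # "optimum" selection is an
                                                # assumption: first n points
        return [("distribute", f"f(Ds{i+1})", sp)
                for i, sp in enumerate(chosen)]  # actions
    return []                                    # condition not met

actions = distribution_policy("f(Cs1)1", ["sp2", "sp1", "sp3"], 2)
# distributes f(Ds1) to sp1 and f(Ds2) to sp2
```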
5.3.3
Service Maintenance
Once the code is distributed, it must be maintained in order to support updates and
new versions. For this task, special policies have been used, termed code maintenance
policies (CMaintenance). These policies control the maintenance activities carried out
by the system on the code of specific services. A typical trigger for these policies could
be the creation of a new code version or the usage of a service by the consumer. The
actions include code removal, update and redistribution. These policies will be
enforced by the component that is typically named the Code Distribution Action
Consumer. Three high-level examples of this type of policy are shown here:

If (new version of service code defined by f(Ds1+n) is TRUE)
then (remove old code version of service f(Ds1)) & (distribute new service code, function of f(M1+n))

If (customized service code expiration date defined by f(Ct1+n)m has been reached)
then (deactivate execution for service f(Ds1+n)) & (remove code of service, in function of f(M1+n))

If (the invocation number for the service is defined by the function f(Ct1+n)m)
then (distribute more service code replicas f(Ds1+n) to new Storage Points, as defined by the function f(M1+n))
Figure 5.6 represents how context information f(Ctn)m controls the maintenance
of the code and the policies in functions f(M1+n) as a result of context variations
f(Ctn)m, triggering the deployment of f(Ds1) through f(Ds1+n), which in turn invoke
new services. In these functions, n is the index number that identifies the type of
context, while m is the number of samples or variations of the same type of context
information.
This figure presents six service functions, f(Ds1), f(Dsn), f(M1), f(Ds1+n), f(Dsn+m) and f(M1+n), the latter three arising from the effect of the code maintenance operation. Two context functions, f(Ct1)1 and f(Ctn)m, are also defined. Since it is again assumed that no context variations are present and that only one type of information exists, it follows from (5.6) that six schema elements ((1)(6) = 6) are required, equivalent to the six semantic interactions that must be present in the ontology representation.
5.3.4
Service Invocation
The service invocation is controlled by special policies that are called SInvocation
Policies. The service invocation tasks are realized by components named Condition
Evaluators, which detect specific triggers produced by the service consumers. These
triggers also contain the necessary information that policies require in order to
determine the associated actions. These actions will consist of addressing a specific
code repository and sending the code to specific execution environments in the network. The policy enforcement takes place in the Code Execution Controller Action
Consumer. A high level example of this type of policy is as follows:
If (invocation event f(Ct1)1 is received)
then (customized service must be downloaded as function of f(Ds1) until f(Dsn) to IP addresses)
Figure 5.7 represents how context information f(Ctn)m is represented as an event, which in turn triggers the execution of services as a function of f(In). As a result of the context variations defined by f(Ctn)m, new invocations result in new context invocations, defined by the function f(In+1), which in turn are used to define new services. In these functions, n is the index identifying the type of context, while m is the number of samples or variations of the same type of context information.
This figure presents four service functions f(I1), f(In), f(In+1) and f(Im). In addition,
two context functions f(Ctn)m and f(Ctn+1)m are considered. As before, no context
variations and only one type of information are assumed. Thus, (5.6) once again is
used to determine that four ((1)(4) = 4) schema elements (e.g. semantic interactions)
are required for this ontology representation.
5.3.5
Service Execution
Code execution policies, named CExecution policies, will govern how the service
code is executed. This means that the decision about where to execute the service
code is based on one or more factors (e.g. using performance data monitored from
different network nodes, or based on one or more context parameters, such as location or user identity). The typical components with the capability to execute these
activities are commonly named Service Assurance Action Consumers, which evaluate network conditions. Enforcement of these policies will be the responsibility of
the components that are typically called Code Execution Controller Action
Consumers. A high-level example of this type of policy is as follows:

If (invocation event f(In+1) is received or f(In+1) is TRUE)
then (customized service must be deployed as function of f(Dn))
Figure 5.8 represents the deployment of service code. The context information is
defined as part of the invocation function f(In), and results in the triggering of services to be executed according to the function f(En). In these functions, n is the index
number identifying the type of context, and m is the number of samples or variations
of the same type of context information from the source. As a result of context
variations f(Ctn)m, new deployment results in code executions controlled by the
functions f(E1) through f(En), and also results in deploying new services as f(Dn).
The figure shows four new service functions, f(E1), f(En), f(D1) and f(Dn), and two invocation functions, f(I1) and f(In); the invocation functions were already considered in the previous graph representation. Thus, in this graph, once again using (5.6), four ((1)(4) = 4) schema elements (i.e. semantic interactions) are required for this ontology.
5.3.6
Service Assurance
This phase is under the control of special policies termed service assurance policies,
termed SAssurance, which are intended to specify the system behaviour under service quality violations. Rule conditions are evaluated by the Service Assurance
Condition Evaluator. These policies include preventive or proactive actions, which
will be enforced by the component typically called the Service Assurance Action
Consumer. Information consistency and completeness is guaranteed by a policydriven system, which is assumed to reside in the service creation and customization
framework. Examples of this type of policy are:

If (customized service is running as a result of a deployment function f(Dn))
then (configure assurance parameters f(A1+n) service) & (configure assurance variables f(D1+n))

If (f(En) = 1) & (parameterA > X) then (Action defined by function f(En))
If (f(Dn) = 1) & (parameterB > Y) then (Action defined by function f(Dn))
If (f(An) = 1) & (parameterC < Z) then (parameterA > X) & (Action defined by function f(D1+n))
Specifically, in this phase, the externally provided information can either match
pre-defined schema elements to achieve certain management activities or, more
importantly, the management systems can use these schema elements to extend and
share the information to other management systems.
This extension requires machine-based reasoning to determine the semantics and
relationships between the new data and the previously modelled data. Reasoning
about such decisions using ontologies is relatively new; an overview of this complex
task is contained in [Keeney06].
Figure 5.9 shows five service functions that make up part of the complete set of service management operations: f(A1), f(E1+n), f(E1+m), f(D1+n) and f(D1+m). The context information functions are considered atomic units; thus, in this graph, (5.6) is again used to determine that five ((1)(5) = 5) schema elements (i.e. semantic interactions) are required.
Note that if all the schema elements from Sects. 5.3.2-5.3.6 are added together, a total of 21 schema elements is obtained to represent the service management operations. The same value is obtained when solving the semantic-based logic function shown in (5.10).
Figure 5.9 represents the service assurance aspect of this approach. It shows
associations of context information f(Ctn)m through f(Ct1+n)1+m, which can be
expressed using policy conditions and actions that use information coming from the
external environment for executing and controlling the management operations for
ensuring the service.
5.4
This section presents research advances on semantic rules for combining service
management operations and context information models for promoting information
interoperability within pervasive applications. This section briefly describes a novel
management technique using context information and ontology-based data models.
Semantic rules associated with management operation functions are now presented.
These semantic rules act as the foundation for building ontologies by expressing
these functions as semantic interactions.
The control rules used to govern the service lifecycle functions follow logic-based rules expressed as policies, each of which defines a set of actions to be executed when certain conditions are met. This can be expressed using If-Then-Else type rules.
Table 5.1 shows a set of function-based rules expressed as Condition-Action
policies for managing the service lifecycle operations; these rules are expressed in
OWL. This table shows how policies can be created using semantic functions (i.e.
this table shows a function-based description of service management operations
using policies).
When the values of the semantic functions match concepts from the ontology, the semantic functions can be used for controlling the service management operations. These can be used to orchestrate a set of policies and, as the context information needs to be integrated in those management operations, the context information
Table 5.1 Semantic control rules defining service lifecycle managing functions

Service lifecycle phase       Semantic control rules for defining ontology-based functions

Customization                 If (serviceNew f(Crn) = Service001) & (LocatedIn f(Ctn)m = WebServer)
                              Then (CreateConfService001 f(Crn) = Service001)
                              If (userOf f(Ctn)m = ConfService001) & (LocatedIn f(Ctn)m = NetServer)
                              Then (CustConfServ001 f(Csn) = 001) & (CustConfServ002 f(Csn) = 002)

Distribution and deployment   If (ConfServiceScheduleAt f(Ctn) = 00:00:0000)
                              Then (DistConfServ001 f(Dsn) = 001) & (DistConfServ002 f(Dsn) = 002)
                              If (userOf f(Ctn) = ConfService001) & (locatedIn f(Ctn) = Reg001)
                              Then (StartConfService001 f(Dn) = ConfService001)

Execution and maintenance     If (userOf f(Ctn) = ConfService001) & (LocatedIn f(Ctn) = Cell001)
                              Then (StartConfService002 f(En) = Service002)
                              If (ConfServiceDateAt f(Ctn) = 00:00:0000)
                              Then (StopConfService001 f(An) = 001) & (StartConfService002 f(An) = 002)
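Before committing such rules to OWL, they can be prototyped as plain condition-action data and evaluated against a set of context facts. The dictionary encoding below is an illustrative assumption, not the book's OWL serialization; the predicate and action names are taken from Table 5.1.

```python
# Prototype of a few Table 5.1 condition-action rules as plain data.
# A rule fires when every condition matches the current context facts.

RULES = [
    {"if": {"serviceNew": "Service001", "LocatedIn": "WebServer"},
     "then": ["CreateConfService001"]},
    {"if": {"userOf": "ConfService001", "LocatedIn": "NetServer"},
     "then": ["CustConfServ001", "CustConfServ002"]},
    {"if": {"userOf": "ConfService001", "locatedIn": "Reg001"},
     "then": ["StartConfService001"]},
]

def evaluate(facts):
    """Fire every rule whose conditions all match the context facts."""
    fired = []
    for rule in RULES:
        if all(facts.get(k) == v for k, v in rule["if"].items()):
            fired.extend(rule["then"])
    return fired

acts = evaluate({"userOf": "ConfService001", "locatedIn": "Reg001"})
# only the third rule matches, so the service is started
```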
5.5 Conclusions
In this chapter, pervasive service lifecycle operations have been described and represented. It has been demonstrated that this facilitates the formal representation of service management operations, and the different phases of the service lifecycle have been explained. New organizational aspects of emerging Internet services and communications (ITC) demands, as well as network management operations and cloud computing functions in service modelling tasks, were considered in this work.
Management operations for pervasive services have been explained, emphasizing six important phases: creation, customization, management, operation, billing and customer support. The semantic control of service management operations is
the focus of this chapter. Five types of service management operations (code
distribution, code maintenance, invocation, code execution and assurance) were
examined with respect to being able to support semantic variations of management
and context information, so that application-specific behaviour can be orchestrated.
The use of functions following set theory, together with semantic rules for controlling the management operations of the service lifecycle, has been presented. The solution is suitable for any particular technology, as it follows self-management principles inspired by autonomic communications. The comparative results between a management system using policies with and without semantic enrichment and ontology-based functions illustrate the differences made by this approach.
An alternative formal representation of the service lifecycle operations has been described, and formal representations of service management operations for supporting information interoperability in pervasive environments have been considered as well. Finally, it has been demonstrated that service management operations can be described and formally represented by combining ontologies and information models following the principles explained in this chapter.
Chapter 6
6.1
Introduction
This chapter presents evaluation results on the general concept of using ontology engineering to represent network management operations and cloud services. The results are introduced as experiences and descriptions: scenario descriptions on service and network management, autonomic architectures, service application frameworks and cloud computing applications. The scenarios act as references and examples, with the aim of showing how ontologies contribute to supporting pervasive services and network management systems in cloud environments following autonomic principles.
The aim of this chapter is to depict autonomic applications providing management and support of services that require a certain level of smart understanding (which in turn enables ontology engineering processes, such as semantic reasoning and other ontology engineering techniques) when context information is integrated into the service management application. This chapter also describes how ontology engineering is used in the interaction of applications for the management operations of pervasive services in cloud environments.
The organization of this chapter is as follows. Section 6.2 summarizes the service
management benefits of using ontology engineering; while not an exhaustive
analysis, it describes the most general benefits when ontology engineering is
used in service and network management, autonomic and cloud environments.
Section 6.3 evaluates the general idea of using ontologies and the mechanisms or
tools that facilitate the integration of models and interoperability. This section concentrates more on the formal representation of the integration of information in pervasive services management than on the software tools or technology factors for
facilitating information interoperability, as in previous chapters of this book.
Section 6.4 surveys research challenges and features that define the information
model interactions required to achieve the level of information interoperability needed for next generation networks (NGNs) supporting services that
use autonomic communications principles and mechanisms.
J.M. Serrano Orozco, Applied Ontology Engineering in Cloud Services, Networks
and Management Systems, DOI 10.1007/978-1-4614-2236-5_6,
Springer Science+Business Media, LLC 2012
6.2
Internet services and systems were not conceived to pursue free
information exchange between networking data and service levels, mainly because
the model on which they were created did not pursue that aim. Currently, the Internet is in
many respects a completely different platform, well beyond its original objective of packet
switching between universities and government offices [Fritz90]. Today the
Internet has clear business objectives, so it is wise to reconsider the
role of the Internet in services and systems and to redesign its utilities targeting this
requirement. This fact promotes a race between the academic and industry communities: first, to investigate and design what could be a solution enabling this feature,
and second, to bring to market products enabling this feature and enhancing
current solutions with novel products. This new design feature has contributed to the transformation of Internet services from agnostic to more service- and network-aware,
and as an inherent consequence, communication systems are
following this evolution as well.
In this accelerated drive to design Future Internet services and systems,
many active academic and information technologies and communications (ITC)
6.2.1
already in Chap. 1 of this book. This trend mainly involves service management
issues between non-interoperable aspects of software systems and communications
infrastructure. In this panorama, a consistent orientation for understanding how ontology engineering can support the new service and communication challenges is more interesting than new definitions or procedures aiming to
guarantee such benefits.
Research initiatives addressing service-oriented architectures (SOA) base their
implementation on overlay networks that can meet various requirements whilst
keeping a very simple, unmanaged underlying Internet platform (mainly
IP-based). A clear example of this SOA design orientation is GENI, the
NSF-funded initiative to rebuild the Internet [NSFFI]. Other initiatives argue for
a fundamental redesign of the core Internet protocols themselves [CLEANSLATE], [NGN].
As the move towards convergence of communications networks and a more
extended service-oriented architecture design gains momentum worldwide, facilitated mainly by the pervasive deployment of Internet protocol suites (VoIP is a clear
example), the academic research community is increasingly focusing on how
to evolve networking technologies to enable the Future Internet and its implicit
advantages. Addressing the evolution of networking technologies in isolation is not enough; instead, it is necessary to take a holistic view of the evolution of
communications services and the requirements they will place on the heterogeneous
communications infrastructure over which they are delivered [IFIF], [SFIFAME].
By addressing information interoperability challenges, Internet systems
must be able to exchange information and customize their services, so that the Future
Internet can reflect changing individual and societal preferences in networks and
services and can be effectively managed to ensure delivery of critical services in a
services-aware design view with general infrastructure challenges. A current activity in many research and development communities is the composition of data models for enabling information management control. It focuses on the semantic
enrichment of the management information described in both enterprise and
networking data models with ontological data, to provide an extensible, reusable,
common and manageable data link plane, also referred to as the inference plane
[Serrano09].
6.2.2
Taking a broad view of the state of the art in ontology engineering, and particularly of data link interactions in the converging communications
and software IT technologies, including trends in cloud computing, many of the
problems present in current Internet systems and information management systems
are generated by interoperability issues.
Ontology engineering provides tools to integrate user data with management
service operations, and offers a more complete understanding of user content
based on data links, even relating users' social relationships. Hence,
a more inclusive governance of the management of resources, devices, networks,
systems and services can be used to support linked data of integrated management information across different management systems. This complete understanding of content uses ontologies as the mechanism to generate a formal description,
which represents the collection and formal representation of network management
data models and endows such models with the necessary semantic richness and formalisms to represent the different types of information that need to be integrated in network management operations. Using a formal methodology, user content
represents values used in various service management operations; thus the knowledge-based approach over the inference plane [Strassner07b] aims to be a solution
that uses ontologies to support the interoperability and extensibility required in systems handling end-user content for pervasive applications [Serrano09].
6.3
Trends in the area of ITC require multiple and diverse networks and technologies to
interoperate and to provide the necessary support and efficient management of pervasive services. In this sense, autonomic system principles and emerging cloud technologies, with their management tools, are an important way to facilitate and enable
solutions capable of managing the complexity of NGN services. This section discusses ontology engineering as an enabler of information interoperability, and outlines an
approach for the lifecycle management of services and applications that require specific
levels of quality of service (QoS), as a particular example of the advantages of
ontology engineering in achieving this objective. Ontology engineering is used to
enable interoperability across different management domains, using semantic
reasoning and leveraging policy-based management techniques to achieve autonomic behaviour.
In recent years, the business and technical aspects, and hence the complexity, of
Internet and communications services and their support systems have increased
enormously, requiring new technologies, paradigms and functionality to be introduced. The drive for more functionality has increased the complexity
of these systems so much that it is now almost impossible for a human to manage
all of the different operational scenarios that are possible in today's complex communication systems.
The stovepipe designs common in OSSs (operational support systems) and BSSs (business support systems) exemplify the drive to incorporate
the best implementation of a given feature; unfortunately, this focuses on the needs
of individual applications as opposed to overall system needs, and thus inhibits the
sharing and reuse of common data. This is a specific example of the general inability
The objective of defining the class interactions is to identify classes whose content
must be integrated with all or part of the information to be shared. The
task of identifying these interactions is done visually, using the ontology class diagrams based on the individual information model class definitions
presented and explained in the previous chapter of this book.
The class diagrams contain the class relationships described in a generic form
(e.g. as abstract classes interacting to deploy a service), but each relationship
and interaction between classes is appropriately refined when the ontology is constructed and subsequently edited. Thus, the class interactions map becomes a tool
for representing the semantic interactions which make up the ontology. In other
words, the ontology becomes a class map that identifies inter-linked concepts.
It is important to highlight the difference between relationship and interaction:
Interaction: the graphical representation of two or more elements in class diagrams that interact and hence define one or more specific behaviours. This is
normally shown using symbols that represent the type of interaction, so that
different interactions with different meanings can be easily identified.
Relationship: the semantic description of the interaction between two or more
elements, usually consisting of a description that follows simple syntax rules and
uses specific keywords (e.g. isUsedFor or asPartOf).
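The keyword-constrained relationship structure described above can be sketched as (subject, keyword, object) triples with a controlled predicate vocabulary. This is an illustrative sketch, not the book's implementation; the class names mirror the chapter's terminology, while the `ClassMap` structure is an assumption of this example.

```python
# Relationships are triples whose predicates come from a small controlled
# vocabulary, as in the chapter's keyword examples (isUsedFor, asPartOf).
KEYWORDS = {"isUsedFor", "asPartOf"}

class ClassMap:
    """Holds semantic relationships between ontology classes."""

    def __init__(self):
        self.triples = set()

    def relate(self, subject, keyword, obj):
        # reject predicates outside the controlled vocabulary
        if keyword not in KEYWORDS:
            raise ValueError(f"unknown relationship keyword: {keyword}")
        self.triples.add((subject, keyword, obj))

    def related(self, subject):
        """All (keyword, object) pairs recorded for a subject class."""
        return {(k, o) for s, k, o in self.triples if s == subject}

m = ClassMap()
m.relate("ContextEntity", "asPartOf", "Event")
m.relate("Event", "isUsedFor", "Condition")
```

The controlled vocabulary is what keeps the class map machine-checkable: an editor cannot introduce a relationship the ontology tooling does not know how to interpret.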
Figure 6.2 shows the domain ontology representation (upper representation).
The image represents the integration of the context information classes related to
the management operation class through the Event class. The Event class interacts
with other classes from different domains in order to represent context information.
Note that only the ContextEntity class from the context information model domain
and the Event class from the service management domain are shown. This simplifies
the identification of interactions between these information models. These entity
concepts, and their mutual relationships, are represented as lines in Fig. 6.2. For
instance, a ContextEntity forms part of an Event class, and the Event then defines
part of the requirement for evaluating a Condition as true or false. Another example
shows that one or more Policies govern the functionality of a ManagedEntity by
taking into account the context information contained in Events. This functionality
enables context to change the operation requested from a pervasive service or application, and is represented as the interaction between Event and ContextEntity.
The ontology seen in this class representation integrates concepts from the IETF
policy standards as well as the current DEN-ng model. Thus, in the following subsections, important classes that were originally defined in the IETF or DEN-ng models will be identified as such. The ontology defines a set of interactions between the
Context Data, Pervasive Management and Communications Systems domains in
order to define the relationships that represent interactions between the classes from
the information models of these three different domains. These interactions are an
important part of the formal lexicon, as shown in Fig. 6.3, and will be described below.
The formal language used to build the ontology is the web ontology language
(OWL), which has been extended to apply to pervasive computing environments [Chen03c]; these additional formal definitions act as complementary parts of
the lexicon. Formal descriptions of the terminology related to the management domain are included to build and enrich the proposal for integrating network
and other management data with context information, so as to more completely define the
appropriate management operations using formal descriptions. The ontology integrates concepts from policy-based management systems [Sloman94b], [Strassner04]
to define a context-aware system that is managed by policies, which is an innovative
aspect when integration work is being performed.
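The OWL constructs behind the ContextEntity/Event integration can be sketched as plain RDF triples. The W3C namespace URIs below are standard; the `example.org` namespace is a hypothetical stand-in for the book's ontology namespace, and the property name follows the chapter's `isDefinedAs` interaction.

```python
# Sketch of the OWL statements behind the ContextEntity/Event integration,
# expressed as bare (subject, predicate, object) triples.
OWL  = "http://www.w3.org/2002/07/owl#"
RDF  = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
EX   = "http://example.org/pervasive#"   # hypothetical ontology namespace

triples = {
    (EX + "ContextEntity", RDF + "type", OWL + "Class"),
    (EX + "Event",         RDF + "type", OWL + "Class"),
    # isDefinedAs links a ContextEntity to the Event that carries it
    (EX + "isDefinedAs", RDF + "type",    OWL + "ObjectProperty"),
    (EX + "isDefinedAs", RDFS + "domain", EX + "ContextEntity"),
    (EX + "isDefinedAs", RDFS + "range",  EX + "Event"),
}

def classes(graph):
    """Return all subjects declared as owl:Class."""
    return {s for s, p, o in graph
            if p == RDF + "type" and o == OWL + "Class"}
```

In practice such triples would be authored in an OWL editor and serialized in RDF/XML or Turtle; the point here is only that the lexicon reduces to machine-readable statements.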
From Fig. 6.3, ManagedEntity is the superclass of Products, Resources and
Services, and it defines the business as well as the technical characteristics of each.
A PolicyApplication is a type of Application. It evaluates a set of PolicyConditions
and, based on the result of the evaluation, chooses one or more PolicyActions to
execute. A PolicyController is the equivalent of a Policy Server in other models;
the name was changed to emphasize the fact that PolicyApplications are not limited
in implementation to the traditional client-server model. The ontology class interaction hasPolicyForPerforming signals when the set of PolicyRules, selected by the
current ContextData, is ready to be executed. When this interaction is instantiated,
the PolicyController can pass the set of policies to the PolicyManager, which is a
class in the ontology. The ontology is based on superclass interactions between the
classes from the domains involved in the integration of context information for
pervasive service management operations. This simple example of ontology
interactions shows the representation and identification of domain interactions (see
Chap. 5 for more details about the various pervasive service management operations).
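The hasPolicyForPerforming handoff just described can be sketched as follows. The class names mirror the ontology terms, but the Python structure, the rule dictionaries and the context labels are illustrative assumptions, not the book's code.

```python
# Sketch of the hasPolicyForPerforming interaction: once the current
# ContextData has selected a set of PolicyRules, the PolicyController
# passes them on to the PolicyManager for execution.

class PolicyManager:
    def __init__(self):
        self.executed = []

    def execute(self, rules):
        self.executed.extend(rules)

class PolicyController:
    def __init__(self, manager):
        self.manager = manager

    def has_policy_for_performing(self, context, rules):
        # select only the rules that apply to the current ContextData
        ready = [r for r in rules if r["context"] == context]
        self.manager.execute(ready)
        return ready

pm = PolicyManager()
pc = PolicyController(pm)
pc.has_policy_for_performing(
    "video_conference",
    [{"name": "qos_gold", "context": "video_conference"},
     {"name": "best_effort", "context": "web_browsing"}],
)
```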
6.4
6.4.1
A PDP is similar to traditional PDPs (such as that defined in the IETF), except
that it is specifically designed to answer requests from policy-aware and policy-enabled network elements (as represented by the ManagedEntity class interaction
policyConditionEvaluatedBy) using formal ontology-based terms. This enables a
PDP to serve as an interface between the network and higher-level business processes. The difference between a policy-aware and a policy-enabled entity is a longer discussion that is beyond the scope of this book. For now, it is enough to say that
the semantic expressiveness reached by combining policy-aware
entities with ontology-based mechanisms for representing and integrating context
makes this proposal different from traditional policy-enabled approaches. The
PolicyExecutionPoint is used to execute a prescribed set of policy actions on a given
ManagedEntity via the class interaction policyActionsPerformedOn. A
PolicyEnforcementPoint is a class that performs the action on a given ManagedEntity
using the class interaction isEnforcedOn.
The PolicyDecisionMaking entity receives requests from a ManagedEntity and
evaluates PolicyConditions (via the class interaction hasCondValuatedOn), based in
part on the results of the PDP (as a result of the class interaction hasValueDefinedBy). The class interaction policyConditionRequestedBy tells the
PolicyApplication to evaluate one or more specific PolicyConditions; this may
depend on retrieving management information using the class interaction isDescribedAsManagementInfo (which involves the ManagementInfoApp and the
ManagedEntity). The class interaction isForwardTo is established when it is necessary to operate on the ContextEntity to store the context information in the
DataModel using the class interaction isStoredIn. In order to integrate context information properly, the isDefinedAs class interaction (between ContextEntity and
Event) enables the Event to trigger the evaluation of a PolicyCondition, as it is part
of the Condition values of the policy-aware information model. This is defined by
the two class interactions isTriggeredAs and isEvaluatedAs (with the Condition and
Policy components, respectively).
The PDP obtains the status of the ManagedEntity after the execution of the
PolicyAction(s) using the policyApplicationInvolvedWith and isDescribedAsManagementInfo class interactions. The class interaction hasPolicyTargetFrom is
used to check whether the PolicyActions have executed correctly.
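A compressed sketch of this decision flow follows, with Python stand-ins for the ontology interactions (an Event triggers condition evaluation, and matching actions are enforced on the ManagedEntity). The event dictionary and the QoS action are invented for the example; this is not the book's implementation.

```python
class ManagedEntity:
    """A policy-aware network element, e.g. a router."""
    def __init__(self):
        self.state = {}

class PolicyRule:
    def __init__(self, condition, action):
        self.condition = condition   # evaluated, cf. hasCondValuatedOn
        self.action = action         # applied, cf. isEnforcedOn

class PolicyDecisionPoint:
    def __init__(self, rules):
        self.rules = rules

    def on_event(self, event, entity):
        """A context Event triggers condition evaluation (cf. isTriggeredAs);
        the actions of matching rules are enforced on the ManagedEntity."""
        applied = []
        for rule in self.rules:
            if rule.condition(event):
                rule.action(entity)
                applied.append(rule)
        return applied

pdp = PolicyDecisionPoint([
    PolicyRule(lambda e: e.get("type") == "qos_degraded",
               lambda m: m.state.update(qos_class="AF41")),
])
router = ManagedEntity()
pdp.on_event({"type": "qos_degraded"}, router)
```

The returned list of applied rules is what allows a status check afterwards, mirroring the hasPolicyTargetFrom verification step.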
Figure 6.5 shows the full class map interactions among multiple classes from the
three diverse domains (context information model, pervasive management models
and communications networks). In the ontology map, a Policy can be triggered by
Events that relate important occurrences of changes in the managed environment,
such as context-related conditions. Note that the upper ontology can contain and
relate concepts to create new concepts that can be used by other applications at a
lower ontology level or in specific domains. For instance, Router is a type of
Resource; this concept is defined as an Object that can be part of the context
(which is represented by ContextEntity). Router has relationships to other
objects, such as the IP class, which is a type of Network.
6.5
Fig. 6.6 Functional components for policy interactions and services support
Fig. 6.7 PRIMO architecture supporting policy interactions for management operations
set of related policies. Thus, the component will take as input a service-view policy,
and select a set of network-view policies that can meet its goals.
The selection process involves analyzing the triggering mechanisms of low-level
policies, as expressed in an appropriate set of ontologies, via automated semantic
integration, and matching them to the service-level policy goals. An outline of this
process is depicted in Fig. 6.7. The key enabling concept is the process of semantic
integration, which can create associations and relationships among entities specified in
different ontologies and/or models.
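The goal-matching step can be sketched as follows. In PRIMO this matching is ontology-assisted; here it is reduced to dictionary comparison for illustration, and the policy records, goal keys and `provides` annotation are invented for the example.

```python
# Select network-view policies whose advertised effects cover the goals of
# a service-view policy (illustrative stand-in for semantic integration).
service_policy_goals = {"qos_class": "AF41", "path": "PATH_A"}

network_policies = [
    {"name": "mark_af41_on_path_a",
     "provides": {"qos_class": "AF41", "path": "PATH_A"}},
    {"name": "mark_af41_on_path_b",
     "provides": {"qos_class": "AF41", "path": "PATH_B"}},
]

def select_network_policies(goals, candidates):
    """Keep candidates whose effects satisfy every service-level goal."""
    return [p["name"] for p in candidates
            if all(p["provides"].get(k) == v for k, v in goals.items())]

selected = select_network_policies(service_policy_goals, network_policies)
```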
6.5.1
POLICY_1
SUBJECT QoSAdmin
TARGET Routers_on_PATH_A
ON IPv4PacketReceived
IF IP source in 10.10.10.0/24
AND IP of type FTP
AND IP destination in 10.10.24.0/24
THEN < MARK IP.ToS with AF41 >

POLICY_2
SUBJECT QoSAdmin
TARGET Routers_on_PATH_A
ON IPv4PacketReceived
IF IP source in 10.10.10.0/24
AND IP of type FTP
AND IP destination in 10.10.24.0/24
THEN < SHAPE IP to 512Kbps >

POLICY_3
SUBJECT QoSAdmin
TARGET Routers_on_PATH_A
ON IPv4PacketReceived
IF IP.ToS equals AF41
THEN < CBWFQ weight is 5 >
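As a runnable illustration, the marking policy above (FTP traffic between the two subnets is marked AF41) can be evaluated against a packet description. The packet dictionary and field names are assumptions of this sketch; the real policies execute on routers, not in Python.

```python
import ipaddress

def mark_policy(packet):
    """IF source in 10.10.10.0/24 AND type FTP AND destination in
    10.10.24.0/24 THEN mark IP.ToS with AF41."""
    src_ok = (ipaddress.ip_address(packet["src"])
              in ipaddress.ip_network("10.10.10.0/24"))
    dst_ok = (ipaddress.ip_address(packet["dst"])
              in ipaddress.ip_network("10.10.24.0/24"))
    if src_ok and dst_ok and packet["proto"] == "FTP":
        packet["tos"] = "AF41"   # the THEN clause
    return packet

pkt = mark_policy({"src": "10.10.10.5", "dst": "10.10.24.7", "proto": "FTP"})
```

Note how the third policy chains off the first: it conditions on the AF41 mark rather than re-matching the flow, which is exactly the kind of triggering dependency the semantic integration step has to detect.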
6.5.2
6.5.3
6.5.4 Policy Analyzer
The Policy Analyzer takes the specified policies and transforms them into the PDP
rule-engine format as well as a format suitable for generating device configurations.
Our baseline ontology enables this component to take advantage of automated reasoning when performing the policy transformation and policy conflict detection
processes.
Policy transformation is defined as the mapping necessary to relate one representation of a policy to another. Note that policy transformations
are usually performed between policies at different levels of abstraction. Since the
6.6 Application Scenarios
services that do not use context information for their configuration or to trigger the
deployment of new services. CACTUS is responsible for providing QoS guarantees
for specific time periods in order to hold a video conference session among the
members of a group.
Scenario 2: distribution of multimedia content services on demand, based on
contextual information from devices and networks: MUSEUM-CAW (MUltimedia
SErvices for Users in Movement, Context-Aware & Wireless scenario model).
MUSEUM-CAW extends, with semantics, the Crisis Helper scenario
demonstrated in the EU IST-CONTEXT Project. MUSEUM-CAW takes
advantage of the knowledge contained in ontology-based information models, and
compares that information to contextual information from the devices and networks
used, to provide an advanced multimedia service that is independent of specific
access devices in wireless environments. Multimedia services are customized using
the user and device profiles for deployment in an ad hoc
fashion. MUSEUM-CAW is responsible for the distribution of multimedia content,
and guarantees transmission among different hot spots in a wireless network.
Scenario 3: integrated services that are technology-independent and automatically deployed: Sir-WALLACE (Service integrated for WiFi Access, ontoLogy-Assisted and Context without Errors scenario model). Sir-WALLACE is an enhanced
version, using semantics, of the Services to all People scenario
demonstrated in the EU IST-CONTEXT Project. This scenario exhibits self-management capabilities based on the use of ontologies, context-awareness and policy-based management, and takes advantage of the knowledge in the context
information from users, devices, networks and the services themselves, which are
all integrated in ontology-based information models. Sir-WALLACE pursues
technological independence from wireless network operators, and promotes autonomy in the operation and management of services.
The scenarios described in this section aim to demonstrate, in a qualitative way, the potential of using ontologies for the integration of context information
in the management operations of services.
6.6.1 Personalized Services: CACTUS
Assume a large number of users who subscribe to a video conference service with
QoS guarantees for image quality. This is the CACTUS service model. This application scenario, depicted in Fig. 6.8, follows the mediator basis [Wiederhold92]
for gathering raw context information about networks and users. Ontologies are used to
define a policy information model that integrates user information in the management operations, which then provides a better, more advanced 3G/4G service to its
users. CACTUS is responsible for providing the QoS guarantees for a specific time
period, in order to hold a video conference session among the members of a group.
CACTUS upgrades the services as a result of the information interoperability present in all information handling and dissemination. The CACTUS system facilitates
Fig. 6.8 CACTUS application scenario: management station, video server and overlay VPNs (VPN1 to VPNn) over the Internet, with edge nodes serving WiFi access regions (AR1 to ARn) and UMTS regions (U1 to Un) for users A and B
the deployment of new services, and better manages the associated service lifecycle,
as a result of the integration of the different knowledge provided by the ontology-based information model. This enables the dynamic execution of services by using
user information to control the execution of code. The code is referred to as a
service logic object (SLO).
The organizer of the conference specifies the participants and its duration, and then
appropriate user profiles are generated. This provides personalized services to users as
defined by the user profiles; this information is defined in ontology-based information
models as end-user context information, which can likewise be used as events that generate appropriate actions if variations in such information occur. When a user registers for this service, that user enters: (a) personal information (name, address, etc.);
(b) information about the network cards that he/she owns and is able to use in order
to connect to the network (e.g. MAC addresses for WLAN/LAN network cards and
MSISDN numbers for UMTS/GPRS cards); and (c) the specific service level that the
user chooses from among a set of available service levels. Each service level has an
associated set of policies that are used to enforce the QoS guarantees for the service.
The system matches this information with the end-user
profiles defined in specific ontology-based information models. Thus, if the data from
an end-user profile is matched, the appropriate services are deployed, distributed
and recorded in the system knowledge databases. The information contained in the information model remains constant until it needs to be updated.
The conference session is scheduled by a registered user (consumer) who utilizes
the conference setup service web interface to input the information for the conference session. Specifically, that user enters the conference start time, duration and
[Figure: CAW-MUSEUM overlay virtual private network, with active nodes, VPN servers and a management station connected through the Internet to an ad hoc network section]
6.6.2
6.6.3
[Figure: Sir-WALLACE application scenario: web service subscription pool; management station with service deployment and allocation, service logic objects, and service and management policies; autonomic elements R1 to R4 over a cloud infrastructure; and WiFi access regions A and B connected through the Internet]
priately adjust the service authentication and deployment. They can also be used to
govern other business issues, such as billing, which simplifies the process for matching or mapping information between the different phases of the service lifecycle.
In the Sir-WALLACE scenario, assume that a large number of users subscribe to a wireless access service, and that each is looking for independence
from WiFi operators. The semantic framework approach is then used to create a
service called Sir-WALLACE. The application scenario, depicted in Fig. 6.5,
requires traffic engineering algorithms to satisfy the large demand for seamless
mobility. Such algorithms are translated into network policies by the PRIMO architecture. The Sir-WALLACE service must also achieve interoperability between the
different technologies present, in order to distribute the correct information to trigger new traffic conditions within the nodes of a network to support new services.
In the Sir-WALLACE scenario, a service takes advantage of network and user
environment information, and then provides a better, more advanced wireless access
service to its users, using traffic engineering algorithms managed by policies defined
in an ontology-based information model.
In this approach, policies are the key element: they use the context information contained in events to modify the structure of the policy and, at the same time, collect
the required information from the network. The specific traffic engineering algorithms are based on user requirements (e.g. taken from their profiles) and mobility
patterns (e.g. routing algorithms created from the user's most frequently used access
points) for the set of services that they have chosen.
The PRIMO architecture hosts the set of policies used to manage these algorithms. Sir-WALLACE is responsible for providing the QoS guarantees for a specific time period and service session; this facilitates its use among WiFi networks
that use overlay networks to satisfy user demands. Sir-WALLACE upgrades the
services based on the context information defined in the management policies, using
ontology-based information models augmented by analyzing appropriate information obtained from the set of events received. The use of context enables the services
that Sir-WALLACE provides to be better than conventional ones, because they use context information, which can change dynamically, to
configure the appropriate service logic (SLO).
In a general manner, as part of the scenario, a user subscribes to the service (e.g.
using a service setup web interface) and then, based on his or her user profile information, the system generates the appropriate services and personalizes them using
the programmable nodes covering the areas in which the user is moving. These
nodes are connected to the WiFi network nodes of different operators, which hides
any changes in the access technology or devices used. This is an important end-user
goal, as such low-level technical details should be transparent to the user.
The user information is used to infer the location of the user and to provide updated
context information for building an overlay network (or VPN). However, the overlay (or VPN) is not created until the context information triggers the service. The
overlay (or VPN) is then created and stays active while the user is present in the
WALLACE system. When a user registers for this service, he or she enters:
(a) Personal information (name, address, etc.), which fills a profile that is modelled
in information models as end-user classes; hence, this is in effect populating an
object with instance data.
(b) Information about the different ways he or she can access the network in order
to connect (MAC addresses for WLAN network cards, etc.); this
information is also contained in the ontology-based information model as
resource classes, and can likewise be conceptualized as populating an object with
instance data.
(c) Service level, for defining QoS guarantees. A user can choose between service
levels, which correspond to different policies, such as local (i.e. related to a City
that is close by), region (i.e. related to a particular geographical area, which may
include nearby states or countries) and global (i.e. related to many regions, such
as Central and Eastern Europe). The system uses the context information to
guide the future deployment of the services, and distributes the information to
be stored in its appropriate knowledge databases.
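The context-triggered overlay lifecycle described above can be sketched as follows; the event names and the overlay naming scheme are illustrative assumptions of this example.

```python
class OverlayManager:
    """Creates an overlay (VPN) only when context triggers the service,
    and tears it down when the user leaves the WALLACE system."""

    def __init__(self):
        self.active = {}   # user -> overlay identifier

    def on_context_event(self, user, event):
        if event == "user_present" and user not in self.active:
            self.active[user] = f"overlay-vpn-{user}"   # create on trigger
        elif event == "user_left":
            self.active.pop(user, None)                 # tear down
        return self.active.get(user)

om = OverlayManager()
om.on_context_event("bob", "user_present")   # overlay comes up
om.on_context_event("bob", "user_left")      # overlay is removed
```

The point of the sketch is the lazy creation: no overlay resources are committed until a context event proves the user actually needs them.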
6.7
not limited to playing the role of owner, while the renting of the services can be delegated to
an Internet service provider, although a cloud provider can also play this role simultaneously.
This arrangement is hard to find in traditional data centres, where the proprietor of the
infrastructure is seldom also the service provider. Currently, some of the main
cloud providers are Amazon [AMAZON], Salesforce [SALESFORCE] and Google
[APPENGINE], leading the market through their broad infrastructure and wide software
portfolio of services.
Cloud computing is becoming so popular that many companies are considering,
if not already implementing, their own cloud infrastructures, or subsidizing their services in the cloud
to reduce administration, maintenance and management costs. Cloud computing is
characterized by administratively and technologically easy on-demand expansion;
it runs dedicated servers, most of the time providing virtual server applications
according to user demands. Cloud computing offers resources when users need
them and reduces processing times; at the same
time, it allows users to close processing sessions, so that the
infrastructure can be assigned to other users or computing purposes.
Cloud services are offered on a pay-as-you-go basis and are characterized by
complex pricing/economic models, including time-based, utilization-based and
SLA-based charges. For example, Amazon charges for an instance
based on its size and uptime, while allowing for a reduced payment if you pre-pay
for 1 or 3 years. It also charges for inbound and outbound network traffic, with reduced pricing
at higher capacity [AMAZON], as well as for disk size, reliability level,
transfer rate and the number of I/O operations performed.
To understand this billing method for networking services, note that charges
differ by locality: traffic is free within an availability zone, reduced between zones,
and fully charged across regions. IP addresses and load balancing are charged in addition.
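A toy cost calculation illustrates how these pay-as-you-go elements combine. All rates below are invented for the example and do not reflect any provider's actual prices.

```python
# Hypothetical pay-as-you-go rates (per hour / per GB); illustrative only.
RATES = {
    "instance_hour":   0.10,   # time-based charge per instance-hour
    "gb_same_zone":    0.00,   # traffic within an availability zone is free
    "gb_inter_zone":   0.01,   # reduced rate between zones
    "gb_inter_region": 0.09,   # full rate across regions
}

def monthly_bill(instance_hours, gb_same_zone, gb_inter_zone, gb_inter_region):
    """Sum the time-based and locality-dependent traffic charges."""
    return (instance_hours  * RATES["instance_hour"]
            + gb_same_zone    * RATES["gb_same_zone"]
            + gb_inter_zone   * RATES["gb_inter_zone"]
            + gb_inter_region * RATES["gb_inter_region"])

# one instance for a 30-day month, plus mixed traffic
bill = monthly_bill(720, 100, 50, 10)
```

Even in this toy form, the locality tiers show why placement decisions (keeping chatty components in one zone) are themselves a management concern with direct cost impact.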
In this completely new paradigm, where ever more complex management systems interact to exchange information, the role of management systems is crucial: on one side for provisioning and controlling the system, and on the other for offering and transporting the information. Most of the time both activities must be conducted in parallel; as a viable alternative, semantic-based systems help to support the multiplicity of the information, the analysis of data and the grouping of data into clusters to facilitate classification, processing and control.
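The grouping of data into clusters by semantic concept, mentioned above, can be sketched with a toy concept hierarchy. The concept and metric names here are invented for illustration; a real system would use an ontology language such as OWL rather than a dictionary.

```python
# Minimal sketch of semantic grouping of monitoring data: each data item
# is tagged with a concept, and a toy subclass hierarchy lets broader
# categories collect the items beneath them. All names are invented.

ONTOLOGY = {                      # subclass -> superclass
    "CPULoad": "ComputeMetric",
    "MemoryUsage": "ComputeMetric",
    "PacketLoss": "NetworkMetric",
    "ComputeMetric": "PerformanceMetric",
    "NetworkMetric": "PerformanceMetric",
}

def ancestors(concept):
    """All superclasses of a concept, following the hierarchy upward."""
    chain = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        chain.append(concept)
    return chain

def group_by(items, category):
    """Cluster data items whose concept falls under the given category."""
    return [name for name, concept in items
            if concept == category or category in ancestors(concept)]

data = [("vm1.cpu", "CPULoad"), ("vm1.mem", "MemoryUsage"),
        ("link3.loss", "PacketLoss")]
print(group_by(data, "ComputeMetric"))      # ['vm1.cpu', 'vm1.mem']
print(group_by(data, "PerformanceMetric"))  # all three items
```

Classification then falls out of the hierarchy for free: a query against a broad concept automatically gathers data tagged with any of its subclasses.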
The need to control multiple computers running applications, and likewise the interaction of multiple service providers supporting a common service, exacerbates the challenge of finding management alternatives for orchestrating between the different cloud-based systems and services. In cloud computing, a management system supporting such complex management operations must address the problem of coordinating the management operations of multiple running applications, while prioritizing tasks for service interoperability between different cloud systems.
An emerging alternative for cloud computing decision control, from a management perspective, is the use of formal languages as a tool for information exchange between the diverse data and information systems participating in cloud service provisioning. These formal languages rely on an inference plane [Strassner07],
[Serrano09]. By using semantic decision support and enriched monitoring information, the operation of the cloud service management architecture depicted in the figure can be seen.

[Figure: Cloud service lifecycle operation and control. Virtual infrastructure management deploys service logic onto VMs in services servers; performance data feeds a data correlation engine for performance analysis; a management policies engine with policy refinement handles service request documents, system policies and service request feedback, driving cloud service allocation and discovery.]

Services servers must be aware at all times of the kind of services the virtual infrastructure is supporting and running, as well as the status of the virtual services. One of the main problems in virtual infrastructures is the limited information about running services: service discovery and service composition are almost impossible to perform unless the services are well known initially. Further work is needed on an approach that covers this crucial requirement of virtual infrastructures.
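The awareness requirement described above can be sketched as a minimal service registry kept by a services server, so that discovery queries over running services become possible. All class, service and VM names are illustrative, not part of any described system.

```python
# Sketch of service awareness in a virtual infrastructure: a services
# server records which services each VM runs and their status, making
# discovery queries possible. All names are illustrative.

class ServiceRegistry:
    def __init__(self):
        self._services = {}            # service name -> (vm, status)

    def register(self, service, vm, status="running"):
        self._services[service] = (vm, status)

    def set_status(self, service, status):
        vm, _ = self._services[service]
        self._services[service] = (vm, status)

    def discover(self, vm=None, status=None):
        """Find known services, optionally filtered by VM or status."""
        return sorted(s for s, (v, st) in self._services.items()
                      if (vm is None or v == vm)
                      and (status is None or st == status))

reg = ServiceRegistry()
reg.register("web-frontend", "vm1")
reg.register("db", "vm2")
reg.set_status("db", "stopped")
print(reg.discover(status="running"))   # ['web-frontend']
```

Without such a registry (or a richer, ontology-backed equivalent), a discovery query has nothing to consult, which is exactly the limitation the paragraph above points out.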
6.8 Conclusions
In this chapter, principles of ontology-based policy interactions supporting service management operations have been brought together into a framework of tools for meeting the management requirements of autonomic systems. The main management activity in autonomics concentrates on the self-management of services, integration, and interoperability between heterogeneous systems.
The framework approach has been conceived to pursue the challenge of autonomic control loops applicable to different domains; the idea is that any policy-based service management system can interact with PRIMO and its components using ontology-based mechanisms, and can then provide an extensible and powerful management tool for next-generation services that exhibit a set of autonomic characteristics and behaviours.
The described scenarios act as examples of generic services using ontology engineering and service management mechanisms as the main software and technological tools. In these scenarios, the use of ontology-based information models and information systems is assumed. Behind the success of context-aware services, a flexible information system must exist that can accommodate heterogeneous sets of information.
The partially implemented scenarios confirm that the desired ontology-driven functionality can be provided by using ontology-based models.
The adoption of a specific ontology-based information model is related to the nature of the service and application. According to the classification studies discussed in the state of the art in this book, ontology-based modelling is accepted as the most suitable alternative in terms of the composition and management costs of the information, with the advantage of application independence.
Bibliography
A
[Abowd97] Abowd, Gregory D., Dey, Anind K., Orr, R., Brotherton, J. Context-awareness in wearable and ubiquitous computing. 1st International Symposium on Wearable Computers, p.p. 179–180, 1997.
[ACF] Autonomic Communications Forum. http://www.autonomic-communication-forum.org/og.
[ACF-FET] ACF-FET Future and Emerging Technologies Scope ftp://ftp.cordis.lu/pub/ist/docs/
fet/comms-61.pdf.
[Aidarous97] Aidarous, S. and Plevyak, T. (eds), Telecommunications Network Management: Technologies and Implementations. IEEE Press, 1997.
[Allee03] Allee, V. The Future of Knowledge: Increasing Prosperity through Value Networks,
Butterworth-Heinemann, 2003.
[AMAZON] Amazon Web Services, http://aws.amazon.com/.
[Andrews00] Andrews, Gregory R. (2000), Foundations of Multithreaded, Parallel, and
Distributed Programming, Addison-Wesley, ISBN 0-201-35752-6.
[ANDROID] ANDROID Project. Active Network Distributed Open Infrastructure Development.
http://www.cs.ucl.ac.uk/research/android.
[APP-PERFECT] AppPerfect DevSuite 5.0.1 AppPerfect Java Profiler. http://www.appperfect.com/.
[APPENGINE] Google App Engine, http://code.google.com/appengine/.
[AUTOI] IST AUTOI Project, Autonomic Internet http://ist-autoi.eu.
B
[Bakker99] Bakker, J.H.L. Pattenier, F.J. The layer network federation reference point-definition
and implementation Bell Labs. Innovation, Lucent Technol., Huizen, in TINA Conf Proc.
1999, p.p. 125–127, Oahu, HI, USA, ISBN 0-7803-5785-X.
[Barr10] Barr, Jeff. Host your web site in the cloud, Sitepoint, 2010, ISBN 978-0-9805768-3-2.
[Bauer03] Bauer, J. Identification and Modelling of Contexts for Different Information
Scenarios in Air Traffic, Mar. 2003. Diplomarbeit.
[Bayardo97] Bayardo, R. J. et al., InfoSleuth: Agent-based Semantic Integration of Information
in Open and Dynamic Environments. In SIGMOD, p.p. 195–206, 1997.
C
[Catlett92] Catlett, Charlie; Larry Smarr (June 1992). Metacomputing. Communications of the
ACM 35 (6). http://www.acm.org/pubs/cacm/.
[CCPP] Composite Capabilities/Preference Profiles framework: http://www.w3.org/Mobile/
CCPP.
[Chen04] Chen, H., Finin, T., and Joshi, A. An Ontology for Context-Aware Pervasive Computing Environments. Special issue on Ontologies for Distributed Systems, Knowledge Engineering Review, 2003.
[Chen03a] Chen, H., Finin, T. and Joshi. A. An Ontology for Context-Aware Pervasive
Computing Environments. In IJCAI workshop on ontologies and distributed systems,
IJCAI03, August, 2003.
[Chen03b] Chen, H., Finin, T., and Joshi, A. An Intelligent Broker for Context-Aware Systems.
In Adjunct Proceedings of Ubicomp 2003, Seattle, Washington, USA, p.p. 12–15, October 2003.
[Chen03c] Chen, H., Finin, T., and Joshi, A. Using OWL in a Pervasive Computing Broker.
In Proceedings of Workshop on Ontologies in Open Agent Systems (AAMAS 2003), 2003.
[Chen00] Chen, G., Kotz, D., A survey of context-aware mobile computing research, Technical
Report, TR2000-381, Department of Computer Science, Dartmouth College, November 2000.
[Chen76] Chen, P. S. The entity-relationship model: toward a unified view of data. ACM
Transactions on Database Systems, Vol. 1, No. 1, p.p. 9–36, March 1976.
[CHIMAERA] CHIMAERA Tool. http://www.ksl.stanford.edu/software/chimaera/.
[Clark03] Clark, D., Partridge, C., Ramming, J. C., Wroclawski, J. T. A Knowledge Plane for
the Internet. SIGCOMM 2003, Karlsruhe, Germany, 2003.
[CLEANSLATE] Clean Slate program, Stanford University, http://cleanslate.stanford.edu.
[CRICKET] CRICKET Project: http://nms.lcs.mit.edu/projects/cricket/.
[Crowcrof03] Crowcroft, J., Hand, S., Mortier, R., Roscoe, T., Warfield, A., Plutarch: An argument for network pluralism, ACM SIGCOMM 2003 Workshops, August 2003.
D
[DAIDALOS] DAIDALOS Project: Designing Advanced network Interfaces for the Delivery
and Administration of Location independent, Optimised personal Services. http://www.ist-daidalos.org/.
[DAML] Defense Agent Markup language. http://www.daml.org/.
[Damianou02] Damianou, N., Bandara, A., Sloman, M., Lupu E. A Survey of Policy
Specification Approaches, Department of Computing, Imperial College of Science Technology
and Medicine, London, 2002.
[Damianou01] Damianou, N., Dulay, N., Lupu, E. and Sloman, M. The Ponder Specification Language, Workshop on Policies for Distributed Systems and Networks (Policy 2001), HP Labs Bristol, 29–31 January 2001.
[DARPA] DARPA Active Network Program: http://www.darpa.mil/ato/programs/activenetworks/actnet.htm.
[Davy08a] Davy, S., Jennings, B., Strassner, J. Efficient Policy Conflict Analysis for Autonomic
Network Management, 5th IEEE International Workshop on Engineering of Autonomic and
Autonomous Systems (EASe), 2 April 2008, Belfast, Northern Ireland.
[Davy08b] Davy, S., Jennings, B., Strassner, J., Application Domain Independent Policy
Conflict Analysis Using Information Models, 20th Network Operations and Management
Symposium (NOMS) 2008, Salvador Bahia, Brasil, 2008.
[Davy07a] Davy, S., Jennings, B., Strassner, J. The Policy Continuum: A Formal Model, in Proc. of the 2nd International IEEE Workshop on Modelling Autonomic Communications Environments (MACE), Multicon Lecture Notes No. 6, Multicon, Berlin, p.p. 65–78, 2007.
[Davy07b] Davy, S., Barrett, K., Jennings, B., Serrano, J.M., Strassner, J. Policy Interactions
and Management of Traffic Engineering Services Based on Ontologies, 5th IEEE Latin
American Network Operations and Management Symposium (LANOMS), 10–12 September 2007, p.p. 95–105, ISBN 9781424411825.
[Dean02] Dean, Mike., Connolly, Dan., van Harmelen, Frank., Hendler, James., Horrocks, Ian.,
McGuiness, Deborah L., Patel-Schneider, Peter F., Stein, Lynn Andrea Web Ontology
Language (OWL). W3C Working Draft 2002.
[DeBruijn04] De Bruijn, J., Fensel, D. Lara, R. Polleres, A. OWL DL vs. OWL Flight:
Conceptual Modelling and Reasoning for the Semantic Web; November 2004.
[DeBruijn03] De Bruijn, J. et al. Using Ontologies Enabling Knowledge Sharing and Reuse on
the Semantic Web. Technical Report DERI-2003-10-29, Digital Enterprise Research Institute
(DERI), Austria, October 2003.
[Debaty01] Debaty, P., Caswell, D., Uniform Web presence architecture for people, places, and
things, IEEE Personal Communications, p.p. 46–51, August 2001.
[DeVaul00] DeVaul, R.W; Pentland, A.S, The Ektara Architecture: The Right Framework for Context-Aware Wearable and Ubiquitous Computing Applications, The Media Laboratory, MIT, 2000.
[DEVCENTRAL] The Real Meaning of Cloud Security Revealed, Online access Monday, May
04, 2009. http://devcentral.f5.com/weblogs/macvittie/archive/2009/05/04/the-real-meaning-of-cloud-security-revealed.aspx.
[Dey01] Dey, A. K., Understanding and using context, Journal of Personal and Ubiquitous
Computing, Vol. 5, No. 1, p.p. 4–7, 2001.
[Dey00a] Dey, A. K., Abowd, G. D., Towards a better understanding of context and context
awareness. In Workshop on the What, Who, Where, When and How of Context-Awareness,
affiliated with the 2000 ACM Conference on Human Factors in Computer Systems (CHI 2000),
April 2000, The Hague, Netherlands. April 16, 2000.
[Dey00b] Dey, A., K., Providing Architectural Support for Building Context-Aware
Applications, PhD thesis, Georgia Institute of Technology, 2000.
[Dey99] Dey, A.K., Salber, D., Abowd, G.D., Futakawa, M., An architecture to support context-aware applications, GVU Technical Report GIT-GVU-99-23, 1999.
[Dey98] Dey, A.K. Context-Aware Computing: The CyberDesk Project. AAAI 1998 Spring
Symposium on Intelligent Environments, Technical Report SS-98-02, p.p. 51–54, 1998.
[Dey97] Dey, A., et al. CyberDesk: A Framework for Providing Self-Integrating Ubiquitous
Software Services. Technical Report, GVU Center, Georgia Institute of Technology, GIT-GVU-97-10, May 1997.
[DMTF] Distributed Management Task Force Inc. http://www.dmtf.org.
[DMTF-CIM] DMTF, Common Information Model Standards (CIM). http://www.dmtf.org/
standards/standard_cim.php.
[DMTF-DEN] DMTF, Directory Enabled Networks (DEN). http://www.dmtf.org/standards/
standard_den.php.
[DMTF-DSP0005] Distributed Management Task Force, Inc. Specification for CIM Operations
over HTTP. DMTF Standard DSP0005. 2003.
[DMTF-DSP0201] Distributed Management Task Force, Inc. Specification for the Representation
of CIM in XML, DSP0201. 2002.
[Domingues03] Domingues, P., Silva, L., and Silva, J. A distributed resource monitoring system. In Proceedings of the Eleventh Euromicro Conference on Parallel, Distributed and Network-Based Processing, p.p. 127–133, February 2003.
[Ducatel01] K. Ducatel, M. Bogdanowicz, F. Scapolo, J. Leijten, and J.C. Burgelman, editors.
Scenarios for Ambient Intelligence in 2010. ISTAG. 2001.
E
[Eisenhauer01] Eisenhauer, Markus and Klemke, Roland, Contextualisation in Nomadic
Computing, Ercim News, Special Issue in Ambient Intelligence, October 2001.
[Elmasri00] Elmasri, Ramez; Navathe, Shamkant B. (2000), Fundamentals of Database Systems
(3rd ed.), Addison-Wesley, ISBN 0-201-54263-3.
[EU-FP7Draft] Commission of the European Communities: proposal for COUNCIL DECISIONS
concerning the specific programs implementing the Framework Program 2006–2010 of the
European Community for research, technological development and demonstration activities.
Presented by the Commission, COM (2005), Brussels, Belgium, 2005.
[eTOM] eTOM: enhanced Telecom Operations Map. http://www.tmforum.org/
browse.aspx?catID=1647.
F
[Feldman07] Feldmann, A. Internet clean-slate design: what and why?, ACM SIGCOMM
Computer Communication Review, Vol. 37, No. 3, 2007.
[Fileto03] Fileto, R., Bauzer, C. A Survey on Information Systems Interoperability, Technical
report IC-03-030, December 2003.
[Finkelstein01] Finkelstein, A., Savigni, A. A Framework for Requirements Engineering for Context-Aware Services in Proceedings of STRAW '01, the First International Workshop From Software
Requirements to Architectures, 23rd International Conference on Software Engineering, 2001.
[FIPA-SC00094] Foundation for Intelligent Physical Agents. FIPA Quality of Service Ontology
Specification. Geneva, Switzerland. 2002. Specification number SC00094.
[FORATV] The Intercloud and the Future of Computing an Interview with Vint Cerf at FORA.
tv, the Churchill Club, January 7, 2010. SRI International Building, Menlo Park, CA, Online
access January 2011. http://www.fame.ie/?p=362, http://www.youtube.com/user/ForaTv#p/
search/1/r2G94ImcUuY.
[Foster99] Foster, Ian; Kesselman, Carl (1999). The Grid: Blueprint for a New Computing
Infrastructure. Morgan Kaufmann Publishers. ISBN 1-55860-475-8. http://www.mkp.com/grids/.
[Franklin98] Franklin, D., Flaschbart, J., All Gadget and No Representation Makes Jack a Dull
Environment. AAAI 1998 Spring Symposium on Intelligent Environments, Technical Report
SS-98-02, p.p. 155–160, 1998.
[Fritz99] Hohl, Fritz; Kubach, Uwe; Leonhardi, Alexander; Rothermel, Kurt; Schwehm, Markus: Next Century Challenges: Nexus, An Open Global Infrastructure for Spatial-Aware Applications, Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom'99), Seattle, Washington, USA, T. Imielinski, M. Steenstrup (Eds.), ACM Press, p.p. 249–255, August 15–20, 1999.
[Fritz90] Fritz E. Froehlich; Allen Kent (1990). ARPANET, the Defense Data Network, and
Internet. The Froehlich/Kent Encyclopedia of Telecommunications, Vol. 1. CRC Press. p.p. 341–375.
ISBN 9780824729004. http://books.google.com/books?id=gaRBTHdUKmgC&pg=PA341.
G
[Garcia97] Garcia-Molina H., et al., The TSIMMIS approach to mediation: Data models and
Languages. Journal of Intelligent Information Systems, 1997.
[Gellersen00] Gellersen, H. W., Schmidt, A., Beigl, M. Adding Some Smartness to Devices and
Everyday Things. In the Proceedings of the Third IEEE Workshop on Mobile Computing
Systems and Applications, Monterey, CA, ACM, p.p. 3–10, December 2000.
[Genesereth91] Genesereth, M. Knowledge Interchange Format. In J. Allen et al. (Eds.), 1991.
[Ghidini01] Ghidini, C., and Giunchiglia, F. Local models semantics, or contextual reasoning = locality + compatibility. Artificial Intelligence, Vol. 127, No. 2, p.p. 221–259, 2001.
[Gil00] Gil, Y. and Blythe, J. PLANET: A Shareable and Reusable Ontology for Representing
Plans. Proceedings of the AAAI Workshop on Representational Issues for Real-world Planning
Systems, 2000.
[Giunchiglia93] Giunchiglia, F. Contextual reasoning. Epistemologia, Special Issue on I Linguaggi e le Macchine, Vol. 16, p.p. 345–364, 1993. Also IRST Technical Report 9211-20, IRST, Trento, Italy.
[GLITE] gLite Lightweight Middleware for Grid Computing http://glite.cern.ch/.
[GLOBUS] Globus and Globus Toolkit. http://www.globus.org/.
[Goiri10] Goiri, I., Guitart, J. and Torres, J. Characterizing Cloud Federation for Enhancing Providers' Profit, Proceedings of IEEE 3rd International Conference on Cloud Computing (CLOUD), p.p. 123–130, July 2010.
H
[Hampson07] Hampson, C. Semantically Holistic and Personalized Views Across Heterogeneous
Information Sources, in Proceedings of the Workshop on Semantic Media Adaptation and
Personalization (SMAP'07), London, UK, December 17–18, 2007.
[Harter99] Harter, A., Hopper, A., Steggles, P., Ward, A. and Webster, P. The anatomy of a context-aware application in Proceedings of MOBICOM 1999, p.p. 59–68, 1999.
[Head10] Head, M.R., Kochut, A., Schulz, C. and Shaikh, H. Virtual Hypervisor: Enabling Fair and Economical Resource Partitioning in Cloud Environments. Proceedings of IEEE Network Operations and Management Symposium (NOMS), p.p. 104–111, 2010.
[Helin03a] Helin, H. Supporting Nomadic Agent-based Applications in the FIPA Agent
Architecture. PhD. Thesis, University of Helsinki, Department of Computer Science, Series
of Publications A, No. A-2003-2. Helsinki, Finland, January 2003.
[Henricksen04] Henricksen, K., and Indulska, J. Modelling and Using Imperfect Context Information. In Workshop Proceedings of the 2nd IEEE Conference on Pervasive Computing and Communications (PerCom 2004), Orlando, FL, USA, p.p. 33–37, March 2004.
I
[IBM-PBM] Policy Management for Autonomic Computing. http://www.alphaworks.ibm.com/
tech/pmac/.
[IBM08] IBM Software Group, U.S.A. Breaking through the haze: understanding and leveraging cloud computing. Route 100, Somers, NY 10589. IBB0302-USEN-00. 2008.
[IBM05] IBM AC-Vision An Architectural Blueprint for Autonomic Computing, v7, June
2005.
[IBM01a] IBM, Autonomic Computing: IBM's Perspective on the State of Information
Technology. Technical Report, IBM, 2001.
J
[Jeng03] Jeng, Jun-Jang., Chang, H. and Chung, Jen-Yao. A Policy Framework for Business
Activity Management. E-Commerce, IEEE International Conference. June 2003.
[Joshi03] Joshi, A. A Policy Language for a Pervasive Computing Environment.
In Proceedings of IEEE 4th International Workshop on Policies for Distributed Systems and
Networks, 2003. POLICY 2003.
K
[Kagal03] Kagal, L., Finin, T. and Joshi, A. A Policy-Based Approach to Security for the
Semantic Web, Proceedings of the 2nd Int'l Semantic Web Conf. (ISWC 2003), LNCS 2870, Springer-Verlag, 2003, p.p. 402–418.
[Kagal02] Kagal, L. REI: A Policy Language for the Me-Centric Project HP Labs, Technical
Report hpl-2002-070, September 2002.
[Kanter02] Kanter, T. G., Hottown, enabling context-aware and extensible mobile interactive spaces, Special Issue of IEEE Wireless Communications and IEEE Pervasive on Context-Aware Pervasive Computing, p.p. 18–27, October 2002.
[Kanter00] Kanter, T., Lindtorp, P., Olrog, C., Maguire, G.Q., Smart delivery of multimedia
content for wireless applications, Mobile and Wireless Communication Networks, p.p. 70–81,
2000.
[Kantar03] Kanter, T.G., Gerald Q. Maguire Jr., Smith, M. T., Rethinking Wireless Internet
with Smart Media, 2003 http://psi.verkstad.net/Papers/conferences/nrs01/nrs01-theo.PDF.
[Karmouch04] Karmouch, A., Galis, A., Giaffreda, R., Kanter, T., Jonsson, A., Karlsson, A. M.
Glitho, R. Smirnov, M. Kleis, M. Reichert, C., Tan, A., Khedr, M., Samaan, N., Heimo, L.,
Barachi, M. E., Dang, J. Contextware Research Challenges in Ambient Networks ISBN
3-540-23423-3, Springer-Verlag Lecture Notes in Computer Science-IEEE MATA 2004,
Florianopolis, Brazil, 20–22 October 2004.
[Katsiri05] Katsiri, E., Middleware Support for Context-Awareness in Distributed Sensor-Driven
Systems. PhD Thesis, Engineering Department, University of Cambridge. Also published as
Technical Report n.620, Computer Laboratory, University of Cambridge, February 2005.
[Keeney06] Keeney, J., Lewis, D., O'Sullivan, D., Roelens, A., Boran, A. Runtime Semantic Interoperability for Gathering Ontology-based Network Context, 10th IEEE/IFIP Network Operations and Management Symposium (NOMS 2006), Vancouver, Canada, p.p. 56–65, April 2006.
[Keeney05] Keeney, J., Carey, K., Lewis, D., O'Sullivan, D., Wade, V. Ontology-based Semantics for Composable Autonomic Elements. Workshop on AI in Autonomic Communications, 19th International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, 30 July–5 August 2005.
[Kephart03] Kephart, J. O. and Chess, D. M., The Vision of Autonomic Computing, IEEE
Computer Magazine, January 2003. http://research.ibm.com/autonomic/research/papers/.
[Khedr03] Khedr, M. and Karmouch, A. Exploiting SIP and agents for smart context level
agreements, 2003 IEEE Pacific Rim Conference on Communications, Computers and Signal
Processing, Victoria, BC, Canada, August 2003.
[Khedr02] Khedr, M., Karmouch, A., Liscano, R. and Gray, T. Agent-based context aware ad
hoc communication. In Proceedings of the 4th International Workshop on Mobile Agents for
Telecommunication Applications (MATA 2002), Barcelona, Spain, p.p. 292–301, Oct 23–24,
2002.
[KIF] KIF Language Knowledge Interchange Format Language. http://www-ksl.stanford.edu/
knowledge-sharing/kif/.
L
[Lewis06] Lewis, D., OSullivan, D., Feeney, K., Keeney, J., Power, R. Ontology-based
Engineering for Self-Managing Communications, 1st IEEE International Workshop on
Modelling Autonomic Communications Environments (MACE 2006), Dublin, Ireland, 25–26 October 2006, edited by W. Donnelly, R. Popescu-Zeletin, J. Strassner, B. Jennings, S. van der Meer, multicon verlag, p.p. 81–100, 2006.
[Liao07] Liao, L., Leung, H.K.N. An Ontology-based Business Process Modeling Methodology,
in Advances in Computer Science and Technology (ACST 2007), 2–4 April 2007, Phuket,
Thailand.
[Long96] Long, Sue., Kooper, Rob., Abowd, Gregory D. and Atkeson, Christopher G., Rapid
Prototyping of Mobile Context-Aware Applications: The Cyberguide Case Study. Proceedings
of the second annual international conference on Mobile computing and networking,
p.p. 97–107, Rye, New York, United States, November 1996.
[López03a] López de Vergara, Jorge E., Villagrá, Víctor A., Berrocal, Julio, Asensio, Juan I., Semantic Management: Application of Ontologies for the Integration of Management Information Models. In Proceedings of the 8th IFIP/IEEE International Symposium on Integrated Management (IM 2003), Colorado Springs, Colorado, USA, 2003.
[López03b] López de Vergara, Jorge E., Villagrá, Víctor A., Asensio, Juan I., Berrocal, Julio., Ontologies: Giving Semantics to Network Management Models. IEEE Network Magazine, Special Issue on Network Management, Vol. 17, No. 3, May 2003.
[López03c] López de Vergara, J.E. Especificación de modelos de información de gestión de red integrada mediante el uso de ontologías y técnicas de representación del conocimiento. PhD Thesis, UPM, Spain, 2003.
[LOVEUS] LOVEUS Project. Location Aware Visually Enhanced Ubiquitous Services Project:
http://loveus.intranet.gr.
[Lynch96] Lynch, Nancy A. (1996), Distributed Algorithms, Morgan Kaufmann, ISBN 1-55860-348-4.
M
[Mace11] Mace, J.C., van Moorsel, A. and Watson, P. The case for dynamic security solutions in public cloud workflow deployments. Proceedings of IEEE/IFIP 41st International Conference on Dependable Systems and Networks Workshops (DSN-W), p.p. 111–116, June 2011.
[Maozhen05] Maozhen, Li., Baker, Mark, A. (2005). The Grid: Core Technologies. Wiley.
ISBN 0-470-09417-6. http://coregridtechnologies.org/.
[McCarthy97] McCarthy, J., and Buvač, S. Formalizing context (expanded notes). In Working Papers of the AAAI Fall Symposium on Context in Knowledge Representation and Natural Language, Menlo Park, California, American Association for Artificial Intelligence, p.p. 99–135, 1997.
[McCarthy93] McCarthy, J. Notes on formalizing contexts. In Proceedings of the Thirteenth
International Joint Conference on Artificial Intelligence, San Mateo, California, 1993, R.
Bajcsy, Ed., Morgan Kaufmann, p.p. 555–560, 1993.
[McGuiness02] McGuinness, Deborah L., Fikes, Richard., Hendler, James., Stein, Lynn Andrea, DAML+OIL: An Ontology Language for the Semantic Web, in IEEE Intelligent Systems, Vol. 17, No. 5, September 2002.
[Mei06] Mei, J., Boley, H. Interpreting SWRL Rules in RDF Graphs. Electronic Notes in Theoretical Computer Science (Elsevier), Vol. 151, p.p. 53–69, 2006.
[MicrosoftPress11] The Economics of the cloud, online access Wednesday 05, January 2011.
http://www.microsoft.com/presspass/presskits/cloud/docs/The-Economics-of-the-Cloud.pdf.
[Mitra00] Mitra, P., Wiederhold, G., Kersten, M. A graph-oriented model for articulation of
Ontology Interdependencies, In Proceedings of the Conference on Extending Database
Technology 2000 (EDBT 2000) Konstanz, Germany, March 2000.
N
[Nakamura00] Nakamura, Tetsuya., Nakamura, Matsuo., Tomoko, Itao. Context Handling
Architecture for Adaptive Networking Services. Proceedings of the IST Mobile Summit 2000.
[Neches91] Neches, Robert., Fikes, Richard., Finin, Tim., Patil, Ramesh., Senator, Ted. Swartout,
William, R. Enabling Technology for Knowledge Sharing. AI Magazine, Vol. 12, No. 3,
1991.
[NEWARCH] Clark, D et al., NewArch: Future Generation Internet Architecture, NewArch
Final Technical Report, http://www.isi.edu/newarch/.
[NGN] Architecture Design Project for New Generation Network, http://akari-project.nict.go.jp/
eng/index2.htm.
[Neiger06] Neiger, G., Santoni, A., Leung, F., Rodgers, D. and Uhlig, R. Intel Virtualization
Technology: Software-only virtualization with the IA-32 and Itanium architectures, Intel
Technology Journal, Vol. 10, No. 03, August 2006.
[Novak07] Novak, J.: Helping Knowledge Cross Boundaries: Using Knowledge Visualization
to Support Cross-Community Sensemaking, in Proceedings of the Conference on System
Sciences, (HICSS-40), Hawaii, January 2007.
[NSFFI] NSF-funded initiative to rebuild the Internet, http://www.geni.net/.
O
[Ocampo05c] Ocampo, R., Cheng, L., and Galis, A., ContextWare Support for Network and
Service Composition and Self-Adaptation. IEEE MATA 2005, Mobility Aware Technologies
and Applications, Service Delivery Platforms for Next Generation Networks; Springer, ISBN 2-553-01401-5, Montreal, Canada, 17–19 October 2005.
[OKBC] Open Knowledge Base Connectivity language Specification. http://www.ai.sri.
com/~okbc/spec.html.
[OMG-MDA] Object Management Group. Model Driven Architecture. http://www.omg.org/
mda/.
[OMG-UML] Object Management Group, Unified Modelling Language (UML), version 1.4,
UML Summary, OMG document, September 2001.
[ONTOLINGUA] ONTOLINGUA Description Tool. http://www.ksl.stanford.edu/software/
ontolingua.
[OPENDS] OpenDS Monitoring. https://www.opends.org.
[OPES] Open Pluggable Edge Services OPES: http://www.ietf-opes.org.
[Opperman00] Oppermann, Reinhard; Specht, Marcus. A Context-sensitive Nomadic
Information System as an Exhibition Guide. Handheld and Ubiquitous Computing Second
International Symposium. 2000.
[OSullivan03] O'Sullivan, D., Lewis, D. Semantically Driven Service Interoperability for
Pervasive Computing. In Proceedings of 3rd ACM International Workshop on Data
Engineering for Wireless and Mobile Access, San Diego, California, USA, September 19th,
2003.
[OWL] Ontology Web Language, http://www.w3.org/2004/OWL.
[OWL-S] http://www.daml.org/services/owl-s/.
P
[Palpanas07] Palpanas, T., Chowdhary, P., Mihaila, G.A., Pinel, F.: Integrated model-driven
dashboard development, in the Journal of Information Systems Frontiers, Vol. 9, No. 2–3, July 2007.
[Park04] Park, Jinsoo., Ram, Sudha. Information systems interoperability: What lies beneath?. ACM Transactions on Information Systems, Vol. 22, No. 4, 2004.
[Perich04] F. Perich. MoGATU BDI Ontology, University of Maryland, Baltimore County
2004.
[Pascoe99] Pascoe, J., Ryan, N., Morse, D., Issues in developing context-aware computing. In
Proceedings of First International Symposium on Handheld and Ubiquitous Computing
(HUC99), 1999.
[Pascoe98] Pascoe, J. Adding Generic Contextual Capabilities to Wearable Computers. 2nd
International Symposium on Wearable Computers, p.p. 92–99, 1998.
[Piccinelli01] Piccinelli, Giacomo., Stefanelli, Cesare. Morciniec, Michal. Policy-based
Management for E-Services Delivery HP-OVUA 2001.
[Plazczak06] Plaszczak, Pawel; Wellner, Rich, Jr (2006). Grid Computing: The Savvy Manager's Guide. Morgan Kaufmann Publishers. ISBN 0-12-742503-9. http://savvygrid.com/.
[PROMPT] PROMPT Tool http://protege.stanford.edu/plugins/prompt/prompt.html.
[PROTG] Protégé. http://protege.stanford.edu/.
R
[Raz99] Raz, D. and Shavitt, Y. An Active Network Approach for Efficient Network
Management, IWAN'99, Berlin, Germany, LNCS 1653, p.p. 220–231, July 1999.
[Reynaud03] Reynaud, C., Giraldo, G., An application to the mediator approach to services
over the web, in Concurrent Engineering, 2003.
[RDF] http://www.w3c.org/rdf.
[Rochwerger11] Rochwerger, B. et al., Reservoir: When One Cloud Is Not Enough. Computer Magazine, Vol. 44, No. 3, p.p. 44–51, March 2011.
[Rochwerger09] Rochwerger, B., Caceres, J., Montero, R.S., Breitgand, D., Elmroth, E., Galis,
A., Levy, E. Llorente, I.M., Nagin, K., Wolfsthal, Y., Elmroth, E., Caceres, J., Ben-Yehuda, M.,
Emmerich, W., Galan, F. The RESERVOIR Model and Architecture for Open Federated
Cloud Computing, IBM Journal of Research and Development, Vol. 53, No. 4. 2009.
[Roussaki06] Roussaki, I. M., Strimpakou, M., Kalatzis, N., Anagnostou, M. and Pils, C. Hybrid
Context Modeling: A location-based scheme using ontologies. In 4th IEEE conference on
Pervasive Computing and Communications Workshop, p.p. 27, 2006.
[Ryan97] Ryan, N., Pascoe, J., Morse, D. Enhanced Reality Fieldwork: the Context-Aware
Archaeological Assistant. Gaffney, V., van Leusen, M., Exxon, S. (eds.) Computer Applications
in Archaeology, 1997.
S
[Salber99] Salber, D, Dey, A.K., Abowd, G.D., The Context Toolkit: Aiding the Development of
Context-Enabled Applications, in Proceedings of CHI'99, PA, ACM Press, p.p. 434–441, May 1999.
[SALESFORCE] Salesforce.com, http://www.salesforce.com/cloudcomputing/.
[Samann03] Samann, N., Karmouch, A., An Evidence-based Mobility Prediction Agent
Architecture. In Proceedings of the 5th Int. Workshop on Mobile Agents for Telecommunication
Applications (MATA 2003), Marrakech, ISBN 3-540-20298-6, Lecture Notes in Computer
Science, Springer-Verlag, October 2003.
[Schilit95] Schilit, B.N. A Context-Aware System Architecture for Mobile Distributed Computing. PhD Thesis, 1995.
[Schilit94a] Schilit, B., Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, Vol. 8, No. 5, pp. 22–32, 1994.
[Schilit94b] Schilit, B.N., Adams, N.L., and Want, R. Context-aware computing applications. In IEEE Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, USA, pp. 85–90, 1994.
[Schönwälder99] Schönwälder, J., Straub, F. Next Generation Structure of Management Information for the Internet. In Proceedings of the 10th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM'99), Zürich, 1999.
[Schmidt02] Schmidt, A., Strohbach, M., van Laerhoven, K., Friday, A. and Gellersen, H.W. Context Acquisition based on Load Sensing, in Proceedings of Ubicomp 2002, G. Boriello and L.E. Holmquist (Eds). Lecture Notes in Computer Science, Vol. 2498, ISBN 3-540-44267-7, Göteborg, Sweden. Springer-Verlag, pp. 333–351, September 2002.
[Schmidt01] Schmidt, A., van Laerhoven, K., How to Build Smart Appliances?, IEEE Personal
Communications, Vol. 8, No. 4, August 2001.
[Sedaghat11] Sedaghat, M., Hernandez, F. and Elmroth, E. Unifying Cloud Management: Towards Overall Governance of Business Level Objectives. Proceedings of 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pp. 591–597, May 2011.
[Serrano10] Serrano, J.M., van der Meer, S., Holum, V., Murphy, J., and Strassner, J. Federation: A Matter of Autonomic Management in the Future Internet. 2010 IEEE/IFIP Network Operations & Management Symposium NOMS 2010, Osaka International Convention Center, 19–23 April 2010, Osaka, Japan.
[Serrano09] Serrano, J.M., Strassner, J. and Ó Foghlú, M. A Formal Approach for the Inference Plane Supporting Integrated Management Tasks in the Future Internet. 1st IFIP/IEEE ManFI International Workshop, in conjunction with 11th IFIP/IEEE IM2009, 1–5 June 2009, Long Island, NY, USA.
[Serrano08] Serrano, J.M., Serrat, J., Strassner, J., Ó Foghlú, Mícheál. Facilitating Autonomic Management for Service Provisioning using Ontology-Based Functions & Semantic Control. 3rd IEEE International Workshop on Broadband Convergence Networks (BcN 2008) in IEEE/IFIP NOMS 2008, 07–11 April 2008, Salvador de Bahia, Brazil.
[Serrano07a] Serrano, J. Martín; Serrat, Joan; Strassner, John. Ontology-Based Reasoning for Supporting Context-Aware Services on Autonomic Networks. 2007 IEEE/ICC International Conference on Communications ICC 2007, 24–28 June 2007, Glasgow, Scotland, UK.
[Serrano07b] Serrano, J. Martín; Serrat, Joan; van der Meer, Sven; Ó Foghlú, Mícheál. Ontology-Based Management for Context Integration in Pervasive Services Operations. 2007 ACM International Conference on Autonomous Infrastructure Management and Security, AIMS 2007, 21–23 June 2007, Oslo, Norway.
[Serrano07c] Serrano, J.M., Serrat, J., Strassner, J., Cox, G., Carroll, R., Ó Foghlú, M. Services Management Using Context Information Ontologies and the Policy-Based Management Paradigm: Towards the Integrated Management in Autonomic Communications. 2007 1st IEEE Intl. Workshop on Autonomic Communications and Network Management ACNM 2007, in 10th IFIP/IEEE International Symposium on Integrated Management IM 2007, 21–25 May 2007, Munich, Germany.
[Serrano06a] Serrano, J. Martín, Serrat, Joan, Strassner, John, Carroll, Ray. Policy-Based Management and Context Modelling Contributions for Supporting Autonomic Systems. IFIP/TC6 Autonomic Networking, France Télécom, Paris, France, 2006.
[Serrano06b] Serrano, J.M., Serrat, J., O'Sullivan, D. Onto-Context Manager Elements Supporting Autonomic Systems: Basis & Approach. IEEE 1st Int. Workshop on Modelling Autonomic Communications Environments MACE 2006, Manweek 2006, Dublin, Ireland, October 23–27, 2006.
[Serrano06c] Serrano, J.M., Justo, J., Marín, R., Serrat, J., Vardalachos, N., Jean, K., Galis, A. Framework for Managing Context-Aware Multimedia Services in Pervasive Environments. 2006 International Journal on Internet Protocol and Technologies IJIPT Journal, Special Issue on Context in Autonomic Communication and Computing, Vol. 1, 2006. ISSN 1743-8209 (Print), ISSN 1743-8217 (On-Line).
[Serrano06d] Serrano, J.M., Serrat, J., Galis, A. Ontology-Based Context Information Modelling for Managing Pervasive Applications. 2006 IEEE/IARIA International Conference on Autonomic and Autonomous Systems ICAS'06, AWARE 2006, July 19–21, 2006, Silicon Valley, CA, USA.
[Serrano05] Jaime M. Serrano O., Joan Serrat F., Kun Yang, Epi Salamanca C. Modelling Context Information for Managing Pervasive Network Services. 2005 AMSE/IEEE International Conference on Modelling and Simulation ICMS'05, AMSE/IEEE Morocco Section, 22–24 November 2005, Marrakech, Morocco.
[SFIFAME] SFI FAME-SRC, Federated, Autonomic Management of End-to-End Services
Strategic Research Cluster. http://www.fame.ie/.
[Shao10] Shao, J., Wei, H., Wang, Q. and Mei, H. A runtime model based monitoring approach for cloud. In Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing (CLOUD), pp. 313–320, July 2010.
[Sloman94a] Sloman, M. (ed.), Network and distributed systems management, Addison-Wesley, 1994.
T
[TMF-ADDENDUM] TMF, The Shared Information and Data Model – Common Business Entity Definitions: Policy, GB922 Addendum 1-POL, July 2003.
[TMF-SID] SID – Shared Information and Data model. http://www.tmforum.org/InformationManagement/1684/home.html.
[TMN-M3010] Telecommunications Management Networks Architectural Basis. http://www.
simpleweb.org/tutorials/tmn/index-1.html#recommendations.
[TMN-M3050] Telecommunications Management Networks Management Services approach
Enhanced Telecommunications Operations Map (eTOM). http://www.catr.cn/cttlcds/itu/itut/
product/bodyM.htm.
[TMN-M3060] Telecommunications Management Networks Principles for the Management
of Next Generation Networks. http://www.catr.cn/cttlcds/itu/itut/product/bodyM.htm.
[Tennenhouse97] Tennenhouse, D.L., Smith, J.M., Sincoskie, W.D., Wetherall, D.J., and Minden, G.J. A Survey of Active Network Research, IEEE Communications, Vol. 35, No. 1, pp. 80–86, January 1997.
[TERAGRID] TeraGrid. https://www.teragrid.org.
[Tomlinson00] Tomlinson, G., Chen, R., Hoffman, M., Penno, R. A Model for Open Pluggable
Edge Services, draft-tomlinson-opes-model-00.txt, http://www.ietf-opes.org.
[Tonti03] Tonti, G., Bradshaw, J.M., Jeffers, R., Suri, N. and Uszok, A. Semantic Web Languages for Policy Representation and Reasoning: A Comparison of KAoS, Rei, and Ponder. The Semantic Web – ISWC 2003: 2nd International Semantic Web Conference, LNCS 2870, Springer-Verlag, 2003, pp. 419–437.
U
[UAPS] User Agent Profile specification: http://www.openmobilealliance.org.
[UNICORE] UNICORE (Uniform Interface to Computing Resources). http://www.unicore.eu/.
[Urgaonkar10] Urgaonkar, R., Kozat, U.C., Igarashi, K. and Neely, M.J. Dynamic Resource Allocation and Power Management in Virtualized Data Centers. Proceedings of IEEE Network Operations and Management Symposium (NOMS), pp. 479–486, April 2010.
[Uschold96] Uschold, M. & Gruninger, M., Ontologies: Principles, methods and applications, in The Knowledge Engineering Review, Vol. 11, No. 2, pp. 93–155, 1996.
[Uszok04] Uszok, A., Bradshaw, J.M. and Jeffers, R. KAoS: A Policy and Domain Services Framework for Grid Computing and Semantic Web Services. Trust Management: 2nd Intl. Conference Procs (iTrust 2004), LNCS 2995, Springer-Verlag, 2004, pp. 16–26.
V
[Verma00] Verma, D. Policy Based Networking, 1st ed., New Riders, ISBN 1-57870-226-7, Macmillan Technical Publishing, USA, 2000.
[VMWARE] Cisco, VMware. DMZ Virtualization using VMware vSphere 4 and the Cisco Nexus, 2009. http://www.vmware.com/files/pdf/dmz-vsphere-nexus-wp.pdf.
W
[W3C] World Wide Web Consortium (W3C). http://www.w3.org.
[W3C-WebServices] W3C Consortium WebServices Activity Recommendations. http://www.
w3.org/2002/ws/.
[W3C-HTML] HyperText Markup Language Home Page. http://www.w3.org/MarkUp, http://
www.w3.org.
[Waller11] Waller, A., Sandy, I., Power, E., Aivaloglou, E., Skianis, C., Muñoz, A., Maña, A. Policy Based Management for Security in Cloud Computing. STAVE 2011, 1st International Workshop on Security & Trust for Applications in Virtualised Environments, J. Lopez (Ed), June 2011, Loutraki, Greece, Springer CCIS.
[Wang10] Wang, M., Holub, V., Parsons, T., Murphy, J. and O'Sullivan, P. Scalable run-time correlation engine for monitoring in a cloud computing environment. In Proceedings of the 2010 17th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, ECBS '10, pp. 29–38, Washington, DC, USA, 2010. IEEE Computer Society.
[Wang04] Wang, X. et al. Ontology-Based Context Modeling and Reasoning using OWL. In Proceedings of the Context Modeling and Reasoning Workshop at PerCom 2004.
[Ward97] Ward, A., Jones, A., Hopper, A. A New Location Technique for the Active Office. IEEE Personal Communications, Vol. 4, No. 5, pp. 42–47, 1997.
[Wei03] Wei, Q., Farkas, K., Mendes, P., Prehofer, C., Nafisi, N. Context-aware Handover Based on Active Network Technology. IWAN 2003 Conference, Kyoto, 10–12 December 2003.
[Weiser93] Weiser, Mark. Ubiquitous Computing, IEEE Hot Topics, Vol. 26, pp. 71–72, 1993.
[Wiederhold92] Wiederhold, G. Mediators in the Architecture of Future Information Systems. IEEE Computer, Vol. 25, No. 3, March 1992.
[Winograd01] T. Winograd. Architecture for Context. HCI Journal, 2001.
[Wolski99] Wolski, R., Spring, N.T., and Hayes, J. The network weather service: a distributed resource performance forecasting service for metacomputing. Future Gener. Comput. Syst., 15:757–768, October 1999.
[Wong05] Wong, A., Ray, P., Parameswaran, N., Strassner, J., Ontology mapping for the interoperability problem in network management, IEEE Journal on Selected Areas in Communications, Vol. 23, No. 10, pp. 2058–2068, Oct. 2005.
X
[XML-RPC] XML-RPC, XML-RPC specification, June 2003. http://www.xmlrpc.com/spec.
[XML-XSD] XML-XSD, XML-XSD specification, W3C Recommendation, May 2001. http://
www.w3.org/XML/Schema.
[XMLSPY] XML Schema Editor. http://www.xmlspy.com.
Y
[Yang03a] Yang, K., Galis, A., Policy-driven mobile agents for context-aware service in next
generation networks, IFIP 5th International Conference on Mobile Agents for
Telecommunications (MATA 2003), Marrakech, ISBN 3-540-20298-6, Lecture Notes in
Computer Science, Springer-Verlag, October 2003.
[Yang03b] Yang, K., Galis, A., Network-centric context-aware service over integrated WLAN and GPRS networks, 14th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2003), September 2003.
[Ying02] Ying, D., Mingshu, L. TEMPPLET: A New Method for Domain-Specific Ontology Design. Lecture Notes in Computer Science, Springer, ISBN 978-3-540-44222-6, January 2002.
Index
A
Autonomic communication
anticipatory, 18
autonomic computing, 15–16
autonomic elements, 16
autonomic management, 16–17
autonomic systems, 15
context-awareness, 18
open standards, 18
self-awareness, 17
self-configuring, 17
self-healing, 17
self-optimization, 17
self-protection, 17
Autonomic computing and PBM
CONTEXT project, 77
formalism process, 78
PBSM architectures, 78
policy information model, 78–81
policy language, 78
policy language mapping, 74–75
policy management, 74–76
programmable network technology, 77
B
Business-driven architectures
cloud computing decision control, 163
cloud computing systems, 162–163
cloud services, 163
ELASTICS-EMC2, 164–165
C
CIM. See Common information model (CIM)
Cloud management service, IT infrastructure
cloud computing challenges and trends
Cloud service and network management (cont.)
information models, 68
mappings, 68
semantic augmentation, 68
ontology engineering
autonomic environments, 72–73
benefits, 137
COMANTO ontology, 71
CONON ontology, 71
context broker architecture ontology, 71
context ontology language, 70
CONTEXT project, 72
definition, 69
friend of a friend ontology, 71
future internet services and systems, 136–137
information interoperability, 137–138
information sharing and exchanging, 72
internet, 136
interoperability and linked-data, 138–139
knowledge interchange format
language, 70
PLANET ontology, 71
reusability, 72
SOUPA, 71
web ontology language, 70
policy interactions (see also
Onto-CONTEXT architecture class)
ManagedEntity, 145
obligation and authorization policy, 143
onto-CONTEXT architecture class,
144, 145
PolicyApplication, 145
Policy Continuum, 144–145
PolicyManager, 145
PRIMO architecture (see PRIMO
architecture)
Sir-WALLACE, 155, 159–162
COBra-ONT. See Context broker architecture
ontology (COBra-ONT)
COMANTO ontology, 71
Common information model (CIM), 90
IST-CONTEXT project, 92
XML representation, 93
CONON ontology, 71
Context-awareness, 14
client-server principle, 63
context information, 62
context model
context information handling
process, 66
definition, 65–66
non-interpreted vs. interpreted context, 67
related vs. unrelated context, 66
restrictive format, 67
single piece vs. multiple pieces of
context, 66
unfiltered vs. filtered context, 67
user's environment, 65–66
user's location, 66
definition, 60–61
geographic information system, 63
location-based services, 62
pervasive service, 63–64
research activities, end-device, 62–63
sensors, 62
system design, 62
Context broker architecture ontology
(COBra-ONT), 71
Context information, 13–14
distribution and storage for
associative model, 40
entity-attribute-value model, 41–42
hierarchical model, 39
network model, 39
object model, 40
object-oriented model, 40
relational model, 39–40
semi-structured model, 40
formalization of, 38–39
implementation tools, 42–43
management of, 42
mapping
abstract model, 33
entity model, 32
main entity classes, 33
relationship classes, 33–34
modelling, 31–32
representation of, 35–37
representation tools, 37, 38
taxonomy, 34–35
Context model objects
classes, 93
object entity, 94
person entity, 94
place entity, 94
task entity, 94
E
Elastic Systems Management for Cloud Computing Infrastructures (ELASTICS-EMC2), 164–165
Entity-attribute-value (EAV) model, 41–42
F
Friend of a friend (FOAF) ontology, 71
I
Information modelling and integration
data model
common information model, 90, 92–93
context-awareness, 89
context information, 9091
context model objects, 93–94
interoperability, 89
XML document, 91
XSD documents, 91
ontology engineering
context information model, 101–103
operations management model, 106–108
policy information model, 103–106
service lifecycle control model, 109–112
service and network management
policy hierarchy, 96–97
policy model, 97–100
policy structure, 94–96
Information technologies and
telecommunications (ITC)
cloud computing, 7–8
internet services
communication system management, 5
cross-layer interactions, 5–6
end-to-end service, 7
initiatives, 6
integrated management, 5–6
transmission capabilities, 6
semantic web and services management
application-specific services, 4
context-awareness, 4
drawbacks, 1
information models, 5
link data and information management
service, 4
services and resource adaption, 4
web sensors, 4
web services requirements, 4
software and networking solutions, 1
software-oriented architectures, 7–9
K
Knowledge interchange format (KIF)
language, 70
M
Modelling and managing services
network and cloud service
(see also Cloud service and network
management)
data models, 68
information models, 68
mappings, 68
semantic augmentation, 68
ontology structures
advantages, 5657
axioms, 58
concepts, 57
functions, 58
instances, 58
relationships, 58
representation, 57
semantics and pervasive services
context-awareness
(see Context-awareness)
domain interactions, 5960
failure types, 59
service-oriented architectures, 59
MoGATU BDI ontology, 71
MUltimedia SErvices for Users in Movement-Context-Aware & Wireless (MUSEUM-CAW), 155, 157–159
O
Onto-CONTEXT architecture class
multiple class interactions, 146–147
PolicyDecisionMaking entity, 146
PolicyExecutionPoint, 146
PolicyManager, 145
Ontology-based reasoning, 83–84
Ontology editors, 84–85
Ontology engineering, 15
integration of models
class interactions, 141
domain interaction, 140–141
event class, 141–142
IETF, 142, 143
ManagedEntity, 143
operational and business complexity, 139–140
policyapplications, 143
policycontroller, 143
policy management, 140
quality of service, 139
web ontology language, 142
network and cloud service
management
Ontology engineering (cont.)
autonomic computing and PBM (see Cloud service and network management)
autonomic environments, 72–73
COMANTO ontology, 71
CONON ontology, 71
context broker architecture ontology, 71
context ontology language, 70
CONTEXT project, 72
definition, 69
friend of a friend ontology, 71
information sharing and exchanging, 72
knowledge interchange format
language, 70
PLANET ontology, 71
reusability, 72
SOUPA, 71
web ontology language, 70
operational mechanism
ontology-based reasoning, 83–84
ontology mapping/alignment, 82–83
ontology merging, 83
specification mechanism
ontology editors, 84–85
ontology reasoners, 85
Ontology mapping/alignment, 82–83
Ontology merging/fusion, 83
Ontology reasoners, 85
P
Pervasive services, 14–15
PLANET ontology, 71
Policy-based management, 14
Policy core information model (PCIM), 43
PRIMO architecture
Autonomic Manager, 151
business objectives, 149
components, 148–149, 151–152
ontology-based policy interactor, 153
policy analyzer, 153–154
policy performer, 153
traffic engineering policies, 152
DEN-ng information model, 149
vs. onto-CONTEXT architecture, 151
Policy Continuum, 149
policy interactions, 149, 150
policy transformation, 148
SNMP alarm, 151
R
Resource description framework (RDF), 70
Run-time correlation engine (RTCE), 49
S
Semantics, ICT
autonomic communication
anticipatory, 18
autonomic computing, 15–16
autonomic elements, 16
autonomic management, 16–17
autonomic systems, 15
context-awareness, 18
open standards, 18
self-awareness, 17
self-configuring, 17
self-healing, 17
self-optimization, 17
self-protection, 17
cloud service (see also Cloud
management service, IT
infrastructure)
application scalability, 27
complex infrastructure and pricing
model, 27
heterogeneity, 28
management scalability, 29
real-time scalability, 27
scalability limitations, 27–28
security, 28
service levels maintenance, 28
transparency, 28–29
context-awareness, 14
context information, 13–14
ontology
autonomic communication, 25–26
definition, 23
pervasive computing, 23–25
pervasive services and management operation, 12–13
ontology engineering, 15
pervasive computing composition, 21–22
pervasive services, 14–15
policy-based management, 14
virtual infrastructure and cloud computing
autonomic system and service
operation, 19
cloud service, 21
cost and time, 18
evolution towards, 19
virtualization, 19–20
Service integrated for WiFi Access, ontoLogy-Assisted and Context without Errors (Sir-WALLACE), 155, 159–162
Service lifecycle operations
benefits, 118
next generation networks, 116–117
phases, 117–118
policies, 116
self-management features, 117
service billing, 119
service creation, 118
service customization, 118
service management tasks, 119
service operation, 119
service support, 119
Service management architectures
management operations
abstractions and logic-based rules, 121
context functions, 123124
dynamic context variations, 123
extensibility, 121
information interoperability
interactions, 121–122
interactions in, 124–125
logic-based functions, 122
policy-based service management, 120–121
schema elements, 124
semantic control, 120
semantic rules, 121
service assurance policies, 130–131
service distribution, 126
service execution, 129–130
service invocation, 128–129
service maintenance, 127–128
ontology-based operations, semantic-based rules, 131–132
service lifecycle
benefits, 118
next generation networks, 116–117
phases, 117–118
policies, 116
self-management features, 117
service billing, 119
service creation, 118
service customization, 118
service management tasks, 119
service operation, 119
service support, 119
Shared information and data model (SID),
5, 90
Sir-WALLACE. See Service integrated for
WiFi Access, ontoLogy-Assisted
and Context without Errors
(Sir-WALLACE)
Software-oriented architectures (SOA), 7–8
Standard ontology for ubiquitous and
pervasive applications
(SOUPA), 71
W
Web ontology language (OWL), 70