The Future Internet
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Madhu Sudan
Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Federico Álvarez
Frances Cleary
Petros Daras
John Domingue
Alex Galis
Ana Garcia
Anastasius Gavras
Stamatis Karnouskos
Srdjan Krco
Man-Sze Li
Volkmar Lotz
Henning Müller
Elio Salvadori
Anne-Marie Sassen
Hans Schaffers
Burkhard Stiller
Georgios Tselentis
Petra Turkama
Theodore Zahariadis (Eds.)
© The Editor(s) (if applicable) and the Author(s) 2011. The book is published with open access at SpringerLink.com
Open Access. This book is distributed under the terms of the Creative Commons Attribution Noncommercial
License which permits any noncommercial use, distribution, and reproduction in any medium, provided the
original author(s) and source are credited.
This work is subject to copyright for commercial use. All rights are reserved, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of
this publication or parts thereof is permitted only under the provisions of the German Copyright Law of
September 9, 1965, in its current version, and permission for use must always be obtained from Springer.
Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
List of Editors
Federico Álvarez
Universidad Politécnica de Madrid, Spain
[email protected]
Frances Cleary
Waterford Institute of Technology - TSSG, Waterford, Ireland
[email protected]
Petros Daras
CERTH-ITI, Thessaloniki, Greece
[email protected]
John Domingue
Knowledge Media Institute, The Open University,
Milton Keynes, UK
[email protected]
Alex Galis
Department of Electronic and Electrical Engineering,
University College London, UK
[email protected]
Ana Garcia
ENoLL, Brussels, Belgium
[email protected]
Anastasius Gavras
Eurescom GmbH, Heidelberg, Germany
[email protected]
Stamatis Karnouskos
SAP Research, Karlsruhe, Germany
[email protected]
Srdjan Krco
Ericsson Serbia, Belgrade, Serbia
[email protected]
Man-Sze Li
IC Focus, London, UK
[email protected]
Volkmar Lotz
SAP Research, Sophia Antipolis, France
[email protected]
Henning Müller
Business Information Systems, University of Applied Sciences Western
Switzerland, Sierre, Switzerland
[email protected]
Elio Salvadori
CREATE-NET, Trento, Italy
[email protected]
Anne-Marie Sassen
European Commission, Brussels, Belgium
[email protected]
Hans Schaffers
ESoCE Net, Dialogic, Aalto University, Aalto, Finland
[email protected]
Burkhard Stiller
University of Zürich, Switzerland
[email protected]
Georgios Tselentis
European Commission, Brussels, Belgium
[email protected]
Petra Turkama
CKIR, Aalto University, Aalto, Finland
[email protected]
Theodore Zahariadis
Synelixis/TEI of Chalkida, Greece
[email protected]
Preface
This publication constitutes the 2012 edition of the yearly Future Internet
Assembly book, which has been published since 2009.
The Future Internet Assembly (FIA) is a successful and unique conference, held
twice a year, that brings together participants of over 150 projects from several
distinct but interrelated areas in the EU Framework Programme 7.
They share scientific and technical results and discuss cross-domain research
topics around the notion of creating new Future Internet technologies, applica-
tions, and services with a global view.
FIA's history started in spring 2008 in Bled, Slovenia, and the spring of 2012
saw the ninth FIA conference in Aalborg, Denmark. As with prior spring FIAs,
the community has put together a book, which aggregates both representative
results achieved in the Future Internet domain and what can be expected in the
short or medium term.
Throughout the FIA timeline, several key elements were required to ensure
success. These are:
- Cross-domain considerations: both core technical issues, such as FI architectures, FI services, FI experimentation, mobile FI, and the Internet of Things, and horizontal issues, such as socio-economics, privacy, trust, and identity.
- Engagement with application areas of the Future Internet and with users: to move from FI technologies to sectors where innovation can be improved by Future Internet technologies.
- Provision of results that are applicable in day-to-day life.
Within the structure of the book, different topics are covered in a balanced
and coherent manner.
The topics of the book have been organized into four chapters:
- Future Internet foundations cover core cross-domain technical and horizon-
tal topics. Chapters within this section include architectural questions; mobile
Internet, cloud computing, socio-economic questions; trust and identity; search
and discovery; and experiments and experimental design.
- Future Internet technical areas are those technical domains that are as-
sociated to the Future Internet, mainly but not limited to networks, services,
Internet of Things, content, and cross-area questions.
- Future Internet application areas consist of user areas and communities
where the Future Internet can boost innovation. The chapters within this sec-
tion cover smart cities, smart energy, smart health, smart enterprises, smart
environment, smart transportation, logistics and mobility, smart manufacturing,
smart agriculture, and tourism.
- Future Internet infrastructures cover experimentation and results in real
infrastructures within the FI domain.
Programme Committee
Federico Álvarez Universidad Politécnica de Madrid
Frances Cleary Waterford Institute of Technology - TSSG
Petros Daras Informatics and Telematics Institute, CERTH
John Domingue Knowledge Media Institute, The Open
University
Alex Galis University College London
Ana Garcia ENoLL
Anastasius Gavras Eurescom GmbH
Stamatis Karnouskos SAP Research
Srdjan Krco Ericsson
Man-Sze Li IC Focus
Volkmar Lotz SAP AG
Henning Müller University of Applied Sciences Western
Switzerland, Sierre (HES-SO)
Elio Salvadori CREATE-NET
Anne-Marie Sassen European Commission
Hans Schaffers Aalto University
Burkhard Stiller University of Zurich
Georgios Tselentis European Commission
Petra Turkama Aalto University
Theodore Zahariadis Synelixis Ltd.
Additional Reviewers
Apostolos Axenopoulos
Iacopo Carreras
Smitashree Choudhury
Dejan Drajic
Antonio Foncubierta
Nenad Gligoric
Jacek Kopecky
Ning Li
Daniele Miorandi
Nicu Sebe
Domenico Siracusa
Dimitris Zarpalas
Future Internet Foundations
Introduction
Burkhard Stiller
University of Zürich, CSG@IFI, Zürich, Switzerland
Alex Galis
University College London, U.K.
Theodore Zahariadis
Synelixis Ltd.
Since the Internet, at the beginning of the twenty-first century, has reached a
major standing as the medium and infrastructure for information exchange,
management, and computation, its usage is characterized by billions of users
as well as by hundreds, even thousands, of different applications and services.
The diversity of these applications and the use of today's Internet outline a very
successful approach and undisputedly form the core pillar of the Information
and Communication Technology (ICT) landscape and its societal challenges
and opportunities. Such success is naturally deemed positive; however, the
scale of a distributed system typically also determines the overall achievable
performance, the degree of user satisfaction, and the operational perspectives.
Therefore, the operational and commercial dimensions of Internet communications
and computation have turned into areas that go well beyond the initial Internet
technology and its basics, including areas such as high reliability, full-fledged
security, mobility support, and delay tolerance. While these technology-driven
dimensions are being enriched by application-specific, provider-critical, and
user-driven facets, the set of economic and societal factors of major importance
is also being addressed in work in the context of the Future Internet. Thus, the
need to rethink, at least partially, the Future Internet foundations is essential,
especially to enable future networks, services, and novel technology to cope
with these new demands.
In consequence, addressing the relevant, important, and emerging foundations
of a Future Internet is crucial for the success of new infrastructures to come. As
such, the pure delivery of packets, one of the key design principles of a robust
Internet, has to be extended with those principles that will guide future
developments. In addition, the analysis of technology-to-economics relations in
terms of inter-stakeholder operations is essential for a modern Future Internet
foundation, as the economic dimension of information exchange has reached
the technical limitations of today's Internet.
A particularly key aspect is the study of system limits, which define the constraints
and freedoms in controlling the Future Internet. Limits can be determined by
analyzing how the behaviour of the system depends on the parameters that drive
it. Some limits would lead to unexpected and significant behaviour changes of the
system, for example unpredictable boundaries or changes
the tools to incubate research results and accelerate their transfer into innovative,
marketable products and services. But since a lack of research transfer via the
standardization channel is visible in EU research, generally referred to as the
research-to-standardization gap, this paper analyzes the root causes of this
situation and proposes research-focused pre-standardization as a supplementary
methodology, with associated processes, aiming at a systematic analysis of the
standardization aspects of research projects.
The paper on “From Internet Architecture Research to Standards” by Dimitri
Papadimitriou, Bernard Sales, Piet Demeester, and Theodore Zahariadis argues
that the debate between architectural research driven by the theory of utility and
that driven by the theory of change is over. It highlights a “third path”, based on
identifying the actual foundational design principles of the Internet, such as the
modularization principle, and on acknowledging the need for an all-inclusive
architecture instead of (re-)designing protocols independently and expecting
that their combination will lead to a consistent architecture at run time. The
proposed path will in turn also partially affect how the necessary standardization
work is to be organized and conducted, including both “problem-driven” and
“architecture-driven” work.
The work on “SOCIETIES: Where Pervasive Meets Social” by Kevin Doolin,
Ioanna Roussaki, Mark Roddy, Nikos Kalatzis, Elizabeth Papadopoulou, Nick
Taylor, Nicolas Liampotis, David McKitterick, Edel Jennings, and Pavlos
Kosmides provides an overview of the vision, concepts, methodology, architecture,
and initial evaluation results toward the goal of improving the utility of Future
Internet services by combining the benefits of pervasive systems with those of
social computing. As such, the work in the SOCIETIES Integrated Project
attempts to bridge different technologies in a unified platform, especially by
allowing individuals to utilize pervasive services in a community sphere.
The lessons learned in “Cross-Disciplinary Lessons for the Future Internet”
by Anne-Marie Oostveen, Isis Hjorth, Brian Pickering, Michael Boniface, Eric
T. Meyer, and Cristobal Cobo are described in terms of socio-economic barriers
related to the Future Internet. As the authors outline, these observations are
derived from an online survey and a workshop organized by the Coordination
and Support Action SESERV, which identified the six key social and economic
issues deemed most relevant by 98 representatives of FP7 Challenge 1 projects.
Thus, the cross-disciplinary views (including those of social scientists, economists,
policy experts, and other stakeholders) are expressed and seen by the Future
Internet community itself. In turn, the paper presents strategies for solving some
of these challenges, complemented by an investigation of how relevant the
European Digital Agenda is to Future Internet technologists.
The view in “An Integrated Development and Runtime Environment for the
Future Internet”, expressed by Amira Ben Hamida, Fabio Kon, Gustavo Ansaldi
Oliva, Carlos Eduardo Moreira Dos Santos, Jean-Pierre Lorré, Marco Autili,
Guglielmo De Angelis, Apostolos Zarras, Nikolaos Georgantas, Valérie Issarny,
and Antonia Bertolino, sketches technological solutions for future ultra large
Stamatis Karnouskos
SAP Research
Petros Daras
Informatics and Telematics Institute, CERTH
Henning Müller
University of Applied Sciences Western
Switzerland, Sierre (HES-SO)
Man-Sze Li
IC Focus
Hans Schaffers
Aalto University
1 Introduction
Applications are software systems perceived and utilized by their intended users
to carry out specific tasks. Applications are what users actually use in their
working environments and their daily lives; hence, applications are the medium
that enables users to interact with rapidly advancing technologies. This implies
that we should take users' needs and aspirations as the point of departure
for developing and introducing advanced applications. It is therefore extremely
important to pay attention to the openness of the process of developing, testing,
and validating applications, and to the involvement of users in that process.
Applications evolve because they depend on the capabilities provided by several
real systems: the end-user devices they run on, as well as the virtual resources
they utilize (e.g., for mash-up applications), depend on the distributed services
that provide the functionality these applications need. In the Future Internet (FI)
era, applications will enjoy advances on the hardware side, e.g., running on mobile
devices such as smartphones with memory and CPU power comparable to
supercomputers of a couple of decades ago, as well as on the software side, where
virtualization of the infrastructure and real-time communication and computation
on data are possible. Taking advantage of the rich information offered by various
stakeholders as well as the FI platform core facilities, FI applications are expected
to adjust seamlessly to the user's needs and context, while in parallel hiding the
complexity of the underlying infrastructure and of the interactions with other
services and systems.
Some of the key Internet-based technologies underlying smart Future Internet
applications include cloud computing, real-world user interfaces of cyber-physical
systems and the semantic web. Cloud computing, a new way of delivering com-
Srdjan Krco
Ericsson
Cities are complex, dynamic environments, catering for the needs of a large
number of citizens and businesses (“users of city services”). The number of people
living in cities globally is continuously increasing, we are witnessing the emergence
of mega-cities, and the need for sustainable development of such environments
is more than evident.
The smart city concept is not new. A number of cities around the world are
using the “smart” designation to show that they have already done something
in that regard or are planning to do so; this ranges from deploying optical
infrastructure across a city to introducing e-government services or other, mainly
ICT-based, improvements that make city services more efficient or citizens'
quality of life better. With the recent technological advances in the domains of
the Internet of Things, M2M, big data, and visual analytics, and leveraging the
extensively deployed existing ICT infrastructure, the smart city concept has
attracted a lot of interest over the last few years.
Combining these technologies has made it possible to improve a range of city
services, from public transport and traffic management to utility services such
as water and waste management, and public security and safety. All these
services are intrinsically connected and interwoven. Therefore, for a city to
develop in a sustainable and organized manner, it is crucial to coordinate such
developments and make it possible for smart services to leverage each other's
functionality. City governments will have a crucial role in these endeavors, both
from the overall city-planning perspective and in creating the regulatory and
legislative framework for smart city service developers and providers.
The content of this area includes three chapters covering smart cities from
three different perspectives: social, legislation and safety.
The “Towards a Narrative-Aware Design Framework for Smart Urban
Environments” chapter focuses on smart cities from both technical and social
perspectives. The chapter describes a new narrative-aware design framework for
smart cities which combines quantitative sensor-generated data (Internet of
Things installations) with qualitative human-generated data (human storytelling)
gathered through participatory web platforms, in an always-on networked world.
Three levels are identified in the framework: “data and stories”, “analysis and
processing”, and “services and applications”. Examples of narrative-aware urban
applications based on the design framework are given and analyzed.
The “Urban Planning and Smart Cities: Interrelations and Reciprocities”
chapter analyses the smart city's contribution to overall urban planning, and
vice versa. It highlights and measures the interrelation between smart cities and
urban planning, and identifies the meeting points between them. The chapter
starts with urban planning principles based on the European Regional Cohesion
Policy, and then identifies the key smart city attributes and characteristics.
Finally, it analyses the way these domains influence each other and the impact
on the development of each domain.
The “The Safety Transformation in the Future Internet Domain” chapter deals
with public safety, one of the major concerns for governments and policy makers
in smart cities. The chapter presents an introduction to the Internet of Things,
Intelligent Video Analytics, and Data Mining Intelligence as three fundamental
pillars of the Future Internet infrastructure in the public safety domain.
Future Internet Infrastructures
Introduction
Federico Alvarez
Universidad Politécnica de Madrid, Madrid, Spain
Elio Salvadori
CREATE-NET, Trento, Italy
Anastasios Gavras
Eurescom, Heidelberg, Germany
One of the most important aspects of the Future Internet is to leverage existing
investments in advanced infrastructures for testing and experimenting with novel
Future Internet technologies, and to speed up their introduction into the market.
A large number of advanced infrastructures are available in regions such as
Europe, ranging from national or European level to many regional or city-level
initiatives promoting innovative FI concepts such as smart cities, smart grids,
and e-health.
Europe has the potential to deliver massive capacity for Future Internet
developments by leveraging this abundance of advanced infrastructures (current
infrastructure, future infrastructure, pilot experiments, testbeds, and experimental
facilities), but fragmentation and a lack of interoperability and of understanding
of capacities hinder those developments at large scale.
The Future Internet Research and Experimentation (FIRE) initiative
(www.ict-fire.eu) in Framework Programme 7 created a research environment
for investigating and experimentally validating revolutionary ideas towards new
paradigms for the Future Internet architecture, by bridging multi-disciplinary
long-term research and experimentally driven large-scale validation. FIRE
invested significant effort in familiarising the ICT research community with the
methodology of experimentally driven research as a necessary research tool in
the ICT-related science disciplines.
In some cases it is difficult to define what an ICT infrastructure is. The
definition of infrastructures given in the project INFINITY (www.fi-infinity.eu)
is the following:
“An infrastructure is a structured and organised collection of physical and/or
logical elements offering an ICT platform with the functionality to facilitate
large-scale experimentation and testing for Future Internet projects, applications,
and service developments. Such a platform may consist of ICT-based services
which could be generic or more specific to a given domain (e.g. energy, transport,
health, environment, tourism...).”
In consequence, ICT-based infrastructures are one of the areas this book
addresses.
Following the review and selection process run for this FIA book, four chapters
were chosen. The following is a summary of the main results of the “Future
Internet Infrastructures” section of this FIA book.
The paper on “FSToolkit: Adopting Software Engineering Practices for
Enabling Definitions of Federated Resource Infrastructures” by Christos Tranoris
and Spyros Denazis presents the Federation Scenario Toolkit (FSToolkit), which
enables the definition of resource request scenarios that are agnostic in terms of
providers. This work adopts Software Engineering practices, considering the
concepts of modeling and meta-modeling, to define a resource broker and to
specify scenarios by applying the Domain-Specific Modeling (DSM) paradigm.
FSToolkit is developed for experimentally driven research, enabling the federation
of resources in order to validate, through testing scenarios, new architectures and
systems at scale and under realistic environments.
The work “NOVI Tools and Algorithms for Federating Virtualized
Infrastructures” by Leonidas Lymberopoulos, Mary Grammatikou, Martin Potts,
Paola Grosso, Attila Fekete, Bartosz Belter, Mauro Campanella, and Vasilis
Maglaris addresses efficient approaches for composing virtualized e-Infrastructures
towards a holistic Future Internet (FI) cloud service, and aspires to develop and
validate methods, information systems, and algorithms that will provide users
with isolated slices, baskets of resources and services drawn from federated
infrastructures.
The paper “Next Generation Flexible and Cognitive Heterogeneous Optical
Networks Supporting the Evolution to the Future Internet” by Ioannis Tomkos,
Marianna Angelou, Ramón J. Durán Barroso, Ignacio de Miguel, Rubén Lorenzo,
Domenico Siracusa, Elio Salvadori, Andrzej Tymecki, Yabin Ye, and Idelfonso
Tafur Monroy describes new research directions in optical networking to further
advance the capabilities of the Future Internet. The authors highlight the latest
activities of the optical networking community and propose concepts of flexible
and cognitive optical networks, including their key expected benefits.
The work by Marc Pallot, Brigitte Trousse, and Bernard Senach, “A Tentative
Design of a Future Internet Networking Domain Landscape”, presents a tentative
FI domain landscape populated by Internet computing and networking research
areas, in which open questions, such as visualising the conceptual evolution,
articulating the various FI networking and computing research areas, and
identifying appropriate concepts to populate such an FI domain landscape, are
developed.
Table of Contents
Foundations
A Tussle Analysis for Information-Centric Networking Architectures (p. 6)
Alexandros Kostopoulos, Ioanna Papafili, Costas Kalogiros, Tapio Levä, Nan Zhang, and Dirk Trossen
Applications
Towards a Trustworthy Service Marketplace for the Future Internet (p. 105)
Francesco Di Cerbo, Michele Bezzi, Samuel Paul Kaluvuri, Antonino Sabetta, Slim Trabelsi, and Volkmar Lotz
Smart Cities
Towards a Narrative-Aware Design Framework for Smart Urban Environments (p. 166)
Lara Srivastava and Athena Vakali
Infrastructures
FSToolkit: Adopting Software Engineering Practices for Enabling Definitions of Federated Resource Infrastructures (p. 201)
Christos Tranoris and Spyros Denazis
Introduction. We present here a vision for the Future Internet and its impact
on individuals, businesses and society as a whole; the vision presented is based
on an extended consultation carried out by the authors within the European
Future Internet research community, as part of the work of the Future Internet
Assembly (FIA).
The purpose of the consultation was to identify key challenges and research
priorities for the Future Internet, particularly from the standpoint of current
European research projects (in Framework Programme 7). The output of the
consultation is documented in the form of a visionary research roadmap.^1
In order to elicit inputs from members of the European Future Internet research
community, we have had to actively participate in this community ourselves;
the vehicle for doing so has been the EU Framework Programme 7 research
project EFFECTSPLUS,^2 which carries out workshops and clustering activities
for European projects, particularly in the area of ICT trust and security. As part
of this Support Action we participate in and run aspects of the Future Internet
Assembly (FIA).
^1 This is available online at http://fisa.future-internet.eu/index.php/FIA_Research_Roadmap
^2 See www.effectsplus.eu
The vision we present is intended to inform future research funding programmes,
including the European Commission's “Horizon 2020” framework programme.
We have validated the results of our initial consultation, and the associated
vision, with a significant number of researchers in the FI community, and in
this paper we also present additional insights gained during this
process of validation. Overall, we observe that there are several important areas
of innovation for Future Internet research and that these need to be developed
and supported by researchers and policymakers both within and outside Europe.
Secondly, looking forward to the research that will transform what we do and
how we do it, research that is fundamentally integrative in that it exploits and
uses a wide range of networked technologies towards a diverse set of objectives,
we see three priorities that support us in using the Future Internet. These are:
harnessing the scale of the network and of networked data, bringing them to
bear on individual actions, tasks, and activities, transforming what we do and
how we do it.
A Tussle Analysis for Information-Centric Networking Architectures
Alexandros Kostopoulos, Ioanna Papafili, Costas Kalogiros, Tapio Levä, Nan Zhang, and Dirk Trossen
Abstract. Current Future Internet (FI) research reveals a trend towards designing
information-oriented networks, in contrast to the current host-centric Internet.
Information-centric Networking (ICN) focuses on finding and transmitting
information to end-users, instead of connecting end hosts that exchange data. The
key concepts of ICN are expected to have a significant impact on the FI and to
create new challenges for all associated stakeholders. In order to investigate the
motives, as well as the conflicts arising between the stakeholders, we apply a
tussle analysis methodology to a content delivery scenario, incorporating socio-
economic principles. Our analysis highlights the interests of the various
stakeholders and the issues that should be taken into account by designers when
deploying new content delivery schemes under the ICN paradigm.
1 Introduction
Over the recent years, an increasing number of users gain access to the Internet via
numerous devices equipped with multiple interfaces, capable of running different
types of applications, and generating huge data traffic volumes, mostly for content.
Traffic stemming out of these activities implies increased cost for the Internet Service
Providers (ISPs) due to the congestion in their networks and the generated transit
costs, as well as unsatisfactory Quality of Service (QoS) for some end-users.
This exponential growth of content traffic has initially been addressed by peer-to-
peer applications and Content Distribution Networks (CDNs). CDNs consist of
distributed data centers where replicas of content are cached in order to improve
users' access to the content (i.e., by increasing access bandwidth and redundancy,
and reducing access latency). These CDNs effectively form overlay networks [1],
performing their own traffic optimization and making content routing decisions
using incomplete information about customers' location and demand for content,
as well as about the utilization of networks and available content sources. Similarly,
ISPs perform individual traffic optimization using proprietary, non-native, and
usually non-scalable solutions for
traffic monitoring and shaping (e.g. Deep Packet Inspection (DPI) boxes for peer-to-
peer traffic) and have no incentive to reveal information about their network to CDNs.
This information asymmetry often leads to a suboptimal system operation.
Information-centric Networking (ICN) postulates a fundamental paradigm shift
away from a host-centric model towards an information-centric one. ICN focuses on
information item discovery and transmission and not on the connection of end-points
that exchange data. Thus, ICN has the potential to address efficiently the
aforementioned information asymmetry problem by including traffic management,
content replication and name resolution as inherent capabilities of the network.
What remains the same is that the Internet is a platform composed of multiple
technologies and an environment where multiple stakeholders interact; thus, the
Internet is interesting from both the technological and the socio-economic viewpoint.
Socio-economic analysis comprises a necessary tool for understanding system
requirements and designing a flexible and successful FI architecture.
A first attempt to investigate socio-economic aspects of FI in a systematic manner
was performed by Clark et al. [2]. They introduced the ‘Design for Tussle’ principle,
where the term ‘tussle’ is described as an ‘ongoing contention among parties with
conflicting interests’. It is obvious that the need for designing a tussle-aware FI has
emerged to enhance deployment, stability and interoperability of new solutions.
Although there are plenty of counter-examples of adopted protocols/architectures
that do not follow the Design for Tussle principle, tussle-aware protocols and
architectures are expected to have better chances of adoption and success in the
long term [3].
The need for understanding the socio-economic environment, the control exerted
on the design, and the tussles arising therein has also been highlighted in [4].
The purpose of this work is to explore and analyze the tussles that may arise in
ICN, as well as to consider the roles of the different stakeholders. Below, we present
a tussle analysis methodology which extends the methodology originally developed
within the SESERV project [5], and apply it to a content delivery scenario. We
focus on the tussle spaces of name resolution, content delivery, and caching.
This paper is organized as follows. In Section 2, we present our methodology for
identifying tussles among different stakeholders. Section 3 then provides an
overview of representative information-centric networking architectures developed
in the PURSUIT [6] and SAIL [7] research projects. In Section 4, we focus on a
use case for content delivery: we identify the involved stakeholders, the major
functionalities and roles that they can take, and then investigate the potential
tussles among the stakeholders. Finally, in Section 5, we conclude with a discussion.
Fig. 1. The Socio-Economic layer and Technology layer of the Internet ecosystem
The VNC method emphasizes the role of technologies in defining the possible
value networks, by also identifying the technical components and the technical
interfaces between them. In doing so, the method improves our understanding of
the relationship between the technical architecture (a set of technical components
linked to each other through technical interfaces, such as protocols) and the value
network configuration (the role division and the related business interfaces among
actors). This is important in analyzing whether the technology is designed for
tussle [2], i.e., whether the technical design allows variation in value networks.
Fig. 2 presents the notation from [9] that can be used to visualize the roles and
the VNC.
After identifying the involved stakeholders as well as the tussles among them, the
next step is to translate this knowledge into models and provide quantitative
analysis. In [10], a toolkit is suggested that uses mind-mapping techniques and
system dynamics to model the tussles. System Dynamics (SD) [11] is a useful tool
for evaluating dynamic interactions between multiple stakeholders, by simulating
the possible outcomes (e.g., how a technology diffuses) when multiple stakeholders
interact. The main focus is on the assessment of outcomes and their evolution over
time, since possible reactions can be modeled. After the causality models have
been captured, relevant socio-economic scenarios may be formulated to investigate
the potential consequences in the Internet market. We do not conduct an SD
analysis in this paper due to space constraints.
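To give a flavour of what such a simulation involves (a generic illustration only, not a model taken from [10] or [11], and with illustrative parameter values), the Bass-style diffusion loop below shows how an SD model traces the evolution of technology adoption over time from a small set of stakeholder-driven parameters:

```python
# Generic System Dynamics flavour: a Bass-style diffusion loop showing how
# technology adoption evolves over time. Parameter values are illustrative,
# not taken from any cited model.

def simulate_diffusion(population=1000.0, p=0.03, q=0.4, steps=30):
    """p: external influence (e.g. promotion), q: imitation among adopters."""
    adopters = 0.0
    history = []
    for _ in range(steps):
        potential = population - adopters
        # stock-and-flow update: the adoption rate depends on the current state
        adoption_rate = (p + q * adopters / population) * potential
        adopters += adoption_rate
        history.append(adopters)
    return history

trajectory = simulate_diffusion()
print(f"adopters after 10 steps: {trajectory[9]:.0f} / 1000")
print(f"adopters after 30 steps: {trajectory[29]:.0f} / 1000")
```

In a full tussle analysis, stakeholder reactions would be modeled as feedback loops that change parameters such as p and q over time.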
3.1 Publish/Subscribe
In the PURSUIT pub/sub paradigm, information is organized in scopes. A scope is
a way of grouping related information items together. A dedicated matching process
ensures that data exchange occurs only when a match between an information item
(e.g., a video file) and a scope (e.g., a YouTube channel) has been made. Each packet
contains the necessary metadata for travelling within the network. Fig. 3 presents a
high-level picture of the main architectural components of the pub/sub architecture.
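The matching step can be made concrete with a minimal sketch (illustrative only; the classes and identifiers below are ours, not part of the PURSUIT specification), in which data flows only when a publication matches a subscription on both the scope and the information item:

```python
# Minimal illustration of scope-based pub/sub matching: data is forwarded
# only when an information item is published in a scope for which a matching
# subscription exists. All names are illustrative.

class PubSubBroker:
    def __init__(self):
        # subscriptions: (scope_id, item_id) -> list of subscriber callbacks
        self.subscriptions = {}

    def subscribe(self, scope_id, item_id, callback):
        self.subscriptions.setdefault((scope_id, item_id), []).append(callback)

    def publish(self, scope_id, item_id, data):
        # The matching process: deliver data only on a (scope, item) match.
        for callback in self.subscriptions.get((scope_id, item_id), []):
            callback(data)

broker = PubSubBroker()
# e.g. item = a video file, scope = a YouTube-like channel
broker.subscribe("channel/42", "video/abc", lambda d: print("received:", d))
broker.publish("channel/42", "video/abc", b"...video bytes...")   # delivered
broker.publish("channel/99", "video/abc", b"...video bytes...")   # no match
```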
NDO from the optimum source, based on a pre-defined criterion. At least one global
NRS must exist in the NetInf network, but intra-domain NRSs are also possible.
The NetInf router node accepts NetInf names as input and decides how to route
the request so that eventually an NDO is returned to the previous-hop NetInf node.
This routing decision could be made either towards an NRS or directly towards the
NDO source, the latter representing the name-based routing scenario. In addition,
NetInf cache servers for content replication can be placed in both the NR nodes
and the NRS nodes.
Fig. 4 also shows the high-level content retrieval process in NetInf. First, (1) an
NDO owner publishes the NDO into the network by adding it to the NRS registry.
When (2) a request for an NDO occurs, the NetInf router can either (3a) forward
the request to an NRS to obtain (3b) the set of locators, or it can (4) forward the
request directly towards the NDO source, depending on whether the NetInf router
knows where the NDO is. Finally, (5) the NDO is returned to the requester via the
same route as the request, and the NDO can be cached on every node that it passes.
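The router's decision in steps (2)-(4) can be sketched as follows; this is an illustrative simplification with hypothetical names and data structures, not the actual NetInf protocol or message format:

```python
# Illustrative sketch of the NetInf router decision (steps 2-4 above):
# forward a request directly to a known NDO source, or resolve the name
# via an NRS first. Hypothetical structures, not the real NetInf protocol.

class NameResolutionService:
    def __init__(self):
        self.registry = {}          # NDO name -> list of locators

    def publish(self, ndo_name, locator):        # step 1: owner publishes NDO
        self.registry.setdefault(ndo_name, []).append(locator)

    def resolve(self, ndo_name):                 # steps 3a/3b: name -> locators
        return self.registry.get(ndo_name, [])

class NetInfRouter:
    def __init__(self, nrs):
        self.nrs = nrs
        self.known_locators = {}    # local knowledge enabling name-based routing

    def route_request(self, ndo_name):
        if ndo_name in self.known_locators:      # step 4: name-based routing
            return ("forward-to-source", self.known_locators[ndo_name])
        locators = self.nrs.resolve(ndo_name)    # step 3: NRS lookup
        if locators:
            return ("forward-to-source", locators[0])  # pre-defined criterion
        return ("not-found", None)

nrs = NameResolutionService()
nrs.publish("ni://example/video1", "cache.isp.example.net")
router = NetInfRouter(nrs)
print(router.route_request("ni://example/video1"))  # ('forward-to-source', ...)
```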
Role: Functionalities
- Name resolution: content directory control, name-to-location resolution, rendezvous, matching, applying policies
- Content access management: AAA (Authentication, Authorization, Accounting)
- Cache management: cache server control, selection of content to be cached, cache updating
- Cache location ownership: control of cache locations
- Content network management: content network resource selection, path selection, QoS
Fig. 6. Generic Value Network Configuration (VNC) for content delivery in ICN
The major stakeholders that can take up the aforementioned roles in our scenario
are presented in Table 2. We use parentheses to indicate the additional roles that
could potentially be taken up by stakeholders in other scenarios. Additionally, we
include the CDN providers, as well as the regulators that exist in the current
Internet, although their interests and actions are not the subject of this analysis.
^1 Here, we assume that the Topology Manager is aware of the information item ID.
5 Discussion
ICN brings new challenges to the Internet market, since name resolution services
may be offered by different stakeholders in order to meet their own optimization
criteria, either by the ANP or by a third party (such as a search engine or a
significant CP). Such major stakeholders of today's Internet can be expected to
extend their activities to offer NRSs in ICN.
Additionally, there is a clear incentive for an ANP to deploy ICN in order to
enter the content delivery market. Due to the information-oriented nature of the
network, an ANP could deploy its own caches, which implies that the ANP will
gain more control over content delivery. Under suitable business agreements, this
will increase its revenue, while simultaneously reducing its operational costs thanks
to more efficient content routing and a reduction of inter-domain traffic. Moreover,
CPs and end-users will also be affected: CPs will be able to provide their content
to their customers through more communication channels, while end-users will
enjoy increased Quality of Experience (QoE).
On the other hand, the emergence of ANP-owned CDNs will cause traditional
CDNs to lose revenues and control over the content delivery market. Thus, legacy
CDNs will probably react in order to maintain their large market share, or at least
not exit the market. CDNs may deploy their own backbone networks to interconnect
their own caches, but they will probably not be in a position to deploy access
networks to reach the end-users; this is the ANPs' last frontier. Nevertheless, no
matter how legacy CDNs react, such local CDNs owned by ANPs will be (and are
already being) deployed (e.g., AT&T's CDN). The evolution of this competition,
and the way in which the system will be led to an equilibrium, is the subject of
future investigation and analysis.
Our contribution in this paper resides in the identification and analysis of tussles in
a generic ICN architecture, which should be considered by designers and engineers
that aim at deploying new content delivery schemes for the FI.
Acknowledgement. The authors would like to thank G. Xylomenos, G. D. Stamoulis,
G. Parisis, D. Kutscher, C. Tsilopoulos, and X. Vasilakos. The research of A.
Kostopoulos and I. Papafili has been co-financed by the European Union (European
Social Fund – ESF) and Greek national funds through the Operational Program
“Education and Lifelong Learning” of the National Strategic Reference Framework
(NSRF) - Research Funding Program: Heracleitus II-Investing in knowledge society
through the European Social Fund. C. Kalogiros is supported by the EU-FP7 SESERV
project. The research of T. Levä and N. Zhang is supported by the EU-FP7 SAIL
project. The research of D. Trossen is supported by the EU-FP7 PURSUIT project.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Clark, D.D., Lehr, W., Bauer, S., Faratin, P., Sami, R., Wroclawski, J.: Overlay Networks
and the Future of the Internet. Communications and Strategies 63, 109–129 (2006)
2. Clark, D.D., Wroclawski, J., Sollins, K.R., Braden, R.: Tussle in Cyberspace: Defining
Tomorrow’s Internet. IEEE/ACM Trans. Networking 13(3), 462–475 (2005)
3. Kalogiros, C., Kostopoulos, A., Ford, A.: On Designing for Tussle: Future Internet in
Retrospect. In: Oliver, M., Sallent, S. (eds.) EUNICE 2009. LNCS, vol. 5733, pp. 98–107.
Springer, Heidelberg (2009)
4. Brown, I., Clark, D., Trossen, D.: Should Specific Values Be Embedded in the Internet
Architecture? In: ACM CoNEXT ReArch Workshop (December 2010)
5. EU FP7 SESERV project, http://www.seserv.org/
6. EU FP7 PURSUIT project, http://www.fp7-pursuit.eu/
7. EU FP7 SAIL project, http://www.sail-project.eu/
8. Kalogiros, C., Courcoubetis, C., Stamoulis, G.D., Boniface, M., Meyer, E.T., Waldburger,
M., Field, D., Stiller, B.: An Approach to Investigating Socio-economic Tussles Arising
from Building the Future Internet. In: Domingue, J., et al. (eds.) FIA 2011. LNCS,
vol. 6656, pp. 145–159. Springer, Heidelberg (2011)
9. Casey, T., Smura, T., Sorri, A.: Value Network Configurations in Wireless Local Area
Access. In: 9th Conference on Telecommunications, Media and Internet Techno-Economics (CTTE) (2010)
10. Trossen, D., Kostopoulos, A.: Exploring the Tussle Space for Information-Centric
Networking. In: 39th TPRC, Arlington, VA (September 2011)
11. Sterman, J.: Business dynamics: Systems thinking and modeling for a complex world.
McGraw-Hill/Irwin (2000)
12. Named Data Networking (NDN) project, http://www.named-data.net/
13. Trossen, D., Sarela, M., Sollins, K.: Arguments for an Information-Centric
Internetworking Architecture. ACM Computer Communication Review (April 2010)
14. Pöyhönen, P., Stranberg, O. (eds.): (D-B.1) The Network of Information: Architecture and
Applications. SAIL project deliverable (2011)
A Systematic Approach for Closing
the Research to Standardization Gap
1 Introduction
The Digital Agenda for Europe [1] highlights the importance of ICT standards in
delivering interoperability between devices, applications, data repositories, services,
and networks. It also stresses the fact that standards are to be used strategically as
a means of stimulating innovation and promoting the interoperability of innovative
products.
In this context, the EC published in June 2011 a series of measures with the
objective of producing better standards for Europe, and producing them faster [2].
As a follow-up to the publication of the White Paper “Modernising ICT
standardization in the EU - The Way Forward” [3] and the related public
consultation, one major requirement for strengthening the standard-setting system
in Europe is the recognition that global ICT standards will play a more prominent
role in the EU, from both the standardization strategy [4] and regulation standpoints.
In particular, regarding EU-funded research projects, [4] states, e.g.: “Finally,
standards can help to bridge the gap between research and marketable products or
services”, and “A systematic approach to research, innovation and standardisation
should be adopted at European and national level to improve the exploitation of
research results, help best ideas to reach the market and achieve wide market uptake.”
It is well recognized that standards are one important way to promote the
translation of research results into practical applications [3] [5] [6] and are also, in
certain circumstances, a necessary pre-condition for the large-scale deployment
and successful commercialization of a technology. However, research projects
often do not engage consistently in standardization, because they are not yet
convinced of the benefits and/or return on investment of engagement, because
they are not familiar enough with their target standardization ecosystem, or
because they need guidance on what to do, and on where and when to promote
their research results in standardization. This lack of engagement is generally
referred to as the research-to-standardization gap. The need for a practical
pre-standardization framework to close this gap is identified as a priority by all
stakeholders, including research, the ICT industry, and the EC, but also the
Future Internet Assembly (FIA pre-standardization WG), which has recently
proposed a shared action plan to support standardization activities [7] [8]. It is
also well accepted that initiatives to better link ICT standardization and ICT
R&D appear to be most effective when carried out at the research planning
phase rather than simply during the execution phase of specific research projects [3].
Standardization awareness thus needs to be considered early in the research life
cycle and should be an integral part of strategic research agendas.
Starting in Section 2 with an informal survey of research projects' requirements,
this chapter analyzes the following aspects of the standardization gap: i) the root
causes of the research-to-standardization gap, ii) how to cope with the specifics
of the standardization ecosystem compared to the usual scientific environment,
and iii) how to satisfy the necessary conditions for efficiently transferring research
results to standardization. For this purpose, Section 3 of this chapter addresses
the limits of the classical standardization process when research results need to
be incubated in standardization. In this context, a research-focused standardization
phase (generally referred to as pre-standardization), feeding the classical
standardization process, needs to be put in place. However, pre-standardization
needs to be complemented by a methodology, and an associated process, aimed
at systematically analyzing the standardization aspects of research projects and
at helping them draw up their strategy. These aspects are discussed in Section 4
of this chapter.
These initial requirements and their taxonomy are a good starting point for framing
the discussion of what is needed to address the research-to-standardization gap.
For instance:
• Regarding the second requirement in Table 1, the identification of gaps should
happen in close communication with the other standardization stakeholders (the
industry, regulators, standardization bodies), since researchers on their own are
poorly positioned to identify the gaps effectively.
• The requirement to "make available a single reference up-to-date knowledge base"
seems difficult to achieve; however, [9] provides a first step in this direction.
• The requirement for "support/help after the end of projects to continue/follow-up
initiated standardization actions" is crucial, since without such support the
standardization plans of typically short-lived research projects might not be
achieved, especially in cases where the standardization ecosystem is not ready to
progress the standardization objectives of the project.
It is anticipated that the “Planning” and “Guiding” aspects are necessary conditions
for reducing the research-to-standardization gap (note that the COPRAS project
conducted in the context of FP6 [10] made the same assumptions). On the other
hand, the “Linking”, “Following-up” and “Mutualizing” aspects provide means to
support the pre-standardization actions more efficiently. As one of the objectives
is to address the root causes of the research-to-standardization gap, the focus of
this chapter is placed on “Planning” and “Guiding”, while the “Linking”,
“Following-up” and “Mutualizing” aspects are not discussed further.
The standardization of protocols and interfaces has played, and is still playing, a
key role in the development of the Internet. In particular, the IETF has established
itself as the main factory for Internet protocols, while other standardization bodies
such as IEEE, ITU-T, 3GPP, and W3C standardize the infrastructure and
technology enablers, creating the necessary open ecosystem that has contributed
to the Internet's development.
However, standardization work is constrained by its participants' R&D strategies
and conflicting business objectives, leaving in practice only a very small window
for the research and academic communities to influence the process. One can
observe that in the early days of the Internet, its standardization was driven by
the research community. This materialized in the creation of the IETF, itself an
emanation of the research community. Over time, as the Internet and its associated
technologies progressively matured and were deployed at a larger scale, Internet
standardization gradually shifted to engineering and operational problems (the
IETF is often described today as "problem-driven"). As a result, even though the
research community is still involved in the Internet standardization process, its
influence is eroding over time. Nevertheless, the involvement of the research
community in standardization can bring a lot of added value to the industry (in
particular when practical use cases are identified at this stage of the process),
since it allows the early de-risking of disruptive ideas by confronting them with
i) executability/developability, ii) deployability, and iii) the market environment,
and, if successful, will accelerate the penetration of those innovative ideas.
In this context, a research-focused standardization phase needs to complement the
classical standardization process. In this model, the research-focused standardization
phase feeds the classical standardization process with a stream of de-risked ideas
that will, if successful, lead to a fully standardized solution. It has to be noted that
the interactions and discussions in the context of pre-standardization can also
directly feed valuable inputs back into the research project, to be further considered
inside the project (the “external loop”). For this reason, this phase is intended to
bridge the research-to-standardization gap and is generally referred to as the
pre-standardization phase.
Major standardization bodies are adapting their processes to capture these
requirements. For instance, ISOC created in the 1990s the IRTF (the research arm
of the IETF), the ITU-T defined the concept of the Focus Group, the IEEE
established the IEEE-SA Industry Connections Program, and the W3C the W3C
Incubator Activity.
In 2006, ETSI defined the concept of the Industry Specification Group (ISG). All
these pre-standardization processes share the same principles: they are open to
academia and are based on a lightweight procedural structure compared to their
“mother” standardization groups. On the other hand, one can observe that, in the
context of the Internet, these pre-standardization structures are not yet used to
their full potential. In particular, where "pre-standardization" processes and
organizations exist, they have often evolved in one of two directions: either
focusing on the shorter-term engineering problems the standardization body is
recognized for (and, in turn, being perceived as no longer fulfilling a research role),
or focusing on longer-term architectural problems (and, in turn, being perceived
as disconnected from the rest of the standardization organization's activities). It
is also anticipated that the results of Future Internet and Future Networks research
have the potential to boost the volume of pre-standardization activities and could
lead to the launch of a Future Internet pre-standardization process.
It should be noted that not all research results need to be incubated in
pre-standardization. Depending on the standardization lifecycle and rationale,
research results can go directly into the classical standardization regime without
passing through a preliminary pre-standardization phase.
For instance, the classical standardization regime is not yet ready to standardize
all aspects related to Self-Managed Networks, and, as a result, pre-standardization
is required (e.g., in an ETSI ISG or in an IRTF Research Group). In contrast, for
Carrier Ethernet the standardization regime is mature, and there is no need to go
through a pre-standardization phase.
Pre-standardization is a necessary tool for creating an environment that is, when
required, more suitable for incubating research ideas than the classical
standardization regime. Despite its great potential, however, pre-standardization
alone (i.e., without a built-in link to standardization and without a framework
to systematically analyze the standardization aspects of research projects and
help them draw up their strategy) is not, by itself, sufficient to motivate researchers
to present and defend their ideas there.
4 Methodological Aspects
According to the experience acquired over the years by the co-authors of this
chapter, to be really effective, standardization actions should be derived from,
and supported by, a well-defined standardization strategy and plan. In the context
of this chapter, a standardization strategy is defined as a path of standardization-
related actions and objectives (in a few complex cases, a strategy may even
comprise parallel paths). Without a standardization strategy, standardization
actions are generally unsuccessful or lead to suboptimal results. In the worst case,
the standardization achievements may even conflict with the research objectives
of the project.
Three cases can be distinguished (the decision logic is sketched below):
1. The first case is when standardization is not needed at all, and the lack of
standardization is not a roadblock for the large-scale deployment of the
technology being designed by the research project.
2. The second case is when standardization is required but the related
standardization ecosystem is not ready or in place to progress the standardization
objectives of the project. In other words, it is very unlikely that a standardization
body will accept to incorporate the necessary work items into its standardization
work program. In this case, the technology needs to be incubated in a
(pre-)standardization group. In general, this will require the creation of a new
(pre-)standardization WG.
3. The third case is when the technology can be pushed directly into standardization
bodies, without the need to go through a pre-standardization phase.
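Purely as an illustration (not a tool from this chapter), the three cases reduce to the following decision logic; the examples in the comments echo those given in Section 3:

```python
# Illustrative encoding of the three cases above: the decision logic made
# explicit, not an actual tool or process defined by the chapter.

def standardization_path(needs_standard: bool, ecosystem_ready: bool) -> str:
    if not needs_standard:
        # case 1: lack of a standard does not block large-scale deployment
        return "no standardization action needed"
    if not ecosystem_ready:
        # case 2: incubate in a (pre-)standardization group, likely a new WG
        return "incubate in a (pre-)standardization group"
    # case 3: the ecosystem is mature; push the work items directly
    return "contribute directly to the relevant standardization body"

print(standardization_path(needs_standard=True, ecosystem_ready=False))
# e.g. Self-Managed Networks -> pre-standardization (ETSI ISG / IRTF RG)
print(standardization_path(needs_standard=True, ecosystem_ready=True))
# e.g. Carrier Ethernet -> direct standardization
```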
In this context, Step 4 of the methodology enabled the ECODE team to link the
envisioned usage scenarios for their technology (identified as structuring aspects
by the methodology) with specific standardization objectives. In particular, by
applying Step 4 of the methodology, the ECODE project identified two objectives
for the introduction of a machine learning component (the above-defined
'structuring' aspects): 1) addressing current Internet operational challenges; and
2) further extending Internet functionality (diagnosability, security, etc.).
From the standardization perspective, the first objective implies that protocols
must be standardized in the IETF, while the second implies that an advanced
architecture should be defined, e.g., in ETSI. In addition, as machine learning
techniques had never been used before in the context of the Internet, and are
challenging to deploy there, it would be necessary to have a pre-standardization
phase, e.g., in the IRTF (see the dashboard in Figure 1).
Using the proposed methodology, the standardization strategy was reassessed
twice in the course of the project, due among other things to changes in IRTF
priorities. This reassessment helped the ECODE partners determine their
standardization plan beyond the lifetime of the project. All these steps enabled
the ECODE project to define and systematically refine a coherent standardization
strategy, starting from requirements, followed by the identification of the target
standardization bodies and roles, and ending with the definition of the
standardization approach and objectives.
For this purpose, the relation between research projects and their standardization
ecosystem is analyzed in terms of downstream and upstream channels. The
downstream channel is materialized by the participation and contributions of
research experts to the standardization bodies: participation in meetings, submission and
presentation of contributions, and leadership positions when appropriate. This
downstream channel is generally managed by the research project, resulting in a
standardization approach defined at the project level as part of the dissemination and
exploitation plans. However, as already mentioned, researchers are not
necessarily attracted by or familiar with the targeted standardization environment.
Multiple reasons can explain this:
1. Research project objectives are research-results-driven, whereas
standardization objectives are engineering-consensus-driven.
2. Participation in most standardization bodies requires an annual fee. Unless
that cost can be sustained by the academic and research institutes involved, the
research project cannot access the standardization organization's working
documents (contributions, meeting minutes, etc.).
3. Standardization debates and the positioning of actors are often driven by
economic interests beyond any possible influence of academics and
research institutes (which are not recognized as full-fledged players).
4. Each standardization body operates with its own specific methods and
procedures, whilst research projects' standardization plans require combining
actions in multiple standardization bodies, which in turn increases the
complexity for the research project of conducting its standardization actions.
As a result, the standardization strategies and plans of a research project are often
defined on an ad-hoc basis and are sometimes even misleading and/or incomplete. When
a project has an insufficient understanding of the standardization environment, it may
opt for easily implementable workarounds. For instance, its contributions are
submitted only once to a standardization organization and sometimes not presented in
meetings. In this case, the standardization body merely “notes” that the contribution was
submitted and, as a result, the technology designed by the research project will never
lead to a standard. Moreover, contributions from research projects also often
miss their target: expecting that the outcomes of research, as reported in project
deliverables, will be taken up as-is by the targeted standardization organization is
not realistic. Two main causes of failure can be identified: i) failure to adopt the
conventions and writing style of the targeted standardization body, and ii) difficulty
in confronting the project's output with the various technical communities (system
engineers, network engineers, operations, etc.) before it can have a technological
impact on the course of standards making.
In addition to the ‘downstream’ channel, there is also an ‘upstream’ channel from
the standardization community to the research projects. In the simplest form currently
available, this corresponds to the information published by standardization
organizations on their web sites. This information is often general-purpose and as
such not targeted or tailored to the research community; it is at best
informative but often of little use to researchers. As noted, if project partners do
not pay the standards organization's membership fees (when applicable), this
information is not accessible at all (e.g., for copyright reasons). In some cases,
this upstream channel is better managed when standardization bodies organize
‘research to standardization’ workshops (e.g., [5], [6]), though the audience at
these workshops often consists of the research experts already involved in the
standardization work.
The authors of this chapter postulate that three conditions need to be
satisfied in order to improve the quality of the downstream (from research to
standardization) channel and maximize the value of its output: 1) availability of
information from standardization bodies that is directly relevant to the research
project; 2) mutual understanding, at both ends of the channel, that research results
have reasonable chances of being adopted in the appropriate standardization context; and
3) joint determination of the trajectory (the sequence of standardization actions with
starting and ending points) by means of a standardization strategy.
To satisfy these three conditions and improve the downstream (from research to
standardization) channel, the upstream (from standardization to research) channel
needs to be enhanced in the following ways:
1. Provide information related to standardization status and evolution that is
specifically targeted to the research community (a first step in this direction is
the information repository provided by the FIA pre-standardization WG [9]).
A criterion of success for this approach is the initiation, within a
standardization organization that takes this path, of a standardization track that
was not previously addressed. Two cases should, however, be distinguished.
When a standardization organization is already working on the technology to
which the research project contributes, it is less complex to put the process in
place, but the impact on the technology specification will probably be
smaller. When the standardization organization is not yet working on the new
technology proposed by the research project, more effort will be required but,
in case of success, the impact will be greater, since it will define a new
technology specification track.
2. Proactively support the research project with a team of dedicated experts with a
strong ‘research and standardization’ background. The role of these experts, the
‘Research-to-Standards’ team, is i) to guide the research projects in the
definition of their standardization strategy (using the methodology defined in
Section 6), including the sequence of standardization actions required to ensure
that the technology under consideration will be developable and deployable at
large scale (a necessary condition), and ii) to regularly follow up with research
teams on progress, open issues and blocking factors, to help them progress
along the trajectory, and to propose remediation actions in case of problems.
3. Research projects must be convinced of the benefits of using a well-defined
methodology to define their standardization strategy, and trained in how to
use it.
In the context of autonomic networking (see, e.g., [15]), the downstream channel from
research projects to standardization is currently working quite well, e.g., in terms of i) …
5 Conclusion
Research-focused standardization (generally referred to as “pre-standardization”) is a
necessary instrument to attract a critical mass of researchers to participate in the
standardization process. But this instrument alone is not sufficient: pre-standardization
should be supplemented by a dedicated planning effort at the research project level,
materialized in a well-defined standardization strategy. However, each standardization
body operates with its own specific methods and procedures. In addition, the necessary
standardization actions of research projects require combining actions in multiple
standardization bodies, which in turn increases the complexity for the research project
of defining its standardization strategy. As a way to guide research projects, the
authors provide a methodology and an associated process aiming to systematically
analyze the standardization aspects of a project and to help project teams define
their strategy.
The above enhancements can be implemented either by key representative
standardization organizations or by an entity external to the standardization
bodies (but closely linked to and interacting with the key standardization organizations).
To adopt these enhancements, standardization bodies must be convinced of the
usefulness of the approach before committing resources to implement the proposed
process. It is currently difficult to anticipate the benefits of having this process
implemented in key standardization organizations rather than in an entity outside the
standardization bodies. Even more importantly, research projects must be convinced of
the benefits of using a well-defined methodology to define their standardization strategy,
and should be trained in how to use it. The authors believe that the
proposed process, once validated in the Future Internet context, e.g., on a set of
selected representative research projects, can be deployed at large scale and deliver the
expected benefits to research and standardization.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. European Commission, A Digital Agenda for Europe, COM(2010) 245 final/2, Brussels
(August 26, 2010)
2. European standardisation policy, http://ec.europa.eu/enterprise/policies/european-standards/standardisation-policy/index_en.htm
3. European Commission, Modernising ICT Standardisation in the EU: the Way Forward,
COM(2009) 324, Brussels, July 3 (2009)
4. European Commission, A strategic vision for European standards: Moving forward to enhance and accelerate the sustainable growth of the European economy by 2020, COM(2011) 31, Brussels, June 1 (2011)
5. ETSI, Future Network Technologies Workshop, Sophia Antipolis (March 10-11, 2010), http://www.etsi.org/WebSite/NewsandEvents/Past_Events/201_FutureNetworkTechnologies_WS.aspx
6. ETSI, 2nd ETSI Workshop on Future Networks Technologies, Sophia Antipolis (September 26-27, 2011), http://www.etsi.org/WebSite/NewsandEvents/Past_Events/2010_FutureNetworkTechnologies_WS.aspx
7. Bourse, D.: Future Internet Assembly (FIA) Pre-standardisation WG - Review of
Objectives and Progresses, Poznan FIA, Poznan (October 25, 2011)
8. FIA Standardisation Support, http://fisa.future-internet.eu/index.php/FIA_Standardisation_Support
9. Pre-standardisation activities in FIA, http://fisa.future-internet.eu/index.php/Pre-standardisation_activities_in_FIA
10. COPRAS project, http://www.w3.org/2004/copras/
11. Papadimitriou, D., Sales, B.: A path towards strong architectural foundation for the
internet design. In: 2nd Future Network Technologies Workshop, ETSI, Sophia Antipolis,
France, September 26-27 (2011)
12. Papadimitriou, D., Sales, B.: Cognitive Augmented Routing System and its
Standardisation Path. In: Future Network Technologies Workshop, ETSI, Sophia
Antipolis, France, March 10-11 (2010)
13. ECODE (Experimental COgnitive Distributed Engine) project, http://www.ecode-project.eu/
14. Papadimitriou, D., Donnet, B.: A Cognitive Routing System for the Internet. In: 8th
Würzburg Workshop on IP (Euroview 2008): Joint EuroNF, ITC, and ITG Workshop on
Visions of Future Generation Networks, Würzburg, Germany, July 21-22 (2008)
15. Ciavaglia, L., Altman, Z., Patouni, E., Kaloxylos, A., Alonistioti, N., Tsagkaris, K.,
Vlacheas, P., Demestichas, P.: Coordination of Self-Organizing Network Mechanisms:
Framework and Enablers. In: Proceedings of the Special Session on Future Research
Directions at ICST MONAMI 2011 Conference, Aveiro, Portugal (September 2011)
SOCIETIES: Where Pervasive Meets Social
1 Introduction
Pervasive computing [1] is the next-generation paradigm in computer science that
aims to assist users in their everyday tasks in a seamless, unobtrusive manner, by
transparently and ubiquitously embedding numerous computing, communication and
sensing resources in the users’ environment and devices. Until now, pervasive
computing systems have been designed mainly to address the needs of individual
users. This neglects an important part of human behaviour, socialising, and might
partly explain the slow take-up of pervasiveness in commercial products. On the other
hand, social computing [2] has enjoyed meteoric success in bringing people together
online. Products in this area, however, do not integrate well with any but a few of the
many devices and services to which their users have access.
This paper describes the work being carried out in the FP7 SOCIETIES (Self
Orchestrating Community Ambient Intelligence Spaces) integrated project
(www.ict-societies.eu), the aim of which is to investigate and address the gap between
pervasive and social computing.
service provider, or simply a user, with access to that interconnected community. This
pushes beyond the capabilities of current social networks and services which rely
heavily on, for example, static personal information and user preferences, or manually
provided context changes (such as a manual check-in). This allows for the provision
of intelligent, rich, contextual data about users and the entities they interact with.
A Pervasive Community is a group of two or more individuals who have agreed
to share some, but not necessarily all, of their pervasive resources with other members
of that community. The Pervasive Resources that can be shared are: (i) services,
including services for controlling personal and environmental devices and (ii)
information (both individual and community), including context, preferences,
behaviours and memberships. A pervasive community, once constituted, forms a
Community Interaction Space (CIS). There is a one-to-one mapping between
pervasive communities and CISs. Individuals may belong to any number of pervasive
communities, and thus CISs, simultaneously.
Members of a pervasive community interact with a CIS via their own personal
Cooperating Smart Space (CSS). CSSs create the building blocks for enabling the
integration of pervasive computing with social communities (physical or digital).
CSSs constitute the bridge between a user's context (devices, sensors etc.) and the
community the user is a part of. A CSS is a digital representation of a user or
organisation, and also defines the impact that their services, information and resources
have within a set of communities. As such, it represents the user’s dynamic
contribution to one or more communities. The CSS provides its owner with a suite of
services which support the creation of, and participation in, pervasive communities as
well as a range of intelligent cross-community functionalities, which enable the
individual community member to benefit from the information and services of the
community as a whole. A community is a collection of CSSs and/or supporting
infrastructure services that wish to collaborate for mutually agreed purposes. There is
a one-to-one mapping between individuals and CSSs. The only way in which an
individual can participate in a CIS is via their CSS, but they can also interact with
other CSSs without having to form pervasive communities or create CISs. Individuals
may also interact with other individuals without using CSSs at all by employing more
traditional mechanisms.
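To make the one-to-one mappings above concrete (individual to CSS, pervasive community to CIS), the following minimal sketch shows one possible data model. The class and attribute names are ours, chosen for illustration; they do not reflect the actual SOCIETIES code base.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class CSS:
    """Cooperating Smart Space: the digital representation of one individual
    or organisation (one-to-one mapping between individuals and CSSs)."""
    owner_id: str
    shared_services: Set[str] = field(default_factory=set)  # shared pervasive resources
    memberships: Set[str] = field(default_factory=set)      # CIS identifiers

@dataclass
class CIS:
    """Community Interaction Space: one-to-one with a pervasive community."""
    cis_id: str
    members: Set[str] = field(default_factory=set)          # CSS owner identifiers

    def join(self, css: "CSS") -> None:
        # The only way an individual can participate in a CIS is via their CSS;
        # an individual may belong to any number of CISs simultaneously.
        self.members.add(css.owner_id)
        css.memberships.add(self.cis_id)
```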
The most essential resource for realising the CSS vision is the availability of
information, and in particular personal information. To address the privacy issue that
arises, CSSs provide a range of intelligent privacy protection techniques
for managing the flow of information, allowing the user complete control
over the handling and disclosure of their information. Users can explicitly create
privacy preferences that state how their information is disclosed, while the system is
also able to implicitly learn privacy preferences by monitoring the user’s behaviour
related to privacy protection.
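A minimal sketch of such explicit privacy preferences might look as follows; the rule structure and the default-deny policy are our own illustrative assumptions, not the project's actual mechanism.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PrivacyPreference:
    """An explicit rule stating whether a type of information may be
    disclosed to a given audience (e.g. a specific CIS)."""
    data_type: str   # e.g. "location", "activity"
    audience: str    # e.g. a CIS identifier
    allow: bool

def may_disclose(prefs: List[PrivacyPreference],
                 data_type: str, audience: str) -> bool:
    """Apply the first matching explicit rule; deny by default."""
    for rule in prefs:
        if rule.data_type == data_type and rule.audience == audience:
            return rule.allow
    return False  # conservative default when no explicit rule exists

# Implicit learning would extend `prefs` from observed behaviour, e.g. by
# adding an allow-rule after the user repeatedly approves the same disclosure.
```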
Clearly SOCIETIES draws together a number of key challenges for the Future
Internet. Social computing, in many different contexts and through various devices, is
becoming a major driver for Internet use. Pervasive systems, as embodied in smart
spaces, are also set for a deployment explosion and will capitalise on the Internet of
Things to make enormous demands on the Internet in the near future. The social and the
pervasive aspects of Internet use each raise important privacy challenges in their own
right but together, the risks and consequences of failing to provide adequate and usable
privacy mechanisms increase exponentially. In addressing all of these challenges
SOCIETIES is making a significant step towards shaping the Future Internet.
4 Methodology
An online survey was sent to a random sample from each group. The responses were
anonymised. Graphs of the results demonstrate commonalities and differences in the
social use of technology within and across the groups.
Participatory Design (PD) workshops were organized to provide a democratic,
collaborative approach, facilitating creative, cooperative involvement of all the
stakeholders in the development of project concepts and services. Scenarios were
selected as a key tool for the PD sessions, as they function both as a creative process
for visioning exercises and as an empathetic narrative conduit for complex ideas and
information. Initial scenarios demonstrating possible uses of the proposed platform in
the context of student, enterprise and disaster management situations were sketched in
brainstorming sessions with researchers. These initial scenarios were in turn
introduced to users by researchers in the neutral, creative third space of PD workshops
[3], where participants’ reactions, ideas and discussions led to alterations and
advancements of these scenarios. Creative understandings [4] forged in these sessions
led to updated scenarios envisioning how pervasive communities could function in
each group’s social setting.
1. Scenario brainstorming: This stage aimed at the design of various story flows and
scenes demonstrating and extending the features of the envisaged system.
2. Gathering of initial requirements: In this stage, an initial set of functional
requirements was extracted from the scenarios produced in stage 1. These
requirements were classified into five main categories: General, Deployment,
Service and Resource, User Experience and Security-related Requirements.
3. Scenario evaluation, analysis, ranking, filtering and refinement: In this stage, the
initial scenarios produced in stage 1 were evaluated and ranked based on various
criteria, such as the volume of features they demonstrated compared to the feature
set captured in the vision of Section 2, the quality of the initial requirements
collected in the second stage, etc. (a toy ranking sketch follows this list). Based on
this evaluation, on the end-user feedback collected (§ 4.1) and on the business
analysis performed (§ 4.3), a set of refined final scenarios was produced.
4. Refinement of functional and non-functional requirements and extraction of use
cases: In this stage, the final scenarios were studied in order to extract
additional technical requirements (both functional and non-functional), as well as
use cases, while the initial requirements collected in stage 2 were homogenised,
merged, extended and classified, with redundant items eliminated.
5. Harmonisation, prioritisation and ranking of requirements: In the final stage, the
elicited requirements were prioritised, harmonised and checked for consistency.
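As a toy illustration of the ranking performed in stage 3, scenarios could be scored against weighted criteria as below; the criterion names and weights are hypothetical.

```python
from typing import Dict, List

def rank_scenarios(scores: Dict[str, Dict[str, float]],
                   weights: Dict[str, float]) -> List[str]:
    """Return scenario names ordered by weighted score, best first."""
    totals = {name: sum(weights[c] * v for c, v in per_criterion.items())
              for name, per_criterion in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Example with hypothetical criteria resembling those of stage 3:
ranking = rank_scenarios(
    scores={"student": {"feature_coverage": 0.9, "requirement_quality": 0.7},
            "enterprise": {"feature_coverage": 0.6, "requirement_quality": 0.8}},
    weights={"feature_coverage": 0.6, "requirement_quality": 0.4})
# -> ["student", "enterprise"]
```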
produced for the technical requirements extraction in order to identify the related
stakeholders, the potential business interests that arise and the business opportunities
that emerge, (iii) generalization of the identified stakeholders, which led to the
identification of the existing and new stakeholders involved in the envisaged
system, (iv) extraction of business requirements that are stakeholder-specific, and (v)
extraction of the business opportunities and the respective value propositions. Once the
process above was complete, the Business Model Canvas methodology [7] was
used to assist in defining the applicable business models. Thus, the business
model canvas approach was applied over the Discover, Connect and Organise
phases that contribute to the formation of the envisaged system. This resulted in
identifying how this system can offer value to various stakeholders and in portraying
the capabilities and partners required for creating, marketing, and delivering this value,
with the goal of generating profitable and sustainable revenue streams.
5 Architecture
The architecture that implements the concepts presented above is illustrated in Figure 4,
which gives an overview of the “core services” provided by the proposed architecture.
The services depicted are grouped according to the major concept they
manipulate or operate on. Thus, services that operate on a single CIS are grouped
together, as are those that operate on a CSS, and those found on every node in a CSS.
Multi CSS/CIS Services operate for the benefit of more than one CIS, or more than
one CSS; thus, they operate for a wider group of stakeholders. They offer federated
search and domain administration functions and require multiple CSSs or CISs to be
effective. This group includes the following services: the Domain Authority (which
provides and manages the CSS and CIS identities in a decentralised manner, allowing
authentication between multiple domains), the CIS Directory (which manages CIS
information in a decentralised repository, records the available CISs within a domain or
set of domains, enables searching for CISs based on specific criteria, and allows a
CIS to be removed from the repository), the CSS Directory (which provides search
facilities for CSSs, based on their identifier or on search criteria such as
public profile attributes and tags), the CIS Recommendation (which is responsible for
handling CIS recommendations, allowing CISs to be recommended to users and
vice versa, considering, among others, the users’ privacy preferences) and the Service
Market Place (which provides access to a repository of installable 3rd party (3P) services
and optional “core” services and provides mechanisms for accounting and charging).
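The following sketch illustrates the kind of criteria-based lookup a CIS Directory could offer. It is a simplified, in-memory illustration under our own naming assumptions, not the platform's actual (decentralised) implementation.

```python
from typing import Dict, List

class CISDirectory:
    """Toy CIS Directory: records available CISs within a domain and
    supports search by advertised attributes."""

    def __init__(self) -> None:
        self._records: Dict[str, Dict[str, str]] = {}

    def advertise(self, cis_id: str, attributes: Dict[str, str]) -> None:
        """Register (or update) a CIS together with its public attributes."""
        self._records[cis_id] = dict(attributes)

    def remove(self, cis_id: str) -> None:
        """Remove a CIS from the repository."""
        self._records.pop(cis_id, None)

    def search(self, criteria: Dict[str, str]) -> List[str]:
        """Return the identifiers of all CISs matching every given criterion."""
        return [cis_id for cis_id, attrs in self._records.items()
                if all(attrs.get(k) == v for k, v in criteria.items())]
```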
CIS Services operate on behalf of a single CIS. There is at least one instance of
these services per CIS, and an instance of these services can be used by multiple CISs.
The CIS services are: the CIS Management (which is responsible for handling all
aspects of CIS lifecycle management (creation, update and removal), provides control
over CIS membership, and includes a community profile manager and a role manager
to specify the governance model for the CIS), the Community Context Management
(which enables access to and maintenance of community context, providing query
capabilities as well as addition/update/removal operations for community context,
maintaining the history of context for a CIS, and inferring community context
information), the Community Learning (which supports community preference and
community intent learning) and the Community Personalisation (which manages the
community preferences and community intent and exposes interfaces for community
members to retrieve these preferences and intent models for their own use).
CSS Services operate on behalf of a single participant or CSS. There is at least one
instance of these services per participant, and an instance of these services can be used
by multiple participants. The CSS services are: the CSS Management (which controls
which Nodes (devices or cloud instances) are part of the CSS, assigns a common
identifier, and manages resource sharing and configuration policies), the User Context
Management (which is responsible for acquiring the user context from sensors and other
context sources, for modelling and managing the collected data, for maintaining
current and historic context in appropriate data repositories, and for providing
inference techniques enabling the extraction of high-level information from raw
context data), the User Personalisation (which manages and evaluates the user
behavioural models, such as user preferences, user intent, Bayesian models, etc., and
eventually identifies the actions that need to be taken), the Social Network Connection
(which integrates with existing Social Networking Systems (SNSs), enabling the
extraction of public information available in SNSs, as well as access to and update of
non-public information for the specified user), the Privacy Protection (which provides
identity management mechanisms and facilities for managing the CSS privacy policies,
which specify the terms and conditions the CSS will respect concerning personal data,
and also offers Privacy Policy Negotiation facilities), the User Learning (which supports
learning of user behaviour models exploiting the user’s history of actions stored in the
system), the User Agent (which acts on behalf of a single CSS based on information
from several CSS and CIS components, aiming to establish the system’s proactive
behaviour, resolving any conflicts that may arise, and also enabling CSS users to provide
feedback on the system’s actions or decisions), the Trust Management (which is
responsible for collecting, maintaining and managing all information required for
assessing trust relationships and includes a Trust Engine for evaluating direct,
indirect and user-perceived trust) and the Service Provisioning (which supports the setup
and lifecycle control of a 3P service or CSS resource, allowing installation,
(re)configuration and removal of new 3P services, and also supporting the enforcement of
3P service sharing policies).
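As an illustration of the inference performed by the User Context Management service, the toy function below derives a high-level activity label from raw sensor samples. The sensor model and thresholds are invented for the example and are not taken from the project.

```python
from typing import List, Tuple

def infer_activity(samples: List[Tuple[float, float]]) -> str:
    """Derive a high-level activity label from raw (speed m/s, noise dB)
    samples; the thresholds are purely illustrative."""
    if not samples:
        return "unknown"
    avg_speed = sum(speed for speed, _ in samples) / len(samples)
    avg_noise = sum(noise for _, noise in samples) / len(samples)
    if avg_speed > 1.5:
        return "moving"             # walking or travelling
    return "in a meeting" if avg_noise > 60.0 else "working alone"
```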
Node Services are available per CSS Node. A CSS Node is a logical node (device
or cloud instance) running CSS software that coordinates with other CSS Nodes to
form a participant’s CSS. There is an instance of these services per CSS Node. This
grouping includes the following services: the Communication Framework (which
provides the necessary mechanisms to support intra- and inter-CSS communication,
supporting the identification and maintenance of network connections, the discovery
of CSS Nodes (devices), and the communication between discovered nodes), the
Device Management (which provides mechanisms for managing devices within a CSS,
supporting the discovery of hardware devices and the management of their capabilities)
and the Service Discovery (which provides service discovery and advertisement
mechanisms, enabling the discovery of core platform services within a CSS, as well
as the discovery of 3P services shared by other CSSs or CISs).
6 Initial Evaluation
Using Paper Trials, an initial user evaluation was conducted in April 2011 across all
three user communities, i.e., the Disaster Management, the Student and the Enterprise
communities. The primary objective of these trials was to record users’ responses to
early prototypes of initial scenarios and concepts, and how users’ experiences of these
prototypes conformed to the previously identified user requirements. These Paper
Trials were interpreted loosely as a user evaluation trial of low-fidelity prototypes. A
secondary objective was to engage with users to confirm or discover the opportunity
spaces for pervasive and social computing, “where there is no urgent problem to be
solved, but much potential to augment and enhance practice in new ways” [8].
The envisaged system posed a challenge that could not be served by traditional
paper prototyping alone, since it required prototypes for user evaluation that focused
on user activities, goals and contexts of use, with varied levels of detail, thus
conveying a range of CSS/CIS system interactions within the user domains that
were not necessarily focused on users manipulating device interfaces (i.e., pervasive
services working in the background for the benefit of their users). Therefore a specific
evaluation methodology was necessary.
The evaluation combined (i) storyboard viewings, which introduced the scenarios to
users and allowed them to directly answer questions of particular interest to the project
researchers, and (ii) participatory discussions that were facilitated after the storyboard
viewings, in the case of the Enterprise and Disaster Management communities, where
the storyboards provided springboards to openly discuss users’ reactions to the issues
and scenarios depicted within.
The Wizard of Oz method was used as a secondary method in the case of the
Student community. This method utilized a script based on university scenario
segments, with the project researchers playing the role of the envisioned system by
managing environmental and device responses to user activities and preferences in a
pervasive laboratory, which had been set up to stage an intelligent campus
environment. Participants answered questions posed during the experiment, which was
also videotaped. The experiment was designed to allow students to evaluate an
immersive experience of a social and pervasive environment.
7 Conclusions
The SOCIETIES project aims to investigate and address the gap between pervasive
and social computing by designing, implementing and evaluating an open, scalable
service architecture and platform. Based on a vision of the discovery, connection and
organisation of relevant people, resources and things into dynamically formed
pervasive communities, SOCIETIES attempts to bridge the domains of pervasive and
social computing in a unified platform allowing individuals to utilise pervasive
services in a community sphere.
This paper has presented the concepts and research methodologies adopted in the
SOCIETIES project towards the realization of Pervasive Communities, and in order to
assess whether real end users see value in engaging with such a system. The
overall feedback from our initial user evaluation study indicated strong support for
the concepts presented, albeit with a number of concerns expressed,
including trusting the system, controlling privacy and accepting automated decision
making. Based on these user concerns, and considering all the user feedback collected
from the initial trials, the technical requirements have been revised and the
SOCIETIES architecture has been adapted accordingly. Two more user trials have
been scheduled for 2012 and 2013, which will enable us to assess how successfully the
vision, concepts and results of SOCIETIES address the technological and user
acceptance gap between pervasive and social computing. The results achieved up to
this point and the user feedback already collected indicate that the SOCIETIES
platform can find its way into the portfolio of facilities that users exploit on a daily basis.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Saha, D., Mukherjee, A.: Pervasive computing: A paradigm for the 21st century. IEEE
Computer 36(3), 25–31 (2003)
2. Dasgupta, S.: Social Computing: Concepts, Methodologies, Tools, and Applications. IGI
Global (2010)
3. Muller, M.J.: Participatory Design: The third space in HCI. In: The Human-Computer
Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications,
pp. 1051–1068. Lawrence Erlbaum Associates, New Jersey (2002)
4. Wright, P., McCarthy, J.: Empathy and Experience in HCI. In: 26th Annual SIGCHI
Conference on Human Factors in Computing Systems, Florence, Italy (April 2008)
5. Grady, R.B., Caswell, D.L.: Software Metrics: Establishing a Company-Wide Program.
Prentice Hall, Englewood Cliffs (1987)
6. Castro, J., Kolp, M., Mylopoulos, J.: Towards Requirements-Driven Information Systems
Engineering: The Tropos Project. Information Systems 27(6), 365–389 (2002)
7. Osterwalder, A., Pigneur, Y.: Business Model Generation: A Handbook for Visionaries,
Game Changers, and Challengers. John Wiley & Sons, New Jersey (2010)
8. Hornecker, E., Halloran, J., Fitzpatrick, G., Weal, M., Millard, D., Michaelides, D.,
Cruickshank, D., Roure, D.D.: UbiComp in opportunity spaces: challenges for
participatory design. In: 9th Conference on Participatory Design, Trento, Italy (August
2006)
9. Erickson, T.: Notes on Design Practice: Stories and Prototypes as Catalysts for
Communication. In: Scenario-Based Design: Envisioning Work and Technology in System
Development. John Wiley & Sons, New York (1995)
10. Houde, S., Hill, C.: What do prototypes prototype? In: Handbook of Human-Computer
Interaction, 2nd edn., Elsevier Science B. V., Amsterdam (1997)
Cross-Disciplinary Lessons for the Future Internet
1 Introduction
The Internet has become an essential part of the infrastructure of modern life.
Relationships are managed online, commerce increasingly takes place online, media
content has moved online, television and entertainment are being delivered via the
Internet, and policy makers engage the public via programs such as Digital Britain
[1], the European Digital Agenda [2], and other worldwide initiatives. Efforts to
develop the so-called Future Internet (FI) will either follow as a logical extension of
what is in place now or produce something completely different [3].
At the same time as its underlying technology is evolving, the Internet is also
changing as a social and economic platform. Yet it is not clear how competing
interests should be balanced when technical, societal, economic and regulatory
concerns come into conflict. One view is that technology developers should develop
innovative technologies with little oversight and regulation so as not to stifle
creativity; social and regulatory concerns can be dealt with as they arise as a result of
use. A user-centric view, on the other hand, suggests that any FI must be designed
around social and economic concerns, with technology that supports values such as
inclusion, privacy, and democracy.
¹ SESERV (Socio-Economic Services for European Research Projects). See http://www.seserv.org
² The Future Internet – Public Private Partnership, http://www.fi-ppp.eu/
personal data to ‘improve’ the service; to classify and index data (including personal
relationship data) which allows the service to be further enhanced; to create
personalized advertising; and to provide information to businesses and governments,
for payment and/or to meet legal obligations.
The most successful Social Network Sites or online retailers are now among the
largest and most profitable businesses, and yet typically accept no responsibility for
user-generated content.³ Users can publish sensitive, sometimes scandalous
information about third parties, which is propagated freely by the service provider.
The victims have few protections and very limited recourse. They can ask the service
provider to remove the offending content after the fact, or sue the user who posted it
(if the service provider reveals their real identity, and that user falls under a
jurisdiction to which the victim has access).
The trend is towards an increase in asymmetry as service providers improve
exploitation and find new opportunities to capture personal data. Personal data is
increasingly available to the service provider and to other users, commercial customers
and government agencies. The risks from widespread disclosure - should the provider
be hacked or forced by government agencies to release information - are acute.
European privacy regulations provide little protection due to technical and jurisdictional
limitations; European service providers may therefore find it harder to compete.
Privacy clearly goes hand-in-hand with issues of security and trust. Therefore, one
could expect appropriate technical and procedural protection in support of users
online. To some degree, users may have unrealistic expectations of technical
provision for privacy. However, it is equally true that users themselves should be able
to make appropriate judgments about suitable protection and data management. Thus,
examining how users behave and wish to behave may help determine requirements.
³ Though this is not always the case; e.g., Italian law puts the onus on the service provider.
Large corporations employ legal services firms. Others, however, may not have access
to risk experts or be able to cope with security threats. Most medium and small
companies cannot afford to hire technical risk analysts, lawyers and other experts.
Similarly, domestic users will have to trust the information provided. Security could be
left to the market, with customers avoiding services that they find too risky. But the
laissez-faire of a completely free market is not enough to manage security risks. There
is a need for regulation, and one simple approach could be to force cloud service
providers to publish statistics about the health of their activities and their monthly
attacks, allowing for validation. Yet information about security is also very sensitive,
which means that service providers might not be willing to reveal these data. Hence
there is a need for transparent metrics for comparing ‘trustworthiness’, and for auditing
standards to ensure that what service providers publish is credible.
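To make the proposal concrete, a published security-statistics record and a derived comparison metric could look like the sketch below; all field names and the metric itself are hypothetical, intended only to show that such disclosures can be made machine-comparable and auditable.

```python
from dataclasses import dataclass

@dataclass
class SecurityReport:
    """Hypothetical monthly disclosure a cloud provider could publish."""
    provider: str
    month: str               # e.g. "2012-03"
    attacks_detected: int
    attacks_mitigated: int
    incidents_disclosed: int
    audited_by: str          # independent auditor attesting the figures

def mitigation_rate(report: SecurityReport) -> float:
    """A simple, comparable 'trustworthiness' metric derived from the report."""
    if report.attacks_detected == 0:
        return 1.0
    return report.attacks_mitigated / report.attacks_detected
```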
⁴ Possibly by extending senslets, http://www.inets.rwth-aachen.de/fileadmin/templates/images/PublicationPdfs/2008/Senslet_EuroSSC.pdf
Social media have grown rapidly: today nearly 4 out of 5 active Internet users visit
social networks and blogs [16]; 20% of online time is spent on social networking sites
(SNSs), up from 6% in 2007; and SNSs reach 82% of the world’s online population
[17]. Online communities center on how users interact with and exploit the range of
social networking applications (e.g., government, leisure and work). A critical success
factor is to maximize activity, which is mainly achieved irrespective of the purpose of
the communications. However, it is also necessary to comply with the applicable data
protection legislation in relation to responsibilities and individual actions (e.g.,
consent). Herein lies a contradiction: privacy compliance, often promoted as a means
to increase trust and hence participation, can also act as an inhibitor of greater activity.
Individuals use SNSs because they perceive the risk as low enough, while developing
an appetite for risk and increasing their participation regardless of the associated
regulation.
This leads to an interesting challenge for European service providers and research
projects: how to strike the balance between participation and privacy, if it is
desirable to monitor and mine data, without violating a citizen’s right to privacy. It is
unlikely that the successful paradigms of the last decade, social networking and
clouds, would have prospered had they been subject to the European regulatory
environment from the start. The try-it-and-see approach has led to a balance over
time: participants have explored their preferences iteratively. Social networking has in
fact been a large experiment in people’s appetite for privacy.
Online communities highlight the basic dichotomy: is it technology or society
which shapes the ICT future? The answer, for now at least, is that there is a real need to
back off from technology for technology’s sake and begin to take seriously how
communities are formed and what they do online. The focus would move towards
societal behaviours and away from technology, and would require appropriately skilled
cross-disciplinary researchers with an understanding of these communities and of what
makes online communities healthy and vibrant.
Elsewhere, SNS content (especially user profiles) is being synchronized live
across networks. What does this do for user control and user-centeredness? User-centric
platform-bridging applications with transparent filtering options can be
developed, so that users are able to manage and control sharing easily with their
online communities. Better tools in general are needed for managing online
communities, such as smaller community hubs that mirror the cognitive limit for
social relationships. There are both limitations and strengths to smaller online
communities.
The discussions in Section 2 yielded recurring strategies which suggest eight cross-
cutting resolutions to the socio-economic challenges identified.
A cross-cutting theme that emerged across several discussions (Online Identity and
Communities, the IoT, and Privacy) was a call for more balanced approaches in
design, avoiding dichotomized thinking. For example, there is a need for a balance
between identity as singular and stable (e.g., a passport) and identity as completely
fluid and dynamic. How identity is perceived has consequences for system design;
more nuanced views and multi-disciplinary insights suggest, for example, an identity
continuum from stable to dynamic. Similarly, design needs to balance bottom-up and
top-down approaches.
3.7 Need for Clarity about Digital Rights and Digital Choice
Some discussions (Privacy, Internet of Things and Online Communities) agreed on
the need to clarify digital rights and digital choices: what levels of anonymity should
be granted, to whom and in what context? In the case of eHealth, for example, there is
a need to balance an individual’s right to anonymity against appropriate access to
detect and tackle emerging health issues. Another question concerns the right to be
forgotten, i.e., to have information deleted. As stated, this might not apply to content of
historic or humanitarian value. Digital choice can be exemplified in relation to the
IoT, where off-line alternatives should be available.
There was consensus that the Digital Agenda is central to taking Europe forward
technologically as well as socially. Though it is too high-level and lacks global
relevance beyond the EU as an instrument for future strategy, technologists and social
scientists have much to contribute to the Digital Agenda, and vice versa.
5 Conclusions
This chapter has presented the views of social scientists and technologists working on
the FI. The community has developed possible future strategies and priorities. The
results represent a snapshot of the challenges facing those undertaking FI research.
There is no doubt that the FI ecosystem is an increasingly rich, diverse and complex
environment, and Challenge 1 projects are aware of societal concerns and challenges,
and of their potential resolution. In contrast, the Digital Agenda is not well understood
by technologists, and there is a gap between a set of high-level policies and incentives,
particularly focused on infrastructure and complex regulatory processes, and the users
of the technologies being developed. Regulations currently ignore some of the
concerns of citizens, and there is a disconnect between the ‘stakeholders’ of the FI and
the Digital Agenda. The European Commission needs to find a way to update the
Digital Agenda in response to the needs of a broad spectrum of people and
communities rather than focusing only on big companies or governments. For
instance, rural and remote regions, non-organized communities and even SMEs seem
to be under-represented in this policy aimed at 2020; different ‘soft’ design
mechanisms may help the Digital Agenda adapt to the social, political, educational,
labour, and environmental needs of the community. If the Digital Agenda is not
embedded in the principles of openness, adaptability, participation and transparency,
it is hard to see how it will succeed. Supporting technologists in their understanding
of the potential broader impacts of the FI and its adoption, through dialogue with
social scientists, must be central to this effort. To realize the benefits for the widest
possible range of stakeholders, there will need to be increasing engagement between
those who study and those who are building the Future Internet.
Acknowledgements. The SESERV project (FP7-2010-ICT-258138-CSA) is funded
by the European Commission. The authors would like to thank all participants in the
Oxford scientific workshop, and especially Ben Bashford, Ian Brown, Tony Fish,
Sandra Gonzalez-Bailon, Christopher Millard and Mike Surridge for facilitating.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Mandelson, P., Bradshaw, B.: Digital Britain: Final Report. Department for Business Innovation & Skills and Department for Culture, Media and Sport (2009), http://www.official-documents.gov.uk/document/cm76/7650/7650.pdf
2. European Commission: A Digital Agenda for Europe. European Commission (2010), http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2010:0245:REV1:EN:HTML
Design Principles for the Future Internet Architecture
Abstract. Design principles play a central role in the architecture of the Internet,
driving most engineering decisions at both the conception level and the operational
level. This paper is based on the results of the EC Future Internet Architecture
(FIArch) Group and identifies some of the design principles that we expect to govern
the future architecture of the Internet. We believe that it may serve as a starting
point and comparison basis for most research and development projects that target the
so-called Future Internet Architecture.
1 Introduction
Design principles play a central role in the architecture of the Internet, as they drive
most engineering decisions not only at the conception level but also at the operational
level. Many ICT systems do not consider design principles and derive their model
directly from requirements. When it comes to the design of the Internet, however, the
formulation of design principles is a fundamental characteristic of the process that
guides the design of its protocols. On the other hand, in searching for Internet
architectural principles, we must remember that technical change is continuous in the
information and communication technology industry. Indeed, as stated in RFC 1958 [1],
"Principles that seemed inviolable a few years ago are deprecated today. Principles
that seem sacred today will be deprecated tomorrow. The principle of constant
change is perhaps the only principle of the Internet that should survive indefinitely".
In this context, it is important to provide a detailed analysis of the application of
known design principles and their potential evolution.
This paper, based on the work accomplished within the EC Future Internet
Architecture (FIArch) group [2], identifies some of the design principles that we
expect to govern the future architecture of the Internet. It may serve as a starting point
and comparison basis for all research and development projects that target the
so-called Future Internet Architecture. This paper is structured as follows: Section 2
contains the definitions used in our analysis and gives the necessary background and
our understanding of the current design principles of the Internet; Section 3
summarizes the design principles that we expect to remain or evolve towards the
Future Internet; and Section 4 gives some seeds of new design principles.
2.1 Definitions
We define architecture as the set of functions, states, and objects/information together
with their behavior, structure, composition, relationships and spatio-temporal
distribution. The specification of the associated functional, object/informational and
state models leads to an architectural model comprising a set of components (i.e.,
procedures, data structures, state machines) and the characterization of their
interactions (i.e., messages, calls, events, etc.).
Design principles refer to agreed structural and behavioral rules on how a
designer/architect can best structure the various architectural components; they
describe the fundamental and time-invariant laws underlying an engineered artefact
(i.e., an object formed/produced by engineering). By “structural and behavioral
rules” we refer to the set of commonly accepted and agreed rules serving to guide,
control, or regulate a proper and acceptable structure of a system at design time and a
proper and acceptable behavior of a system at running time. Time invariance refers to
a system whose output does not depend explicitly on time (this time invariance is to
be understood within a given set of initial conditions, given technological change and
paradigm shifts, economic constraints, etc.).
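In system-theoretic terms, this time-invariance property can be written compactly (a standard textbook formulation, added here for illustration and not taken from the FIArch documents): for a system $S$ with input $x(t)$ and output $y(t)$,

\[
y(t) = S[x(t)] \;\Longrightarrow\; y(t - \tau) = S[x(t - \tau)] \qquad \text{for all shifts } \tau ,
\]

i.e., shifting the input in time only shifts the output by the same amount, within the stated set of initial conditions.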
We use the term data to refer to any organized group of bits, e.g., packets, traffic,
information, etc., and service to refer to any action or set of actions performed by a
provider in fulfillment of a request, which occurs through the Internet (i.e., by
exploiting data communication, as defined below) with the aim of creating and/or
providing added value or benefits to the requester(s). “Resource” is any fundamental
element (i.e., physical, logical or abstract) that can be identified.
This paper refers to communication as the exchange of data (including both control
messages and data) between a physical or logical source and sink, referred to as
communication end-points; when the end-points sit at the same physical or logical
functional level, the communication is qualified as “end-to-end”.
Security is a process of taking into account all major constraints, encompassing
robustness, confidentiality and integrity. Robustness is the degree to which a system
operates correctly in the presence of exceptional inputs or stressful environmental
conditions. Confidentiality is the property that ensures that information is accessible
only to those authorized to have access, and integrity includes both “data integrity”
and “system integrity”. The term complexity refers to the architectural complexity
(i.e., proportional to the needed number of components and interactions among
components) and the communication complexity (i.e., proportional to the number of
messages needed for proper operation). Finally, scalability refers to the ability of a
computational system to continue to function, without changes to the system and
within satisfactory and well-specified bounds (i.e., without affecting its performance),
when its input is changed in size, volume or rate.
different transmission media, and to offer a single platform for a wide variety of
information, infrastructure, applications and services.
• Loose Coupling principle: Coupling is the degree to which each architectural module
relies on each of the other modules [6]. Loose coupling defines a method for
interconnecting system components so that they depend on each other to the least
extent practicable. The extent of coupling in a system can be qualitatively measured
by noting the maximum number of element changes that can occur without adverse
effects. In today’s Internet design, “Modularity is good. If you can keep things
separate do so” [1]. The best example of loose coupling in the communication stack
is the decoupling between the applicative layers and the TCP/IP protocol. The loose
coupling principle is further refined in [3], which states that as things get larger, they
often exhibit increased interdependence between components. Much of the
non-linearity observed in large systems is due to the coupling of horizontal and/or
vertical components. Loose coupling minimizes unwanted interaction among system
elements but can also give rise to difficulty in maintaining synchronization among
diverse components when such interaction is desired.
• Locality principle: In computer science, this principle, which guides the design of
robust replacement algorithms, compiler code generators, and thrashing-proof systems,
is useful wherever there is an advantage in reducing the apparent distance from a
process to the information or data it accesses. It has been used in virtual memory
systems, processor caches, disk controller caches, storage hierarchies, network
interfaces, etc. We distinguish the principle of temporal locality (recently accessed
data and instructions are likely to be accessed in the near future) from spatial
locality (data and instructions close to recently accessed data and instructions are
likely to be accessed in the near future), leading to a combined principle of locality
whereby recently accessed data and instructions, and nearby data and instructions, are
likely to be accessed in the near future.
• The “end-to-end” and minimum intervention principle: End-to-end is one of the
fundamental principles on which the Internet has been structured and built, as it
guides the functional placement and the spatial distribution of functions across the
layers of the communication stack [7]. Following this principle, a function should
not be placed in the network if it can be placed at the end node (provided it can be
implemented “completely and correctly” in the end nodes, except for performance
enhancement), while the core of the network should provide a general connectivity
service. The end-to-end principle also has important consequences for
protocol design, which should not rely on the maintenance of state information
inside the network. The application of this principle, together with minimum
intervention (i.e., where possible, the payload should be transported as received,
without modification), results in a network that is transparent to the host
application communication and provides a general, application-agnostic
transport service (a minimal sketch illustrating this placement of functions follows
this list).
• Simplicity principle: This common-sense engineering principle, also expressed as the
KISS (“Keep it Simple, Stupid”) or “Occam’s Razor” principle, states that when
facing doubts or multiple choices in the design of, e.g., protocols and
intermediate systems, one should choose the simplest solution [1]. Adding functionality
or improving performance should not come at the price of increasing complexity.
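To illustrate the end-to-end and minimum intervention principles named in the list above, the sketch below places the integrity check entirely at the end hosts, while the in-network node merely relays the payload. It is a didactic toy under our own naming assumptions, not a protocol specification.

```python
import hashlib
from typing import Tuple

def send(payload: bytes) -> Tuple[bytes, str]:
    """End host (sender): integrity protection is added at the edge."""
    return payload, hashlib.sha256(payload).hexdigest()

def forward(message: Tuple[bytes, str]) -> Tuple[bytes, str]:
    """In-network node: minimum intervention -- the message is relayed
    as received, without inspection or modification."""
    return message

def receive(message: Tuple[bytes, str]) -> bytes:
    """End host (receiver): the check is implemented 'completely and
    correctly' in the end node, not inside the network."""
    payload, digest = message
    if hashlib.sha256(payload).hexdigest() != digest:
        raise ValueError("end-to-end integrity check failed")
    return payload
```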
In this section, we detail the design principles that should be preserved and applied to
the future architecture of the Internet; others should be adapted or augmented.
In this section we highlight design principles that apply to the current Internet
architecture but should be adapted to address the design objectives of the Internet [11].
• Simplicity principle: Complex systems are generally less reliable and flexible.
Architectural complexity dictates that, in order to increase reliability, it is
mandatory to minimize the number of components in a service delivery path (be it
a protocol, a piece of software, or a physical path). However, this principle has already
been challenged, as complex problems sometimes require more elaborate solutions,
and multidimensional problems such as the Internet architecture will be providing
non-trivial functionality in many respects. The general complexity problem can be
stated as follows: determine the placement and distribution of functionality that would
globally minimize the architectural complexity. In that respect, arbitrarily lowering
complexity (over space) might result in a local minimum that is globally
detrimental. Thus, when designing the Internet, the famous quote attributed to
A. Einstein may be adopted: "Everything should be made as simple as possible, but
not simpler". Although we have to recognize that this principle is still weakly
applied, together with the conclusion of Section 3.1, scalability and simplicity
should be handled as strongly interconnected, first-priority design principles.
• Minimum Intervention principle: This principle is critical to maintain and preserve
data integrity and to avoid unnecessary intermediate message or packet processing.
However, in some cases it may conflict with the simplicity principle, e.g., in
sensor networks, where communication gateways and actuators enable
communication between networks by offloading capabilities that would be costly
to support on sensors. As a result, we propose to relax the minimum intervention
principle as a design principle.
• Robustness principle: in order to increase robustness and system reliability, some
have advocated transforming this fundamental principle from "be liberal in what
you accept, and conservative in what you send" into "be conservative in what you
send and be even more conservative in what you accept from others". However,
adopting this approach would sacrifice a significant level of interoperability
between protocol implementations. Indeed, being liberal in what you accept is the
fundamental part that allows the Internet protocol to be extended. With the
anticipated architectural evolution of the Internet, another aspect of
interoperability will play a critical role: "how to change the engine of a plane
while flying". Moreover, we must account for the fact that the new engine can be
of a completely different nature than the one it replaces. There is no universal
operational principle telling how such a transition should best be performed;
nevertheless, it is possible to provide the minimal conditions the new system has
to support in order to facilitate this transition. This principle, however, leads to
relatively weak security. As stated in [1]: "It is highly desirable that Internet
carriers protect the privacy and authenticity of all traffic, but this is not a
requirement of the architecture. Confidentiality and authentication are the
responsibility of end users and must be implemented in the protocols used by the
end users". Hence, we argue that the principle should be adapted to incorporate a
structural self-protection principle (coordination of the local responses to external
intrusions and attacks, including traffic, data and service traceback, which would
in turn enforce accountability), and that confidentiality, integrity and
authentication should be inherently offered to information, applications and
services. Moreover, even if individual subsystems can be simple, the overall
system resulting from their complex interactions becomes sophisticated and
elaborate. Such systems are therefore prone to the emergence of nonlinearity
resulting from the coupling between components, i.e., positive feedback
(amplification) loops among and between subsystems and unending oscillations
from one state to another. It is possible to prevent the known amplification loops
and unstable conditions from occurring, but still impossible to anticipate and
proactively set the means to prevent all their possible occurrences. Under these
conditions, it is fundamental to prevent propagation and to ensure that each system
keeps its own choice as a last-resort decision, becoming "conservative in what it
accepts and adopts".
• Modularity Principle: Current communication systems are designed as a stack of
modules structured by static and invariant bindings between layers (modules) that
are specified at design time. After 30 years of evolution, communication stacks are
nowadays characterized by: i) the repetition of functionality across multiple layers,
such as monitoring modules repeated over multiple layers, and security
components each associated to a specific protocol sitting at a given layer (which
results in inconsistent responses to attacks); this emphasizes the need to define
common functional modules; ii) the proliferation of protocol variants (as part of
the same layer), all derived from a kernel of common functions/primitives, which
emphasizes the need to define generic modules; iii) the limited or even absent
capability of communication stacks to cope with the increasing variability and
uncertainty characterizing external events (resulting from the increasing
heterogeneity of the environments where communication systems proliferate); this
observation emphasizes that the functional and even performance objectives to be
met by communication systems could vary over time (thus, messages would be
processed by a variable sequence of functions determined at run time); and iv) the
inability to operate under increasingly variable running conditions resulting from
the increasing heterogeneity of the substrate on top of which communication
stacks actually perform. These observations lead us to reformulate the
modularization principle so as to i) consider functional modules connected by
realization relationships that supply their behavioral specification, ii) distinguish
between general and specialized modules, and iii) enable dynamic and variable
bindings between the different modules, such that the sequence of functions
performed is specified at run time (a minimal sketch of this idea is given below).
In turn, the application of the adapted principle allows designing systems with a
larger autonomy in diagnosing internal/external stimuli, but also in their decisions
and execution.
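To make the adapted modularity principle concrete, the following minimal Python sketch (all module names, context flags and the binding logic are our own illustrative assumptions, not part of any cited specification) shows a processing pipeline whose composition is decided at run time rather than fixed at design time:

```python
# Illustrative sketch: a communication stack whose processing pipeline is
# bound at run time rather than fixed at design time.
import zlib
from typing import Callable, Dict, List

Module = Callable[[bytes], bytes]

def monitor(payload: bytes) -> bytes:
    # A common functional module, defined once instead of repeated per layer.
    print(f"monitor: {len(payload)} bytes")
    return payload

def compress(payload: bytes) -> bytes:
    return zlib.compress(payload)

def encrypt(payload: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in payload)  # stand-in for a real cipher

def bind_pipeline(context: Dict[str, bool]) -> List[Module]:
    """Select the sequence of functions at run time from the current
    context, instead of a static, invariant binding between layers."""
    pipeline: List[Module] = [monitor]
    if context.get("link_is_slow"):
        pipeline.append(compress)
    if context.get("confidential"):
        pipeline.append(encrypt)
    return pipeline

def process(payload: bytes, context: Dict[str, bool]) -> bytes:
    for module in bind_pipeline(context):  # variable binding, per message
        payload = module(payload)
    return payload

process(b"hello future internet", {"link_is_slow": True, "confidential": True})
```

The point of the sketch is the `bind_pipeline` step: common modules such as monitoring are defined once and shared, and the sequence of functions applied to a message follows the current context instead of a static layer binding.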
and content). Realizing such an Internet architecture requires design principles that go
well beyond the networking and primitive services aspects.
In this section, we introduce seeds for completely new design principles that may
apply to the evolution of the Internet architecture. A seed for a new design principle
refers to a concept or a notion at the inception of a well-formulated design principle.
The term seed acknowledges that i) formulating principles is a complex exercise, ii)
research is still ongoing to prove their value, utility and impact (some of our analysis
and exploitation of research results may not be mature enough), and iii) the proposed
seeds may not flourish (many proposals come in and very few will materialize).
i) Services are not cognizant of end-user expectations and needs, especially for
mission-critical applications. Services are often static, lack flexibility and are not
negotiable. It is often left up to the users/clients to implement their own systems
to ensure the service performs as expected;
ii) Services operate on a "best-effort" basis. Moreover, services are often not
accountable towards the end-user;
iii) Services are modeled prior to their deployment in any environment and,
according to the aforementioned modeling, scalability rules and policies are
enforced during runtime. Nevertheless, given that infrastructures are
application-unaware, the enforced scalability rules and policies are not always
adequate to meet the application requirements in terms of efficiency,
performance, etc.; and
iv) Distributed dynamic environments ask for control policies able to deal
intelligently and autonomously with problems, emergent situations, tasks, and
other circumstances not necessarily envisaged at design time.
The design of the Future Internet must be imbued with the principle of dependability
(a reliability–accountability–verifiability feedback loop), including self-adaptation and
self-learning capabilities to cope with and learn from changes in the operating
conditions. However, enabling such capabilities shall not result in a monopolistic or
monolithic, proprietary architecture. In that respect, this principle ought to provide
means to avoid vertical integration with proprietary components. This critical element
is among the open research questions that remain unaddressed so far.
The Internet has evolved into a playground for different stakeholders, such as Internet
Service Providers (ISPs), Content Distribution Network (CDN) providers, end-users,
etc., and each stakeholder tries to optimize its own utilities (or, more generally,
benefits), e.g., ISPs to reduce inter-domain costs, CDNs to improve content routing,
users to benefit from different choices. The so-called information asymmetry between
the different stakeholders often leads the ecosystem to suboptimal performance.
Addressing the information asymmetry problem may allow stakeholders to make
alternative decisions that would lead them collectively to a more beneficial state.
Furthermore, the emerging Design for Choice principle seed suggests that Internet
technologies should be designed so that they allow variation in outcome, rather than
imposing a particular outcome [10]. The rationale behind it is that the Internet is a
rather unpredictable system and it is very difficult to assess whether a particular
outcome will remain desirable in the future. The exchange of information between
stakeholders implies a flow of information from one stakeholder to another, and its
"processing" by each stakeholder; the constituent capabilities of this principle
therefore include: i) the exposure of information to a stakeholder, ii) the
abstraction/aggregation of the information to be exchanged, iii) the collection of
information by a stakeholder, iv) the assessment of information by a stakeholder, and
v) the decision making.
5 Conclusion
evidences. Consequently, we believe that this work may serve as a starting point and
comparison basis for many research and development projects that target the Future
Internet Architecture. The results of these projects would in turn make it possible to
refine the formulation of the principles that will govern the design of the foundation
of a common architecture.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Carpenter, B.: Architectural Principles of the Internet. RFC 1958 (June 1996)
2. European Commission, Future Internet Reference Architecture Group, http://ec.europa.eu/information_society/activities/foi/docs/fiarchdesignprinciples-v1.pdf
3. Bush, R., Meyer, D.: Internet Architectural Guidelines. IETF, RFC 3439 (updates RFC
1958) (December 2002)
4. Postel, J. (ed.): Transmission Control Protocol. RFC 793 (September 1981)
5. Braden, R. (ed.): Requirements for Internet Hosts - Communication Layers. RFC 1122 (October 1989)
6. Stevens, W., Myers, G., Constantine, L.: Structured Design. IBM Systems Journal 13(2),
115–139 (1974)
7. Saltzer, J.H., Reed, D.P., Clark, D.D.: End-To-End Arguments in System Design. ACM
Transactions on Computer Systems 2(4), 277–288 (1984)
8. Lyons, P.A.: The End-End Principle and the Definition of Internet. Working Group on
Internet Governance (WGIG) (November 10, 2004)
9. Papadimitriou, D., Welzl, M., Scharf, M., Briscoe, B.: Open Research Issues in Internet Congestion Control. RFC 6077 (February 2011)
10. Clark, D.D., Wroclawski, J., Sollins, K.R., Braden, R.: Tussle in Cyberspace: Defining Tomorrow's Internet. IEEE/ACM Trans. Networking 13(3), 462–475 (2005)
11. Zahariadis, T., et al.: Towards a Future Internet Architecture. In: Domingue, J., et al. (eds.)
FIA 2011. LNCS, vol. 6656, pp. 7–18. Springer, Heidelberg (2011)
From Internet Architecture Research to Standards
1 Introduction
The Internet model based on TCP/IP has been driven since its inception by a small set
of design principles rather than derived from an architecture specification [1]. These
principles guided the structure and behavior of, as well as the relationships between,
the protocols designed for the Internet. Nowadays, within the Internet community,
some argue that changes should be carried out once a major architectural limit is
reached (theory of change) and thus that the architecture should be designed to enable
such changes. Others argue that as long as it works and is useful for the "majority", no
major changes should be made (theory of utility), as the objective is to preserve the
longevity of the design as much as possible. As a consequence of the theory of utility,
the evolution of the Internet is driven by incremental and reactive additions to its
protocols or, when these protocol extensions are not possible (without changing the
fundamental properties of existing Internet protocols), by complementing them by
means of overlay protocols. Nevertheless, this approach has already shown its limits.
For instance, the design of IP multicast as an IP routing overlay led to limited Internet-
wide deployment (even if some have argued that it only enables optimizing capacity
consumption without necessarily improving end-user utility). On the other hand,
Mobile IP (MIP), also designed as an IP network overlay, suffers from limited
deployment too, but it is undoubtedly an essential IP networking functionality to be
provided by the Internet.
In this paper, we argue that the debate between the theory of change and the theory
of utility is reaching its end. Indeed, the representative examples of design decisions
provided in Section 2 aim to explain that the architecture resulting from
d) Border Gateway Protocol (BGP): BGP has been designed to compute and maintain
Internet routes between administrative domains. Its route selection algorithm is
subject to the path exploration phenomenon: BGP routers may announce as valid
routes that are affected by a failure and that will be withdrawn shortly afterwards
during subsequent routing updates. This phenomenon is (one of) the main reasons for
the large number of routing update messages received by inter-domain routers. In
turn, path exploration exacerbates inter-domain routing system instability and
processing overhead. Both delay the convergence of BGP routing tables upon
topology or policy changes. Several mitigation mechanisms exist, but practice has
shown that the reactive (problem-driven) approach at the origin of the design of these
mechanisms does not allow evaluating their potential detrimental effects on the global
routing system.
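The following toy model (our own illustration in Python; it is not a real BGP implementation and abstracts away timers, neighbors and policies) conveys the intuition behind path exploration: after a failure, a router successively promotes less preferred routes that are themselves doomed, announcing each as valid before withdrawing it:

```python
# Toy model of BGP path exploration (illustrative only).

# Alternative AS paths towards a prefix P, ordered by decreasing preference.
rib = [
    ["AS2", "AS5"],                # best path
    ["AS3", "AS2", "AS5"],         # backup learned from another neighbor
    ["AS4", "AS3", "AS2", "AS5"],  # an even longer backup
]

failed_as = "AS5"  # the origin AS fails: every known path is invalid

messages = []
while rib:
    best = rib.pop(0)
    # The failure has not yet propagated along this alternative, so the
    # router briefly selects it and advertises it as a valid route.
    messages.append(("announce", best))
    if failed_as in best:
        messages.append(("withdraw", best))  # learned invalid shortly after

messages.append(("withdraw prefix P", []))

for kind, path in messages:
    print(kind, "/".join(path))
```

Each doomed alternative generates extra update messages before the final withdrawal; at Internet scale, this churn is precisely what delays the convergence of BGP routing tables.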
Observations: All these problems could have been avoided, or at least mitigated, if the
Internet were not relying on a minimalistic architectural model. Indeed, a systematic
architectural modeling of the system would have i) provided the various possible
design options from the beginning and ii) offered the protocol designer a framework
to reason about the role of each of these components and their interactions. Without an
architecture model, the components (in particular, the protocols) tend to be designed
independently, thus preventing any holistic approach at design time. Moreover,
independent component design does not provide sufficient conditions to achieve
global design objectives. For instance, one of the root causes of the Internet scaling
problem resides in the lack of modeling of the global routing system. Indeed, the main
choice when designing a routing protocol resides in the selection of the algorithm
performing route computation. However, as the routing system is not properly
modeled, the impacts of these design choices on the global routing system are almost
impossible to evaluate. In contrast, good engineering practice suggests first modeling
the Internet addressing and routing system by identifying its architectural components
and their relationships. Next, the algorithms for route computation can be designed
and their impact on the global routing system analyzed and evaluated by using the
architectural model. It is to be emphasized here that even if following a systematic
and holistic architectural approach does not dictate the "right" routing algorithm, this
approach can certainly help delimit what would constitute a suitable algorithm from a
functional and behavioral perspective.
What Can We Learn? The Internet architecture is implicitly defined by the
concatenation and the superposition of its protocols. In this context, architectural
components (in particular, the protocols) tend to be designed independently thus,
preventing any holistic approach at design time. Moreover, following the argument of
“utility”, the evolution of the TCP/IP model is mainly carried out by means of
incremental and reactive additions of features to existing protocols relying on a
reduced set of kernel functions. This approach has been used effectively so far but is
now progressively reaching objective limits, ranging from global functional
shortcomings and/or performance degradation to maintenance issues¹, which in turn
lead to serious and costly operational problems. Hence, independent component
design does not ensure the sufficient conditions to achieve global design objectives
and, even when these objectives are achieved, it can lead to detrimental effects.
Indeed, reasoning by protocols instead of thinking by functions will ineluctably lead
to duplicated functions, conflicting realizations of the same function, and unforeseen
interactions between functions that impact global system operation. The above
examples show that the argument of "utility" is no longer sufficiently compelling for
certain key functions such as mobility, congestion control and routing. On the other
hand, the theory of change cannot lead to any significant improvement since there is
actually no common architectural baseline (i.e., the replacement of an independent
component is unlikely to lead to a global architectural change, IPv6 being probably
the best example). This corroborates the need for conducting systematic and holistic
architectural work in order to provide a proper common architectural baseline for the
Internet.

¹ Note here that the replacement and/or addition of key architectural components is impossible
without changing the properties of the architecture.
The disparity of arguments regarding the research path to follow (change vs. utility)
results in maintaining the original Internet design foundations instead of starting
from the root causes: the progressive depletion of the foundational design principles
of the Internet. In this section, we argue that the research path to follow is no longer
limited to the selection of a trajectory but extends to the revision of the starting point,
as determined by these root causes. We contrast the main architectural methods so as
to derive a synthetic approach that challenges these foundations.
Design principles refer to a set of agreed structural and behavioral rules on how an
architect/designer can best structure the various architectural components; they
describe the fundamental and time-invariant laws underlying the working of an
engineered artefact. These principles are the cornerstone of the Internet design,
compared to architectures that rely exclusively on modeling. They play a central role
in the architecture of the Internet by driving most engineering decisions, at conception
time but also at the operational level. When it comes to the design of the Internet, the
formulation of design principles is a fundamental characteristic of the design process
that guides the specification of the design model. On the other hand, commonly
shared design principles define necessary (but not sufficient) conditions to ensure that
objectives are met by the Internet.
Due to their importance, several initiatives have been launched over the last decade
to study the evolution of the design principles. Among others, the FIArch initiative
has undertaken a systematic analysis of the Internet design principles and their
expected evolution [4]. Analytical work on design principles documents the most
common design principles of the Internet and puts them in perspective with the design
of Internet protocols and their evolution. These studies aim to identify and
characterize the different design principles that would govern the architecture of the
Future Internet.
mention OpenFlow and virtualization, but also the more recent software-defined/-
driven networks (SDN).
Third Path to Architectural Research: Following these observations, we argue that
architectural research should follow a "third path" instead of focusing on observable
consequences (theory of utility) or on its premises (theory of change). This path starts
by identifying the actual root causes, i.e., the progressive depletion of the foundational
design principles of the Internet, and by acknowledging the need for a common
architectural foundation relying on a revision of these principles. Indeed, without
strong motivations to adapt or to complement the current set of design principles, it is
unlikely that the current architectural model of the Internet (the TCP/IP model) would
undergo significant change(s). If such evidence remains unidentified, the
accommodation of new needs, either in terms of functionality or performance, will
simply be realized by the well-known engineering practices residing outside the scope
of genuine architectural research work. A representative example is provided by the
evolution of the Internet communication stack, which leads to reconsidering the
modularization principle. This principle structures, at design time, the communication
stacks as a linear sequence of modules related by static and invariant bindings.
Indeed, when these stacks were developed, CPU and memory were scarce resources,
and the specialization of communication stacks for computer networks led to a
uniform design optimizing the cost/performance ratio at design time. After 30 years of
evolution, communication stacks are characterized by: i) the repetition of
functionality across multiple layers, such as monitoring modules repeated over
multiple layers (which then requires recombining information in order for it to be
semantically interpretable) and security components each associated to a specific
protocol sitting at a given layer (which results in inconsistent responses to attacks);
this emphasizes the need to define common functional modules; ii) the proliferation
of protocol variants (as part of the same layer), all derived from a kernel of common
functions/primitives, which emphasizes the need to define generic modules; iii) the
limited or even absent capability of communication stacks to cope with the increasing
variability and uncertainty characterizing external events (resulting from the
increasing heterogeneity of the environments where communication systems
proliferate); this observation emphasizes that the functional and even performance
objectives to be met by communication systems could vary over time (thus, messages
would be processed by a variable sequence of functions determined at run time); and
iv) the inability to operate under increasingly variable running conditions resulting
from the increasing heterogeneity of the substrate on top of which communication
stacks perform. Altogether, these observations lead to reformulating the
modularization principle in order to i) connect functional modules by realization
relationships that supply their behavioral specification, ii) distinguish between general
and specialized modules (inheritance), and iii) enable dynamic and variable bindings
between the various modules, such that the sequence of functions performed is
determined at run time. In turn, the newly formulated principle provides the means to,
e.g., ensure coordinated monitoring operations and account for all security constraints
(comprising robustness, confidentiality and integrity) consistently across all functions
performed by the communicating entities.
• Top-down method (see Figure 1): starts (using knowledge from research results) by
defining the global architectural level with generic and common specifications,
including function, information and state (step_1). These elements are then
specialized in order to fit the needs of the domain(s) to which they apply (step_2).
By specialization we mean here the profiling of certain functions and/or
information while keeping the generic properties associated to the global level.
Finally, these specialized elements are translated into system-level components
(step_3). The challenge here consists in specifying these components from the top
so as to produce appropriate building blocks (e.g., protocol components).
• Bottom-up method (see Figure 2): starts by exploiting research results and
positioning them as either global (network-level) or local (system-level). In most
cases, the corresponding elements are specialized, since they are realized in order
to reach architectural objectives that are domain-specific. The challenge with this
method then consists in deriving from this set the common and generic
components underlying the architecture. Once these are identified, the result of
this step is fed back in order to align the specification of the global (network-level)
or local (system-level) specific elements. Note that there are no actual steps in this
method, which is characterized by iterative cycles of updates between generic and
specialized specifications.
4 Standardization Aspects
5 Conclusion
In this paper, we argue that the debate between architectural research driven by the
application of the theory of utility and the theory of change is over. Indeed, neither of
these approaches can fundamentally address the limits of the Internet architecture.
Instead, we propose that architectural research should follow a "third path" starting by
identifying the actual root causes (by adapting the foundational design principles of
the Internet such as the modularization principle) and by acknowledging the need for
a holistic architecture (instead of (re-)designing protocols independently and
expecting that their combination would lead to a consistent architecture at running
time). The proposed path will in turn also partially impact how the necessary
standardization work is to be organized and conducted. Indeed, the design principles
and the modeling part of the architecture need to be standardized to ensure their
adoption at the international level. Following this path, the chartering of new work
items to define, e.g., a new protocol, will need to be not only "problem-driven" but
also "architecture-driven". It is also anticipated that, as a result of the current wave of
Future Internet research projects, pre-standardization work will become more and
more relevant, with a mix of architecture- and technology-driven work items. As
such, this is an opportunity, since this nascent pre-standardization ecosystem can be
seen as a laboratory in which to learn how to introduce an "architecture-driven"
dimension into the Internet standardization working method.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Carpenter, B.: Architectural Principles of the Internet. IETF, RFC 1958 (June 1996)
2. European Commission, Future Internet Reference Architecture Group (FIArch), http://ec.europa.eu/information_society/activities/foi/research/fiarch/
3. Papadimitriou, D., Sales, B.: A path towards strong architectural foundation for the
internet design. In: 2nd Future Network Technologies Workshop, ETSI, Sophia Antipolis,
France, September 26-27 (2011)
4. FIArch Group Report v1.0. Future Internet Architecture-Design Principles (January 2012)
(in press)
5. Clark, D.: Designs that last. In: FP7 EIFFEL Think Tank, Athens, Greece, October 6-7
(2009)
6. Zahariadis, T., et al.: Towards a Future Internet Architecture. In: Domingue, J., et al. (eds.)
FIA 2011. LNCS, vol. 6656, pp. 7–18. Springer, Heidelberg (2011)
An Integrated Development and Runtime
Environment for the Future Internet
1 Context
Raising the Future Internet Challenges. The Future Internet (FI) context draws
a global environment populated with a plethora of services. Such services are
related to two - commonly identified by many FI initiatives - key FI dimensions,
the Internet of (traditional) Services and the Internet of Things. The latter di-
mension is expected to considerably change the way we perceive the Internet
today, by incorporating in it vast populations of physical objects or, from an-
other viewpoint, sensors and actuators linking to the physical world. We take
this SOA view of the FI one step forward by advocating choreographies of ser-
vices i.e., compositions of peer interacting services as the primary architectural
solution for leveraging and sustaining the richness and complexity of the FI. In
this context, three key challenges, namely, scalability, heterogeneity, and aware-
ness are raised. As already pointed out, the large scale of today’s Internet be-
comes ultra large scale (ULS) in the FI, in terms of numbers of devices, services,
The Requirements Specification Tools are mainly responsible for enabling domain
experts to specify functional and quality requirements on services and service-based
applications and, in turn, for enabling the domain expert to produce a first draft
choreography specification. First, the Specification Expressing Tool and DataBase
provide the domain expert with service consumer requirements and associated
attributes. The service consumer specifies requirements using a structured approach
facilitated by mobile tools, such as an iPhone application. There can be many service
consumers with many user needs. The expressed requirements are recorded in a
database along with attributes for quality, priority and situation. Associated with the
requirements is a quality model, which relates the user requirements on service-based
applications to the QoS of the services aggregated in these applications. Second, the
Requirements Management and Analysis Tool provides the domain expert with
requirements management and analysis functions. These functions help the domain
expert pull out individual requirements in order to form a set of requirements for a
choreography. Third, the Requirements Engine executes a matching and grouping
algorithm to cluster the requirements expressed by service consumers and domain
experts. A 'calculate similarity' algorithm enables comparing requirements for
similarity using natural language processing techniques (a sketch of such a step is
given below); the output of this component is a set of grouped requirements for
choreographies. Fourth, the Matching Tool and User Task Model Database are
responsible for matching the requirements on the choreography specification to user
task models using a matching tool. A set of CTT (ConcurTaskTrees) task models,
describing structured activities that are often executed during the interaction with a
system, is defined and stored in a database. Finally, the prioritized quality-based
requirements and user task models are associated with choreography strategies,
which are expressed in the form of patterns by the choreography designer. The final
output of this process is a first draft choreography specification and a set of
associated requirements to inform the discovery of abstract services.
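As an illustration of the kind of similarity-based grouping the Requirements Engine performs, the following sketch (our own hypothetical example, not the actual CHOReOS algorithm; the requirement texts and threshold are invented) clusters textually similar requirements using TF-IDF vectors and a cosine-similarity threshold:

```python
# Hypothetical sketch of a 'calculate similarity' step: group textually
# similar consumer requirements via TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The taxi booking service must respond within two seconds",
    "Booking a taxi shall take at most two seconds",
    "Payment data must be encrypted in transit",
    "All payment information has to be transmitted encrypted",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(requirements)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.3  # illustrative value; a real tool would tune or learn this
groups, assigned = [], set()
for i in range(len(requirements)):
    if i in assigned:
        continue
    group = [j for j in range(len(requirements))
             if similarity[i, j] >= THRESHOLD and j not in assigned]
    assigned.update(group)
    groups.append([requirements[j] for j in group])

for g in groups:
    print(g)  # each group feeds one set of requirements for a choreography
```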
set of discovered services. The first input comes from the refinement of the CTT
models and choreography patterns (and hence the first draft choreography
specification discussed in Section 3.1). The second comes from the exploitation of
the service base management mechanisms described in Section 4.1. Thus, the
synthesis process assumes that the services in the registry/base have been discovered
such that they satisfy the local (to the service) functional and non-functional
requirements specified for the choreography and, hence, can be considered as
potential candidates to participate in the global choreography process. Finally, the
choreography synthesis produces the coordination delegates that will then be
managed by the service composition engine for the choreography realization
purposes presented in Section 4.2, accessing the participant services through the
service access subsystem presented in Section 4.3.
of coordination delegates using BPEL, while the SCA-based XSC supports the
implementation of coordination delegates using SCA. In a complementary way, the
Thing-based Composition & Estimation component deals with the composition of
Thing-based services to handle requests for interacting with the physical world.
While enacting a choreography, some services may not respect the initially
contracted agreements, and choreography reconfigurations then need to be
performed. To that end, the CHOReOS XSC also relies on a reconfiguration and
substitution mechanism.
ULS choreographies bring into play a very large number of services, users and
resources employing the system for different purposes. Therefore, methodologies
and approaches that permit the smooth integration of independently developed
pieces of software need to be implemented. In IT systems, the governance approach
enables supervising such large systems: a set of processes, rules, policies,
mechanisms of control, enforcement policies, and best practices is put in place
throughout the life-cycle of services and choreographies in order to ensure the
successful achievement of the SOA implementation. Activities such as policy
definition, auditing and monitoring, and finally evaluation and validation are
recommended. Within CHOReOS, we implement a Governance and V&V
Framework (see Figure 4) that underlies the service and choreography lifecycle.
More precisely, the Service Level Agreement (SLA) and lifecycle management deals
with the lifecycle of relevant resources such as services, service level agreements,
and choreographies. Further, the V&V Components perform the testing of services
before their involvement in choreographies; online testing of services and
choreographies at runtime is also performed. Finally, the Test-Driven Development
(TDD) Framework operates a series of complementary tests.
6 Conclusion
The FI world challenges SOA by raising scalability, distribution and heterogeneity
issues. The CHOReOS project addresses these issues by providing responses at
several levels. The CHOReOS Integrated Development and Runtime Environment
gathers top-level technological SOA approaches, including model-driven
architectures, SOA discovery, SOA composition and SOA governance. In this
chapter, we have presented the CHOReOS platform as well as its main components.
Ongoing work concerns the realization of the IDRE in ULS choreography use cases.
Open Access. This article is distributed under the terms of the Creative Commons
Attribution Noncommercial License which permits any noncommercial use, distribu-
tion, and reproduction in any medium, provided the original author(s) and source are
credited.
Visual Analytics: Towards Intelligent Interactive
Internet and Security Solutions
Abstract. In the Future Internet, Big Data can be found not only in the
amount of traffic, logs or alerts of the network infrastructure, but also
on the content side. While the term Big Data refers to the increase in
available data, it implicitly means that we must deal with problems at a
larger scale, and thus hints at scalability issues in the analysis of such
data sets. Visual Analytics is an enabling technology that offers new
ways of extracting information from Big Data through intelligent,
interactive internet and security solutions. It derives its effectiveness
both from scalable analysis algorithms, which allow the processing of
large data sets, and from scalable visualizations. These visualizations
take advantage of human background knowledge and pattern detection
capabilities to find as yet unknown patterns, to detect trends and to
relate these findings to a holistic view of the problems. Besides
discussing the origins of Visual Analytics, this paper presents concrete
examples of how the two facets of the Future Internet, content and
infrastructure, can benefit from Visual Analytics. In conclusion, it is the
confluence of both technologies that will open up new opportunities for
businesses, e-governance and the public.
1 Introduction
We live in a world that faces a rapidly increasing amount of data. Today, in
virtually every branch of commerce and industry, within administrative and
legislative bodies, in scientific organisations and even in private households vast
amounts of data are generated. In the last four decades, we have witnessed a
steady improvement in data storage technologies as well as improvements in
the means for the creation and collection of data. Indeed, the possibilities for
the collection of data have increased at a faster rate than our ability to store
them [4]. It is little wonder that the buzzword Big Data is now omnipresent. In
most applications, data in itself has no value. It is the information contained in
the data which is relevant and valuable.
The data overload problem refers to the danger of getting lost in data, which
may be: 1. irrelevant for the current task, 2. processed in an inappropriate way, or
3. presented in an inappropriate way. In many application areas success depends
on the right information being available at the right time. The acquisition of raw
data is no longer a problem: it is the lack of methods and models that can turn
data into reliable and comprehensible information.
Visual Analytics aims at turning the data overload problem into an opportunity.
Its goal is to make the analysis of data transparent for an analytic discourse
by combining the strengths of human and electronic data processing. Visualisation
becomes the medium of a semi-automated analytical process, where humans
and machines cooperate using their distinct, complementary capabilities to
obtain the best results. The user has the ultimate authority in determining the
direction of the analysis. At the same time, the system provides the user with
effective means for interaction. Visual Analytics research is interdisciplinary,
combining visualisation, data mining, data management, cognition science and
other research areas. By fusing the research efforts from these fields, novel and
effective analysis tools can be developed to solve the data overload problem.
In this position paper we postulate that Visual Analytics will play a key role
in the Future Internet. We consider two facets of the Future Internet: content
and infrastructure. Both facets are characterised by vast and growing amounts
of data, including the following examples: 1. Vast amounts of user-generated
content exist in private and public networks. 2. The new trend towards open
data means that ever more administrations and NGOs are making their data
available online. 3. Simulations of new architectural concepts for the Internet
generate vast amounts of data. 4. Huge repositories of network and security
data exist and are growing.
Visual Analytics researchers are already developing techniques to address the
data overload problem. Thus, we believe that these technologies can make a
significant contribution to the success of the Future Internet. With the help of
Visual Analytics, the creators and users of the Future Internet will be able to
turn data overload from a problem into an opportunity.
The rest of this article is structured as follows: Sect. 2 provides an introduction
to Visual Analytics. In the subsequent two sections, an overview of the current
and potential uses of Visual Analytics in the Future Internet is presented. In
Sect. 3 we focus on content analysis and in Sect. 4 on analysis for the improvement
and protection of network infrastructure. We close with a conclusion and
outlook in Sect. 5.
Data Mining was also born in the 1990s from the need to explore and analyse
large amounts of data. The field was first formalised in the book Knowledge
Discovery in Databases (KDD) in 1991 [17]. The so-called KDD pipeline shown
in Fig. 2 was defined in a subsequent book in 1996 [5].
In a broad sense, data mining involves the use of statistical and machine-learning
techniques to discover patterns in large data sets. Data mining tasks include the
characterization or description of data subsets, the mining of rules describing
associations or correlations, classification or regression for predictive analysis,
cluster analysis and outlier analysis [9]. Initially, these techniques focused on
relational database management systems. However, the field has developed to
include techniques for the analysis of a great variety of data sources, including
text collections, video, image and spatio-temporal data.
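As a small, self-contained illustration of two of these tasks, the following Python sketch (with synthetic data of our own) performs cluster analysis with k-means and then flags outliers as points unusually far from their cluster centre:

```python
# Cluster analysis and outlier analysis on a toy two-dimensional data set.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two dense groups of points plus one far-away anomalous point.
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(50, 2)),
    [[20.0, 20.0]],
])

# Cluster analysis: partition the observations into two groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

# Outlier analysis: flag points unusually far from their cluster centre.
centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])
distances = np.linalg.norm(data - centres[labels], axis=1)
outliers = np.where(distances > distances.mean() + 3 * distances.std())[0]
print("outlier indices:", outliers)  # expected: the injected point (index 100)
```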
In 2009 and 2010 a Coordination Action named VisMaster, funded by the
European Commission, set out to establish a Visual Analytics research community
in Europe. The primary result of the project was a research roadmap entitled
Mastering the Information Age [11]. The established community has continued
its work after the project. Its main channel for the dissemination and coordination
of community activities is the European Visual Analytics website.¹
(e.g., D3 [2] or Polymaps²) suggests that visualization will play a greater role in
the near future. In addition, the number of organisations publishing their data
online is growing. As a result, new opportunities arise for linking and exploring
these open data sets in the Future Internet with the help of Visual Analytics.
During the last decade the visualization community began creating interactive
web visualizations that empower the public to investigate open data sets on their
own. One of the most successful approaches has been IBM's ManyEyes platform
[20], which allows users to upload data, visualize it either statically or
interactively, and then facilitates discussions about findings within the user
community. Besides well-known charts, such as scatter plots, bar, line and pie
charts, the platform features more advanced visualizations, such as tree maps,
stacked graphs and bubble charts, and a number of textual visualizations, such
as word clouds, phrase nets and word trees. Newer web visualization tools, such
as Google's Public Data Explorer³ and Tableau Public⁴, extend both the
accessibility of data and the diversity of available web visualization tools.
While web visualization tools for open data have already started to emerge,
the combination of visualization and data mining tools in Visual Analytics
applications is not yet available for the web. However, we expect such applications
to emerge in a new wave of Visual Analytics frameworks and tools for the web.
Smart Cities are characterized by their competitiveness in the areas of smart
economy, smart mobility, smart environment, smart people, smart living and smart
governance [8]. While strengths in each of these areas have strong links to the
historic development of cities, technological advancements such as the Future
Internet or Visual Analytics can play a role in boosting their competitiveness.
As an example, Visual Analytics applications such as the one detailed in the
study [1] can significantly empower the analysis of traffic conditions (e.g. traffic
jams) using GPS data from a sample of the total vehicle population within the
city. Future Internet technologies not only play a role in the data collection
infrastructure (Internet of Things), but also in the propagation of analysis
results to commuting citizens. However, Visual Analytics is required to turn the
large and complex mobility data into useful information.
Smart governance can be enhanced through the combination of Visual Analytics
and Future Internet technologies by analysing available data in the detailed
geographic context of the city. MacEachren et al. [15], for example, created a
Visual Analytics tool that takes advantage of a geo-tagged Twitter stream for
the assessment of situational awareness in application scenarios ranging from
² http://polymaps.org/
³ http://www.google.com/publicdata/
⁴ http://www.tableausoftware.com/public
The Internet is full of unstructured, but often interlinked, data that could
potentially be valuable if processed and presented in a meaningful way. However,
issues of data processing (e.g., data quality or entity recognition) and
representation (e.g., usability or scalability) turn such efforts into challenging
undertakings, and only very focused approaches have so far succeeded.
Fig. 4, for example, shows a Visual Analytics system for the analysis of online
news [14] collected by the Europe Media Monitor⁵. Text clustering is used to
extract stories from the news articles and to detect merging and splitting events
in such stories. Special care is taken to minimize clutter and overlap from edge
crossings while allowing for incremental updates. Besides the main entity and
daily keywords for each story, the figure shows a list of emerging stories at the
top and a list of disappearing stories at the bottom of the screen.
Text mining can be useful for the automatic extraction of opinions or sentiments
from user-generated content. While this data is valuable in itself, making sense
of a large collection of results can be supported using visualization, as
demonstrated in the study of Kisilevich et al. [13] dealing with photo comments.
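To give a flavour of such opinion extraction, the following toy, lexicon-based scorer (an illustrative sketch only; the cited studies rely on far richer natural language processing) assigns a sentiment to short comments:

```python
# Toy lexicon-based sentiment scorer for short user-generated texts.
POSITIVE = {"great", "beautiful", "love", "awesome", "nice"}
NEGATIVE = {"bad", "ugly", "hate", "boring", "blurry"}

def sentiment(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comments = [
    "What a beautiful shot, I love the colors!",
    "Too blurry and the framing is bad.",
    "Taken last summer in Rome.",
]
for comment in comments:
    print(sentiment(comment), "-", comment)
```

Aggregated over thousands of comments, such scores produce exactly the kind of data set that the visualizations discussed above help to make sense of.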
In summary, the use of Visual Analytics in the Future Internet for the analysis
of text and news data can lead to innovative web applications. However, the
unstructured nature and the linguistic intricacies of processing large collections
of possibly very short texts (e.g. Twitter postings), generated by a multitude of
people in several languages, can impose significant challenges on the processing side.
Fig. 4. Visual Analytics for news story development. Stories are extracted from online
news articles and visualized over several days. Distinct stories about “Omar Suleiman”
and “Tahrir Square” partly merge on the 9th of February. On the 10th of February a
linked story involving the “White House” emerges.
The NOMAD project⁷ aims to provide politicians with tools to draw on non-
moderated sources in the Social Web for policy appraisal. A focus will be
laid on the presentation of arguments drawn from relevant constituencies for and
against policy decisions. The FUPOL project⁸ aims to combine simulations of
the effects of policy decisions with information drawn from the Social Web, as
well as crowd-sourcing techniques. FUPOL will target domains such as sustainable
development, urban planning, land use, urban segregation and migration.
While most interactive Visual Analytics applications currently run as stand-alone
applications, we believe that in the near future these applications will not only
take advantage of the open and public data available on the web, but also move
towards client-based applications running in modern web browsers. Furthermore,
we are convinced that data linkage, text mining and modern data management
approaches will open up new opportunities for the inclusion of Visual Analytics
in Future Internet technologies. This is further supported by the fact that
streaming text data visualization (cf. [18]) is currently a hot topic in the
visualization and Visual Analytics research community.
⁷ http://www.nomad-project.eu
⁸ http://www.fupol.eu
and traffic data which is characteristic of the field. In particular, we use traffic
patterns which are common for signature-based intrusion detection systems and
one day of network traffic statistics (NetFlows) from the main gateway of a
medium-sized university, which amounts to approximately 10 GB of raw data.
Fig. 5 shows the visual output of an analysis with NFlowVis. After having
selected suspicious hosts from the intrusion detection system, their network traffic
to all hosts in the internal network is retrieved from a database and visualized.
While automatic intrusion detection systems output many alerts in a large
network, the visualization supports the analyst in the difficult task of correlating
these alerts with each other and placing them in context. In this particular
case, we chose an SSH traffic pattern and visualized a number of external hosts
matching this traffic pattern.
Before visualizing the information, the system first clusters the external hosts
(potential attackers) and then places them on the nearest border in such a way
that a) hosts with similar traffic patterns appear next to each other and b)
preferably short splines are drawn to connect the dots of the external hosts
and the rectangles representing their internal communication partners. Color
encodes the first byte of the IP address of the external host in such a way that
attackers from nearby network prefixes are drawn in a similar color. This helps
to judge whether the attack is conducted from a particular network or from hosts
distributed all over the Internet.
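The color encoding described here can be illustrated with a few lines of Python (our own re-creation, not the actual NFlowVis code): the first byte of an external IPv4 address is mapped onto a hue scale so that hosts from nearby prefixes receive similar colors:

```python
# Illustrative re-creation of the color encoding: map the first byte of an
# IPv4 address onto a hue so nearby prefixes get similar colors.
import colorsys

def host_color(ip: str) -> tuple:
    """Map the first byte of an IPv4 address (0-255) to an RGB color."""
    first_byte = int(ip.split(".")[0])
    hue = first_byte / 256.0  # nearby prefixes -> nearby hues
    return colorsys.hsv_to_rgb(hue, 0.8, 0.9)

for ip in ["61.135.2.10", "61.140.77.3", "200.17.4.9"]:
    r, g, b = host_color(ip)
    print(ip, f"#{int(r * 255):02x}{int(g * 255):02x}{int(b * 255):02x}")
# The two 61.x hosts receive nearly identical colors, hinting that they come
# from the same region of the address space.
```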
Drawing straight connecting lines results in a lot of visual clutter. To reduce
this clutter, the lines are grouped by exploiting the structure of the underlying
hierarchical visualization of the /24 prefixes. As a result, the analyst can easily
identify the pattern of the distributed attack on the upper right of Fig. 5, which
details a number of external hosts targeting the same subset of internal hosts
in the university network. A more detailed analysis revealed that all attacking
hosts contacted 47 hosts and thereby consciously avoided a common threshold of
an automatic intrusion detection system. The visual output furthermore shows
scanning activity of individual hosts on the lower left and top right of Fig. 5. We
assume that scanning activity first identified candidate victims in the network
and that the botnet then used this information to target this subset of hosts in
the university network, since the number of attacked hosts per subnet varies.
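The following hypothetical sketch (with invented flow records) illustrates why per-source thresholds miss such a distributed attack, and how aggregating flows by target set, which is essentially what the visualization lets the analyst do by eye, exposes the coordination:

```python
# Each attacker contacts fewer targets than the per-source alert threshold,
# but grouping sources by the set of hosts they contacted exposes the botnet.
from collections import defaultdict

THRESHOLD = 50  # per-source alert threshold of a conventional IDS

# (source, destination) flow records; each source stays under the threshold.
flows = [(f"attacker{a}", f"10.0.0.{d}") for a in range(8) for d in range(47)]
flows += [("10.1.2.3", "10.0.0.1"), ("10.1.2.3", "10.0.0.2")]  # benign noise

targets = defaultdict(set)
for src, dst in flows:
    targets[src].add(dst)

# Per-source view: nobody crosses the threshold, so no alert fires.
print(any(len(t) >= THRESHOLD for t in targets.values()))  # False

# Aggregated view: group sources by the set of hosts they contacted.
by_target_set = defaultdict(list)
for src, tset in targets.items():
    by_target_set[frozenset(tset)].append(src)

for tset, sources in by_target_set.items():
    if len(sources) > 3 and len(tset) > 10:  # many sources, same victims
        print(f"coordinated scan: {len(sources)} sources x {len(tset)} targets")
```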
Currently, the VIS-SENSE project⁹, funded by the European Commission, is
applying Visual Analytics techniques to large, network-security-related data sets.
The project focuses on the strategic analysis of spam, malware and malicious
websites. In addition, the misuse of the Border Gateway Protocol for criminal
activities will be analysed.
5 Conclusion
In this article we have presented an introduction to Visual Analytics and its
relevance for the Future Internet. We considered the two facets of content and
infrastructure, both of which are characterized by vast and growing amounts of data.
With respect to content in the Future Internet, we have shown that emerging
data visualization platforms for the web derive their value from the relevance
of the data that is analysed with them. Since more and more open and public
data becomes available every day, it is only a matter of time before existing
visualization platforms hit scalability limits – due to the data overload problems
at hand – and need to include automated data analysis functionality. While the
analysis of the abundance of text and news available in modern media like Twitter
imposes significant challenges, working on these problems can have drastic
effects on the development of countries, regions and smart cities. We are thus
convinced that targeted research in Visual Analytics can revolutionize the way
in which we interact with content in the Future Internet.
Besides its potential for content, Visual Analytics can play an important role
in the network infrastructure of the Future Internet. Due to the amount of data
available from networking devices, the inherent complexity of the network and
the need to immediately react to failures or attacks, visual and computational
support for tasks in this domain can significantly improve infrastructure planning
and testing, as well as network monitoring and security. We conclude that
strengthening the connection between Visual Analytics and the Future Internet
will enable us to build a more secure, reliable and scalable network.
The examples presented show how Visual Analytics is already contributing
solutions to the data overload problem in the Future Internet. Thus we are
convinced that the confluence of both technologies has enormous potential for
use in the business, administrative and private spheres.
Open Access. This article is distributed under the terms of the Creative Commons
Attribution Noncommercial License which permits any noncommercial use, distribu-
tion, and reproduction in any medium, provided the original author(s) and source are
credited.
References
1. Andrienko, G.L., Andrienko, N.V., Hurter, C., Rinzivillo, S., Wrobel, S.: From movement tracks through events to places: Extracting and characterizing significant places from mobility data. In: IEEE Conference on Visual Analytics Science and Technology (VAST 2011), pp. 161–170 (2011)
2. Bostock, M., Ogievetsky, V., Heer, J.: D3: Data-driven documents. IEEE Transactions on Visualization and Computer Graphics 17(12), 2301–2309 (2011)
3. Card, S.K., Mackinlay, J.D., Shneiderman, B.: Readings in information visualization: using vision to think. Morgan Kaufmann Publishers Inc., San Francisco (1999)
4. Cukier, K.: Data, data everywhere: A special report on managing information. The
Economist 1(1), 14 (2010)
SAP Research
805, Av. du Docteur Maurice Donat
06250 Mougins, France
[email protected]
1 Introduction
The marketplace metaphor is increasingly pervasive in today's digital economy.
A software marketplace is a virtual place where software providers can advertise
their "apps" or services, and customers can browse and buy them; software
marketplaces offer a centralized application distribution mechanism that
immediately reaches many potential customers all over the world. Marketplaces
dedicated to specific devices or operating environments are nowadays proliferating,
and they represent a valuable business opportunity for software vendors. In many
cases, as for the Apple Store [4], Windows Marketplace [17], or the Amazon Kindle
Store [1], they are evolving to become gateways to entire ecosystems, with a
potential audience of millions.
Similarly to apps, services can leverage the marketplace distribution channel.
Services relieve consumers from the burden of acquiring and managing their
own operational infrastructure, on top of the benefits of component-based
software [24]. Nowadays, following the SaaS (Software-as-a-Service) model, services
are more and more commonly consumed as "commoditized" pieces of functionality,
and are extensively adopted as a means to increase flexibility and to optimise
IT expenditure.
The very nature of the model aims at simplifying software consumption by insulating consumers from the complexity of deployment, operation, and management of the software. However, in the process, important information about the quality of the software is not clearly reported to consumers, raising a relevant challenge with regard to consumers' trust in software providers. In addition, the centralized nature of most software marketplaces results in "one size fits all" security checks, which are not appropriate for many security-critical applications, typically characterized by domain-specific requirements.
We believe that addressing these challenges is key to the success of the future Internet of Services, especially with respect to services that are considered highly valuable, sensitive, or critical, or that operate in the context of serious applications. Two key factors can contribute to this: the availability of a more detailed description of the security features of services, and the possibility of including additional guarantees on the quality of security mechanisms, provided by established, domain-specific security experts (security certifications).
It is crucial that this information be provided to service consumers (human or software agents) in a machine-readable form, such that they can check directly and just in time which specific security features are provided, what assurance they can get from a software product, and how this assurance is provided. In this paper we introduce the concept of a trustworthy Service Marketplace (SM) that is suitable for hosting a larger class of security- and business-critical services and service compositions, for both businesses and end-users.
The remainder of this paper is organised as follows: Section 2 contains an overview of the state of the art in software marketplaces; Section 3 details the major challenges to be addressed towards a trustworthy SM, with particular attention to the limitations of current security certification schemes; Section 4 presents our approach to tackling these challenges, while Section 5 illustrates the vision of a trustworthy SM. Finally, Section 6 concludes the chapter.
Before introducing the concept of trustworthy SMs, we analyse the state of the art in software marketplaces and their relevant security checks. We focus mainly on mobile software markets, as they provide a large user base and are the
subject of many studies. This section is composed of two parts: we review the main marketplace approaches to security, and then we focus on the security checks performed when an application or service is to be admitted to the marketplace (the vetting process).
This means that the security requirements for a service depend significantly on the application domain, the application context, and the business context (intended usage). Hence, the security properties that a service provides should be evaluated, and consequently certified, by specialized entities that have the required domain- and application-specific knowledge. The lack of assurance on the security of services is one of the key reasons for consumers' trust deficit in such services [18]. Security certification of services can bridge this trust deficit by providing the required assurance on a service's security. Though current security certification schemes are successful in providing assurance for monolithic software systems, they suffer from severe limitations when applied in a service environment, due to economic and technological factors.
In addition, the stakeholders and consumption models of current certification schemes are tailored to monolithic software; hence, current schemes are inadequate for providing security assurance in a service environment.
Some of the shortcomings of current certification schemes have conceptual reasons. Schemes such as Common Criteria are intentionally designed to be flexible and generic, in order to be able to certify different products ranging from software and firmware to hardware [25]. However, this prevents these schemes from being prescriptive, and so comparing certificates of different products becomes complex.
In addition, current certification schemes are structured to cater to software provisioning paradigms in which the consumer has control over the operation and execution of the product. However, in the service-oriented computing paradigm, the consumer has control over neither the operational environment nor the execution environment.
Another limitation is that applying current schemes in practice is a very expensive and time-consuming process, often requiring years even for medium-level security assurance [25]. This is a major obstacle for services, where time-to-market can be a critical factor for success. Schemes such as Common Criteria allow a lightweight certification, but it yields only very low assurance. Moreover, especially at lower assurance levels, the evaluation focuses more on the accompanying documentation (architecture, design, process-related documents, etc.) or on the security processes followed than on the actual implementation of the product.
The certification process and the results of the evaluation are captured in a human-readable form that does not allow automated reasoning and processing. This is one of the major challenges hampering the application of current security certification schemes to service marketplaces, where the security requirements of consumers must be easily matched against the security properties of the services.
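To make the matching problem concrete, the following is a minimal sketch of how machine-readable security goals could be matched against consumer requirements. It is an illustration only, not the ASSERT4SOA or USDL-Sec implementation; all class and field names are assumptions.

import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

/* Illustrative security goals, following the SecurityGoal enumeration
   used later in this chapter; these Java types are assumptions. */
enum SecurityGoal { ACCOUNTABILITY, ANONYMITY, AUTHENTICATION, CONFIDENTIALITY }

/* A hypothetical machine-readable view of a certified service description. */
class ServiceProfile {
    final String serviceId;
    final Set<SecurityGoal> certifiedGoals; // goals backed by a certificate
    ServiceProfile(String id, Set<SecurityGoal> goals) {
        serviceId = id;
        certifiedGoals = goals;
    }
}

public class ProfileMatcher {
    /* Keeps only the services whose certified goals cover all requirements. */
    static List<ServiceProfile> match(List<ServiceProfile> candidates,
                                      Set<SecurityGoal> required) {
        List<ServiceProfile> result = new ArrayList<ServiceProfile>();
        for (ServiceProfile p : candidates)
            if (p.certifiedGoals.containsAll(required))
                result.add(p);
        return result;
    }

    public static void main(String[] args) {
        List<ServiceProfile> services = new ArrayList<ServiceProfile>();
        services.add(new ServiceProfile("PPLService",
                EnumSet.of(SecurityGoal.CONFIDENTIALITY, SecurityGoal.AUTHENTICATION)));
        services.add(new ServiceProfile("PlainService",
                EnumSet.noneOf(SecurityGoal.class)));
        // A consumer requiring confidentiality keeps only PPLService.
        System.out.println(match(services,
                EnumSet.of(SecurityGoal.CONFIDENTIALITY)).size()); // prints 1
    }
}

A realistic matcher would of course reason over certified evidence and ontology relations between goals (e.g. subsumption), rather than plain set inclusion.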
profile illustrates the security goals of the service (Privacy and Authorization);
it also indicates the security mechanisms and technologies adopted to meet the
security goals (Obligation and PPL Language in one case, AccessControl and
XACML in the other).
[Figure: excerpt of the security description model, in which a usdl:ServiceDescription is linked to a SecurityProfile (a ServiceWithSecurityFeatures) whose SecurityFeatures are annotated with SecurityGoal enumeration values such as Accountability, Anonymity, Authentication, and Confidentiality.]
:PPLService a usdl:Service ;
    sec:hasSecurityProfile <#usdlSecDHGESecurityProfile> .
6 Conclusions
Trustworthy Service Marketplaces can represent a key factor in opening new market perspectives for the future Internet of Services, especially with respect to sensitive, critical services and service compositions. Trustworthy SMs will serve all their stakeholders with advanced and more secure services, as well as with transparent and evidence-based vetting processes. They will enable refined service discovery operations in marketplaces, also according to specific security requirements. Candidate services will then be presented to users, along with their security certificates and evidence. In this way, a customer can evaluate each alternative according to her specific operational scenario. Trustworthy SMs could set certain security thresholds, such that a minimal security standard must be met by any of their advertised elements. To sustain this vision, new technologies and standards are in development: digitally consumable service descriptions covering business, technical, security and contextual aspects (USDL/USDL-Sec in FI-WARE), and new assessment and certification methodologies, as well as digitally consumable certificates (ASSERT4SOA). Relying on the assumptions and constraints expressed, more functionality will follow, for instance support for secure service composition, through analysis of services' security requirements and prerequisites, and secure deployment of services. We believe that trustworthy SMs can increase trust and confidence in Internet-based systems, thus enabling even more sensitive operations to take place in a secure, reliable and effective way.
Open Access. This article is distributed under the terms of the Creative Commons
Attribution Noncommercial License which permits any noncommercial use, distribu-
tion, and reproduction in any medium, provided the original author(s) and source are
credited.
References
1. Amazon. Kindle, http://www.amazon.com/kindle-store-ebooks-newspapers-blogs
2. Anisetti, M., Ardagna, C.A., Guida, F., Gürgens, S., Lotz, V., Maña, A., Pandolfo,
C., Pazzaglia, J.-C.R., Pujol, G., Spanoudakis, G.: ASSERT4SOA: Toward Secu-
rity Certification of Service-Oriented Applications. In: Meersman, R., Dillon, T.,
Herrero, P. (eds.) OTM 2010. LNCS, vol. 6428, pp. 38–40. Springer, Heidelberg
(2010)
3. Apple inc. FCCs answers, http://www.apple.com/hotnews/apple-answers-fcc-questions/
4. Apple inc. Official apple online store, http://store.apple.com/us
5. Barrera, D., van Oorschot, P.: Secure software installation on smartphones. IEEE
Security & Privacy 99, 1 (2010)
6. Bezzi, M., Sabetta, A., Spanoudakis, G.: An architecture for certification-aware
service discovery. In: Proc. of IWSSC (co-located with NSS 2011) (2011)
7. Cantor, S., Kemp, I., Philpott, N., Maler, E.: Assertions and protocols for the oasis
security assertion markup language. OASIS Standard (March 2005)
8. OASIS Web Services Security Committee: OASIS Web Services Security (WSS) TC,
http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss
9. Doraswamy, N., Harkins, D.: IPSec: the new security standard for the Internet,
intranets, and virtual private networks. Prentice Hall (2003)
10. Gilbert, P., Chun, B., Cox, L., Jung, J.: Vision: automated security validation of
mobile apps at app markets. In: Proceedings of the Second International Workshop
on Mobile Cloud Computing and Services, pp. 21–26 (2011)
11. Google inc. Evaluate a marketplace app’s security, https://support.google.com
12. Herzog, A., Shahmehri, N., Duma, C.: An ontology of information security. Inter-
national Journal of Information Security 1(4), 1–23 (2007)
13. Martin, D., Burstein, M., Hobbs, J., Lassila, O., McDermott, D., McIlraith, S.,
Narayanan, S., Paolucci, M., Parsia, B., Payne, T., et al.: OWL-S: semantic markup
for web services. W3C Member Submission 22, 200704 (2004)
14. McDaniel, P., Enck, W.: Not so great expectations: Why application markets
haven’t failed security. IEEE Security & Privacy 8(5), 76–78 (2010)
15. Microsoft inc. Market, http://msdn.microsoft.com/en-us/library/gg490776.aspx
16. Microsoft inc. Windows azure: Terms of use, https://datamarket.azure.com/terms
17. Microsoft inc. Windows marketplace, http://www.windowsphone.com/marketplace
18. Nasuni. Security and control are greatest concerns preventing enterprises from
adopting cloud storage, http://www.nasuni.com/news/press_releases/
19. Nokia. Nokia ovi store content guidelines, http://support.publish.nokia.com
20. Nokia. Packaging and signing, http://www.developer.nokia.com/
21. Pedrinaci, C., Leidig, T.: Linked-USDL, http://linked-usdl.org/ns/usdl-core
22. RIM inc. BlackBerry app world, http://us.blackberry.com/developers/appworld/
23. Salesforce. Security review, http://wiki.developerforce.com/page/Security_Review
24. Szyperski, C., Gruntz, D., Murer, S.: Component software: beyond object-oriented
programming. Addison-Wesley Professional (2002)
25. Zhou, C., Ramacciotti, S.: Common criteria: Its limitations and advice on improve-
ment. Information Systems Security Association ISSA Journal, 24–28 (2011)
Using Future Internet Infrastructure
and Smartphones for Mobility Trace Acquisition
and Social Interactions Monitoring
1 Introduction
Experimentation in the field of the Internet-of-Things has grown to encompass enormous infrastructure sizes, heterogeneous pools of resources, and a large breadth of application scenarios. Research projects such as WISEBED [1] and SmartSantander [2] exemplify these advancements, demonstrating the use of large-scale federated testbeds, diverse application scenarios, and enormous-scale deployment and operation in urban settings. However, certain aspects of current technology and application trends have not been dealt with effectively; namely, the use of smartphones in combination with IoT infrastructure and, on the application side, themes related to human mobility and social networking. Instead, the currently utilised application scenarios revolve
1 FET'11, The European Future Technologies Conference and Exhibition.
less around human activity and more around monitoring environmental parameters; opening up to additional possibilities with regard to IoT experimentation should provide further insight into the Future Internet.
On the one hand, smartphones are increasingly getting closer to the Internet-of-Things, encompassing an impressive range of integrated sensors: accelerometers, cameras, gyroscopes, microphones, thermistors, proximity sensors, etc., while also adopting novel technologies like Near Field Communication (NFC). The latest smartphone operating systems also offer enough flexibility for adding external sensing units directly or communicating with them wirelessly. Furthermore, additional functionality and more potent hardware are bridging the gap in capabilities with traditional computing systems.
On the other hand, inferring social and contextual interactions has direct and important applications in our daily lives, uncovering fundamental patterns in social dynamics and coordinated human activity. Deriving accurate models of human activity [4] is of great importance both to the social sciences and to computer scientists dealing with studies and simulations of mobile users; real-world data can aid tremendously in this direction, since they provide a realistic means to (re)produce, fine-tune and empirically validate models that attempt to capture the characteristics of human mobility in a variety of environments and situations. Similarly, recording the daily activity of elders at home using sensors can produce patterns that may help in providing a better quality of life for them. RFID deployments inside a university or enterprise building can reveal communication patterns among students and faculty over time, helping in understanding (in)efficiencies in that respect. The proliferation of smartphones can also aid in delivering similar results [3]. Finally, an interesting issue is to capture, in a qualitative and quantitative manner, the characteristics of meetings, conferences and gatherings where large numbers of people from different backgrounds, disciplines and interests congregate and cooperate.
Therefore, we believe that there is currently a need to add the following per-
spectives to the Future Internet research agenda and develop:
– architectures and systems for combined experimentation using smartphones
and Internet-of-Things devices,
– techniques for sensor-based behaviour profiling and models of behaviour,
– tools that exploit cross-correlations of behavioural profiles of an individual
user and across user groups in order to gain new insights and utilise them
in selected services and applications of high socio-economic value.
We envisage a domain of Future Internet applications that become possible by utilizing semantically rich information derived from real-world mobility and presence traces. Such applications can have as their main focus performing statistical analysis and providing reports on collected trace data, inferring possible interactions among the monitored population. Others can analyse the trace data and publish results while the data are still being gathered. Additional applications could use the trace data to predict the future behavior of the observed population, or even extend the results to larger populations. We also consider applications that combine a subset or all of the above functionalities, providing reports on collected data, generating real-time content in parallel with the trace-gathering process, and predicting the behavior of the monitored population.
Moreover, people in cities work in enterprises, offices, etc., spending consid-
erable time inside such environments. Capturing the collaborative, social and
physical behaviour in an organizational context is a critical foundation for man-
aging information, people, and ICT. E.g., customers can be segmented on the
basis of common or similar patterns along multiple behavioural feature dimen-
sions such as frequency of face-to-face contacts, commonality of location and
similarities in movement patterns, as well as commonness in network and ser-
vice use. According to information richness theory, face-to-face interactions are the richest and most effective medium in daily interactions. They can provide higher-quality clues about social relationships than co-presence indications, leading to better predictive models of user behaviour. These models
can be utilised for improving current mobility models of mobile subscribers or
consumer models in mobile commerce environments. Furthermore, personalised
content streaming, satisfying customer needs and further pushing business ac-
tivity could be possible by utilising such social networking knowledge, location
awareness and recorded data. Additional examples of such applications are a
smart mall application that can adaptively push product advertisements and
personalised bargain offers to potential customers that move within its premises
and a smart conference scenario, whereby interaction statistics and a presence
heatmap are generated periodically and reported.
Related to such concepts, we discuss here a system for monitoring large groups of users using a combination of static and mobile IoT infrastructure, targeting multiple application domains that become possible or are considerably enhanced by analyzing the inferred interactions along space, time and social dimensions, and furthermore by exploiting the prediction of future behavior and contacts for individuals or groups of people with common social attributes. Moreover, one should consider our approach in light of the Future Internet vision and current trends such as crowdsourcing and social computing; we expect such enablers to unlock the potential of the Internet-of-Things, since computing is rapidly becoming an integral part of our society. Future systems will orchestrate myriads of nodes, web services, business processes, people and institutions; inferring social interactions is needed to support such a Future Internet vision.
We applied our system in two scenarios, an office building and a large conference setting (FET'11), and the results show definite potential in our approach. We present our architecture and current implementation, along with technical issues related to our design choices. Alongside the monitoring and archiving functionality of the system, we additionally offer online statistics for various features. The proposed solution considers detection of human interactions and preferences by exploiting Internet-of-Things infrastructures and novel middle-layer mechanisms. We believe that applications built by adopting the proposed methodology can bring innovation capabilities to a wide range of application domains, such as Smart Cities and Smart Organizations.
that correspond to the presence of people who agreed to participate and carry mobile detectable devices, after submitting a registration form. In addition to the participation consent, the registration form may request optional information regarding personal attributes of a participant, which will be used to infer behavioral patterns for groups of people who share common attributes. The application layer also includes an automated mechanism that posts links to interesting results, with a short description, on a Twitter account, which end-users can follow in order to receive updates about the dynamics of the participants' interactions.
Our system architecture was designed with an emphasis on scalability, ease of deployment and simplicity in participation requirements, as well as the ability for people to register online, even after the monitoring deployment has launched. Such flexibility is usually absent in related systems, both in terms of adding users online and in terms of modifying the supporting infrastructure while maintaining overall system stability. The distributed nature of our system also results in an easier and faster installation phase.
Another basic consideration was respecting the privacy and self-reported data of the participants, and deleting "external" traces belonging to devices not registered for the particular deployments. Privacy concerns of the participants were addressed by anonymising the collected data. Since privacy issues should not be perceived by the participants themselves as an afterthought, all were informed before and during the experiments about the data collection aspects of their participation, the future availability of the produced anonymised data sets, and our conformance to the related legislation (EU directive 95/46/EC). At the same time, users had control over the software components running on their smartphones and could opt to turn them off at any time.
By utilizing Bluetooth networking, which is supported by the vast majority of mobile phones in use today, certain advantages were evident: participants are only required to carry a personal device with them, the collected trace data can be delivered in real time, and the infrastructure is cheap to purchase and maintain. Moreover, Bluetooth allows for greater localisation accuracy compared to WiFi, due to its more limited range. It is also easier and safer to set up and operate, due to the inherent features of Bluetooth's design.
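The chapter does not include gateway code; the sketch below shows the kind of periodic Bluetooth device inquiry a base station could run, assuming a JSR-82 (javax.bluetooth) implementation such as BlueCove. It is an illustration under those assumptions, not the deployed software.

import javax.bluetooth.*;

/* Sketch of a gateway-side inquiry loop that logs discoverable devices. */
public class PresenceScanner implements DiscoveryListener {

    public void deviceDiscovered(RemoteDevice device, DeviceClass cod) {
        // A real gateway would anonymise the address before reporting upstream.
        System.out.println("sighting " + device.getBluetoothAddress()
                + " at " + System.currentTimeMillis());
    }

    public void inquiryCompleted(int discType) {
        synchronized (this) { notify(); }   // wake the scan loop
    }

    // Unused here: this sketch performs device, not service, discovery.
    public void servicesDiscovered(int transID, ServiceRecord[] records) { }
    public void serviceSearchCompleted(int transID, int respCode) { }

    public static void main(String[] args) throws Exception {
        PresenceScanner scanner = new PresenceScanner();
        DiscoveryAgent agent = LocalDevice.getLocalDevice().getDiscoveryAgent();
        while (true) {                       // one inquiry per cycle
            synchronized (scanner) {
                agent.startInquiry(DiscoveryAgent.GIAC, scanner);
                scanner.wait();              // block until inquiryCompleted()
            }
        }
    }
}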
We deployed our system twice: a) inside our institute building, with 23 Bluetooth-capable base stations distributed over 3 building levels, monitoring for 27 hours (9am-6pm, 3 days); b) at a large-scale conference event, with 36 base stations (12 of them mobile), for 27 hours. A total of 103 participants across both events carried their mobile phones with Bluetooth switched on and set to be discoverable.
We describe here the main characteristics of the discussed deployments.
CTI Deployment: In essence, a building-scale IoT infrastructure was used to monitor interactions between co-workers and/or different enterprise departments, in order to infer, both online and over time, intraconnections and interconnections within such entities. This kind of knowledge could give further insight for optimizing business processes, re-organizing hierarchical structures or re-establishing connections through, e.g., reimplementing certain standard procedures or changing the actual physical locations of specific people or departments. CTI's staff consists of a number of research teams and administrative/support staff, each housed in discrete parts of the CTI building. Moreover, CTI is situated in a 5-floor building, with the thick walls and steel doors of each floor sector providing isolation in terms of wireless communication between adjacent parts. This was an advantage in determining the position of participants inside the building more accurately. Setting up the system in 23 different building rooms required 4 hours of work from 3 members of our team. Bluetooth-enabled gateways were used in all rooms, powered on for the whole duration of the experiment, monitoring all Bluetooth networking activity and reporting to the system, as defined in Section 2. The duration of the experiment prohibited the use of battery-powered gateways, since we wanted the infrastructure to operate largely unattended. The layout of the building also helped confine the activity of people interested only in communicating within their own group, making the activity of persons behaving as "hubs" between different groups more visible. It is interesting to note that we monitored physical presence, and thus interaction in physical space. As discussed in the next section, this reflects the structure of the institute quite accurately.
FET'11 Deployment: In the second set of experiments (conference), a number of weeks after our initial deployment, we tested our system in a less controlled environment. The performance fine-tuning after the first set of experiments allowed us to scale the system even further. Since this was a larger-scale deployment under harsher conditions, we used a larger team of five people to set up and operate the infrastructure. The setup was completed within 4 hours on the day before the opening of the conference (FET'11) to the public. In contrast with the CTI deployment, the networking isolation offered by walls and doors was not available in this case, making it more difficult to determine the location of participants. Furthermore, we implemented additional components for providing results and statistics, e.g., posting the latest information about booth popularity on Twitter and other social networks. Conference participants showed interest in the statistics, even though they were produced with some minutes of delay. The statistics produced included information such as the top 10 most popular booths and the booths where visitors spent the most time, among others. Apart from visitors, exhibitors also showed interest in statistics about their booth and in indicators of how their exhibits fared against others.
Fig. 1. (a) Rising number of nodes and average degree reflects the users' gradual involvement. (b) The general features characterizing the graph can be captured in relatively little time in both scenarios.
explicitly mobility habits among sections of the institute. Finally, in Fig. 2(a) we also show the intensity of interaction among research units.
In the second set of experiments (FET conference setting), we observed slightly different behaviors. While Fig. 1(a) reveals a similarly dense network for this case, and the diameter of the network is again found to be 3, Fig. 1(b) depicts a greater tendency to form a clique, as the clustering coefficient and network density are large (larger than in the CTI deployment). Network centralization is quite low (lower than CTI), while network heterogeneity is again average. Fig. 3(b) depicts interactions of the participants among groups of different scientific backgrounds. Fig. 3(a) presents the number of distinct users who attended each of the 10 most popular booths on each day of FET'11, while Fig. 4 depicts the average time spent by each scientific-background group in each booth. Such information was delivered almost online, i.e., with a latency of about 5 minutes, and can be used to assess overall tendencies in such an event and to deliver useful statistics to both participants and organisers. Overall, the statistics delivered could reveal "hidden" trends and synergies between different scientific fields, which could otherwise be difficult to recognise.
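As an illustration of the statistics discussed above, the following toy sketch computes the average degree and density of an undirected contact graph; it is not the analysis code used in the deployments, and all names are illustrative.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/* Toy contact graph with two of the global statistics discussed above. */
public class ContactGraph {
    private final Map<String, Set<String>> adj = new HashMap<String, Set<String>>();

    /* Records an undirected proximity contact between two participants. */
    public void addContact(String a, String b) {
        neighbours(a).add(b);
        neighbours(b).add(a);
    }

    private Set<String> neighbours(String v) {
        Set<String> n = adj.get(v);
        if (n == null) { n = new HashSet<String>(); adj.put(v, n); }
        return n;
    }

    /* Average degree: twice the number of edges divided by the node count. */
    public double averageDegree() {
        int sum = 0;
        for (Set<String> n : adj.values()) sum += n.size();
        return (double) sum / adj.size();
    }

    /* Density: fraction of all possible edges that are present. */
    public double density() {
        return averageDegree() / (adj.size() - 1);
    }

    public static void main(String[] args) {
        ContactGraph g = new ContactGraph();
        g.addContact("u1", "u2");
        g.addContact("u2", "u3");
        System.out.printf("average degree %.2f, density %.2f%n",
                g.averageDegree(), g.density());   // 1.33, 0.67
    }
}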
Fig. 2. Interactions among (a) the various groups in CTI - the results reflect largely the
hierarchical structure and actual cooperation patterns among groups, (b) interaction
matrices reveal strong cooperation among participants’ own groups at all times, with
varying degree during different times of the day
Fig. 3. (a) Number of distinct users per booth per day, (b) various groups of partici-
pants in FET’11 with different scientific background
Fig. 4. Average Time (min) each Scientific Background group spent in each booth
References
1. Coulson, G., et al.: Flexible experimentation in wireless sensor networks. Commu-
nications of the ACM (CACM) 55(1), 82–90 (2012)
2. SmartSantander project, http://smartsantander.eu
3. Miluzzo, E., et al.: Darwin Phones: the Evolution of Sensing and Inference on
Mobile Phones. In: MobiSys 2010, pp. 5–20 (2010)
4. Eagle, N., Pentland, A.: Reality Mining: Sensing Complex Social Systems. In: Per-
sonal and Ubiquitous Computing, pp. 255–268 (May 2006)
5. Borovoy, R., et al.: Meme tags and community mirrors: moving from conferences
to collaboration. In: CSCW 1998, pp. 159–168 (1998)
6. Hui, P., et al.: Pocket switched networks and human mobility in conference envi-
ronments. In: WDTN 2005, pp. 244–251 (2005)
7. Nordstrom, E., Diot, C., Gass, R., Gunningberg, P.: Experiences from measuring
human mobility using Bluetooth inquiring devices. In: MobiEval 2007, pp. 15–20
(2007)
8. Nicolai, T., Yoneki, E., Behrens, N., Kenn, H.: Exploring Social Context with the
Wireless Rope. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTM 2006 Workshops,
part I. LNCS, vol. 4277, pp. 874–883. Springer, Heidelberg (2006)
9. Natarajan, A., Motani, M., Srinivasan, V.: Understanding Urban Interactions from
Bluetooth Phone Contact Traces. In: Uhlig, S., Papagiannaki, K., Bonaventure, O.
(eds.) PAM 2007. LNCS, vol. 4427, pp. 115–124. Springer, Heidelberg (2007)
10. Chaintreau, A., Hui, P., Crowcroft, J., Diot, C., Gass, R., Scott, J.: Impact of
Human Mobility on Opportunistic Forwarding Algorithms. IEEE Transactions on
Mobile Computing 6(6), 606–620 (2007)
11. SocioPatterns, http://www.sociopatterns.org
12. Brockmann, D., Hufnagel, L., Geisel, T.: The scaling laws of human travel. Na-
ture 439(7075), 462–465 (2006)
13. Gonzalez, M., Hidalgo, C., Barabasi, A.-L.: Understanding individual human mo-
bility patterns. Nature 453(7196), 779–782 (2008)
14. Song, C., Koren, T., Wang, P., Barabasi, A.-L.: Modelling the scaling properties
of human mobility. Nature Physics (2010)
15. Eagle, N., Pentland, A.: Eigenbehaviors: Identifying structure in routine. Behav-
ioral Ecology and Sociobiology 63(7), 1057–1066 (2009)
16. Liben-Nowell, D., Kleinberg, J.M.: The link prediction problem for social networks.
In: CIKM 2003, pp. 556–559 (2003)
17. Wang, D., Pedreschi, D., Song, C., Giannotti, F., Barabasi, A.-L.: Human Mobility, Social Ties, and Link Prediction. In: KDD 2011 (2011)
I-SEARCH: A Unified Framework for Multimodal Search and Retrieval
1 Introduction
The current Internet (CI) was developed 30 years ago to serve research demands (host-to-host communications). However, it obviously cannot be used today with the same efficiency, since new, demanding applications are emerging. The number of Internet users, as well as the available multimedia content of any type, is increasing exponentially. Moreover, the increase in user-generated multimedia content and in the number of mobile users will raise new challenges. In this direction, the Future Internet (FI) aims to overcome current limitations and address emerging trends, including: network architectures, content and service mobility, diffusion of heterogeneous nodes and devices, mass digitisation, new forms of user-generated (multimodal) content
While the problem of retrieving a single modality at a time, such as 3D objects, images, video or audio, has been extensively covered, retrieval of multiple modalities simultaneously (multimodal retrieval) has yet to yield significant results. In [10], the intra- and inter-media correlations of text, image and audio modalities are investigated in order to produce a Multi-modality Laplacian Eigenmaps Semantic Subspace (MLESS). In [11], a structure called Multimedia Document (MMD) is introduced to define a set of multimedia objects (images, audio and text) that carry the same semantics. After creating a Multimedia Correlation Space (MMCS), a ranking algorithm is applied, which uses a local linear regression model for each data point and globally aligns all of them through a unified objective function. Within I-SEARCH, an approach for multimodal retrieval has been introduced. It is based on Laplacian Eigenmaps [12] and has been further enhanced with large-scale indexing [13] and relevance feedback [14].
1 http://www.tineye.com/
2 http://www.google.com/insidesearch/searchbyimage.html
2 Overview
4 Multimodal Indexing
The low-level descriptors of the COs' constituent modalities are further processed to construct a new multimodal feature space. In this new feature space all COs, irrespective of their constituent modalities, are represented as d-dimensional vectors, where semantically similar COs lie close to each other with respect to a common distance metric.
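Once COs are embedded as d-dimensional vectors, retrieval in the multimodal space reduces to nearest-neighbour search. The sketch below illustrates the idea with a plain linear scan and Euclidean distance; the actual framework relies on the large-scale indexing of [13], and all names here are illustrative.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

/* Sketch: top-k retrieval over d-dimensional multimodal CO vectors. */
public class MultimodalIndex {
    private final Map<String, double[]> vectors = new HashMap<String, double[]>();

    public void add(String contentObjectId, double[] vector) {
        vectors.put(contentObjectId, vector);
    }

    /* Linear-scan k-nearest-neighbour query; fine for a sketch, not at scale.
       Note: equal distances collapse in the TreeMap, acceptable here. */
    public Map<Double, String> query(double[] q, int k) {
        TreeMap<Double, String> ranked = new TreeMap<Double, String>();
        for (Map.Entry<String, double[]> e : vectors.entrySet()) {
            double d = 0;
            for (int i = 0; i < q.length; i++) {
                double diff = q[i] - e.getValue()[i];
                d += diff * diff;
            }
            ranked.put(Math.sqrt(d), e.getKey());
        }
        while (ranked.size() > k) ranked.remove(ranked.lastKey());
        return ranked;
    }

    public static void main(String[] args) {
        MultimodalIndex idx = new MultimodalIndex();
        idx.add("co-3d-chair", new double[] {0.1, 0.9});
        idx.add("co-audio-song", new double[] {0.8, 0.2});
        System.out.println(idx.query(new double[] {0.15, 0.85}, 1));
    }
}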
5 Multimodal Interfaces
5.1 The I-SEARCH Graphical User Interface and Multimodality
With the I-SEARCH project, we aim at creating a multimodal search engine that allows for both multimodal input and output. Supported input modalities are audio, video, rhythm, image, 3D object, sketch, emotion, social signals, geolocation, and text [5]. Each modality can be combined with all other modalities in an enhanced version of the search-box pattern. The graphical user interface (GUI) of I-SEARCH is not tied to a specific class of devices, but rather dynamically adapts to particular device constraints, such as the varying screen sizes of desktop and mobile devices like cell phones and tablets. Fig. 3 gives an impression of what this adaptive behaviour looks like in practice and how multimodal queries are assembled, e.g. on a mobile device (Fig. 4). The I-SEARCH GUI is implemented with the objective of sharing one common code base for all possible input devices.
It uses a JavaScript-based component called UIIFace [4], which enables the user to interact with I-SEARCH via a wide range of modern input modalities like touch, gestures, or speech. It also provides an adaptive algorithm for gesture recognition, along with support for novel input devices like Microsoft's Kinect, in a web environment. The GUI also provides a WebSocket-based collaborative search tool called CoFind [4], which enables users to search collaboratively via a shared results basket and to exchange messages throughout the search process. A third component, called pTag [4], produces personalized tag recommendations to create search queries, filter results and add tags to retrieved result items.
One important goal of I-SEARCH is to hide this complexity from the end-user through a consistent and context-aware user interface based on standard HTML5, JavaScript, and CSS, ideally with no additional plug-ins like Flash required. We aim at sharing one common code base for both device classes, mobile and desktop, with the user interface getting progressively enhanced [3] the more capable the user's Web browser and connection speed are. Over the years, search engines have coined a common interaction pattern: the search box. We enhance this interaction pattern with context-aware modality input toggles that create modality query tokens in the I-SEARCH search box. In Fig. 5, three example modality query tokens, for audio, emotion, and geolocation, can be seen.
To describe this type of interface, we sketch two use cases, which are also studied in I-SEARCH: a) individual multimodal search of music content and b) social multimodal search of music content.
According to the first use case, a professional user is looking for music material that shares common features. This research aims at discovering unexpected filiations and similarities across music artworks. The target group can vary from professional users/music experts to end-users/music lovers. Multimodal input includes text (words, phrases, tags, etc.), audio files/clips (query by example), gestures captured via a video camera or via accelerometers embedded in mobile devices, and real-world information (e.g. the GPS position of the user). A typical search and retrieval task is the following: search for a list of audio files having the same rhythm as a pattern specified by the user (via tapping with a finger on a table/microphone, clapping her hands or moving her arms in the air) while also sharing the emotional features (e.g. a similar level of arousal) of the captured user movements or attitude.
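The chapter does not specify how a tapped rhythm pattern is matched against stored material; one simple possibility, sketched below under that assumption, is to compare tempo-normalised inter-onset intervals. It is an illustrative choice, not the I-SEARCH matching algorithm.

/* Sketch: compare a tapped rhythm query to a stored pattern by
   tempo-normalised inter-onset intervals (illustrative only). */
public class RhythmMatcher {

    /* Converts tap timestamps (ms) into intervals normalised by their sum,
       so the comparison becomes independent of the tapping tempo. */
    static double[] normalisedIntervals(long[] tapsMillis) {
        double[] intervals = new double[tapsMillis.length - 1];
        double total = tapsMillis[tapsMillis.length - 1] - tapsMillis[0];
        for (int i = 0; i < intervals.length; i++)
            intervals[i] = (tapsMillis[i + 1] - tapsMillis[i]) / total;
        return intervals;
    }

    /* Mean absolute difference between two equally long interval patterns. */
    static double distance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d / a.length;
    }

    public static void main(String[] args) {
        long[] query  = {0, 200, 400, 800};   // short-short-long
        long[] stored = {0, 150, 300, 600};   // same pattern, faster tempo
        double d = distance(normalisedIntervals(query),
                            normalisedIntervals(stored));
        System.out.println(d < 1e-9 ? "match" : "no match"); // prints "match"
    }
}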
The second use case deals with collaborative music retrieval by a group of users. More specifically, four friends at a party wish to dance together, and to accomplish this they search for music pieces resonating with their (collective) mood. They do not know in advance the music pieces they want, and they use the I-SEARCH tool collaboratively to find their music and possible associated videos. Multimodal input includes an audio or video clip of a favourite singer (query by example), text-based keywords, rhythmic queries (using hands, clapping, full-body movement), gestures, and entrainment/synchronization and dominance/leadership among users, measured by on-body sensors and/or environmental video cameras. Typical search and retrieval tasks include the following: iterative search for audio files (as well as the video clips or images associated with them) by periodically performing a query for a new music piece similar to the one currently being played and having a location in the valence/arousal plane close to the position obtained from the movements of the dancers.
6 Adaptive Presentation
The proposed visualisation framework is based on a hierarchical conceptual organization of the dataset. According to this conceptual organization, the result of each query may be diverse enough to be organized into several topics and associated sub-topics, while each sub-topic (at the bottom of the hierarchy) may be specific enough to be mapped to a continuous similarity space designating the variability of a single object along some important dimensions. We argue that such an organization is very suitable for explorative browsing of a dataset and is flexible enough to cover a vast range of data, information needs, and browsing tasks. To achieve the proposed organization, we automatically augment the results of the multimodal search engine with analytics information. In particular, given a mutual similarity matrix among result documents, we perform hierarchical clustering by means of a spectral clustering algorithm. For each resulting group of results, we subsequently apply a dimensionality reduction or transformation algorithm (e.g. minimum spanning trees) that maps documents onto a 2D "similarity space".
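The text names spectral clustering; to keep the example self-contained, the sketch below substitutes plain single-linkage agglomerative clustering, which also consumes a mutual similarity matrix. It is an illustration of the grouping step only, with assumed names, not the framework's implementation.

import java.util.ArrayList;
import java.util.List;

/* Sketch: group result documents from a mutual similarity matrix.
   Single-linkage agglomerative clustering stands in for the spectral
   algorithm used in the actual framework. */
public class ResultGrouper {

    /* Merges the two most similar clusters until k clusters remain. */
    public static List<List<Integer>> cluster(double[][] sim, int k) {
        List<List<Integer>> clusters = new ArrayList<List<Integer>>();
        for (int i = 0; i < sim.length; i++) {
            List<Integer> c = new ArrayList<Integer>();
            c.add(i);
            clusters.add(c);
        }
        while (clusters.size() > k) {
            int bestA = 0, bestB = 1;
            double best = -1;
            for (int a = 0; a < clusters.size(); a++)
                for (int b = a + 1; b < clusters.size(); b++) {
                    double s = linkage(sim, clusters.get(a), clusters.get(b));
                    if (s > best) { best = s; bestA = a; bestB = b; }
                }
            clusters.get(bestA).addAll(clusters.remove(bestB));
        }
        return clusters;
    }

    /* Single linkage: similarity of the closest pair across two clusters. */
    private static double linkage(double[][] sim, List<Integer> a, List<Integer> b) {
        double max = -1;
        for (int i : a) for (int j : b) max = Math.max(max, sim[i][j]);
        return max;
    }

    public static void main(String[] args) {
        double[][] sim = {
            {1.0, 0.9, 0.1, 0.2},
            {0.9, 1.0, 0.2, 0.1},
            {0.1, 0.2, 1.0, 0.8},
            {0.2, 0.1, 0.8, 1.0},
        };
        System.out.println(cluster(sim, 2)); // [[0, 1], [2, 3]]
    }
}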
We use Treemaps, Hyperbolic Trees and classical tree-like structures
interchangeably to navigate the user to specific groups of results. To avoid cluttered
time-stamp for each document). This allows rearranging the document thumbnails to
reflect spatial or temporal relationships instead of document similarity.
7 Conclusions
A novel approach to multimodal search was presented in this article. I-SEARCH allows easy retrieval of multiple media types simultaneously, namely 3D objects, images, audio and video, using as queries combinations of different media types: text, real-world information, and expressions or emotions captured from the user with simple devices. Several innovative solutions developed within I-SEARCH constitute the proposed search and retrieval framework: a) a method for multimodal descriptor extraction and indexing, able to index COs irrespective of their constituent modalities; b) a dynamic graphical user interface (GUI), enhanced with multimodal querying capabilities; c) methods for analysing non-verbal emotional behaviour expressed by full-body gestures and translating this behaviour into multimodal queries; d) adaptive presentation of the search results using visual analytics technology. The multimodal search engine adapts dynamically to end-users' devices, which vary from a simple mobile phone to a high-performance PC. The framework will be extended with more functionality, such as personalisation, relevance feedback, annotation propagation and personalised recommendation exploiting social tagging.
The technologies implemented within I-SEARCH can potentially influence the FI architecture and related frameworks. The outcomes of I-SEARCH can contribute to the Future Internet Public-Private Partnership (FI-PPP) [19], which aims to advance Europe's competitiveness in FI-related technologies and to support the emergence of FI-enhanced applications of public and social relevance, and more specifically to the FI-WARE Core Platform [18]. FI-WARE is expected to deliver an integrated service infrastructure, built upon elements (called Generic Enablers) which offer reusable and commonly shared functions, making it easier to develop FI applications in multiple sectors. Since multimedia/multimodal search has not yet been adopted by FI-WARE, it can be proposed as a Generic Enabler of the FI-WARE core platform.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Daras, P., Axenopoulos, A., Darlagiannis, V., Tzovaras, D., Le Bourdon, X., Joyeux, L.,
Verroust-Blondet, A., Croce, V., Steiner, T., Massari, A., Camurri, A., Morin, S.,
Mezaour, A.D., Sutton, L., Spiller, S.: Introducing a Unified Framework for Content
Object Description. International Journal of Multimedia Intelligence and Security, Special
Issue on Challenges in Scalable Context Aware Multimedia Computing 2(3-4), 351–375
(2011), doi:10.1504/IJMIS.2011.044765
2. Daras, P., Alvarez, F.: A Future Perspective on the 3D Media Internet. Towards the Future
Internet - A European Research Perspective (January 2009) ISBN: 978-1-60750-007-0
3. Champeon, S.: Progressive Enhancement and the Future of Web Design,
http://www.hesketh.com/thought-leadership/our-publications/
progressive-enhancement-and-future-web-design
4. Etzold, J., Brousseau, A., Grimm, P., Steiner, T.: Context-Aware Querying for Multimodal
Search Engines. In: Schoeffmann, K., Merialdo, B., Hauptmann, A.G., Ngo, C.-W.,
Andreopoulos, Y., Breiteneder, C. (eds.) MMM 2012. LNCS, vol. 7131, pp. 728–739.
Springer, Heidelberg (2012)
5. Steiner, T., Sutton, L., Spiller, S., et al.: I-SEARCH – A Multimodal Search Engine based
on Rich Unified Content Description (RUCoD). Submitted to the European Projects Track
at the 21st International World Wide Web Conference. Under review,
http://www.lsi.upc.edu/~tsteiner/papers/2012/isearch-multimodal-search-www2012.pdf
6. Zagoris, K., Arampatzis, A., Chatzichristofis, S.A.: www.MMRetrieval.Net: a multimodal
search engine. In: Proceedings of the Third International Conference on SImilarity Search
and APplications (SISAP 2010), pp. 117–118. ACM, New York (2010)
7. Bederson, B.B.: PhotoMesa: A Zoomable Image Browser Using Quantum Treemaps and
Bubblemaps. In: ACM Symposium on User Interface Software and Technology, UIST
2001, CHI Letters, vol. 3(2), pp. 71–80 (2001)
8. Graham, A., Garcia-Molina, H., Paepcke, A., Winograd, T.: Time as essence for photo
browsing through personal digital libraries. In: Proceedings of the 2nd ACM/IEEE-CS
Joint Conference on Digital Libraries (JCDL 2002), pp. 326–335. ACM, NY (2002)
9. Platt, J.C., Czerwinski, M., Field, B.A.: PhotoTOC: automatic clustering for browsing
personal photographs. In: Proceedings of the 2003 Joint Conference and the Information,
Communications and Signal Processing Fourth Pacific Rim Conference on Multimedia,
December 15-18, vol. 1, pp. 6–10 (2003)
10. Zhang, H., Weng, J.: Measuring Multi-modality Similarities Via Subspace Learning for
Cross-Media Retrieval. In: Advances in Multimedia Information Processing, PCM (2006)
11. Yang, Y., Xu, D., Nie, F., Luo, J., Zhuang, Y.: Ranking with Local Regression and Global
Alignment for Cross Media Retrieval. ACM MM, Beijing, China (2009)
12. Axenopoulos, A., Manolopoulou, S., Daras, P.: Multimodal Search and Retrieval using
Manifold Learning and Query Formulation. In: ACM International Conference on 3D Web
Technology, Paris, France, June 20-22 (2011)
13. Daras, P., Manolopoulou, S., Axenopoulos, A.: Search and Retrieval of Rich Media
Objects Supporting Multiple Multimodal Queries. Accepted on IEEE Transactions on
Multimedia
14. Axenopoulos, A., Manolopoulou, S., Daras, P.: Optimizing Multimedia Retrieval Using
Multimodal Fusion and Relevance Feedback Techniques. In: Schoeffmann, K., Merialdo,
B., Hauptmann, A.G., Ngo, C.-W., Andreopoulos, Y., Breiteneder, C. (eds.) MMM 2012.
LNCS, vol. 7131, pp. 716–727. Springer, Heidelberg (2012)
15. Gennaro, C., Amato, G., Bolettieri, P., Savino, P.: An Approach to Content-Based Image
Retrieval Based on the Lucene Search Engine Library. In: Lalmas, M., Jose, J., Rauber, A.,
Sebastiani, F., Frommholz, I. (eds.) ECDL 2010. LNCS, vol. 6273, pp. 55–66. Springer,
Heidelberg (2010)
16. Glowinski, D., Dael, N., Camurri, A., Volpe, G., Mortillaro, M., Scherer, K.: Towards a
Minimal Representation of Affective Gestures. IEEE Transactions on Affective
Computing 2(2), 106–118 (2011)
17. Varni, G., Volpe, G., Camurri, A.: A System for Real-Time Multimodal Analysis of
Nonverbal Affective Social Interaction in User-Centric Media. IEEE Transactions on
Multimedia 12(6), 576–590 (2010)
18. http://www.fi-ware.eu/
19. http://www.fi-ppp.eu/
Semantically Enriched Services
to Understand the Need of Entities
Abstract. Researchers from all over the world are engaged in the design of a new Internet, and Software-Defined Networking (SDN) is one of the results of this engagement. Net-Ontology uses an SDN approach to bring semantics to the intermediate network layers, making them capable of handling application requirements and of adapting their behaviour over time as required. In this paper we present an experimental evaluation of Net-Ontology and a feature comparison against the traditional TCP/IP stack. This paper extends our earlier work towards a Future Internet, showing a viable approach to introducing semantics at the lower network layers and thereby contributing to richer and more efficient services.
Introduction
The evolution of the intermediate network layers has been lagging behind that of the lower and upper layers. The Internet protocols, specified more than three decades ago, are the likely culprit; application needs have changed by leaps and bounds, while TCP/IP has only been patched in attempts to meet these requirements. Over the last few years, the networking community has striven to correct this phenomenon [1, 3, 4, 21].
Researchers from all over the world are engaged in the design of a new Internet from the ground up. This so-called clean-slate approach frees research from the legacy of the current architecture and fosters innovation [18]. At a future time, when results are to be deployed, research will be refocused on the transition from the current Internet to the Future Internet.
One of the results of this effort to create the Future Internet is Software-Defined Networking (SDN) [5, 6]. SDN enables researchers to innovate and experiment with new network protocols and naming and addressing schemes, such as the one presented in this paper, which aims at bridging the evolutionary gap between the upper, lower, and intermediate network layers by using richer semantics [15, 16].
FINLAN (Fast Integration of Network Layers) [9, 13, 14, 19] aims at providing high adaptability through the use of semantic concepts based on ontology, eliminating static routing and addressing tied to physical location, and resulting in a better and more efficient utilization of the network infrastructure.
FINLAN defines two intermediate layers that communicate with each other using OWL (Web Ontology Language), but that clearly differ in function: DL-Ontology and Net-Ontology.
The DL-Ontology layer is essentially responsible for data transfer between the Physical layer and the upper layers, handling the semantic communication between peer entities and bringing a richer capacity to express their requirements. The Net-Ontology layer, on the other hand, is responsible for handling service needs: it is capable of understanding specific requirements from the application and adapting the communication to support them only when required, using the DL-Ontology to deal with the semantic communication.
In this chapter we present the Net-Ontology layer, which sits between the DL-Ontology layer and the application, together with its implementation and a first experimental evaluation. The implementation presented is based on the Title Model [17], our vision regarding future networks.
The remainder of this work is organized as follows: Section 1 describes the Net-Ontology, Section 2 shows the Net-Ontology implementation, and Section 3 the experimental results. The conclusions are presented in Section 4.
1 The Net-Ontology
The DL-Ontology is the lowest layer of the FINLAN stack depicted in Figure 1; it enables communication using concepts expressed in OWL over the Physical layer.
The Net-Ontology layer is responsible for supporting the service needs of the upper layer and delivering them to the DL-Ontology layer, built according to the FINLAN ontology. In this approach, the Net-Ontology is able to understand specific requirements of a given application that may arise during communication, and to provide for them.
For example, let us suppose that two persons, P1 and P2, are chatting over the Internet using the application FinChat, which runs over the FINLAN stack. At a certain moment, they want to start a secret conversation. For FinChat to meet this need, the only thing it has to do is inform the Net-Ontology layer that, from now on, the chat is to be confidential. The Net-Ontology layer is able to understand this need and act accordingly, modifying all packets exchanged from that moment on.
The Net-Ontology basically consists of two main modules, requirement analysis and requirement management, as depicted in Figure 1.
The requirement analysis module (RAM) is responsible for handling application requests regarding communication requirements. To accomplish this, the RAM uses Leśniewski's logic, as proposed in [8]. The purpose is to manage the service requirements over time. This module recognizes which technological features are necessary to satisfy a given requirement at a given moment, combining them in logical formulas.
As an example, let us suppose that a service S1, at a moment t1, needs to establish communication with a service S2 under a specific requirement. The RAM will verify that this upper-layer requirement can be provided by the technological requirements R1 and R2. At another moment t2, S1 wishes to improve security by using confidentiality in the conversation; for this, the technological requirement R3 is necessary. These scenarios are interpreted by the analysis module and represented by the following axioms:
S1S2(t1) → R1 ∧ R2    (1)
S1S2(t2) → (R1 ∧ R2) ∧ R3    (2)
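A minimal sketch of this bookkeeping is shown below (illustrative names, not the FINLAN API): each new upper-layer need contributes its technological requirements to the conjunction already in force for a service pair, mirroring axioms (1) and (2).

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/* Sketch of the requirement-analysis bookkeeping behind axioms (1)-(2). */
public class RequirementAnalysis {
    // Active technological requirements per service pair (the conjunction).
    private final Map<String, Set<String>> active = new HashMap<String, Set<String>>();

    /* At some moment t, a need of the pair maps to technological requirements. */
    public void addNeed(String servicePair, String... techRequirements) {
        Set<String> reqs = active.get(servicePair);
        if (reqs == null) { reqs = new HashSet<String>(); active.put(servicePair, reqs); }
        for (String r : techRequirements) reqs.add(r);   // the conjunction grows
    }

    public Set<String> current(String servicePair) { return active.get(servicePair); }

    public static void main(String[] args) {
        RequirementAnalysis ram = new RequirementAnalysis();
        ram.addNeed("S1-S2", "R1", "R2");         // moment t1, axiom (1)
        ram.addNeed("S1-S2", "R3");               // moment t2, axiom (2)
        System.out.println(ram.current("S1-S2")); // R1, R2, R3 in some order
    }
}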
The requirement manager module (RMM) takes the requirement rules and transforms them into FINLAN ontology fragments. Besides that, this module is able to interpret and deploy, in the network stack, the algorithms correlated with each requirement of the ontology.
Some time later, at instant t2, John brings up an important subject and selects the delivery guarantee feature of FinChat. This means that from then on, FinChat requires delivery guarantee from the network. Figure 2 shows the flow of messages sent and received between the FinChat entities and the DTS to serve this request.
With a new requirement, the Net-Ontology layer is triggered, and the requirement analysis module determines that the technological requirement of a delivery guarantee algorithm is necessary. John's FinChat then sends the following control message to the DTS:
<ControlMessage rdf:ID="ControlMessage_1">
  <Application rdf:ID="FinChat">
    <HasNeed>
      <DeliveryGuarantee rdf:ID="DeliveryGuarantee_01"/>
    </HasNeed>
  </Application>
  <workspaceID rdf:datatype="http://www.w3.org/2001/XMLSchema#string">WKS.1</workspaceID>
  <source rdf:datatype="http://www.w3.org/2001/XMLSchema#string">John</source>
  <destination rdf:datatype="http://www.w3.org/2001/XMLSchema#string">DTS</destination>
  <payload rdf:datatype="http://www.w3.org/2001/XMLSchema#string">AddNeed</payload>
</ControlMessage>
After registering John’s need, the DTS will send him a confirmation message:
<ControlMessage rdf:ID="ControlMessage_1R">
  <Application rdf:ID="FinChat"/>
  <workspaceID rdf:datatype="http://www.w3.org/2001/XMLSchema#string">WKS.1</workspaceID>
  <source rdf:datatype="http://www.w3.org/2001/XMLSchema#string">DTS</source>
  <destination rdf:datatype="http://www.w3.org/2001/XMLSchema#string">John</destination>
  <payload rdf:datatype="http://www.w3.org/2001/XMLSchema#string">OK</payload>
</ControlMessage>
At the same time, the DTS will also send a control message to Paul, who is in the same workspace as John, asking if the requested need is supported.
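That message itself is not reproduced in this excerpt; by analogy with ControlMessage_1 above and the reply ControlMessage_2R below, it plausibly takes a form such as the following reconstruction (the payload value in particular is an assumption):

<ControlMessage rdf:ID="ControlMessage_2">
  <Application rdf:ID="FinChat">
    <HasNeed>
      <DeliveryGuarantee rdf:ID="DeliveryGuarantee_01"/>
    </HasNeed>
  </Application>
  <workspaceID rdf:datatype="http://www.w3.org/2001/XMLSchema#string">WKS.1</workspaceID>
  <source rdf:datatype="http://www.w3.org/2001/XMLSchema#string">DTS</source>
  <destination rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Paul</destination>
  <!-- payload value assumed for illustration -->
  <payload rdf:datatype="http://www.w3.org/2001/XMLSchema#string">AddNeed</payload>
</ControlMessage>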
If Paul's FinChat can supply the delivery guarantee feature, the response below is sent to the DTS, and a communication with delivery guarantee support is established:
<ControlMessage rdf:ID="ControlMessage_2R">
  <Application rdf:ID="FinChat"/>
  <workspaceID rdf:datatype="http://www.w3.org/2001/XMLSchema#string">WKS.1</workspaceID>
  <source rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Paul</source>
  <destination rdf:datatype="http://www.w3.org/2001/XMLSchema#string">DTS</destination>
  <payload rdf:datatype="http://www.w3.org/2001/XMLSchema#string">OK</payload>
</ControlMessage>
If Paul's FinChat does not support delivery guarantee, this feature will not be present in the communication between the two applications.
Notice that, through the Net-Ontology, FINLAN is able to register the services' needs with the DTS. From then on, the DTS can determine the best way to deliver FINLAN packets.
If a third person, Ringo, wants to join the conversation, Ringo's FinChat will handshake with the DTS to check whether it supports DeliveryGuarantee_01. This scenario is illustrated in Figure 3.
The following messages are exchanged and Ringo joins the workspace WKS.1. After joining and, hence, sharing the workspace, Ringo's FinChat and all the other entities will receive the same data messages without the need for multiple data flows.
2 Implementation
Our FINLAN stack implementation consists of a Java library whose communication interfaces use raw sockets. The linking between the Java and C portions of the code was done with the Java Native Interface (JNI) [16, 19], as depicted in Figure 4.
The application App.java uses the API available in the library Finlan.jar to establish communication. When an application sends a packet, it communicates its characteristics to the Net-Ontology. Based on these characteristics, the Requirement Analysis Module determines, through an inference engine, the application’s needs and proceeds with their delivery. After the relevant operations are completed, the Net-Ontology sends the primitive to the DL-Ontology which, in turn, takes care of sending the packet through the JNI interface to the libFinlan.so library.
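The sketch below illustrates this flow from the application’s perspective. It is only a usage sketch: the class and method names (FinlanSocket, setNeed, send) are assumptions standing in for the Finlan.jar API, which is not listed in this chapter.

// Minimal sketch of how an application might drive the FINLAN stack; all names
// here (FinlanSocket, setNeed, send) are illustrative stubs, not the real
// Finlan.jar API.
public class App {

    // Stub standing in for the library facade: in the real stack this would
    // forward the application's characteristics to the Net-Ontology layer and,
    // via the DL-Ontology and JNI, down to libFinlan.so.
    static class FinlanSocket {
        private final String workspaceId;

        FinlanSocket(String workspaceId) { this.workspaceId = workspaceId; }

        void setNeed(String need) {
            // In FINLAN this corresponds to a ControlMessage with payload
            // "AddNeed" sent to the DTS.
            System.out.println("register need " + need + " in " + workspaceId);
        }

        void send(String source, String destination, byte[] payload) {
            // The Net-Ontology would infer the delivery requirements here before
            // handing the packet down to the DL-Ontology / JNI layer.
            System.out.println(source + " -> " + destination + ": "
                    + payload.length + " bytes");
        }
    }

    public static void main(String[] args) {
        FinlanSocket socket = new FinlanSocket("WKS.1");
        socket.setNeed("DeliveryGuarantee_01");
        socket.send("John", "DTS", "Hello, Paul!".getBytes());
    }
}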
3 Experimental Results
The experiments were performed in the following environment: hosts with 4 GB of RAM and an Intel® Core™ 2 Duo CPU @ 2.10 GHz, running a Linux operating system with kernel 2.6.41.10-3.fc15.x86_64. The transferred files have sizes of 1, 5, 10, 15, 20, 25, 30, 35, 40, 45 and 50 MB. The RTT variable was set to a fixed value of 1 second. Figure 5 shows the results, comparing the number of packets transmitted by FINLAN and by TCP.
It can be observed that, in the scenarios of this experiment, FINLAN transmitted a smaller number of packets. In the 10 MB transfer operation, for example, FINLAN transmitted 8140 packets, while TCP transmitted 10631 (a difference of 30.6 percent).
This is due to the delivery guarantee algorithm implemented in FINLAN, which sends confirmation messages at RTT intervals, reporting only the packets lost in that period so that they alone are retransmitted, whereas TCP transmits a large number of ACK packets. This confirms that network packet traffic is decreased by a delivery guarantee algorithm implemented over a stack that semantically understands the concepts and adapts its messages based on this understanding.
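A minimal sketch of the receiver side of such a scheme is given below. It assumes a simplified model in which the sender announces, once per RTT, the range of sequence numbers it has sent and the receiver answers with only the missing ones; the names and example values anticipate the capture discussed next, but the code is an illustration, not the FINLAN implementation.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative receiver-side logic for an RTT-interval delivery guarantee:
// given the announced range of sequence numbers, report only the lost ones so
// that the sender retransmits just those, instead of per-segment ACKs as in TCP.
public class DeliveryGuarantee {

    // received: sequence numbers actually seen during the last RTT period
    static List<Integer> lostInRange(int first, int last, Set<Integer> received) {
        List<Integer> lost = new ArrayList<>();
        for (int seq = first; seq <= last; seq++) {
            if (!received.contains(seq)) {
                lost.add(seq); // candidate for retransmission
            }
        }
        return lost;
    }

    public static void main(String[] args) {
        // Example mirroring the capture in Figure 6: range 133..367 announced,
        // packets 220..281 missing.
        Set<Integer> received = new HashSet<>();
        for (int seq = 133; seq <= 367; seq++) {
            if (seq < 220 || seq > 281) {
                received.add(seq);
            }
        }
        List<Integer> lost = lostInRange(133, 367, received);
        System.out.println("LostMessageQuantity = " + lost.size()); // prints 62
    }
}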
To illustrate the primitives in these experiments, Figure 6 shows Wireshark snapshots of two packets captured during the transmission of the 50 MB file. The first, in Figure 6(a), is the confirmation request of the source entity, called “fabiola”, informing that the range of packets from 133 to 367 was sent. Figure 6(b) shows the response, confirming the lost packets through the field LostMessageQuantity. According to this capture, the packets from 220 to 281 were lost and only these were retransmitted.
4 Conclusions
This work presented the Net-Ontology Layer, experimental results of its implementation, and how ontologies can be used at the intermediate network layers to understand and support the needs of different entities.
The results of using an ontology to support the delivery guarantee need demonstrate a reduction of more than 30 percent in the packets sent during a file transfer, compared with the traditional TCP/IP protocols.
The use of the Net-Ontology demonstrated the possibility of substituting the traditional TCP/IP protocols used at the transport and network layers. This brings more semantic power to Future Internet networks, as the intermediate network layers become able to better understand the entities’ needs.
The Future Internet is being constructed through worldwide collaboration and is based on research and experimentation. Our previous work [17, 19] showed how the FINLAN approach and the Title Model Ontology can work together with different efforts regarding the future, while the present work detailed how these proposals can come true.
As future work, we expect to experiment with the Net-Ontology implementation on different testbeds, such as OFELIA [11] and FIBRE (Future Internet testbeds/experimentation between BRazil and Europe) [2, 20]. In addition, the work in progress on experimentation using OpenFlow [10] will be completed. Experimental tests using workspaces for multicast aggregation [12] are also being executed on the OFELIA testbed.
The research and experimentation results show that this is a viable approach for introducing semantics at the lower network layers, contributing to richer and more efficient services.
References
[1] Clayman, S., Galis, A., Chapman, C., Toffetti, G., Rodero-Merino, L., Vaquero,
L.M., et al.: Monitoring Service Clouds in the Future Internet. Towards the Future
Internet - Emerging Trends from European Research, p. 115 (2010)
[2] FIBRE: FIBRE Project (January 2012), http://www.fibre-ict.eu/
[3] FIRE: FIRE White Paper (August 2009), http://www.ict-fireworks.eu/fileadmin/documents/FIRE_White_Paper_2009_v3.1.pdf
[4] Galis, A., Denazis, S., Bassi, A., Giacomin, P., Berl, A., Fischer, A., et al.: Manage-
ment Architecture and Systems for Future Internet. Towards the Future Internet
- A European Research Perspective, p. 112 (2009)
[5] Greenberg, A., Hjalmtysson, G., Maltz, D.A., Myers, A., Rexford, J., Xie, G.,
Yan, H., Zhan, J., Zhang, H.: A clean slate 4d approach to network control and
management. SIGCOMM Comput. Commun. Rev. 35, 41–54 (2005),
http://doi.acm.org/10.1145/1096536.1096541
[6] Greene, K.: TR10: Software-Defined Networking. MIT - Technology Review (2009)
[7] Jacobson, V.: Congestion Avoidance and Control. SIGCOMM Communications
Architectures and Protocols, USA 88, 314–329 (1988)
[8] Leśniewski, S.: Comptes rendus des séances de la Société des Sciences et des Lettres
de Varsovie. Class III, pp. 111–132 (1930)
[9] Malva, G.R., Dias, E.C., Oliveira, B.C., Pereira, J.H.S., Kofuji, S.T., Rosa, P.F.:
Implementação do Protocolo FINLAN. In: 8th International Information and
Telecommunication Technologies Symposium (2009)
[10] McKeown, N., Anderson, T., Balakrishnan, H., Parulkar, G., Peterson, L., Rex-
ford, J., Shenker, S., Turner, J.: OpenFlow: Enabling Innovation in Campus Net-
works. SIGCOMM Comput. Commun. Rev. 38, 69–74 (2008)
[11] OFELIA: OFELIA Project (January 2012), http://www.fp7-ofelia.eu/
[12] de Oliveira Silva, F.: Experimenting domain title service to meet mobility and
multicast aggregation by using openflow. In: MyFIRE Workshop (September 2011)
[13] Pereira, F.S.F., Santos, E.S., Pereira, J.H.S., Rosa, P.F., Kofuji, S.T.: Proposal
for Hybrid Communication in Local Networks. In: 8th International Information
and Telecommunication Technologies Symposium (2009)
[14] Pereira, F.S.F., Santos, E.S., Pereira, J.H.S., Rosa, P.F., Kofuji, S.T.: FINLAN
Packet Delivery Proposal in a Next Generation Internet. In: IEEE International
Conference on Networking and Services, p. 32 (2010)
[15] Pereira, J.H.S., Pereira, F.S.F., Santos, E.S., Rosa, P.F., Kofuji, S.T.: Horizontal
Address by Title in the Internet Architecture. In: 8th International Information
and Telecommunication Technologies Symposium (2009)
[16] Pereira, J.H.S., Santos, E.S., Pereira, F.S.F., Rosa, P.F., Kofuji, S.T.: Layers Op-
timization Proposal in a Post-IP Network. International Journal on Advances in
Networks and Services (2011)
[17] Pereira, J.H.S., Silva, F.O., Filho, E.L., Kofuji, S.T., Rosa, P.: The Title Model
Ontology for Future Internet Networks. In: Domingue, J., et al. (eds.) FIA 2011.
LNCS, vol. 6656, pp. 103–114. Springer, Heidelberg (2011)
[18] Roberts, J.: The clean-slate approach to future internet design: a survey of research
initiatives. annals of telecommunications - annales des télécommunications 64(5-6),
271–276 (2009), http://www.springerlink.com/content/e240776641607136/
[19] Santos, E., Pereira, F., Pereira, J.H.: Meeting Services and Networks in the Future
Internet. In: Domingue, J., et al. (eds.) FIA 2011. LNCS, vol. 6656, pp. 339–350.
Springer, Heidelberg (2011)
[20] Stanton, M.: FIBRE-EU and FIBRE-BR (October 2011), http://www.future-internet.eu/fileadmin/documents/poznan_documents/Session2_3_International_coop/2-3-stanton.pdf
[21] Tselentis, G., Domingue, J., Galis, A., Gavras, A., Hausheer, D., Krco, S., Lotz, V., Zahariadis, T.: Towards the Future Internet - A European Research Perspective. IOS Press, Amsterdam (2009)
Supporting Content, Context and User Awareness
in Future Internet Applications
1 Introduction
One of the main motivations for designing new architectures for the Future Internet is to meet the challenges imposed on the ICT infrastructure by new applications. These challenges include, among others:
1. Content awareness – meaning the sensitivity of data processing and transmission methods to the content being delivered to the end-user. Content awareness may emerge in different processing of various data streams (e.g. video encoding or sensor data encryption) and in different forwarding methods (e.g. routing) for various streams.
2. Context awareness – consisting in different treatment (in terms of forwarding and processing methods) of traffic depending on the particular use-case scenario of the application generating this traffic. Context may be connected, for example, with the type of networking device used by a user or with the user’s geographical location.
3. User awareness – understood as the personalization of services delivered to the end-user. Personalization is achieved by a proper choice of data processing and transmission methods according to the functional and non-functional requirements stated by the user. The user’s requirements may be formulated explicitly or result from automatic recommendation based on the history of application usage.
4. Sensor networks and applications – covering applications such as smart energy metering, vehicle networks, intelligent building infrastructure, telemedicine, etc. Each particular telemetry application involves specific types of data processing methods and the transmission of a large number of small portions of data, often requiring real-time or near real-time end-to-end performance.
2 Systems Architecture
3.1 SmartFit
Sustained progress in the development of infrastructure for wireless sensor networks and wearable sensors forms the basis for pervasive computing systems. Such systems can operate in a distributed environment where wireless sensor networks consist of a huge number of low-cost, low-power sensing nodes and many different services exist for data transfer, processing, storage and decision support [1]. Sensing nodes can be used to build sensor networks such as Body Area Networks (BAN) or Personal Area Networks (PAN). On the other hand, we have a vast number of services in the distributed environment, each facilitating access to one or more functionalities.
SmartFit is a system adopting new pervasive computing technologies, designed to support the endurance and technical training of both amateur and elite athletes. An application such as SmartFit must provide its functionalities “anywhere and anytime”. This means that acquired data must be transmitted between users of the system (i.e. athletes and trainers) at a predefined quality level, independently of their location. In order to fulfil this requirement, each functionality was decomposed into small modules called atomic services, and for each atomic service several different required quality levels were defined. This means that there are different versions of each atomic service. These versions are used in the process of user-centric functionality delivery by means of an orchestration mechanism. User-centric functionality means that the user’s specific requirements and needs are taken into account when such functionality is composed.
Fig. 2 presents the general architecture of SmartFit. The first tier is used for sensor data acquisition, the second tier is the data processing and decision-making tier, and the last one is the presentation tier. For each tier a set of atomic services is defined. In the process of user-centric functionality composition, all versions of the atomic services at each tier are taken into account, as in the sketch below.
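The following sketch illustrates one plausible composition rule under these assumptions: each tier exposes several versions of an atomic service, each with a declared quality level and a resource cost, and the orchestrator picks, per tier, the cheapest version that still meets the user’s required quality. The names and the greedy selection rule are assumptions for illustration; the chapter does not specify the actual orchestration algorithm.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative per-tier selection of atomic service versions for a user-centric
// functionality (names and the greedy rule are assumptions, not SmartFit's code).
public class Orchestrator {

    record ServiceVersion(String name, int qualityLevel, double cost) {}

    // Pick the cheapest version on a tier whose quality meets the user's requirement.
    static Optional<ServiceVersion> select(List<ServiceVersion> tierVersions,
                                           int requiredQuality) {
        return tierVersions.stream()
                .filter(v -> v.qualityLevel() >= requiredQuality)
                .min(Comparator.comparingDouble(ServiceVersion::cost));
    }

    public static void main(String[] args) {
        // Two versions of the acquisition-tier atomic service.
        List<ServiceVersion> acquisitionTier = List.of(
                new ServiceVersion("acquire-50Hz", 1, 1.0),
                new ServiceVersion("acquire-200Hz", 3, 2.5));
        // A trainer requests quality level 2: the 200 Hz version is chosen.
        select(acquisitionTier, 2)
                .ifPresent(v -> System.out.println("chosen: " + v.name()));
    }
}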
Signals from gyroscopes and accelerometers are used to estimate the trajectory of upper limb movement during tennis strokes such as the serve, forehand and backhand. In this case, the parameters of the upper limb movement model must be determined using the signals delivered by the gyroscope and accelerometer units. Finally, the results of the trajectory estimation are visualised and delivered to the trainer and/or athlete.
In order to provide the quality of service required by the user, it is necessary to apply a mechanism enabling context awareness. The context awareness incorporated in SmartFit can be used to adapt the packet size according to the user’s requirements. Context information can be obtained through sensor networks (e.g. measurement of heart rate during a training session) and/or from the personal server (e.g. GPS or networking devices). Based on this information it is possible to predict the user’s behaviour and location. Such a mechanism provides the SmartFit system with functionalities for efficient management of network and computational resources, so that system functionality is delivered with the quality of service required by the user; a sketch of one such adaptation rule follows.
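As a minimal illustration of such context-aware adaptation, the rule below shrinks the packet size when the context suggests a high-intensity session or a poor link. The thresholds and inputs are invented for the example; the chapter does not define SmartFit’s actual adaptation policy.

// Illustrative context-aware packet-size rule (thresholds and inputs are
// assumptions): a high heart rate implies time-critical small updates, and a
// weak link also favours smaller packets to limit the cost of individual losses.
public class PacketSizeAdapter {

    static int choosePacketSize(int heartRateBpm, double linkQuality /* 0..1 */) {
        if (heartRateBpm > 160 || linkQuality < 0.3) {
            return 256;   // small packets: low latency, cheap retransmissions
        } else if (linkQuality < 0.7) {
            return 512;   // middle ground
        }
        return 1024;      // good conditions: fewer, larger packets
    }

    public static void main(String[] args) {
        System.out.println(choosePacketSize(172, 0.9)); // 256: intense session
        System.out.println(choosePacketSize(110, 0.8)); // 1024: relaxed session
    }
}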
The OL-Core constantly monitors the OL-Services, storing execution times and data transfer volumes in its database. From the user’s point of view, in the case of computational tools, the key element of the Quality of Experience (QoE) is the waiting time, which is the sum of the computation time and the communication times. The first is query-specific and must be taken into account as a value predicted on the basis of the known history of user queries. The second depends on the volume of data and code.
Online Lab classifies user queries (computational tasks) and reserves communication services of the IIP system in order to guarantee the QoE for the user. The computational tasks are scheduled so as to minimize the user’s waiting time, which is done by monitoring the computational services and dynamically configuring the communication links using the IPv6 QoS infrastructure. This approach is used to address the requirements defined in the introductory section:
1. Content awareness – the OL-Service providing the minimum processing time is chosen. The data volume of the task influences the parameters used during link reservation in the IPv6 QoS system (to achieve the minimum transfer time); see the sketch after this list.
2. Context awareness – maintained by the Load Balancer, whose task is to analyze the stream of requests and manage the negotiations with the service stratum. It is also equipped with a prediction module which forecasts user behavior.
3. User awareness – the services are personalized, taking into account the user’s preferences, the typical volumes of associated data and a recommendation scheme.
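A minimal sketch of this selection rule is shown below: the waiting time for each candidate OL-Service is estimated as the predicted computation time plus the transfer time (data volume over the reservable bandwidth), and the service minimizing this sum is chosen. All names and the linear transfer model are assumptions for illustration, not the Online Lab implementation.

import java.util.Comparator;
import java.util.List;

// Illustrative waiting-time-minimizing choice of an OL-Service:
// waitingTime = predictedComputationTime(query) + dataVolume / reservedBandwidth.
public class OnlineLabScheduler {

    record OlService(String id, double predictedComputeSec, double bandwidthMBps) {}

    static double waitingTime(OlService s, double dataVolumeMB) {
        return s.predictedComputeSec() + dataVolumeMB / s.bandwidthMBps();
    }

    static OlService choose(List<OlService> candidates, double dataVolumeMB) {
        return candidates.stream()
                .min(Comparator.comparingDouble(s -> waitingTime(s, dataVolumeMB)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<OlService> candidates = List.of(
                new OlService("OL-1", 4.0, 10.0),   // fast CPU, slow link
                new OlService("OL-2", 6.0, 100.0)); // slower CPU, fast link
        // For a 300 MB task: OL-1 needs 34 s, OL-2 needs 9 s, so OL-2 wins.
        System.out.println("chosen: " + choose(candidates, 300.0).id());
    }
}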
Taking the above into account, the general task of Online Lab is to compose a computational service, given that the request stream from the users is known or predicted. All the components of the system (the OL-Core and the available OL-Services) are registered and have unique IDs. Once the structure of this complex service that is optimal with respect to the QoE (including the set of OL-Services and the parameters of the communication links between them) has been decided by the Load Balancer, the OL-Core reserves (via the SCF functions, as described in Sec. 2) the communication links connecting all the Online Lab services. This guarantees delivery of the required QoS parameters. In the second phase, negotiation in the service stratum takes place to establish and confirm pairwise communication between the services. After that, the computational tasks are scheduled and assigned to the appropriate OL-Services by the OL-Core.
An additional unique feature of Online Lab is the possibility of implementing dedicated computational services which may be made available to other applications. An example of this scenario is sketched in the following section, which describes the use of an Online Lab service by the SmartFit application.
The described application provides a context-, content- and user-aware adaptive control system supporting the user in real time.
4 Conclusions
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Alemdar, H., Ersoy, C.: Wireless Sensor Networks for healthcare: A Survey. Computer
Networks 54, 2688–2710 (2010)
2. Bęben, A., et al.: Architecture of content aware networks in the IIP system. Przegląd
Telekomunikacyjny, Wiadomości Telekomunikacyjne 84(8/9), 955–963 (2011) (in Polish)
3. Blake, S., et al.: An architecture for differentiated services. RFC2475 (1998)
4. Burakowski, W., et al.: The Future Internet Engineering Project in Poland: Goals and
Achievements. In: Future Internet Poland Conference, Poznan, Poland (October 2011)
5. Cheng, T.M., et al.: Nonlinear Modeling and Control of Human Heart Rate Response
During Exercise With Various Work Load Intensities. IEEE Trans. on Biomedical
Engineering 55, 2499–2508 (2005)
6. Cobelli, C., et al.: Diabetes: Models, Signals, and Control. IEEE Reviews In Biomedical
Engineering 2, 54–96 (2009)
7. Mosharaf Kabir Chowdhury, N.M., Boutaba, R.: A survey of network virtualization.
Computer Networks: The International Journal of Computer and Telecommunications
Networking 54(5), 862–876 (2010)
8. Grzech, A., Rygielski, P., Świątek, P.: Translations of Service Level Agreement in
Systems Based on Service-Oriented Architectures. Cyb. and Systems 41, 610–627 (2010)
9. ITU-T Rec. Y.2012: Functional requirements and architecture of next generation networks
10. Jacobson, V., Smetters, D.K., Thornton, J.D.: Networking Named Content. CACM 55(1),
117–124 (2012)
11. Man, C.D., et al.: Physical activity into the meal glucose-insulin model of type 1 diabetes:
in silico studies. Journal of Diabetes Science and Technology 3(1), 56–67 (2009)
12. Noguez, J., Sucar, L.E.: A Semi-open Learning Environment for Virtual Laboratories. In:
Gelbukh, A., de Albornoz, Á., Terashima-Marín, H. (eds.) MICAI 2005. LNCS (LNAI),
vol. 3789, pp. 1185–1194. Springer, Heidelberg (2005)
13. Pautasso, C., Bausch, W., Alonso, G.: Autonomic Computing for Virtual Laboratories. In:
Kohlas, J., Meyer, B., Schiper, A. (eds.) Dependable Systems: Software, Computing,
Networks. LNCS, vol. 4028, pp. 211–230. Springer, Heidelberg (2006)
14. Świątek, J., Brzostowski, K., Tomczak, J.: Computer aided physician interview for remote
control system of diabetes therapy. In: Advances in Analysis and Decision-Making for
Complex and Uncertain Systems, Baden-Baden, Germany, August 1-5, vol. 1, pp. 8–13
(2011)
15. Tarasiuk, H., et al.: Provision of End-to-End QoS in Heterogeneous Multi-Domain
Networks. Annals of Telecommunications 63(11) (2008)
16. Tarasiuk, H., et al.: Performance Evaluation of Signaling in the IP QoS System. Journal of
Telecommunications and Information Technology 3, 12–20 (2011)
Towards a Narrative-Aware Design Framework
for Smart Urban Environments
Keywords: Smart cities, sensor data analysis, social data mining, smart urban
services, Internet of things, narrative, storytelling, navigation, mobility, sensors,
web 2.0.
1 Introduction
The Internet of today enables users to access an unprecedented amount of information
at any time and from any device. In the future, an emerging Internet of Things (IoT)
will connect everyday objects (such as toothbrushes, shoes or car keys), which will
become information storehouses of their own, capable of collecting and transmitting
real-time data to their surrounding environment (people, places and things). The
resulting myriad of smart interconnected objects and places will make up the
intelligent urban landscape of the future.
Urban environments offer unique opportunities for developing and testing new applications and platforms in line with the vision of the Internet of Things. European IoT platforms have already begun emerging over the last few years in line with the future Internet momentum. Large smart city infrastructures have now been set up in Europe (e.g. SmartSantander1, Spain) and worldwide (e.g. Songdo2, South Korea).
The growing need for and interest in smart city innovation was highlighted by the Commission in its report “Internet of Things in 2020: A roadmap for the future”3, in which it identified key topics such as “Smart living” as part of what it termed a “mastered continuum of people, computers and things”. There is a growing number of innovative social and people-centric application areas, including social networking, smart metering, smart data collection, city information models and so on [1].
Although these application areas provide an excellent starting point to test services
and infrastructure, most offer merely quantitative solutions for a world that is
primarily qualitative (particularly from the human perspective). For the most part,
they collect and store data and information from technical devices and sensors. With
the growth of web 2.0 and social media, however, a wide array of human experience,
information and know-how is being shared and distributed across networks –
information that has yet to be properly harvested for the creation of smarter living
environments.
The present chapter proposes a new design framework for the smart city, one which
considers quantitative sensor-generated data as well as qualitative human-generated data
through participatory web platforms, in the future Internet context. In this manner,
storytelling and “listening” by networked objects is enhanced and vetted by human
storytelling, thereby getting us that much closer to true human-machine collaboration.
This chapter begins with an overview and gap analysis of the main developments
in urban IoT applications with a focus on resident mobility (Section 2). It then goes
on to highlight the need for a new kind of holistic urban storytelling (Section 3). The
section that follows describes a new design approach for smart urban environments
that is both sensor-driven and socially aware (Section 4). The concept is then applied
to a hypothetical urban mobility scenario (Section 5).
1 http://www.smartsantander.eu/
2 http://www.songdo.com/
3 ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/enet/internet-of-things-in-2020-ec-eposs-workshop-report-2008-v3_en.pdf
guide users through a city. Examples include European projects such as SMILE4, which deals with sustainable and cost-efficient mobility, and i-Travel5, which provides personalized, context-aware “virtual travel assistant” services in urban settings.
The majority of existing ICT urban mobility applications have focused on:
• Sensor-aware transport: This area deals with effective traffic management for a
city’s public transportation system. Sensors (e.g. in combination with IoT
platforms) capture specific measurements (such as CO2 emissions) [6]. Use cases
have focused on managing city traffic, eco-driving and emergency handling [9].
• Urban travel planners: Planners are generated on the basis of the current location
of users, their preferences and mobile device settings. Semantic web tools and
technologies such as Global Navigation Satellite Systems (GNSS) and
Geographical Information Systems (GIS) are used to improve context and geo-
location awareness, respectively [12]. Current mobile route planning tools are
typically geared towards points of interest for tourists (sightseeing, hotels,
restaurants, and packaged tour routes) [14], [17].
4 SMILE: Towards Sustainable Mobility for people in urban areas, http://www.ist-world.org/ProjectDetails.aspx?ProjectId=258180ce08fd44cfa050fc554c80e828
5 i-Travel: The connected traveler in the city, region and world of tomorrow, http://cordis.europa.eu/fetch?CALLER=FP7_PROJ_EN&ACTION=D&RCN=85751
• Social-wise urban mobility guidance: With the emergence of web 2.0, some urban
mobility applications have sought to leverage the opinions of users on urban points
of interest (POIs). Collaborative filtering approaches are combined with location-
based partitioning and user comments [15]. More recently, recommendations for
POIs have used location-based social networks (e.g. Foursquare) and included user
ratings, proximity and similarities [16], [13]. Services such as GeoLife analyze the
GPS trajectories of users off-line to provide personalized recommendations [18].
Table 1 summarizes current practices in the three areas identified above. It identifies
some of the more pressing needs and priorities for each area. It would seem evident
that little effort has been made to date to exploit the synergies between technical data
and social data streams.
It is important to note that both the IoT industry and the mobile industry are
continuing to expand, though not always in the same direction. Therefore, much is to
be gained by unifying their vision and creating a more holistic understanding of user
needs and requirements. The convergence of machine and human perspectives will
serve to enrich and facilitate daily living in the urban context. The next sections
propose a new design for urban mobility based on this principle.
The needs and requirements summarized in Table 1 suggest that an approach merging real-time social and sensor data streams would be beneficial, since it can improve citizens’ engagement. This expectation is supported by the fact that, to the authors’ knowledge, there are no universal applications that go beyond typical resident navigation or mobility assistance.
The daily social activities of residents are being broadcast in real-time by a growing
number of mobile devices. Urban residents use mobile devices to manage their
professional and personal lives, their interaction with others, and their interaction with
their environment. Not surprisingly, the use of web 2.0 applications and social media
has proliferated on mobile networks. In this context, mobile users act as storytellers
and listeners, exchanging experiences over the Internet. This so-called “urban social pulse” can be gleaned through applications like Flickr, Facebook and Twitter, but also through location-based services such as Foursquare and Gowalla. It contributes to a larger urban story that can be heard by authorities and residents. For instance, residents might report overcrowding and excessive heat (e.g. at a concert), and this might serve to override physical data such as room temperature (which may not be high enough to cause concern). Future concerts planned in that area could be reconsidered. In this manner, a future internet would augment sensor-generated data into usable stories that might refine a resident-driven urban narrative.
Figure 1 is a graphical representation of the two sides of an urban story (sensor data
and resident data) and how these fit within the smart city context.
In the future internet, sensor and socially-aware storytelling could provide vital
support and guidance to a city’s actors, such as residents, visitors and authorities.
Uses and benefits for these actors are highlighted below:
• Residents typically move around their city on a daily basis using different modes of
transport (e.g. personal vehicles, public transport, walking, running, biking) and for
various purposes at different times of the day (e.g. work, caregiving, errands,
leisure etc.). Narrative-aware services should place emphasis on the collection of
information that generates real-time adaptive recommendations for residents. This
can improve navigation within the city and can assist in the selection of the most
appropriate routes based on various parameters, e.g. distance, CO2 emissions,
congestion, noise levels, parking, public transport routes and schedules. Resident
input (stories) over social media can provide invaluable qualitative information to
complement sensor-generated data.
• Visitors are particularly interested in city navigation, POIs, queues and crowds.
Narrative-aware services could offer recommendations on points of interest or city
walks, based on proximity, popularity, tourist opinions, weather, opening times,
congestion and so on. Both residents and visitors could provide observations that
would be used to complement any technical measurements taken by sensors and
location-based technologies.
• Authorities (such as the police, fire department, or city council) could exploit
narrative-aware urban services and applications to enable monitoring of the city’s
major “variables” (e.g. noise, temperature, crime, CO2 levels) through global and
user-centered visualization interfaces. Such interfaces could also enable detection
of vulnerable geographical areas which indicate both over-threshold sensor
measurements and any alerts broadcast by the residents on the move.
The flexibility and multi-scenario orientation of the proposed narrative approach differentiate it from existing approaches, which are more vertical and focus on separate angles, i.e. either data management or the usage side. End-user involvement is expected to follow from the appealing storytelling emphasis, which should particularly attract mobile phone users in a smart city context.
Capturing and reading urban narratives involves several complex steps and processes, and cuts across various service layers (infrastructure, applications, content, usage).
The design of such a framework must be cognizant of this complexity. Figure 2 is an
illustration of the urban narrative-aware design framework, with functionality at three
different levels:
• Data and stories: All tasks related to the collection of data and socially-generated
stories are carried out at this level. Differentiated techniques are required for
storing sensor data and social streams into individual “DataStores” or data
repositories. Targeted data storage and scalable indexing schemes should be used
to cope with the ever-growing number of sensor and social measurements. Moreover, specific pre-processing of the data and stories is required to provide noise-free DataStores.
• Analysis and Processing. DataStore integration, refinement and analysis are key
tasks at this level. In particular, the first core task is the integration of sensor data
and human stories, with the objective of constructing new narrative-aware
DataStores, i.e. “Narrative stores” or “NarraStores”. NarraStores host the various
digital narratives of an urban context. This integration is an ongoing task which can
benefit from regular refinement (i.e. calibration) and analysis, due to the emerging
and unpredictable nature of urban sensor and social data streams. DataStore
calibration involves processes which will validate and fine-tune information from
the two different data sources (sensors and social) and will revise either the content
of the DataStore or the data collection process itself. DataStore analysis can
involve a wide array of methodologies and algorithms from the fields of data
mining and recommender systems. This leads to the generation of quantitative and
qualitative NarraStores. The continuous calibration and analysis processes
optimize the content of NarraStores by summarizing, clustering and packaging
diverse narratives.
• Services and applications. NarraStores are the foundation upon which a wide
variety of services and applications can be built. The proposed design in Fig. 2
puts forward an initial number of key urban services (assistance, alerting, and
planning) for the main urban actors (as outlined in 3.4). Such services leverage
the Narrative stores at their disposal in order to offer contextualized mobile and
Web applications for short-term (e.g. alerting) or long-term tasks (e.g. business
planning).
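As an illustration of the integration step that produces NarraStores, the sketch below pairs each sensor reading with the social reports from the same area and time window into a single narrative record. The record layout and the matching rule are assumptions, since the chapter defines NarraStores only conceptually.

import java.util.ArrayList;
import java.util.List;

// Illustrative fusion of a sensor DataStore and a social DataStore into
// narrative records ("NarraStore" entries); schema and matching rule are
// assumptions, not the chapter's specification.
public class NarraStoreBuilder {

    record SensorReading(String area, long minute, String metric, double value) {}
    record SocialPost(String area, long minute, String text) {}
    record Narrative(String area, long minute, String metric, double value,
                     List<String> stories) {}

    // Attach to each sensor reading the social posts from the same area and minute.
    static List<Narrative> fuse(List<SensorReading> sensors, List<SocialPost> posts) {
        List<Narrative> out = new ArrayList<>();
        for (SensorReading r : sensors) {
            List<String> stories = posts.stream()
                    .filter(p -> p.area().equals(r.area()) && p.minute() == r.minute())
                    .map(SocialPost::text)
                    .toList();
            out.add(new Narrative(r.area(), r.minute(), r.metric(), r.value(), stories));
        }
        return out;
    }

    public static void main(String[] args) {
        var sensors = List.of(new SensorReading("main-square", 42, "temperature", 24.5));
        var posts = List.of(new SocialPost("main-square", 42, "way too crowded and hot here!"));
        // The human report can override the unremarkable 24.5 degree sensor value.
        fuse(sensors, posts).forEach(System.out::println);
    }
}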
buses etc.). The fictitious city in this case is known as “SmartVille”. SmartVille also has in place a dedicated City Department, whose objective is to support narrative-aware services under the proposed design framework.
SmartVille hosts and supports scalable DataStores and the resulting NarraStores, which are repositories stored in large-scale data centers. As described in the previous section, these NarraStores include both SmartVille’s physical data (generated by sensors and IoT installations) and its social data streams (generated by residents on the move). The NarraStores collect social data in an anonymous manner, such that no personal or private data is traceable. These NarraStores are then offered by the city authorities either for public sector use or to private enterprises, for the development of urban applications and services.
The city's NarraStores are made available to these public and private clients
through a cloud infrastructure, as follows:
• Resident cloud services: Resident-specific information regarding daily urban
living, public places and things with common safety concerns (in coordination with
city police and other related departments). Costs are sponsored by the City and
services are therefore offered to residents on a discounted basis.
• Emergency cloud services: Emergency information and services available to all
residents, visitors and the wider general public over the Internet and mobile
networks (free of charge).
Events like festivals and concerts are quite popular in SmartVille. For instance, the city hosts an annual 3-day jazz festival, attracting many residents and visitors who use the services highlighted above: City-Watch, City-Park, City-Nav and City-Pulse. During the jazz festival, there are scheduled shows and concerts on various stages around the city, e.g. in the SmartVille stadium and at two major city squares. The locations are equipped with IoT sensor installations and are not far from main traffic routes.
In this context, the proposed narrative-aware services can support the jazz festival in all
of its phases and create a more pleasant and engaging experience for the festival goer.
The need for each service at different phases of event planning is highlighted by
the number of checkmarks in Table 3. The analysis is based on the following:
• City-Park recommends parking spots on the basis of relevant data tracked by sensors; recommendations are delivered to festival attendees arriving at or departing from festival sites, on the basis of location and tagging reports (captured by their mobile LBSN);
• City-Watch issues alerts when both sensors and social bursts report emergency situations within the festival environs; these alerts are of importance prior to, during and after the festival, since they can monitor the entire span from sensor to social threads (e.g. from temperature levels to overcrowding, respectively);
• City-Nav delivers navigation recommendations (primarily) to visitors for an eco-aware and safe arrival at and departure from the festival sites;
• City-Pulse stores and monitors narratives during the entire festival through the fusion of sensors and social stories. As a result of this continuous processing, the narratives offer valuable information to authorities. Narratives are triaged and ranked such that important conclusions can be reached during and after the event. For example, authorities might reorganize parking areas and re-program sensor installations in line with user demand for parking. It might also be possible to verify whether certain sensors are malfunctioning (e.g. sensor oversensitivity), as sensor data can be refined or even contradicted by social storytelling.
The design framework remains abstract in its main design principles in order to support a wide range of potential uses and scenarios. Its simplicity can support different applications and services, which might range from event management and scheduling to new policy making.
6 Conclusion
Urban environments offer a fertile ground for developing and testing new smart
applications in line with the Internet of Things and the future Internet vision. The
narrative-aware design framework proposed herein exploits sensor and social data
collection in a holistic manner through its design integration, analysis and calibration
processes. The design includes qualitative data stores (and not merely quantitative ones)
which embed both machine (sensors) and human (social) measurements. Alerting,
assistance and planning are considered vital services in a city context, as highlighted in
the event-based scenario. Narrative-aware design can be of tremendous benefit to
primary future Internet city actors (residents, visitors and authorities) for a wide range
of services and requirements (e.g. time-critical tasks, long-term analysis, processing rates, etc.).
Such a holistic approach is invaluable for the development of the smart, context-aware
and user-centric services that lie at the very heart of a future Internet.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Atzori, L., Iera, A., Morabito, G.: The Internet of Things: A survey. Computer Networks 54(15), 2787–2805 (2010)
2. Breunig, M., Baer, W.: Database Support for Mobile Route Planning Systems. Computers,
Environment and Urban Systems 28(6), 595–610 (2004)
3. Castillo, L., et al.: SAMAP: An User-Oriented Adaptive System for Planning Tourist
Visits. Expert Systems with Applications 34(2), 1318–1332 (2008)
4. Chiu, D.K.W., Leung, H.F.: Towards Ubiquitous Tourist Service Coordination and
Integration: a Multi-Agent and Semantic Web Approach. In: International Conference on
Electronic Commerce, pp. 574–581 (2005)
5. Driver, C., Clarke, S.: An Application Framework for Mobile, Context-Aware Trails.
Pervasive and Mobile Computing 4(5), 719–736 (2008)
6. Giannopoulos, G.A.: The application of information and communication technologies in transport. European Journal of Operational Research 152 (2004)
[Guinard10] Guinard, D., Trifa, V., Wilde, E.: A resource oriented architecture for the Web of Things. In: Internet of Things (IOT) Conference (December 2010)
7. Guinard, D., Trifa, V., et al.: Interacting with the SOA-Based Internet of Things:
Discovery, Query, Selection, and On-Demand Provisioning of Web Services. IEEE
Transactions on Services Computing 3(3), 223–235 (2010)
8. Kenteris, M., Gavalas, D., Economou, D.: An Innovative Electronic Tourist Guide
Application. Pervasive and Ubiquitous Computing 13(2), 103–118 (2009)
9. Hernández-Muñoz, J.M., Bernat Vercher, J., Muñoz, L., Galache, J.A., Presser, M.,
Hernández Gómez, L.A., Pettersson, J.: Smart Cities at the Forefront of the Future
Internet. In: Domingue, J., et al. (eds.) FIA 2011. LNCS, vol. 6656, pp. 447–462. Springer,
Heidelberg (2011)
10. Niaraki, A.B., Kim, K.: Ontology Based Personalized Route Planning System Using a
Multi-Criteria Decision Making Approach. Expert Systems with Applications 36(2p1),
2250–2259 (2009)
11. Prete, L.D., Capra, L.: diffeRS: A Mobile Recommender Service. In: Proceedings of the
2010 Eleventh International Conference on Mobile Data Management (MDM 2010), pp.
21–26. IEEE Computer Society, Washington (2010)
12. Sadoun, B., Al-Bayari, O.: Location Based Services Using Geographical Information
Systems. Computer Communications 30(16), 3154–3160 (2007)
13. Sandholm, T., Ung, H.: Real-time Location-aware Collaborative Filtering of Web Content.
In: Proceedings of the 2011 Workshop on Context-awareness in Retrieval and
Recommendation (CaRR 2011), pp. 14–18. ACM, New York (2011)
14. Scherp, A., Boll, S.: Generic Support for Personalized Mobile Multimedia Tourist
Applications. In: ACM International Conference on Multimedia, pp. 178–179 (2004)
15. Takeuchi, Y., Sugimoto, M.: CityVoyager: An Outdoor Recommendation System Based
on User Location History. In: Ma, J., Jin, H., Yang, L.T., Tsai, J.J.-P. (eds.) UIC 2006.
LNCS, vol. 4159, pp. 625–636. Springer, Heidelberg (2006)
16. Ye, M., Yin, P., Lee, W.-C.: Location recommendation for location-based social networks.
In: Proceedings of the 18th SIGSPATIAL International Conference on Advances in
Geographic Information Systems (GIS 2010), pp. 458–461. ACM, New York (2010)
17. Yu, C.C., Chang, H.P.: Personalized Location-Based Recommendation Services for Tour
Planning in Mobile Tourism Applications. In: Di Noia, T., Buccafurri, F. (eds.) EC-Web
2009. LNCS, vol. 5692, pp. 38–49. Springer, Heidelberg (2009)
18. Zheng, Y., Zhang, L., Xie, X.: Recommending friends and locations based on individual
location history. ACM Trans. Web 5(1) (2011)
Urban Planning and Smart Cities:
Interrelations and Reciprocities
Abstract. Smart cities are emerging fast, and they introduce new practices and services which strongly impact policy making and planning, while co-existing with urban facilities. It is now necessary to understand the smart city’s contribution to overall urban planning and, vice versa, to recognize urban planning’s offerings to the smart city context. This chapter highlights and measures the interrelation between the smart city and urban planning and identifies the meeting points between them. Urban planning dimensions are drawn from the European Regional Cohesion Policy and are associated with the smart city’s architecture layers.
1 Introduction
Regional planning concerns the context and organization of human activities in a determined space, taking into account the available natural resources and financial requirements. Urban planning particularizes regional planning in a residential area. Both regional and urban planning are policy frameworks that reflect the Government’s will for sustainable land use and development in a specific space over a limited time period [6], [9], [12], [14]. Planning takes into account various parameters such as environmental capacity, population, financial cohesion, transportation, and other public service networks.
Smart cities appeared in the late 1980s as a means to visualize the urban context and have evolved fast since then. Today, they enhance digital content and services in urban areas, incorporate pervasive computing and face environmental challenges. Various international cases present alternative approaches to the smart city, while capitalizing on Information and Communication Technologies (ICT) for multiple purposes, which vary from simple e-service delivery to sophisticated data collection for municipal decision making. South Korean smart cities, for instance, use pervasive
computing to measure various environmental indices [15], which are used by the local
Government to carry out interventions for the improvement of life in the city (e.g. for
traffic improvement).
This chapter is inspired by the co-existence of the smart city and the urban space, and seeks to investigate the relation between the smart city and urban planning in terms of mutual support and benefit. In order for this relation to be identified, an analysis of these terms and of their structure is performed, and the points of mutual interest are recognized. Moreover, this chapter addresses the Future Internet application areas, which comprise user areas and communities whose innovation capabilities the Future Internet can boost. In this context, various smart city infrastructures and applications can contribute to urban planning data collection and to decision making by the planning stakeholder groups.
In the following background section, the notions of regional and urban planning are described and the planning framework is outlined on the basis of European practice. Moreover, the smart city context is clarified, along with a classification of various metropolitan ICT-based environments, which are further evaluated according to a generic architecture. Section 3 identifies and summarizes the interrelations between the urban planning and smart city contexts. The final Section 4 presents the conclusions of this chapter and some future implications.
In this context, regional planning [5], [11] seeks to protect the environment and to secure natural and cultural resources, while highlighting the competitive advantages of different areas. Moreover, it strengthens continuous and balanced national development by taking into account the broader supranational surroundings. Finally, it focuses on national financial and social cohesion by singling out particular geographic areas with lower growth rates.
As highlighted in Fig. 1, urban planning particularizes regional planning in cities and residential areas; it is composed and managed by the local Governments [5], and it is realized via three core plans (Fig. 1):
for building constructions, and of the authorization of the monitoring and intervention procedures. Campbell [6] described the triangle of conflicts (property, development and resource) that exists between economic development, environmental protection, equity and social justice, and which urban planning aims to balance.
Fig. 1. The hierarchical organization diagram of the regional and urban planning framework
According to [8], the term smart city is not used in a holistic way to describe a city with certain attributes, but is used for various aspects, which range from the smart city as an IT district to a smart city defined by the education (or smartness) of its inhabitants. In this context, the smart city is analyzed along intelligence dimensions [8], [13], which concern “smart people”, “smart environment”, “smart economy”, “smart governance”, “smart mobility” and, in total, “smart living”.
The term was originally encountered in the cases of Brisbane (Australia) and Blacksburg (USA) [4], where ICT supported social participation, the closing of the digital divide, and accessibility to public information and services. The smart city later evolved into (a) an urban space for business opportunities, an approach followed by the network of Malta, Dubai and Kochi (India) (www.smartcity.ae); and (b) ubiquitous technologies installed across the city, integrated into everyday objects and activities.
The notion of the smart city has also been approached as part of the broader term Digital City by [2], where a generic multi-tier common architecture for digital cities was introduced and the smart city was assigned to the software and services layer. This generic architecture (Fig. 2) contains the following layers:
• User layer, which concerns all e-service end-users and the stakeholders of a smart city. This layer appears both at the top and at the bottom of the generic architecture because it concerns both the local stakeholders (who supervise the smart city, and design and offer e-services) and the end-users (who “consume” the smart city’s services and participate in dialoguing and in decision making).
• Service layer, which incorporates all the particular e-services offered by the smart city.
• Infrastructure layer, which contains the networks, information systems and other facilities that contribute to e-service deployment.
• Data layer, which presents all the information that is required, produced and collected in the smart city.
This generic architecture can describe all the different approaches that support the smart city context, which typically include:
• Web or Virtual Cities, i.e. the America-On-Line cities, the digital city of
Kyoto (Japan) and the digital city of Amsterdam: they concern web
environments that offer local information, chatting and meeting rooms, and
city’s virtual simulation.
• Knowledge Based Cities, i.e. the Copenhagen Base and the Craigmillar
Community Information Service (Edinburgh, Scotland): they are public
databases of common interest that are updated via crowd-sourcing, and
accompanied by the appropriate software management mechanisms for
public access.
• Broadband City/Broadband Metropolis, i.e. Seoul, Beijing, Antwerp, Geneva, and Amsterdam: they are cities where fiber optic backbones, called “Metropolitan Area Networks” (MANs), are installed, enabling the interconnection of households and local enterprises to ultra-high-speed networks.
• Mobile or Ambient Cities, i.e. New York and San Francisco: these cities installed wireless broadband networks, accessible free of charge by the inhabitants.
• Digital Cities, i.e. Hull (UK), Cape Town and Trikala (Greece): extensions of the previous resources to “mesh” metropolitan environments that interconnect virtual and physical spaces in order to address local challenges.
• Smart or Intelligent Cities, i.e. Brisbane (Australia), Blacksburg (USA), Malta, Dubai, Kochi (India), Helsinki, Barcelona, Austin and others of the smart-cities networks (http://smart-cities.eu, http://www.smartcities.info): they are particular approaches that encourage participation and deliberation, while attracting investments from the private sector with cost-effective ICT platforms. Today, smart cities evolve with mesh broadband networks that offer e-services to the entire urban space. Various ICT vendors [10] have implemented and offer commercial solutions for smart cities.
• Ubiquitous Cities, i.e. New Songdo (South Korea), Manhattan Harbour (Kentucky, USA), Masdar (Abu Dhabi) and Osaka (Japan): they arose from the minimization of broadband costs, the commercialization of complex information systems, the deployment of cloud services, and ubiquitous computing. They offer e-services from everywhere to anyone across the city via pervasive computing technologies.
• Eco-cities, i.e. Dongtan and Tianjin (China), Masdar (Abu Dhabi): they capitalize on ICT for sustainable growth and environmental protection. Indicative applications concern the contribution of ICT sensors to environmental measurement and to the evaluation of buildings’ energy capacity; smart grid deployment for energy production and delivery in the city; and the encouragement of smart solutions for renewable energy production.
The above smart city classification can be evaluated for its sophistication (Table 1), according to the matching of each approach to the generic multi-tier architecture of Fig. 2. The values of the table are self-calculated according to empirical findings [2], and they represent the contribution of each architecture layer to the particular smart city approach. The rows of Table 1 concern the architecture layers, while the columns refer to the abovementioned smart city approaches. The value entries are based on a Likert scale (values from 1 to 5) [7] and reflect how important each layer is considered to be for each particular approach.
These estimated values can support researchers and supervisors in selecting the appropriate approach for their city [3] and in designing and predicting their city’s future “character”.
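To make this evaluation concrete, the sketch below computes an overall sophistication score for each approach as the sum of its per-layer Likert values. The numeric values are placeholders rather than the actual Table 1 entries, which are not reproduced here.

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative scoring of smart city approaches: each approach gets a Likert
// value (1..5) per architecture layer; the sum is a simple sophistication
// score. The numbers below are placeholders, not the actual Table 1 entries.
public class SophisticationScore {

    public static void main(String[] args) {
        String[] layers = {"User", "Service", "Infrastructure", "Data"};

        Map<String, int[]> approaches = new LinkedHashMap<>();
        approaches.put("Web/Virtual City", new int[]{3, 2, 1, 2});
        approaches.put("Broadband City",   new int[]{2, 2, 5, 1});
        approaches.put("Ubiquitous City",  new int[]{4, 5, 5, 4});

        approaches.forEach((name, values) -> {
            int total = 0;
            StringBuilder detail = new StringBuilder();
            for (int i = 0; i < layers.length; i++) {
                total += values[i];
                detail.append(layers[i]).append('=').append(values[i]).append(' ');
            }
            System.out.println(name + ": " + detail + "-> total " + total);
        });
    }
}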
Based on the above attributes, various e-service portfolios can be offered in a modern smart city [4].
Moreover, the smart city’s infrastructures have to conform to planning rules and must not burden the local environment or the local protected areas, while planning has to develop smart cities uniformly across the regions for coherent development. In this context, the Infrastructure layer meets all planning dimensions.
Concerning the Service layer, the environmental and intelligent transportation services align directly with the Quality and Viability Timeline planning dimensions. Moreover, the e-Democracy services align with the Capacity dimension, since public consultations and open dialogue can influence planning and express local requirements; planning, on the other hand, aims to establish resource capitalization for local development that meets local needs. Finally, the e-Business portfolio aligns with the planning dimensions of Capacity and of History and Landscape, since tourist guides showcase and can protect traditional settlements, archaeological areas, forests and parks, while business installation services oblige enterprises to install themselves in business centers and in areas that do not compromise sustainability.
Finally, the smart city’s Data layer must be kept up to date with accurate planning information, in order to deliver efficient and effective e-services to the local community. This one-way relation between the smart city and urban planning is displayed in Fig. 3 and shows that the development of a smart city has to align with the planning dimensions.
A vice versa relation exists too (Fig. 4), via which urban planning has to account for the existence of a smart city: the environmental data collected from ubiquitous sensors can contribute to the Quality and the History and Landscape dimensions, and useful directions can be derived for land and residential uses.
Furthermore, the smart city Infrastructure layer consists of significant ICT facilities (e.g. broadband networks, computer rooms and inductive intelligent transportation loops), which influence the Viability Timeline and Capacity planning dimensions.
All these findings result in a bidirectional relation between planning and the smart city (Fig. 3), (Fig. 4), which shows that the smart city aligns with the urban planning dimensions, while urban planning has to capitalize on and respect the existence of a smart city. Furthermore, an important outcome concerns the rate of influence between each urban planning dimension and each smart city layer. According to the previous description, this interrelation can be measured via the meeting points between dimensions and layers (Table 2).
The rows in Table 2 represent the smart city architecture layers, and the columns the urban planning dimensions. The calculated entries in the table cells reflect the meeting points discussed previously. The Service layer, for instance, meets all four urban planning dimensions; three kinds of e-services address the Viability Timeline dimension, meaning three meeting points (the value 3) for this cell, and so on. The Users layer meets all urban planning dimensions, since stakeholders can participate in planning, while planning affects stakeholders. The Infrastructure layer concerns resources and therefore the Capacity dimension of urban planning, while the Data layer (e.g. environmental data collection via ubiquitous sensors) contributes to, and must be accounted for by, the Quality and Viability Timeline planning dimensions. On the other hand, the Viability Timeline and Quality dimensions are the ones most affected by the existence of a smart city.
Table 2. Measuring the interrelation between planning dimensions and smart city’s layers
4 Conclusions
Smart cities are “booming”, and various important cases can be found worldwide, which can be classified into different approaches and evaluated according to their sophistication. All the alternative approaches deliver emerging types of services to the local communities with the use of physical and virtual resources. This chapter considered this co-existence of the smart city and the urban space and, in this context, investigated the interrelation between the smart city and urban planning.
Urban planning supports sustainable local growth; it consists of four dimensions that were recognized according to the European Regional Policy Framework, and their context was described. A smart city, on the other hand, can follow a multi-tier architecture, which can be considered generic for all the particular approaches. The analysis of the planning dimensions and of the smart city architecture layers shows various meeting points, via which these two notions interact. More specifically, the smart city’s Service layer aligns and contributes to all the urban planning dimensions, and various e-Services support sustainable local growth. On the other hand, the planning dimensions can be affected by the smart city’s stakeholders via participatory policy making, while the smart city’s infrastructure has to be recognized and capitalized on.
This chapter attempted to interrelate the physical and the digital space of a smart city
with tangible measurement means in order to support Future Internet application
areas. Related efforts have been performed in the South Korean ubiquitous cities,
where the smart city has moved towards environmental protection. The meeting
points between the smart city's layers and the planning dimensions derived in this
chapter can provide Future Internet research with details concerning where the
developed applications and the deployed infrastructure have to account for the
physical space and the environment.
General suggestions that require further investigation are that the smart city has to be
accounted for in regional and urban planning frameworks, meaning that ICT resources
are capitalized on for information retrieval and analysis for policy making, while the
environmental footprint of a smart city has to be measured and evaluated during
regional and urban planning.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Ahern, J.: Planning for an extensive open space system: linking landscape structure and
function. Landscape Urban Plann. 21, 131–145 (1991)
2. Anthopoulos, L., Tsoukalas, I.A.: The implementation model of a Digital City. The case
study of the first Digital City in Greece: e-Trikala. Journal of e-Government 2(2) (2006)
3. Anthopoulos, L., Gerogiannis, V., Fitsilis, P.: Supporting the solution selection for a
digital city with a fuzzy-based approach. In: International Conference on Knowledge
Management and Information Sharing, KMIS 2011 (2011)
4. Anthopoulos, L., Tougoutzoglou, T.: A Viability Model for Digital Cities: Economic and
Acceptability Factors. In: Reddick, C., Aikins, S. (eds.) Web 2.0 Technologies and
Democratic Governance: Political, Policy and Management Implications. Springer,
Heidelberg (forthcoming, 2012)
5. Aravantinos, A.: Urban planning for sustainable growth (in Greek). Symmetria Publishing
(1997)
6. Campbell, S.: Green Cities, Growing Cities, Just Cities? Urban Planning and the
Contradictions of Sustainable Development. Journal of the American Planning Association
(1996)
7. European Commission: Regional Policy: Bridging the prosperity gap (2012),
http://europa.eu/pol/reg/index_en.htm
8. Giffinger, R., Fertner, C., Kramar, H., Meijers, E., Pichler-Milanovic, N.: Smart Cities:
Ranking of European Medium-Sized Cities (2007)
9. Handy, S.L., Boarnet, M.G., Ewing, R., Killingsworth, R.E.: How the Built Environment
Affects Physical Activity: Views from Urban Planning. American Journal of Preventive
Medicine 23(2S) (2002)
10. IBM: How Smart is your city? Helping cities measure progress (2009),
http://www-935.ibm.com/services/us/gbs/bus/html/ibv-smarter-
cities-assessment.html
11. van Kamp, I., Leidelmeijer, K., Marsman, G.: Urban environmental quality and human
well-being: Towards a conceptual framework and demarcation of concepts; a literature
study. Landscape and Urban Planning 65, 5–18 (2003)
12. Kiernan, M.J.: Ideology, politics, and planning: reflections on the theory and practice of
urban planning. Environment and Planning B: Planning and Design 10(1), 71–87 (1983)
13. Komninos, N.: Intelligent Cities: Innovation, Knowledge Systems and Digital Spaces, 1st
edn. Routledge, London (2002)
14. Koutsopoulos, K., Siolas, A.: Urban Geography – The European City (in Greek). Greek
National Technical University publishing (1998)
15. Lee, S., Oh, K., Jung, S.: The Carrying Capacity Assessment Framework for Ubiquitous-
Ecological Cities in Korea. In: 2nd International Urban Design Conference, Gold Coast
2009, Australia (2009)
16. Trochim, W.M.: Likert Scaling. Research Methods Knowledge Base, 2nd edn.
(2006), http://www.socialresearchmethods.net/kb/scallik.php
The Safety Transformation in the Future Internet
Domain
Roberto Gimenez1, Diego Fuentes1, Emilio Martin1, Diego Gimenez2, Judith Pertejo2,
Sofia Tsekeridou3, Roberto Gavazzi4, Mario Carabaño5, and Sofia Virgos5
1 HI-Iberia Ingeniería y Proyectos
{rgimenez,dfuentes,emartin}@hi-iberia.es
2 ISDEFE
{dgimenez,jpertejo}@isdefe.es
3 Athens Information Technology - AIT
[email protected]
4 Telecom Italia
[email protected]
5 Everis
{mario.carabano.mari,sofia.virgos.casal}@everis.com
to reduce emergency response time and urban crime: for example, digital surveillance
cameras have been placed in many critical areas and buildings throughout cities and
call dispatchers have been created to distribute the emergency calls. Moreover,
advanced technological capabilities enable urban public safety systems to become
not just more interconnected and efficient, but also smarter and self-adaptive. Instead
of merely responding to crimes and emergencies after a critical situation, novel smart
systems are emerging to analyse, anticipate and actually contribute to preventing them
before they occur. After the terrorist attacks of March 2004 in Madrid, the city
developed a new fully integrated Emergency Response Centre which, after an
incoming emergency call, simultaneously alerts the required emergency agency
(police, ambulance and/or fire brigade). The system can recognize if alerts relate to a
single or multiple incidents, and assign the right resources based on the requirements
coming from the ground. Furthermore, specialized video analytics systems have been
successfully installed for traffic surveillance purposes. These are CCTV-based
systems capable of automatically detecting illegal vehicle behaviour (e.g. cars stopped
in forbidden areas or driving in the opposite direction), restricted-entry violations
(e.g. a bike entering a forbidden road), stolen vehicles, etc. In addition, M2M
communications, that is, intelligent communications by enabled devices without
human intervention, are nowadays present in home and industrial security monitoring
systems and alarms. Several Public Safety organizations and Public Administrations
are using sensor networks to monitor environmental conditions or deploy them
temporarily in response to an emergency situation. Other advanced technologies focus
on enhancing emergency notification mechanisms, fire and enforcement records
management, surveillance, etc.
As presented, outstanding capabilities offered by advanced technologies are
currently in use for safety purposes. However, there is still a long list of unsatisfied
safety capabilities requested by Public Safety agencies. Several ongoing initiatives
investigate how the Future Internet can assist these entities in their daily work and
during emergency response phases. That is the case of SafeCity (Future Internet
Applied to Public Safety in Smart Cities) [1], an EU-funded project under the FP7 FI-
PPP programme which proposes to enhance the role of Future Internet by developing
smart Public Safety applications of high value. SafeCity aims at significantly
improving the implementation and uptake of Future Internet services in the safety
field by 2015, leveraging the internet infrastructure as the basis of Public-Safety-centred
open innovation schemes. It focuses on situational awareness (i.e. surveillance
of public facilities, transport stations, energy facilities, roads and citizens in the
streets; environmental monitoring), decision-making tools in C2 centres, seamless
usage of ad-hoc communication networks temporarily deployed to support additional
communication capacity demand (e.g. due to a major planned event), and population
alerting mechanisms.
This paper presents the state of the art and ongoing advances in three vital
technological fields (Internet of Things, intelligent video analytics and data mining
intelligence) that are envisaged as fundamental pillars of the FI infrastructure in the
Public Safety domain. It then discusses and draws conclusions on the economic
implications of such technological advances for safety purposes.
Fig. 1. Architecture and infrastructure model for IoT services in a Smart City
So the frequency bands currently under consideration in Europe are 169 MHz and
868 MHz. All the devices need to be managed by a Machine-to-Machine (M2M)
platform with the following main features: open and standard interfaces with devices
and with applications; adapters for legacy, non-standard devices; device and
application self-discovery and identity management (access control); connectivity
management (session, mobility); content management (QoS); security, privacy and
trust; service management (auto-provisioning, auto-configuration, self-healing,
SW and FW upgrades, etc.) for applications and devices; and asset management
(SIM cards, for example).
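To make these platform roles concrete, the following minimal Python sketch (all names hypothetical, not tied to any specific M2M product) illustrates the self-discovery, identity-management and connectivity-management features described above.

from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    kind: str          # e.g. "ip-camera", "env-sensor"
    owner: str         # identity used for access control
    online: bool = False

class DeviceRegistry:
    """Toy registry covering self-discovery, identity and connectivity."""
    def __init__(self):
        self._devices = {}

    def register(self, device):
        # Self-discovery entry point: a device announces itself.
        self._devices[device.device_id] = device

    def authorize(self, device_id, requester):
        # Minimal identity / access-control check.
        dev = self._devices.get(device_id)
        return dev is not None and dev.owner == requester

    def set_connectivity(self, device_id, online):
        self._devices[device_id].online = online

registry = DeviceRegistry()
registry.register(Device("cam-042", "ip-camera", owner="police-dept"))
registry.set_connectivity("cam-042", online=True)
assert registry.authorize("cam-042", requester="police-dept")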
The enabling platform layer, besides the M2M platform, shall also host databases to
manage large quantities of data, and data mining capabilities to extract meaning from
that data (see Section 4). Video analytics capabilities can also be part of the enabling
platforms, as enablers for many applications based on image recognition (see Section 3).
Geographical localization of devices is also important in order to intervene in the
affected geographical area.
Finally, the application layer is where the various applications reside. The
applications use web services APIs provided by the enabling platform layers. The
architecture is based on state-of-the-art web 2.0 techniques [2]: Service-Oriented
Architecture (SOA), Software as a Service (SaaS), mashups and structured information.
SOA (the OASIS Reference Model for SOA) is an architectural paradigm for
integrated services available on the Net and owned and managed by different entities.
With SaaS, the software implementing services is not locally installed and self-contained
(an example of SaaS is a word editor not installed on the computer where the
editing is done, but available on the Net). Mashup techniques are based on SOA and
enable the integration of different services to create a new, original service that can
itself be made available on the Net for future mashups. Last but not least, XML-family
languages have enabled the exchange of structured information between applications
on the Net without any prior database design phase. Regarding data connectivity, the
debate is open and research is ongoing to assess whether public networks can be reused
for safe-city applications. The main consideration in favor of reusing current commercial
IP networks is that they are already in place, whereas building a dedicated network
infrastructure for the smart city would require effort, time and money that cannot
be spared (not to mention network planning and management).
Safety applications will leverage the IoT platforms described previously; in
particular, the capillary network hot spots will be important installation points for
safety-oriented sensors and actuators across the territory. First of all, IP cameras
sending real-time video streams can be managed as “smart things”, both in terms
of data collection and in terms of operation management (maintenance in case
of faults). The capillary network hot spots can also host tools for alerting citizens.
The alerting phase in safety services is very important: when an emergency occurs,
citizens shall be informed as soon as possible, especially those close to the emergency
area. To alert citizens, digital signage panels or totems can be installed at the capillary
network points. Moreover, through broadband connections it shall be possible to send
alerting messages directly to the mobile devices of citizens in or close to the area,
using for example short-range WiFi connections. To summarize, the IoT is important
for safety smart services, and indeed safety smart services can be defined as IoT services.
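As an illustration of such M2M-based alerting, the following hedged Python sketch publishes an area-scoped alert over MQTT with the paho-mqtt client (1.x constructor); the broker host, topic layout and payload fields are assumptions, not part of any deployed system.

import json
import paho.mqtt.client as mqtt

def publish_alert(broker_host, area_id, message):
    client = mqtt.Client()   # paho-mqtt 1.x style constructor
    client.connect(broker_host, 1883)
    payload = json.dumps({"area": area_id, "text": message, "severity": "high"})
    # Signage panels and citizens' devices in the area would subscribe to
    # their area topic, e.g. "alerts/area-7", and render whatever arrives.
    client.publish("alerts/" + area_id, payload, qos=1)
    client.disconnect()

publish_alert("m2m.example.org", "area-7", "Evacuate the metro station entrance.")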
Given the explosion in the amount of video footage captured by security forces, the
need to develop automatic intelligent methods for detecting suspicious people, objects
or activities in video to trigger immediate alerts or further analysis has been widely
recognized. Intelligent video analytics tools have been emerging for that purpose,
deployed in the safety domain. However, recognizing objects and people in cluttered
scenes, identifying a person based on gait, recognizing complex behaviors and
conducting analytics in multi-camera systems are still among the main research
challenges in this field. In video analysis, a monitored scene is visually analyzed to
extract the foreground, eliminating all background information irrelevant to the
problem at hand. A large number of methods exist, including adaptive Gaussian
mixtures [3], which can be used together with shadow filters [4].
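As a concrete illustration, the sketch below extracts foreground with OpenCV's MOG2 adaptive-Gaussian-mixture subtractor, in the spirit of [3]; its built-in shadow detection plays the role of the shadow filters of [4]. The input file name is a placeholder.

import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input stream
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255=foreground, 127=shadow, 0=background
    mask[mask == 127] = 0            # drop shadow pixels before blob analysis
    # Downstream analytics (tracking, event recognition) would consume `mask`.
cap.release()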
When medium or close field video streams are available (depth info), then more
sophisticated scene analysis can be provided, e.g. body shapes and regions can be
extracted. The dynamics of the evolving scene are interpreted and, according to the
density and clutter in the scene, it may be possible to track single persons or moving
objects, even in complex scenes. Multiple cameras with overlapping fields of view
allow for 3D tracking. Such methods are heavily based on the quality of features
detected (appearance, shapes etc.) and fail if image primitives are not reliably
detected. There are also approaches that attempt to infer events without constructing
models. The detection of complex motion using motion models based on HMMs
(Hidden Markov Models) aims to detect abnormal events in complex scenes. Apart
from building models, the extracted information is used to recognize the event,
usually under assumptions/rules. Other methods achieve event recognition by relying
on both low-level motion detection and tracking, and high-level recognition of
predefined (threat) scenarios corresponding to specific behaviors.
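A hedged sketch of the HMM idea follows: fit a Gaussian HMM on feature sequences from normal scenes, then flag sequences with unusually low log-likelihood as abnormal. It uses the hmmlearn library on synthetic features; real per-frame features (e.g. motion vectors) are assumed to come from the tracking stage.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
normal_tracks = [rng.normal(0.0, 1.0, size=(50, 4)) for _ in range(20)]

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(np.vstack(normal_tracks), lengths=[len(t) for t in normal_tracks])

threshold = min(model.score(t) for t in normal_tracks)  # crude calibration

def is_abnormal(track):
    # Sequences scoring well below anything seen in training are flagged.
    return model.score(track) < threshold

print(is_abnormal(rng.normal(5.0, 1.0, size=(50, 4))))  # shifted stats -> likely True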
Advancements in video analytics technology have increased recognition abilities,
dramatically reducing false alerts due to weather, sun positions, and other
environmental factors. Some of today’s video analytics capabilities, along with safety
example cases that these may handle, include:
• Character (e.g., alphanumeric) and inscription recognition for reading license
plates, name tags and containers, e.g. for detecting suspiciously parked cars;
• Facial recognition, for criminal/terrorist identification in public places
(metro, airport, large public squares, etc.);
• Density of people, people counts, behavior (such as loitering, fighting,
reading, sampling), slip-and-fall detection, gang activity, tailgating (vehicle
or human) in restricted areas, a person coming over a fence;
• Object removal and tracking, e.g. for theft detection cases;
• Smoke detection, for potential fire detection;
• Pattern recognition and directional motion;
• Tampering (such as with ATMs or other devices);
• Illegally parked cars, unattended bags, spills, for citizens' protection;
• Camera sabotage or malfunction, etc., for crime-intention detection.
Intelligence and detection accuracy increase when many of the above capabilities
are combined, or when detection results from the analysis of diverse data/sensor
inputs in an IoT infrastructure are fused. For example, it is now possible to allow
entrance to a secure building by linking a fingerprint with a face and a person's voice,
as well as personal handwriting, requiring all to match before granting access. Today's
intelligent video analytics systems can even spot potential problems by analyzing how
people move in multi-camera crowded scenes: many video streams are analyzed
simultaneously, flagging suspicious people or objects and directing security personnel
to focus on particular activities. Artificial intelligence combined with video analytics
adds an intelligence layer, allowing patterns to be learned during analysis and lowering
false alarm rates. Finally, the use of both server-based (up to now the prevailing
architecture) and embedded, on-camera video analytics has led to even better
performance and lower energy and bandwidth consumption.
Nowadays, another great challenge has to be faced due to the strong demand to
respect citizens' privacy in order to retain public trust. Anonymous Video Analytics
(AVA) technology has emerged for that purpose [5]; it uses pattern detection
algorithms to scan real-time video feeds, looking for patterns that match the
software's understanding of faces. The data is logged and the video destroyed on the
fly, with nothing in the process recognizing the persons who passed in front of the
sensors. In safety applications, only the identity of suspicious people, logged in a
database, is found and revealed. The advantages of intelligent video analytics for
enabling safe cities as Future Internet applications, in combination with other
technologies such as sensor networks or data mining, fusion and decision support, are
thus numerous.
Besides mining knowledge from large amounts of data, annotation and correlation
of data from numerous and diverse digital evidence sources are essential in the
context of public safety. Annotation and correlation of data across multiple devices in
order to highlight an activity matching a scenario of interest are considered a
promising technique for supporting public safety agencies' activities over the large
volume of information derived from heterogeneous environments. Therefore, there is
a need to normalize the representation of data from multiple sources of digital
evidence in order to support such pattern recognition [8].
By providing a normalised view of all available data, scenarios of interest can be
generated, and behavioural patterns and correlations between events can be mined.
The need for new architectures that incorporate techniques to analyse data from
multiple sets of digital evidence used by police and other investigative entities, and
to represent such data in a normalized manner, is presented in [7].
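A minimal sketch of such normalization, with hypothetical field names, follows: records from two evidence sources are mapped to one Event schema and then correlated by location and time window.

from dataclasses import dataclass

@dataclass
class Event:
    source: str       # "cctv", "call-log", "access-control", ...
    timestamp: float  # seconds since epoch
    actor: str        # person/vehicle identifier if known
    action: str       # normalized action vocabulary, e.g. "enter", "call"
    location: str

def from_cctv(rec):       # rec: raw dict from a video-analytics feed
    return Event("cctv", rec["t"], rec.get("track_id", "?"),
                 rec["label"], rec["camera_zone"])

def from_call_log(rec):   # rec: raw dict from an emergency call dispatcher
    return Event("call-log", rec["epoch"], rec["caller"], "call", rec["cell"])

events = sorted(
    [from_cctv({"t": 100.0, "label": "loitering", "camera_zone": "gate-3"}),
     from_call_log({"epoch": 104.0, "caller": "+34...", "cell": "gate-3"})],
    key=lambda e: e.timestamp)
# Correlation rule: different sources, same location, within a time window.
matches = [(a, b) for a, b in zip(events, events[1:])
           if a.location == b.location and b.timestamp - a.timestamp < 60]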
Currently, techniques based on semantics are applied for annotation and correlation
of data in the Safety and Security Knowledge Domain. Semantic data modelling
techniques provide the definition and format of manipulated data. They define
standardized general relation types, together with the kinds of things that may be
related by such a relation type. In addition, semantic data modelling techniques define
the meaning of data within the context of its interrelationships with other data. This
is where ontologies fit in: they are, in effect, the semantic data models. An ontology [9]
is a formal representation of knowledge as a set of concepts within a domain, and
the relationships between those concepts. It is used to reason about the entities within
that domain, and may be used to describe the domain. Data models, metadata and
annotations, classification schemes and taxonomies, and ontologies are widely used
in a variety of applications and domains. In the security and safety application
(knowledge) domain, effective data modelling and knowledge representation facilitate
automated semantic-based analysis of large volumes of data and the identification of
suspicious or alert situations and behaviours. Their added value lies in sharing and
extending such models and representations with other stakeholders and similar
applications, to facilitate data interoperability and unified reasoning within the same
knowledge domain.
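The sketch below shows this in code: a tiny, illustrative safety-domain ontology fragment built with rdflib; the namespace and class names are invented for the example, not a published model.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

SAFE = Namespace("http://example.org/safety#")
g = Graph()
g.bind("safe", SAFE)

# Concepts (classes) and a relation type, as in semantic data modelling.
g.add((SAFE.Incident, RDF.type, RDFS.Class))
g.add((SAFE.SuspiciousObject, RDF.type, RDFS.Class))
g.add((SAFE.observedAt, RDF.type, RDF.Property))
g.add((SAFE.observedAt, RDFS.domain, SAFE.Incident))

# An instance produced by an analytics component.
g.add((SAFE.incident42, RDF.type, SAFE.Incident))
g.add((SAFE.incident42, SAFE.observedAt, Literal("metro-station-A")))

print(g.serialize(format="turtle"))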
Finally, data management during disaster/crisis situations faces the same problems
mentioned above, since disaster data are extremely heterogeneous, both structurally
and semantically. This creates a need for data integration and ingestion in order to
assist emergency management officials in rapid disaster recovery. Since the data in
disaster management can come from various sources and different users might be
interested in different kinds of knowledge, data mining typically involves a wide
range of tasks and algorithms, such as pattern mining for discovering interesting
associations and correlations, and clustering and trend analysis to classify events in
order to prevent future recurrences of undesirable phenomena. Because real-world
data in disaster management tend to be incomplete, noisy, inconsistent, high-dimensional
and multi-sensory, the development of missing/incomplete data correlation approaches
to increase situational awareness can be especially beneficial in this context [10].
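The following scikit-learn sketch illustrates one such task under stated assumptions: mean-imputing missing values in heterogeneous disaster reports and then clustering them to surface recurring event groups; the feature layout is invented for the example.

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.cluster import KMeans

# Rows: disaster-related reports; columns: e.g. [lat, lon, severity, duration]
X = np.array([[40.4, -3.7, 3.0, 12.0],
              [40.4, -3.6, np.nan, 15.0],   # incomplete report
              [48.8,  2.3, 1.0, np.nan],
              [48.9,  2.3, 1.5, 4.0]])

X_filled = SimpleImputer(strategy="mean").fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_filled)
print(labels)   # groups reports into candidate event clusters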
data mining) contributing to reduce possible emergencies as well as their response time.
For example, for its city control New York City uses an innovative four-dimensional,
integrated visualization technology that provides automated situational awareness for
anyone responsible for securing and protecting infrastructure and/or human assets.
These technologies contribute in an essential way to optimizing capacity and first
responders' response time, both in anticipating a potential risk and in responding
to an emergency. This also brings outstanding savings for the organizations in charge
of the economic management of cities.
To conclude, it is worth highlighting that most of the benefits for the end users do
not create direct revenues, but significant operational savings and increased
efficiency. It is also expected that this transformation will produce significant
economic benefits for society and business at large.
6 Conclusions
Smart Cities of tomorrow will provide a large number of innovative services and new
capabilities that will greatly contribute to reinforcing citizens' feeling of safety.
Enhanced M2M communications will allow the massive usage of heterogeneous
sensors (smart meters) around the city and its surroundings: internet-connected,
self-configured devices that enable web access to sensors and surveillance-information
sharing among the diverse safety agencies involved. Robust intelligent video analytics
enabling smarter contextual awareness will be applied not only for traffic purposes
but also for other aspects such as the early detection of suspicious objects/behaviors,
and will represent the required answer to the existing explosion of video footage
captured by security forces who want to extend the automated detection capabilities
of their video surveillance systems. Predictive modeling and data mining techniques
applied to surveillance data enable the early detection of incidents and the generation
of new insights that efficiently support decision-makers. The expected technological
advances within these three pillar areas clearly benefit Public Safety services with
intelligent real-time surveillance capabilities, efficient early-detection mechanisms,
enhanced information visualization and sharing, and semi-automatic decision support
systems at Command and Control centers. Public Authorities will drastically reduce
the response time to emergencies (the Madrid Emergency Response Centre helped
to reduce it to 25%), since innovative internet-based capabilities are expected soon:
for instance, efficient monitoring for road safety purposes (detecting drastic weather
changes, road conditions or foreign objects), or early-detection mechanisms based
on video analytics for suspicious/missing people, suspicious behaviors, illegal entries,
suspicious objects, etc., which can be even more effective when combined with
alerting capabilities targeted at specific geographically located populations.
All these new techniques will have an important impact on, and will foster, economic
sustainability within a Smart City, while offering high-quality Public Safety services.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. SafeCity Project: FI-PPP (Future Internet Public-Private Partnership) Programme (2011-
2013), http://www.safecity-project.eu/
2. Nickull, D., Hinchcliffe, D., Governor, J.: Web 2.0 Architectures. O’Reilly (2009)
3. Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking.
IEEE Trans. Pattern Analysis and Machine Intelligence 22(8), 747–757 (2000)
4. Cucchiara, R., Grana, C., Neri, G., Piccardi, M., Prati, A.: Sakbot system for moving
object detection and tracking. In: Video-Based Surveillance Systems-Computer Vision and
Distributed Processing, pp. 145–157 (2001)
5. Cavoukian, A.: Anonymous Video Analytics (AVA) technology and privacy. White Paper,
Information and Privacy Commissioner, Ontario, Canada (April 2011)
6. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. The Morgan Kaufmann
Series in Data Management Systems, Jim Gray Series Editor. Morgan Kaufmann
Publishers (2000)
7. Osborne, G., Turnbull, B.: Enhancing Computer Forensics Investigation through
Visualisation and Data Exploitation. In: International Conference on Availability,
Reliability and Security, ARES 2009 (2009)
8. Mohay, G.: Technical Challenges and Directions for Digital Forensics. In: SADFE 2005:
Proceedings of the First International Workshop on Systematic Approaches to Digital
Forensic Engineering, p. 155. IEEE Computer Society, Washington, DC (2005)
9. Guarino, N.: Formal Ontology and Information Systems. In: 1st International Conference
on Formal Ontology in Information Systems (FOIS 1998), Trento, pp. 3–15 (June 1998)
10. Hristidis, V., Chen, S.C., Li, T., Luis, S., Deng, Y.: Survey of Data Management and
Analysis in Disaster Situations. Journal of Systems and Software 83(10) (October 2010)
FSToolkit: Adopting Software Engineering Practices
for Enabling Definitions of Federated Resource
Infrastructures
1 Introduction
Future Internet research needs new infrastructures to support approaches that
exploit, extend or redesign current Internet architecture and protocols. During the last
few years, experimentally driven research has been proposed as an emerging paradigm
for the Future Internet, validating new architectures and systems through testing
scenarios at scale and in realistic environments. Until recently, testbeds used in testing
activities have usually had a limited scope of testing capabilities. Organizations own
resources and infrastructure (e.g. networking devices, gateways, wireless devices) that
they would like either to offer through the cloud model or to combine with the
resources of other infrastructures in order to enable richer and broader experimentation
scenarios. Experimentally driven research addresses the need to evolve these testbeds
into coherent experimentation facilities. This is possible by enabling large-scale
federated infrastructures of exposed organizational resources and testbed facilities.
Such future experimental facilities are led by global efforts like GENI [1] and FIRE [2].
A Domain Specific Language (DSL) is a language that offers, through appropriate
notations and abstractions, expressive power focused on, and usually restricted to,
a particular problem domain. For the language definition, an abstract syntax (the
meta-model), a concrete syntax and semantics are needed. All of these are captured
in a solution workbench, which in this case is Eclipse, used both as a development
and as a deployment environment.
Having stated the above, we present a meta-model for defining a resource broker
and how Domain Specific Modeling (DSM) is used to define federation scenarios.
We implemented a meta-model that describes resource brokers offering (representing)
services that are later mapped to resource providers. The Domain Specific Languages
(DSLs) used by resource brokers, resource providers and experimenters have the
proposed meta-model as their abstract syntax. The meta-model is called the Office
meta-model, since it inherits its name from the Panlab Office, which is used to
federate resources. However, the Office meta-model is generic enough to describe
any resource broker.
A DSL called the Office Description Language (OfficeDL) is used by resource brokers
or resource providers to describe themselves. The end-user (an experimenter or
customer) uses the Federation Scenario Description Language (FSDL), a DSL for
describing the services needed by an experiment over a federated infrastructure. We
also discuss how we used Model-to-Model transformations between resource brokers
in order to import into the language heterogeneous resources from other resource
brokers or resource providers expressed in other models. Model-to-Text transformations
are used to generate wrapper code for exposing resources and for targeting different
provisioning engines.
The paper is structured as follows: first we present the proposed meta-model and
its core entities; then we present the OfficeDL used by resource brokers and resource
providers; and then we provide details of the FSDL and its concrete syntax for
describing federation scenarios. All the languages are supported by the FSToolkit
tooling, which is also presented.
Office "myBroker" {
registeredUsers {
OfficeCustomer "Tranoris" {
address "Address"
hasAccount Account "Name" {
password "Password" username "Username"
}
},
ResourcesProvider "ProviderA"{
offeredSiteList {
Site "WCL" {
ptm PTM "uopPTM" { IP "150.140.184.234" }
igwlist { IGW "uopIIW" { IP "150.140.184.231" } }
The language tokens are shown in bold and variables in regular font. Using the
language, one creates a model of one's own office, defining users, offered services,
resource providers, offered resources, etc. Some benefits of creating such a DSL:
the meta-model can be quickly checked for correctness; tools can import the
instantiated models, which are validated by the framework; resource brokers can use
it to describe their users, offered services, providers and contracts; finally, resource
providers may use it to describe only their own organization's resources for local
usage and offer all the available tooling to their users.
It is expected that OfficeDL will be used for small to medium broker and provider
descriptions. For large organizations, a permanent repository supporting the model is
more adequate. These descriptions, though, will be useful later on, when end-users
employ them to define their federation scenarios.
A second DSL was defined to enable the end-user to describe federation scenarios.
The language is called the Federation Scenario Description Language (FSDL). In the
simplest usage, an FSDL definition starts with the keyword RequestedFederationScenario
followed by a name. A set of import office statements that contain definitions of the
offices (the resource brokers, services and resources) may follow. Next, one can define
either a resource-agnostic scenario request or specific resources of providers. To
illustrate the above, we discuss some examples.
The following resource-agnostic scenario request example contains a request for
offered services. The request is towards a broker brokerOfficeXYZ: we would like
to use an echo service that brokerOfficeXYZ provides. The request is described in
the following FSDL:
RequestedFederationScenario myScenarioName
RequestServices{
  Service "brokerOfficeXYZ.echo" as myecho settings { //An echo resource: write to input, read the output
    Setting "input" : myinput = "Hello"               //input text for echo
    Setting "sleeptime_ms" : mysleeptime_ms = "3000"  //delay of echo in ms
  }
}
Inside the RequestServices section we describe the request for services and their
initial settings. The keyword Service declares a new service request, followed by the
name of the requested service; in this example we request the echo service echo.
After the as keyword we define an alias for the service (i.e. myecho). After the
settings keyword follows the section with the initial settings of the requested service.
In our example we define two settings: input (whose value will be echoed back as
the service output) and sleeptime_ms (the delay of the message).
In the next example we present the case of selecting resources from specific
providers, where we use a slightly different syntax of the language. In this case, we
would like to use an echo resource offered by the provider ProviderAcme. There are
two ways to express this request in FSDL. The first:
RequestedFederationScenario myScenarioName
RequestServices{
  Service "brokerOfficeXYZ.echo" as myecho offered by "brokerOfficeXYZ.ProviderAcme" settings{
    Setting "input" : input = "Hello"                //input text for echo
    Setting "sleeptime_ms" : sleeptime_ms = "3000"   //delay of echo in ms
  }
}
The keyword offered by is used to state that the end-user wants to request the
resource from the ProviderAcme provider. Another way of expressing this request is
as follows:
RequestedFederationScenario myScenarioName
RequestInfrastructure {
Resource "brokerOfficeXYZ.ProviderAcme.site.echo_rp12_or10782" as
myecho settings {
Setting "output" : output = "" //
Setting "input" : input = "Hello" //
Setting "sleeptime_ms" : sleeptime_ms = "2000" //
}
}
RequestInfrastructure is used to describe a concrete infrastructure of resources
and their attributes from specific resource providers. Both approaches can serve
different needs. Usually service definitions are more generic and contain generic
settings that all resource providers supply; however, a resource may have more
settings than the offered service it matches. The latter, infrastructure-level description
is the one submitted for provisioning. In general, the ServicesRequest section contains
a list of ServiceRequest entities. The user creates instances of ServiceRequests in the
language, referencing the imported model. The syntax for requesting an Offered
Service is as follows:
Service "NAME_OF_SERVICE" as nameAlias([1.. numOfServices
])?(offered by "ResourcesProvider" (optional)? )? settings {
Setting "NAME_OF_SETTING":settingNameAlias (= staticValue)?
(assign +=SettingInstance|STRING] ( , SettingInstance )?
Setting "NAME_OF_SETTING":settingNameAlias (= staticValue)?
(assign +=SettingInstance|STRING] ( , SettingInstance )?
...
...
}
Where:
• NAME_OF_SERVICE: a fully qualified name of the service;
• nameAlias: a user-chosen name for the service, optionally followed by how many
instances of the service are wanted;
• offered by: optionally indicates to the broker that the specific provider is required;
• the optional keyword tells the broker to try to match the selected provider if
possible;
• NAME_OF_SETTING: the name of an attribute of an offered service;
• settingNameAlias: a user-chosen name for the setting. If the alias is followed by =,
the setting takes a static value; if the keyword assign is present, the user can assign
the value of another setting.
A more complex example illustrating FSDL is the following: an end-user wants to
deploy a XEN VM image to 15 machines. The resource broker brokerOfficeXYZ will
later allocate these to its resource providers. The FSDL specification is as follows:
RequestedFederationScenario deployingAXenImage
RequestServices{
  Service "brokerOfficeXYZ.xenimagestore" as myXENImageP2ner settings{
    Setting "Name" : imgname = "myXENImageP2ner"
    Setting "InputURL" : inputurl = "http://196.140.184.233/myxenimage.img" //URL to copy from
    Setting "OutputURL" : outputurl //holds the location of the stored image, to be used by testbed resources
  }
The editor's content-assist utility helps the end-user with correct syntax by
suggesting commands, keywords and variables. Moreover, double-clicking on offered
services triggers automatic text injection into the scenario description.
In summary, we presented the Office meta-model and a family of DSLs targeting
brokers, providers and end-users, having the meta-model as their abstract syntax.
All the supporting tooling is provided through FSToolkit. All presented tools are
licensed under the Apache License, Version 2.0. The meta-model can be downloaded
from http://svn.panlab.net/PII/repos/Software/sources/FCI/org.
panlab.software.office.model/model/. More details, instructions, source code and
downloads are also available at our web site http://nam.ece.upatras.gr/fstoolkit.
Acknowledgments. The research leading to these results has received funding from
the European Union's Seventh Framework Programme (FP7/2007-2013) through the
project PII-Panlab and under grant agreement no. 287581 (OpenLab).
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. National Science Foundation, GENI, http://www.geni.net (last accessed February
12, 2012)
2. European Commission, FIRE website,
http://cordis.europa.eu/fp7/ict/fire (last accessed February 12, 2012)
3. Tranoris, C., Denazis, S.: Federation Computing: A pragmatic approach for the Future
Internet. In: 6th IEEE International Conference on Network and Service Management
(CNSM 2010), Niagara Falls, Canada (2010)
4. Website of Panlab and PII European projects, supported by the European Commission in
its both framework programmes FP6 (2001-2006) and FP7 (2007-2013),
http://www.panlab.net
5. Opencloudmanifesto, Cloud Computing Use Cases White Paper,
http://www.opencloudmanifesto.org/ (last accessed February 12, 2012)
6. OMG website. Catalog of OMG Modeling and Metadata Specifications,
http://www.omg.org/technology/documents/modeling_spec_
catalog.htm (last accessed February 12, 2012)
7. Steinberg, D., Budinsky, F., Paternostro, M., Merks, E.: EMF eclipse modeling
framework, 2nd edn. Addison Wesley (2008)
8. Eclipse Foundation website, http://www.eclipse.org (last accessed March 27,
2011)
9. Xtext framework website, http://www.eclipse.org/Xtext/ (last accessed
February 12, 2012)
10. RADL, Panlab wiki website, http://trac.panlab.net/trac/wiki/RADL (last
accessed February 12, 2012)
11. Teagle, http://www.fire-teagle.org (last accessed February 12, 2012)
12. Specification Business Process Execution Language for Web Services, Version 1.1,
ftp://www6.software.ibm.com/software/developer/library/
ws-bpel.pdf (last accessed February 12, 2012)
13. Belaunde, M., Falcarin, P.: Realizing an MDA and SOA Marriage for the Development of
Mobile Services. In: ECMFA: European Conference on Modelling Foundations and
Applications, pp. 393–405 (2008)
14. Federation Computing Interface (FCI), Panlab wiki website,
http://trac.panlab.net/trac/wiki/FCI (last accessed February 12, 2012)
15. Federation Scenario Toolkit (FSToolkit) web site,
http://nam.ece.upatras.gr/fstoolkit (last accessed February 12, 2012)
NOVI Tools and Algorithms for Federating Virtualized
Infrastructures
descriptions and the terminology for describing physical nodes, virtual nodes, virtual
topologies, etc. The Resource Ontology supports the operation of all the services of
NOVI's Federated Control & Management Architecture, which will be presented in
Section 3 of this paper. For example, it is used to express requests within the
NOVI GUI, or by the Resource Information Service and the Intelligent Resource
Mapping Service when coordinating the exchange of information about resources
suitable for the embedding of virtual resources. The Monitoring Ontology extends
the Resource Ontology to provide descriptions of the concepts and methods of
monitoring operations, such as details about monitoring tools, how these relate to
the resources, the types of measurements that can be gathered, etc. This ontology
provides the primary support for the operation of the Monitoring Service. Finally,
the Policy Ontology also extends the Resource Ontology by providing descriptions
of the concepts and methods for the management and execution of policies defined
within member platforms of a NOVI federation; it supports the operation of the
Policy Service. More information on the developed ontologies can be found in the
project's public deliverable D2.2: First Information & Data Models [8].
As shown in Fig. 2, the NOVI API receives requests from the NOVI GUI. The GUI is
based on the Ontology Instance Editor (OIntEd) [10], which was originally used to
assist in the development phase of the NOVI IM and was subsequently customized
to allow users to create and send requests for NOVI slices. In its current implementation,
the NOVI GUI provides an intuitive drag-and-drop interface for this instantiation
process and allows users to define relations between instantiated objects. For example,
a user can define a virtual network topology along with the characteristics of the
requested resources. For every request, the GUI generates an OWL document based
on the NOVI IM, which is sent to the NOVI API by means of an HTTP POST request.
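A hedged sketch of this submission step follows, using Python's requests library; the endpoint URL, file name and media type are assumptions rather than the actual NOVI deployment details.

import requests

with open("slice_request.owl", "rb") as f:    # OWL/RDF document from the GUI
    owl_doc = f.read()

resp = requests.post(
    "http://novi-api.example.org/requests",   # hypothetical NOVI API endpoint
    data=owl_doc,
    headers={"Content-Type": "application/rdf+xml"})
resp.raise_for_status()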
Using the NOVI GUI (accessible online at http://novi-im.appspot.com/), the user
can choose from the available ontologies in order to define the topology of the slice
for his experiment.
federated physical substrate network. This was initially formulated for a single
domain (infrastructure) as Virtual Network Embedding (VNE), an NP-hard
combinatorial problem [14]. In the NOVI federated profile, VNE had to be extended
towards a multi-domain environment via graph splitting, as in [15], and intelligent
selection of intra-domain mappings.
Evaluation and testing of the embedding procedure for NOVI experiments require
an appropriate representation of a VN request, formulated using the NOVI
Information Model. The IRM gathers information from the Resource Information
Service (RIS) and the Monitoring Service regarding available resources. As a first
step, user requests for VN resources are apportioned to infrastructures that are
members of a NOVI federation. Subsequently, single-domain VNE problems are
formulated, resulting in sub-optimal allocation of virtual resources within the
federated substrate.
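To make the first step concrete, the following illustrative Python sketch (not NOVI's actual IRM algorithm) apportions a virtual-network request across federated domains by greedily assigning each virtual node to the domain with the most remaining capacity; per-domain VNE would then be solved on each sub-request.

domains = {"PlanetLab": 10, "FEDERICA": 6}          # available node capacity
vn_request = {"v1": 2, "v2": 3, "v3": 4}            # virtual node demands

assignment = {}
for vnode, demand in sorted(vn_request.items(), key=lambda kv: -kv[1]):
    dom = max(domains, key=domains.get)             # most spare capacity
    if domains[dom] < demand:
        raise RuntimeError("request cannot be apportioned")
    domains[dom] -= demand
    assignment[vnode] = dom

print(assignment)   # per-domain sub-requests then feed single-domain VNE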
A user may submit requests for standalone virtual resources, topologies of virtual
resources and specific services regarding virtual resources/topologies. These requests
may ask for specific mappings of virtual resources to substrate infrastructures. As
specified by the ProtoGENI RSpec [16], VN requests may contain a complete
(pre-specified, bound), partial, or empty (free, unbound) mapping between virtual
resources and available physical (substrate) resources.
resources or resource status notification updates received from the Monitoring Service
(MS). RIS only stores the static part of the information from the monitoring
ontology, while the dynamic parts are obtained by directly calling the MS.
The Request Handler, as shown in Fig. 1, communicates via RSpec with a server
running SFA. Since the SFA code was initially developed for PlanetLab, we only had
to install a private SFA server for the PlanetLab part of our testbed. However, there is
no SFA implementation for FEDERICA; thus we developed an appropriate
FEDERICA RSpec and a FEDERICA SFA Wrapper service acting as FEDERICA's
Aggregate Manager (see Fig. 1). More information can be found in D2.2: First
Information and Data Models [8].
the NOVI GUI and (2) south-bound with the middle layer (SFA). The north-bound
interface is the NOVI API, while the south-bound interface is the Request Handler
Service. Intra-domain C&M services within the top layer exchange messages via an
Enterprise Service Bus (ESB) [26]. Inter-domain C&M services can communicate
(1) via the Request Handler using SFA services (e.g. for slice creation across domains)
or (2) directly in a peer-to-peer mode via secure RPCs in cases where SFA mechanisms
were deemed inadequate (e.g. for remote interactions of monitoring services).
An example of C&M service integration is the Slice Creation Use Case detailed in
NOVI Public Deliverable D4.2: Use Cases [27], which also provides an overview of
initial usage scenarios of the project. In summary, an authenticated experimenter is
authorized to use a set of resources across domains, as confirmed by the relevant per-
domain Policy Services. He may then request a desired topology using the NOVI GUI.
The virtual topology request is then passed to the IRM through the NOVI API. Prior
to solving the inter-domain VNE, IRM contacts RIS to identify available resources
that would fulfill the requirements imposed by the experimenter. RIS interacts with
the Monitoring Service to obtain information regarding the status (e.g. availability,
capacity, usage) of resources. Finally, when an appropriate mapping of virtual-to-
substrate resources is identified, reservation requests in the form of RSpecs are sent by
the Request Handler to the relevant testbed(s) slice manager(s).
NOVI developed a software integration framework for its C&M Services
architecture. It follows the Service Oriented Architecture complemented with the
Event Driven Architecture to enable synchronous and asynchronous communication
between components. The integration framework was implemented using Java
technologies. However, it supports communication with components written in
different programming languages via a range of specific bridges, such as Jython [28],
a Python engine for Java; JRuby [29] for the Ruby language; and JNI [30], the Java
Native Interface API, for components written in C/C++.
Fig. 4 presents the topology of one operational slice used to test control and
management plane components, detailed in NOVI Public Deliverable D4.1: Software
Architecture Strategy & Developers' Guide [35]. This slice comprises three
FEDERICA core PoPs located at PSNC (Poznan, Poland), DFN (Erlangen, Germany)
and GARR (Milano, Italy). These are connected over the Internet via GRE tunnels to
private PlanetLab nodes at NTUA (Athens, Greece), ELTE (Budapest, Hungary) and
PSNC (Poznan, Poland).
To isolate the slice in Fig. 4 from other NOVI slices using the same FEDERICA
core PoPs, Logical Routers are created on the Juniper MX480 routers. The open-source
MyPLC (PlanetLab's C&M software) is deployed at PSNC, managing the
private PlanetLab testbed.
An illustration of a typical slice deployed in the NOVI testbed is the NOVI-MONITORING
slice, devoted to validating NOVI's monitoring methods (active and
passive) and their corresponding tools. Measurements assembled via this slice are
depicted in Fig. 3.
References
[1] NOVI FP7 STREP Project, http://www.fp7-novi.eu
[2] Szegedi, P., Figuerola, S., Campanella, M., Maglaris, V., Cervelló-Pastor, C.: With
Evolution for Revolution: Managing FEDERICA for Future Internet Research. IEEE
Communications Magazine 47(7), 34–39 (2009)
[3] PlanetLab, http://www.planet-lab.org
[4] GÉANT, http://www.geant.net/pages/home.aspx
[5] Global Environment for Network Innovations (GENI), http://www.geni.net/
[6] D2.1: Information Models for Virtualized Architectures, http://www.fp7-novi.
eu/index.php/deliverables
[7] Web Ontology Language (OWL), http://www.w3.org/TR/owl-features
[8] D2.2: First Information and Data Models, http://www.fp7-novi.eu/index.
php/deliverables
[9] Slice Federation Architecture, v2.0, http://groups.geni.net/geni/
attachment/wiki/SliceFedArch
[10] Ontology instance editor - OIntEd, http://novi-im.appspot.com
[11] Openrdf Sesame, http://www.openrdf.org
[12] Resource Description Framework - RDF, http://www.w3.org/RDF/
[13] Alibaba, http://www.openrdf.org/alibaba.jsp
[14] Mosharaf Kabir Chowdhury, N.M., Boutaba, R.: Network Virtualization: State of the
Art & Research Challenges. IEEE Communications Magazine 47(7), 20–26 (2009)
[15] Houidi, I., Louati, W., Ameur, W.B., Zeghlache, D.: Virtual network provisioning
across multiple substrate networks. Elsevier Computer Networks 55, 1011–1023 (2011)
[16] RSpec, http://www.protogeni.net/trac/protogeni/wiki/RSpec2rac/
protogeni/wiki/RSpec
[17] Lymberopoulos, L., Grosso, P., Papagianni, C., Kalogeras, D., Androulidakis, G., van
der Ham, J., de Laat, C., Maglaris, V.: Managing Federations of Virtualized
Infrastructures: A Semantic-Aware Policy Based Approach. In: Proc. of 3rd IEEE/IFIP
International Workshop on Management of the Future Internet, Dublin, Ireland, May 27
(2011)
[18] Ponder2, http://ponder2.net
[19] Hullár, B., Laki, S., Stéger, J., Csabai, I., Vattay, G.: SONoMA: A Service Oriented
Network Measurement Architecture. In: Korakis, T., Li, H., Tran-Gia, P., Park, H.-S.
(eds.) TridentCom 2011. LNICST, vol. 90, pp. 27–42. Springer, Heidelberg (2012)
[20] https://wiki.man.poznan.pl/perfsonar-mdm/index.php/Hades_MA
[21] Santos, T., Henke, C., Schmoll, C., Zseby, T.: Multi-hop packet tracking for
experimental facilities. In: SIGCOMM 2010, New Delhi, India, August 30-September 3
(2010)
[22] VINI, http://www.vini-veritas.net
[23] Farinacci, D., Li, T., Hanks, S., Meyer, D., Traina, P.: RFC 2784, Generic Routing
Encapsulation (GRE) (March 2000)
[24] Open vSwitch, http://openvswitch.org/
[25] OpenFlow, http://www.openflow.org
[26] Chappell, D.: Enterprise Service Bus. O’Reilly (June 2004) ISBN 0-596-00675-6
[27] D4.2: Use Cases, http://www.fp7-novi.eu/index.php/deliverables
[28] Jython, http://www.jython.org
[29] JRuby, http://jruby.org
[30] Java Native Interface, http://java.sun.com/docs/books/jni/
[31] Juniper MX480, http://www.juniper.net/us/en/local/pdf/
brochures/1500027-en.pdf
[32] VMware ESXi, http://www.vmware.com/files/pdf/ESXi_
architecture.pdf
[33] MyPLC, http://www.planet-lab.org/doc/myplc
[34] PlanetLab federation, http://www.planet-lab.org/federation
[35] D4.1: Software Architecture Strategy and Developers’ Guide, http://www.fp7-
novi.eu/index.php/deliverables
Next Generation
Flexible and Cognitive Heterogeneous Optical Networks
Supporting the Evolution to the Future Internet
1 Introduction
After the establishment of the first fiber-based telecom networks in the 1980s, it was
the emergence of Wavelength Division Multiplexing (WDM) a decade later that
enabled the current expansion of the Internet. In these early steps of WDM networks,
though, each optical channel had to be converted to the electrical domain and back
to the optical at every node, even if the channel was not destined for that node; these
networks are commonly referred to as opaque networks. Later on, the idea
of avoiding all these costly O/E/O conversions triggered the development of Optical
Add-Drop Multiplexers (OADMs) that, in turn, allowed the establishment of
transparent networks, where the signal propagates all-optically from source to
destination throughout the network. In transparent networks, the regeneration-related
costs of opaque networks are eliminated [1], achieving up to 50% cost savings
compared with opaque networks [2]. Furthermore, reconfigurable OADMs
(ROADMs) and Optical Cross-Connects (OXCs) were implemented to achieve a
higher degree of flexibility and to enable networks to adapt remotely and on-demand
to the potential traffic changes, thus reducing the associated operational costs.
Moreover, the introduction of high data-rate transmission technology aims to provide
large trunks so as to accommodate new bandwidth-intensive multimedia
applications. Nevertheless, not all traffic demands require such high bit rates, and
operators are seeking networks that do not waste resources but are cost-effective
and therefore versatile. In this framework, existing 10 Gb/s optical networks
may gradually upgrade their infrastructure, migrating to heterogeneous networks that
accommodate mixed 10/40/100 Gb/s traffic [3]. This new solution is known as Mixed
Line-Rate (MLR), as opposed to the legacy one, also referred to as Single Line-Rate
(SLR).
However, the above-cited solutions provide limited flexibility and are not capable of
scaling to the envisioned capacities of the Future Internet. In fact, they operate under
added complexity and cost due to the rigid wavelength granularity of the currently
deployed systems. Operators provide connections with capacity that fulfils the highest
(worst-case) demand (over-provisioning), while these connections remain underutilised
most of the time. To this end, recent advances in coherent technology, software-defined
optics and multicarrier transmission techniques, such as Orthogonal Frequency
Division Multiplexing (OFDM) [4]-[5] and Nyquist WDM (N-WDM) [6], have
introduced the possibility of achieving significantly high spectral efficiency by
providing a fractional-bandwidth feature. In fact, thanks to these technologies it is
possible to dynamically tune the required bit-rate and the optical reach by appropriately
choosing the spectrum allocation and the modulation format. Some of the terms
often associated in the literature with optical networks exploiting these technological
advancements are “flexible”, “tunable”, “elastic” or “adaptive”. Hence, flexibility
means that the network is able to dynamically adjust its resources in an optimal and
elastic way according to continuously varying traffic conditions. These new concepts
will enable a new network architecture where any two nodes can be connected with the
amount of bandwidth required, providing either a sub-wavelength service or
super-channel connectivity [7]-[8].
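A back-of-envelope sketch of this rate/format flexibility follows, assuming a 12.5 GHz flex-grid slot width, a 32 Gbaud symbol rate and illustrative bits-per-symbol values; the numbers are assumptions for the example, not results from this chapter.

import math

SLOT_GHZ = 12.5                    # assumed flex-grid slot width
BAUD_GBD = 32                      # assumed symbol rate per carrier
BITS_PER_SYMBOL = {"QPSK": 2, "8QAM": 3, "16QAM": 4}   # per polarization

def slots_needed(bitrate_gbps, modulation, pol=2):
    # Carriers needed to reach the bit rate, then spectrum in GHz
    # (guard bands ignored), rounded up to whole flex-grid slots.
    carriers = math.ceil(bitrate_gbps / (BAUD_GBD * BITS_PER_SYMBOL[modulation] * pol))
    spectrum = carriers * BAUD_GBD
    return math.ceil(spectrum / SLOT_GHZ)

for fmt in BITS_PER_SYMBOL:
    print(fmt, slots_needed(400, fmt), "slots for 400 Gb/s")
# Higher-order formats pack more bits per symbol, so they need fewer slots
# but reach shorter distances: the rate/reach trade-off RMLSA must solve.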
On the other hand, the aforementioned emerging heterogeneous networks have
introduced a new type of challenge in network design. In reconfigurable single
line-rate networks, the resources at hand during the design phase were limited to the
channels considered feasible according to Quality of Transmission (QoT) parameters
(through physical-layer-aware processes [9]), while the rate and the modulation format
were fixed. The new heterogeneous network paradigms have introduced an additional
level of flexibility, which can also be read as additional complexity. In this context, to
serve a given traffic demand, the network manager has to select the route, the channel,
the bit-rate and the modulation format [8]. Hence, traditional Routing and Wavelength
Since the cognitive decision system must deal with very diverse tasks, it is
composed by five different modules, all of them exploiting cognition. Thus, it
includes a RWA/RMLSA module to process optical connection (lightpath) requests; a
QoT estimator module to predict the QoT of the optical connections before being
established (and thus helping the RWA/RMLSA module to ensure that quality
requirements are met); a virtual topology design module, which determines the
optimal set of lightpaths that should be established on the network to deal with a
given traffic demand; and a traffic grooming module, which is in charge of routing
traffic through the lightpaths composing the virtual topology. Last but not least, a
network planner and decision maker module coordinates and triggers the operation of
the other modules and handles the communications with other network elements.
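A minimal structural sketch of this five-module decomposition is given below; all class and method names are hypothetical, chosen only to mirror the roles described above, not the interfaces defined by the project:

    # Structural sketch of the cognitive decision system (hypothetical names).
    class QoTEstimator:
        def predict_ok(self, lightpath) -> bool:
            """Predict whether a candidate lightpath meets QoT requirements."""
            raise NotImplementedError

    class RMLSAModule:
        """Processes lightpath requests, consulting the QoT estimator."""
        def __init__(self, qot: QoTEstimator):
            self.qot = qot
        def serve(self, request):
            for candidate in self.enumerate_candidates(request):
                if self.qot.predict_ok(candidate):
                    return candidate   # route, spectrum, rate and format
            return None
        def enumerate_candidates(self, request):
            raise NotImplementedError

    class VirtualTopologyDesigner:
        def design(self, traffic_matrix):
            """Return the set of lightpaths to establish for this demand."""
            raise NotImplementedError

    class TrafficGroomer:
        def route(self, demand, virtual_topology):
            """Route sub-wavelength traffic over existing lightpaths."""
            raise NotImplementedError

    class NetworkPlannerAndDecisionMaker:
        """Coordinates the other modules and talks to network elements."""
        def __init__(self, rmlsa, designer, groomer):
            self.rmlsa, self.designer, self.groomer = rmlsa, designer, groomer
        def on_traffic_change(self, traffic_matrix):
            vt = self.designer.design(traffic_matrix)
            for demand in traffic_matrix:
                if self.groomer.route(demand, vt) is None:
                    self.rmlsa.serve(demand)   # fall back to a new lightpath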
In the framework of this architecture, the advantages of cognition have already been
demonstrated in a number of scenarios, such as quickly and effectively assessing
whether an optical connection (i.e., a lightpath) satisfies QoT requirements [20], or
determining which set of connections should be established on an optical network (i.e.,
the so-called virtual topology) in order to support the traffic load while satisfying QoT
requirements and minimizing energy consumption and congestion [21].
In the former scenario, the utilization of Case-Based Reasoning techniques to
exploit knowledge acquired through previous experiences leads not only to a high
percentage of successful classifications of lightpaths into high- or low-QoT
categories (Fig. 2), but also to a great reduction in computing time (around three
orders of magnitude) when compared to a previous tool for QoT assessment which
does not employ cognition [20].
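As an illustration of the idea (not the cited system), a Case-Based Reasoning classifier can be as simple as retrieving the most similar past lightpaths and voting on their observed QoT labels; the features, cases and k below are invented:

    # Toy Case-Based Reasoning sketch for lightpath QoT classification,
    # in the spirit of [20]; the case structure here is hypothetical.
    import math

    # A "case" pairs lightpath features (length km, hop count, number of
    # co-propagating channels) with the observed outcome (QoT high/low).
    case_base = [
        ((800.0,  3,  8), "high"),
        ((2400.0, 9, 40), "low"),
        ((1200.0, 5, 16), "high"),
        ((3000.0, 11, 48), "low"),
    ]

    def classify(features, k=3):
        """Retrieve the k most similar past cases and vote on the label."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        nearest = sorted(case_base, key=lambda c: dist(c[0], features))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    print(classify((1000.0, 4, 12)))   # -> "high" on this toy case base

The speed-up reported in [20] comes precisely from replacing a full physical-layer computation with such a retrieval step over past experiences.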
In the latter scenario, the inclusion of cognition in a multi-objective algorithm to
determine the optimal set of virtual topologies with different trade-offs in terms of
throughput and energy consumption brings great advantages, since a multi-objective
algorithm yields a set of Pareto-optimal solutions from which the operator can select
the trade-off that best matches the current network conditions.
[Fig. 2 plot: successful classification of lightpaths (%), 99.7-100.0, for 8, 16, 32 and 64 wavelengths]
[Fig. 4 plot: Spectrum Utilized (GHz), 0-4000, vs. Traffic Load Multiplier (1-6) for SLR 40G, SLR 100G, SLR 400G, MLR, E-OFDM and O-OFDM]
Fig. 4. Spectrum utilization for all solutions and different traffic loads
To calculate the bandwidth utilized by the various solutions, the Deutsche Telekom
(DT) core network (14 nodes, 23 bidirectional links) has been used, together with the
realistic DT traffic matrix for 2010, scaled up to 11 times to obtain traffic ranging
from 3.6 Tb/s up to 39.6 Tb/s. Under the given assumptions, the flexible multi-carrier
solutions offer the most efficient spectrum allocation as expected from the optimized
packing of the connections in the frequency domain (Fig. 4).
The cost analysis takes into account the cost of transponders, the cost of node
equipment and the cost related to the number of “dark” 50 GHz channel slots that are
utilized and associated only with the link infrastructure.
Among the fixed-grid networks, the distinctive component that determines the
capital requirements is the type of the transponders. Fig. 5 illustrates the absolute
number of transponders per networking solution. Fig. 6 shows the relative
transponder cost of all fixed-grid solutions; the relative cost values are set at
1/2.5/3.75/5.5 for the 10 Gb/s, 40 Gb/s, 100 Gb/s and 400 Gb/s transponders
respectively [27]. For MLR systems, two variations of the planning algorithm are
reported; the first one seeks to minimize the number of utilized wavelengths, and the
second one optimizes the transponder cost of the network.
Fig. 5. Required number of transponders for all solutions to serve the different traffic matrices (in absolute numbers)
Fig. 6. Relative transponder cost for the fixed-grid networking solutions
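As a toy illustration of how these relative costs drive the comparison (the demand mix below is invented, not the DT traffic matrix):

    # Illustrative use of the relative transponder costs from [27]
    # (1 / 2.5 / 3.75 / 5.5 for 10/40/100/400 Gb/s transponders).
    import math

    REL_COST = {10: 1.0, 40: 2.5, 100: 3.75, 400: 5.5}

    def transponder_cost(demands_gbps, line_rate):
        """SLR: each demand needs ceil(demand/rate) transponders of that rate."""
        n = sum(math.ceil(d / line_rate) for d in demands_gbps)
        return n * REL_COST[line_rate]

    demands = [40, 40, 100, 300]   # Gb/s, hypothetical demand mix
    for rate in (10, 40, 100, 400):
        print(rate, transponder_cost(demands, rate))
    # A 100G SLR network needs 1+1+1+3 = 6 transponders (cost 22.5), while
    # 400G needs only 4 at cost 22.0: the cheapest rate depends on the mix,
    # which is exactly the trade-off the MLR planning variants explore.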
[Figs. 7 and 8 plots: additional cost of the E-OFDM (left, 0-600%) and O-OFDM (right, 0-300%) transponder over the 100G transponder cost vs. cost per 50 GHz channel slot (10000-100000), for traffic loads 1, 3, 5, 7, 9 and 11]
Fig. 7. Allowable additional cost for E-OFDM transponder compared to SLR 100G from spectrum savings for different traffic loads
Fig. 8. Allowable additional cost for O-OFDM transponder compared to SLR 100G from spectrum savings for different traffic loads
However, reliable data for the cost of flex-grid network components, i.e., the
software-defined transponders and bandwidth-variable nodes, are currently not
available. To overcome this, we examine the extra cost of the E-OFDM and O-OFDM
transponders, over the cost of a 100 Gb/s transponder, that still achieves a total
network cost equal to that of the related SLR network. The comparison was
focused on the cost of the E-OFDM and O-OFDM transponders as those rely on
electronics for DSP. Fig. 7 presents the allowable additional cost for the E-OFDM
transponder compared to the SLR 100 Gb/s transponder for different traffic loads. For
a 50 GHz-channel cost that ranges from 10 k€ to 100 k€, an E-OFDM transponder
may cost 3 to 5 times more when the traffic load is equal to 11 so as to achieve total
network cost equal to that of the SLR network. For the lowest traffic scenario
(load=1), where the spectrum savings of the flex-grid solution compared to the 100G
SLR are less pronounced, the E-OFDM solution is preferable over the SLR network
when the tolerable additional cost ranges between 6% and 50%. In a similar
manner, Fig. 8 presents the results for the comparison between O-OFDM and 100G
SLR. The O-OFDM transponder may cost approximately 2-3 times more for the
highest traffic load scenario. The difference with the O-OFDM case is justified by its
higher spectrum utilization as shown in Fig. 4. From the operators’ perspective, these
results indicate how the spectrum savings of the flex-grid networks can be used to
mitigate the additional cost of the new spectrum flexible transponders.
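The break-even logic behind Figs. 7 and 8 can be sketched as follows; all numbers in the example are hypothetical placeholders, not the values behind the figures:

    # Back-of-the-envelope version of the break-even analysis in Figs. 7-8:
    # how much more a flexible (e.g. E-OFDM) transponder may cost than a
    # 100G SLR one before the spectrum savings are used up.
    def allowable_extra_cost(slots_saved, cost_per_slot_eur,
                             n_flex_transponders, slr_tsp_cost_eur):
        """Extra cost per flexible transponder that keeps total network cost
        equal to the SLR case: the savings on 'dark' 50 GHz slots, spread
        over the flexible transponders, relative to the SLR transponder."""
        extra_per_tsp = slots_saved * cost_per_slot_eur / n_flex_transponders
        return extra_per_tsp / slr_tsp_cost_eur   # fraction of the 100G cost

    # e.g. 40 slots saved at 50 k EUR each, spread over 120 transponders
    # whose SLR counterpart costs 10 k EUR: each flexible transponder may
    # cost ~167% more and still break even.
    print(allowable_extra_cost(40, 50_000, 120, 10_000))  # -> 1.666...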
[Fig. 9 plot: energy efficiency (Gb/s/W, 0.00-0.16) vs. Traffic Load Multiplier (1-11) for SLR 100G, SLR 400G, MLR optimizing TR cost, MLR optimizing wavelengths, E-OFDM and O-OFDM]
Fig. 9. Energy Efficiency achieved for all solutions and different traffic loads
The estimated energy efficiency (in Gb/s/W) for the various traffic loads is
illustrated in Fig. 9. 400G SLR appears to be the least efficient for traffic loads up to 5,
although it tends to improve for higher loads. The other SLR solutions achieve better
efficiency, which decreases for high loads owing to the great number of transponders
required.
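For reference, the Gb/s/W metric plotted in Fig. 9 is simply the served throughput divided by the total consumed power; a one-line sketch (the power figures below are hypothetical placeholders, not CHRON measurements):

    # The energy-efficiency metric of Fig. 9: throughput per consumed watt.
    def efficiency_gbps_per_watt(served_gbps, transponder_powers_w):
        return served_gbps / sum(transponder_powers_w)

    # e.g. 3600 Gb/s served by 60 transponders drawing 500 W each:
    print(efficiency_gbps_per_watt(3600, [500] * 60))   # -> 0.12 Gb/s/W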
4 Conclusions
Optical networking developments allow complex operations to be offloaded from the IP
layer, reducing both the latency of connections and the expenditures to deploy and
operate the networks. New research advancements in optical networking promise to
further fortify the capabilities of the Future Internet. In this context, the CHRON
project proposes a Cognitive Heterogeneous Reconfigurable Optical Network, which
observes, acts, learns and optimizes its performance, taking into account its high
degree of heterogeneity with respect to QoS, transmission and switching techniques.
The aim of CHRON is to develop and showcase a network architecture and a control
plane which efficiently use resources in order to minimize CAPEX and OPEX while
fulfilling QoS requirements of each type of service and application supported by the
network in terms of bandwidth, delay and quality of transmission, and reducing
energy consumption.
The cognitive process and the resulting cross-layer solutions proposed so far have been
extensively exploited to deliver connections at a single line-rate. Nevertheless, due to
their potential, flexible optical networking solutions, as well as their predecessor, the
mixed line-rate (MLR) approach, have also been investigated within the CHRON project. In order
to demonstrate the potential of cognitive techniques, we have shown the performance
advantages brought when cognition is used in two different scenarios: the estimation
of the QoT of the lightpaths established (or to be established) in an optical network,
and the design of efficient virtual topologies in terms of throughput and energy
consumption. Then, the advantages of flexible optical networks have been evaluated.
As opposed to the rate-specific and fixed-grid solution of an MLR network,
flexible optical networks, regardless of the employed technology, are bandwidth
agnostic and have the ability to deliver adaptive bit-rates. The associated technologies
and concepts that enable the vision of flexible optical networks include advanced
modulation formats that offer higher spectral efficiency, the concept of a spectrum-
flexible grid, software-defined optical transmission, single-carrier adaptive solutions
and multi-carrier technologies. Nevertheless, the increased level of flexibility imposes
complex requirements with respect to the spectrum and capacity allocation.
Therefore, in this context, CHRON has evaluated the core networks of the Future
Internet from a cost, spectral and energy perspective and has provided a
comprehensive view of the potential of various technologies. This investigation has
been carried out by taking into account the greatly different requirements of Future
Internet applications and services.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
[1] Saleh, A.A.M.: Transparent optical networking in backbone networks. In: Optical Fiber
Communication Conference, OSA Technical Digest Series Optical Society of America
(2000); paper ThD7
[2] Gunkel, M., et al.: A Cost Model for the WDM Layer. In: Photonics in Switching
Conference PS (2006)
[3] Liu, X., Chandrasekhar, S.: High Spectral-Efficiency Mixed 10G/40G/100G
Transmission. In: Proc. AOE 2008, OSA Technical Digest (CD) (Optical Society of
America) (2008); paper SuA2
[4] Klekamp, A., et al.: Transmission Reach of Optical-OFDM Superchannels with 10-600
Gb/s for Transparent Bit-Rate Adaptive Networks. In: Proceedings of ECOC (2011);
paper Tu.3.K.2
[5] Chandrasekhar, S., et al.: Transmission of a 1.2-Tb/s 24-Carrier No-Guard-Interval
Coherent OFDM Superchannel over 7200-km of Ultra-Large-Area Fiber. In:
Proceedings of ECOC (2009); paper PD2.6
[6] Gavioli, G., et al.: Investigation of the Impact of Ultra-Narrow Carrier Spacing on the
Transmission of a 10-Carrier 1Tb/s Superchannel. In: Proceedings of OFC (2010);
paper OThD3
[7] Borkowski, R., et al.: Experimental Demonstration of Mixed Formats and Bit Rates
Signal Allocation for Spectrum-flexible Optical Networking. In: Proc. Optical Fibre
Communications Conference, Los Angeles CA, USA (March 2012); paper OW3A.7
[8] Jinno, M., et al.: Spectrum-efficient and scalable elastic optical path network:
architecture, benefits, and enabling technologies. IEEE Communications Magazine 47,
66–73 (2009)
[9] Agraz, F., et al.: Experimental Demonstration of Centralized and Distributed
Impairment-Aware Control Plane Schemes for Dynamic Transparent Optical Networks.
In: OFC/NFOEC (2010); paper PDPD5
[10] Nag, A., Tornatore, M., Mukherjee, B.: Optical Network Design With Mixed Line
Rates and Multiple Modulation Formats. IEEE/OSA Journal of Lightwave Technology
(JLT) 28, 466 (2010)
[11] Salvadori, E., et al.: Handling Transmission Impairments in WDM Optical Networks by
Using Distributed Optical Control Plane Architectures. IEEE/OSA Journal of
Lightwave Technology (JLT) 27(13) (July 2009)
[12] Zhang, F., et al.: Requirements for GMPLS Control of Flexible Grids. IETF Internet
Draft. draft-zhang-ccamp-flexible-grid-requirements-01.txt (October 2011)
[13] King, D., et al.: Generalized Labels for the Flexi-Grid in Lambda-Switch-Capable
(LSC) Label Switching Routers. IETF Internet Draft, draft-farrkingel-ccamp-flexigrid-
lambda-label-01.txt (October 2011)
[14] Cugini, F., et al.: Demonstration of Flexible Optical Network based on Path
Computation Element. IEEE/OSA Journal of Lightwave Technology (JLT) 30(5)
(December 2011)
[15] Thomas, R.W., Friend, D.H., DaSilva, L.A., MacKenzie, A.B.: Cognitive networks:
adaptation and learning to achieve end-to-end performance objectives. IEEE
Communications Magazine 44, 51–57 (2006)
[16] Future Internet Public-Private Partnership Programme (FI-PPP),
http://www.fi-ppp.eu
[17] Gerstel, O.: Realistic Approaches to Scaling the IP Network using Optics. In:
Proceedings of OFC (2011); paper OWP1
[18] CHRON project, http://www.ict-chron.eu
[19] Mahmoud, Q.H. (ed.): Cognitive Networks: Towards Self-Aware Networks. John
Wiley & Sons, Ltd. (2007)
[20] Jiménez, T., et al.: A Cognitive System for Fast Quality of Transmission Estimation in
Core Optical Network. In: Proc. OFC/NFOEC (2012); paper OW3A.5
[21] Fernández, N., et al.: Cognition to Design Energetically Efficient and Impairment
Aware Virtual Topologies for Optical Networks. In: 16th International Conference on
Optical Networking Design and Modeling, ONDM 2012. University of Essex,
Colchester (in press, 2012)
[22] Christodoulopoulos, K., et al.: Elastic Bandwidth Allocation in Flexible OFDM-based
Optical Networks. J. Lightwave Technol. 29, 1354–1366 (2011)
[23] Bocoi, M., et al.: Cost Comparison of Networks Using Traditional 10&40 Gb/s
Transponders versus OFDM Transponders. In: Proceedings of OFC (2008); OThB4
[24] Poole, S., et al.: Bandwidth-flexible ROADMs as Network Elements. In: Proceedings
of OFC (2011); paper OTuE1
[25] Christodoulopoulos, K., et al.: Value analysis methodology for flexible optical
networks. In: ECOC 2011 (2011); paper We.10.P1.89
[26] Christodoulopoulos, K., Manousakis, K., Varvarigos, E.: Reach Adapting Algorithms
for Mixed Line Rate WDM Transport Networks. J. Lightwave Technol. 29, 3350–3363
(2011)
[27] Patel, A.N., Ji, P., Jue, J.P., Wang, T.: First Shared Path Protection Scheme for
Generalized Network Connectivity in Gridless Optical WDM Networks. In:
Proceedings of ACP 2010, PD6, 1-2 (December 2010)
A Tentative Design of a Future Internet Networking
Domain Landscape
Abstract. The Future Internet (FI) will dramatically broaden both the spectrum
of available information and the user’s possible contexts and situations. This
will lead to a vital need for a more efficient use of the Internet resources for
the benefit of all. While the Internet has already delivered huge economic and
social benefits over its short lifespan, there must be a realignment of how
Internet research and investments are made and value is captured for enabling a
continuous growth. The increase of available online contents and networking
complexity require the exploration, experimentation and evaluation of new
performance optimisation approaches for delivering different types of contents
to users within different contexts and situations. Several network research areas,
such as peer-to-peer, autonomic, cognitive and ad hoc networking, have
already demonstrated how to improve network performance and user
experience.
Interestingly, there are various Internet-networking research areas and
corresponding technologies that were investigated, experimented and
progressively deployed, while others emerged more recently. However, there
are still open questions, such as how to visualise the conceptual evolution, how to
articulate the various FI networking and computing research areas, and how to
identify the appropriate concepts populating such a FI domain landscape. This
paper presents a tentative FI domain landscape populated by Internet computing
and networking research areas.
1 Introduction
The Internet has progressively become a ubiquitous environment for globally
communicating and disseminating information. There is a limitless amount of
available online resources and tools to share information and develop a better
understanding of virtually any topic. With the recent advent of user-created content,
thanks to the web 2.0 social approach, there has been a tremendous expansion in the
number of web pages created every day for exposing and sharing societal issues such
as environmental monitoring, energy efficiency, food and drug security as well as
human well-being. Tools like photo/video sharing, mash-ups, tagging, wikis and
collaborative virtual worlds enable new ways for the society to explore and
understand past, present and future challenges. The Future Internet (FI) will
dramatically broaden both the spectrum of available information and the user’s
possible contexts and situations. This will lead to the vital need of a more efficient use
of the Internet resources for the benefit of all. While the Internet has already delivered
huge economic and social benefits over its short lifespan, there must be a realignment
of how Internet research and investments are made and value is captured for enabling
a continuous growth.
In this context, several testbeds were initiated, such as PlanetLab [3], TEFIS, BonFIRE and SensLAB
[2]. TEFIS supports the Future Internet of Services Research by offering a single
access point to different testing and experimental facilities for communities of
software and business developers to test, experiment, and collaboratively elaborate
knowledge [4], [5]. The main goal of the BonFIRE project is to design, build and
operate a multi-site Cloud prototype FIRE facility to support research across
applications, services and systems at all stages of the R&D lifecycle, targeting the
services research community on Future Internet. The purpose of the SensLAB project
is to deploy a very large scale open wireless sensor network platform, in order to
provide an accurate and efficient scientific tool to help in the design and development
of real large-scale sensor network applications. SensLAB has been instrumental in
detecting overlapping communities in complex networks [6].
Challenging issues arise in the study of dynamic networks, such as the
measurement, analysis and modelling of social interactions, and the capture of
physical proximity and social interaction by means of a wireless network. A concrete
case study demonstrated the deployment of a wireless sensor network applied to the
measurement of Health Care Workers' exposure to tuberculosis infected patients in a
service unit of the Bichat-Claude Bernard hospital in Paris, France [7]. As illustrated
above by the different testbed projects, the Future Internet is the “provider” of future
Internet infrastructure and applications. Obviously, the Future Internet will be the key
driver of technological support for services and products to be explored, experimented
and evaluated.
Two categories clearly correspond to two of the above-mentioned dimensions,
namely “Internet Routing” and “Network Type”. A third category, “Network
Computing”, largely overlaps with the dimension named “Evolution trends”.
As for the category “Network Computing”, it is worth noting that the concepts of
Pervasive Computing, often described as the ‘disappearing computing’, and
Ubiquitous Computing, rather evoked as ‘computing is everywhere’, are often used
synonymously, especially in the Ambient Intelligence area. In the same vein, the
concept of Grid Computing, known as a cluster of networked computers, and Cloud
Computing, computing as a service or storage as a service, are quite closely related
from the perspective of shared resources. Regarding the category of Network
Globalisation, all the concepts are related to the convergence towards an ‘all IP1’ strategy
and to the concepts of the Network Computing category as well as the Internet
Routing category. The Network Security and Network Assessment categories have
more transversal concepts that need to be considered at the earliest stages of the FI
design.
For each research stream, a Google Scholar search over three different time periods
was carried out as a publication metric intended to show its respective growth or
decline. All selected concepts, considered as research areas, are individually described
in Table 4 of the Appendix at the following URL2. The respective publication levels
for each concept are provided in the table below (see Table 1), showing the
publication values for the three respective time periods, sorted by ascending value
of the 2006-2011 column.
1 Internet Protocol.
2 http://www.mosaic-network.org/pub/bscw.cgi/0/69097
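As an illustration of this ranking step, the sketch below sorts a few concepts by their hit counts in ascending order of the 2006-2011 column, as done for Table 1; only the Internet of Things figures (117 and 2400) are taken from the text, the other counts are placeholders:

    # Sketch of the Table 1 ranking step with (mostly hypothetical) counts.
    counts = {   # concept: (1990-1999, 2000-2005, 2006-2011)
        "Cloud Computing":    (180, 230, 18000),
        "Optical Networking": (2100, 4900, 5000),
        "Internet of Things": (5, 117, 2400),   # the 117 -> 2400 jump cited below
    }

    # Sort ascending by the 2006-2011 column, as in Table 1.
    for concept, (_, _, recent) in sorted(counts.items(), key=lambda kv: kv[1][2]):
        print(f"{concept:22s} {recent:6d}")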
The bar-graph below (see Figure 1) shows the growth in terms of published papers for
the respective selected concepts across the three different time periods. The highest
publication levels belong to the concepts of the “Network Computing” category, as
well as to Ad hoc Network and Wireless Internet. However, the growth rate of Cloud
Computing looks so impressive that it is quite easy to predict it as the next big thing
on the Future Internet. Not surprisingly, among other concepts having an impressive
growth rate are Wireless Sensor Network and Internet of Things. The lowest level of
published papers appears to be related to more emerging concepts of the Internet
Routing category, such as Content Centric Networking, Self-adaptive Network,
Resilient Network, Fault-tolerant Network and Cognitive Network.
The growth rate of Virtual Private Network decelerates markedly in the last
time period, whereas it grew impressively in the middle one. The
same trend appears to apply to Internet Security and Quality of Services. The
situation is even worse in terms of growth rate for Optical Networking, which seems
to have reached its maximum annual publication volume.
The landscape is divided twice. First of all, it is divided into a top space
and a bottom space, which respectively address the wireless and the wired Internet.
Secondly, it is divided into a right-hand space corresponding to the more
traditional “Computer Network” and a left-hand space representing the more recent
“Network Computing”.
A tentative design of the Future Internet networking research domain landscape for
three successive time periods appears below (see Figures 2, 3 and 4), where each
concept, a presumed research area, appears as a bubble whose size is proportional to the
overall publication volume in the corresponding time period. The various concepts
and their allocated bubbles populate the landscape according to the four different
dimensions.
On the vertical dimension from wired to wireless Internet, the island in the bottom
area consists of “Optical Networking”, while the island in the top area is based
on “Wireless Internet”. The islands in the right- and left-hand spaces, as well as those
in the bottom and top spaces, are supposed to generate a certain gravity,
attracting other concepts along the other dimensions of internet routing, evolution
approach and autonomic & convergent networking.
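For illustration, a toy reproduction of such a bubble landscape can be sketched with matplotlib; all coordinates and publication counts below are invented placeholders, not the paper's data:

    # Toy bubble landscape: bubble area scales with publication count,
    # position is set by hand on the two dividing dimensions.
    import matplotlib.pyplot as plt

    concepts = {   # name: (x, y, publications) -- illustrative values only
        "Optical Networking":      (0.8, -0.9, 5000),
        "Wireless Internet":       (0.6,  0.9, 9000),
        "Cloud Computing":         (-0.8, 0.2, 18000),
        "Wireless Sensor Network": (-0.5, 0.8, 8000),
    }

    fig, ax = plt.subplots()
    for name, (x, y, pubs) in concepts.items():
        ax.scatter(x, y, s=pubs / 20, alpha=0.4)   # bubble area ~ publications
        ax.annotate(name, (x, y), ha="center", fontsize=8)
    ax.set_xlabel("Network Computing  <->  Computer Network")
    ax.set_ylabel("wired  <->  wireless Internet")
    plt.show()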
All small, low-brightness bubbles represent emerging concepts with very few
published papers, such as “IP Multimedia Subsystem” (3 published papers) and
“Internet of Things” (8 published papers). The only concept that had not yet emerged
by 1999 is “Content Centric Networking”, which scored 0 published papers and
therefore has the lowest brightness level in the figure.
The concept of “Internet of Things” grows in a similar way, though with a less steep
exponential progression (a factor of 20), from 117 to 2400 published papers. There
are other concepts that make good progress in this period, such as “Autonomic
Network”, “Wireless Sensor Network”, “Cognitive Network” and “Quality of
Experience”. Finally, the concept of “Next Generation Network” also makes a
significant progression, moving from 1650 to 4030 published papers in the period
2006-2011.
As for the concepts having a stable publication level over the two periods
2000-2005 and 2006-2011, “Optical Networking” and “Internet Security” show very
small increases of 2% and 13%, respectively. The concept of “Quality of Services” is
also about to reach a plateau, with a 20% progression compared to its previous
fivefold progression from the 1990-1999 to the 2000-2005 publication level.
Concepts                     Rank 2006-2011  Rank 2000-2005  Rank 1990-1999
Content-centric Networking        24              24              24
Self-adaptive network             23              23              18
Resilient Network                 22              19              17
Fault tolerant network            21              18               6
Autonomic Network                 20              20              19
Cognitive Network                 19              17               9
Network Convergence               18              14               5
Quality of Experience             17              15              15
Internet of Things                16              22              22
Optical Networking                15              10              10
IP Multimedia Subsystem           14              16              23
Next Generation Network           13              12              11
Peer-to-Peer Network              12               9              14
Quality of Services               11              11               3
Internet Security                 10               7               1
Wireless Sensor Network            9              13              20
Semantic Web Services              8               8              21
Virtual Private Network            7               6               4
Cloud Computing                    6              21              12
Wireless Internet                  5               3               7
Ad hoc Network                     4               5               8
Grid Computing                     3               4              16
Ubiquitous Computing               2               1               2
Pervasive Computing                1               2              13
It also shows that several FI research areas (concepts) that were among the most
popular in the time period 1990-1999 became among the least popular in the time
period 2006-2011, such as ‘Fault Tolerant’ (from rank 6 to rank 21), ‘Network
Convergence’ (from rank 5 to rank 18), ‘Cognitive Network’ (from rank 9 to rank
19), ‘Quality of Services’ (from rank 3 to rank 11) and finally ‘Internet Security’
(from rank 1 to rank 10).
Others remain among the most popular, such as ‘Ubiquitous Computing’ (rank 2 in
both periods), ‘Ad hoc Network’ (from rank 8 to rank 4), and ‘Wireless Internet’ (from
rank 7 to rank 5). Finally, FI research areas that were among the least popular in the
time period 1990-1999 became among the most popular in the time period 2006-2011,
such as ‘Grid Computing’ (from rank 16 to rank 3), ‘Pervasive Computing’ (from rank
13 to rank 1), ‘Cloud Computing’ (from rank 12 to rank 6), ‘Semantic Web Services’
(from rank 21 to rank 8) and ‘Wireless Sensor Network’ (from rank 20 to rank 9).
Ranking Evolution (positive values indicate a rank improvement between the two periods)

Concepts                     2000-2005 to 2006-2011  1990-1999 to 2000-2005
Content-centric Networking             0                       0
Self-adaptive network                  0                      -5
Resilient Network                     -3                      -2
Fault tolerant network                -3                     -12
Autonomic Network                      0                      -1
Cognitive Network                     -2                      -8
Network Convergence                   -4                      -9
Quality of Experience                 -2                       0
Internet of Things                     6                       0
Optical Networking                    -5                       0
IP Multimedia Subsystem                2                       7
Next Generation Network               -1                      -1
Peer-to-Peer Network                  -3                       5
Quality of Services                    0                      -8
Internet Security                     -3                      -6
Wireless Sensor Network                4                       7
Semantic Web Services                  0                      13
Virtual Private Network               -1                      -2
Cloud Computing                       15                      -9
Wireless Internet                     -2                       4
Ad hoc Network                         1                       3
Grid Computing                         1                      12
Ubiquitous Computing                  -1                       1
Pervasive Computing                    1                      11
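The evolution values are simply rank differences between consecutive periods (a positive value means the concept climbed the ranking); a minimal check against two rows of the tables above:

    # Rank evolution = rank difference between consecutive periods.
    ranks = {  # concept: (rank 1990-1999, rank 2000-2005, rank 2006-2011)
        "Cloud Computing":    (12, 21, 6),
        "Internet of Things": (22, 22, 16),
    }
    for concept, (r90, r00, r06) in ranks.items():
        print(concept, r00 - r06, r90 - r00)
    # -> Cloud Computing 15 -9
    # -> Internet of Things 6 0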
Acknowledgements. This work was performed within the FIREBALL ICT project
and partly funded by the European Commission. The authors wish to acknowledge the
European Commission and project partners for their support. We also wish to
express our gratitude and appreciation to the INRIA scientific leaders of project
teams involved in FI-related projects, namely: Denis Caromel and Franca Perrina
(TEFIS), Walid Dabbous (PlanetLab and OneLab), Eric Fleury (SensLAB), and
David Margery (BonFIRE), for their active contribution.
Open Access. This article is distributed under the terms of the Creative Commons Attribution
Noncommercial License which permits any noncommercial use, distribution, and reproduction
in any medium, provided the original author(s) and source are credited.
References
1. Pirolli, P., Preece, J., Shneiderman, B.: Cyberinfrastructure for Social Action on National
Priorities. IEEE Computer Journal, special Issue on Technology-Mediated Social
Participation (November 2010)
2. Tselentis, G., Galis, A., Gavras, A., Krco, S., Lotz, V., Simperl, E., Stiller, B., Zahariadis,
T.: Towards the Future Internet - Emerging Trends from European Research (2010),
http://www.booksonline.iospress.nl/Content/View.aspx?piid=16465 (retrieved)
3. Burin des Rosiers, C., Chelius, G., Fleury, E., Fraboulet, A., Gallais, A., Mitton, N., Noël,
T.: SensLAB Very Large Scale Open Wireless Sensor Network Testbed. In: Korakis, T.,
Li, H., Tran-Gia, P., Park, H.-S. (eds.) TridentCom 2011. LNICST, vol. 90, pp. 239–254.
Springer, Heidelberg (2012)
4. Kaafar, M.A., Mathy, L., Barakat, C., Salamatian, K., Turletti, T., Dabbous, W.: Securing
Internet Coordinate Embedding Systems. In: Proceedings of ACM SIGCOMM 2007,
Kyoto, Japan (August 2007)
5. Schaffers, H., Santoro, R., Sällström, A., Pallot, M., Trousse, B., Hernandez-Munoz, J.M.:
Integrating Living Labs with Future Internet Experimental Platforms for Co-creating
Services within Smart Cities. In: Proceedings of the 17th International Conference on
Concurrent Enterprising, ICE 2011 Innovating Products and Services for Collaborative
Networks, Aachen, Germany (June 2011)
6. Leguay, J., Sallstrom, A., Pickering, B., Boniface, M., Gracia, A., Perrina, F., Giammatteo,
G., Roberto, J., Campowsky, K.: Testbed Facilities for Multi-faceted Testing throughout
the Service Development Lifecycle. In: ServiceWave 2011, Poznan (October 2011)
7. Friggeri, A., Chelius, G., Fleury, E., Fraboulet, A., Mentré, F., Lucet, J.-C.: Reconstructing
Social Interactions Using an unreliable Wireless Sensor Network. Computer
Communications 34(5), 609–618 (2011a)
8. Friggeri, A., Chelius, G., Fleury, E.: Egomunities, Exploring Socially Cohesive Person-
based Communities, N° RR-7535 (2011) [inria-00565336 - version 2] (2011b)
9. Pallot, M., Trousse, B., Senach, B., Scapin, D.: Living Lab Research Landscape: From
User Centred Design and User Experience towards User Cocreation. In: Proceedings of the
Living Lab Summer School, Cité des Sciences, Paris (August 2010)
Author Index