Computer Communications
journal homepage: www.elsevier.com/locate/comcom
Architectures for the future networks and the next generation Internet: A survey
Subharthi Paul, Jianli Pan, Raj Jain
Department of Computer Science and Engineering, Washington University in Saint Louis, United States
Article info
Article history:
Received 21 August 2009
Received in revised form 23 July 2010
Accepted 3 August 2010
Available online 25 August 2010
Keywords:
Future Internet architecture survey
Future Internet design (FIND)
Future network architecture
Next generation Internet
Global Environment for Networking Innovations (GENI)
Abstract
Networking research funding agencies in the USA, Europe, Japan, and other countries are encouraging research on revolutionary networking architectures that may or may not be bound by the restrictions of the current TCP/IP-based Internet. We present a comprehensive survey of such research projects and activities. The topics covered include various testbeds for experimentation with new architectures, new security mechanisms, content delivery mechanisms, management and control frameworks, service architectures, and routing mechanisms. Delay/disruption tolerant networks, which allow communication even when a complete end-to-end path is not available, are also discussed.
© 2010 Elsevier B.V. All rights reserved.
1. Introduction
The Internet has evolved from being an academic pursuit to a
huge commercial commodity. The IP thin waist associated with
the simplicity of the present design has been a remarkable architectural choice, motivated by the need to converge multiple link layer
technologies and end-to-end transport mechanisms. However, the
assumptions under which the original Internet was designed have
changed. Newer contexts and specific requirements have subjected the original design paradigms of the Internet to a lot of abuse. Due to the limitations of the underlying architecture, the hacks overlaid on it to meet these requirements have limited effectiveness and are often highly inefficient.
Commercialization of the Internet has introduced concerns about security, trust, and value-added services. The introduction of networkable wireless systems has brought about a mobile paradigm.
Use of the Internet as a communication commodity upon which
business communications depend has raised the need for better
resilience and fault tolerance through fine-grained control and management. The best-effort delivery model of IP is no longer considered adequate. Routing is no longer based on algorithmic optimization but rather has to deal with policy compliance. Assumptions
about persistently connected end systems do not hold with the
introduction of delay-tolerant networking paradigms. Protocols designed without concern for energy efficiency cannot integrate energy-conscious embedded system networks such as sensor networks.
A clean-slate approach, in contrast, allows designing the system from scratch without being restrained by the existing system, providing a chance to have an unbiased look at the problem
space. However, the scale of the current Internet forbids any
changes, and it is extremely difficult to convince the stakeholders to believe in a clean-slate design and adopt it. There is simply too much risk involved in the process. The only way to mitigate such risks and to appeal to stakeholders is through actual Internet-scale validation of such designs, demonstrating their superiority over
the existing systems. Fortunately, research funding agencies all
over the world have realized this pressing need and a world-wide
effort to develop the next generation Internet is being carried out.
The National Science Foundation (NSF) was among the first to announce the GENI (Global Environment for Networking Innovations) program, which develops an infrastructure for deploying and testing futuristic networking ideas developed as part of its FIND (Future Internet Design) program. The NSF effort was followed by the FIRE (Future Internet Research and Experimentation) program, which supports numerous next generation networking projects under the 7th Framework Programme of the European Union; the AKARI program in Japan; and several other similarly specialized programs in China, Australia, Korea, and other parts of the world.
The scale of the research efforts to develop a next generation
Internet proves its importance and the need for its improvement
to sustain the requirements of the future. However, the amount
of work being done or proposed may baffle someone who is trying to get a comprehensive view of the major research areas. In this paper, our goal is to explore the diversity of these research efforts by presenting a coherent model of research areas and by introducing some key research projects. This paper does not claim to be a comprehensive review of all next generation Internet projects, but may be considered an introduction to the broader aspects and some proposed solutions.
Next generation Internet research efforts can be classified under the primary functions of a networking context, such as routing, content delivery, management and control, and security. We argue against such an organization of the research efforts, on the view that it is contrary to clean-slate design. Clean-slate views of isolated problems in a specific functional area do not necessarily fit together to define a seamlessly integrated system, because they are defined under fixed assumptions about the other parts of the system. The result is that the best individual solutions often contradict each other at the system level. For example, a clean-slate centralized management and control proposal may interfere with the objectives of a highly scalable distributed routing mechanism, rendering both solutions useless from a systems perspective. Also, we believe that the current Internet and its success should not in any way bias clean-slate thought.
Designers should be able to put in radical new ideas that may have
absolutely no resemblance to any design principle of the current
Internet. At present, there are very few architectures that actually
focus on a holistic design of the next generation Internet. Some
holistic designs have been proposed under service centric architectures [discussed in Section 7]. Most service centric architectures
design new service primitives and propose holistic architectural
frameworks for composing applications over these federated service primitives. An example of such an architecture is the Internet
3.0 architecture [discussed in Section 7.6].
This survey covers a major portion of the research being undertaken on the next generation Internet. First, we survey some of the more progressive and interesting ideas in smaller, more independent research areas and classify them into various sections as follows:
1. Security: In the current Internet, security mechanisms are placed as an additional overlay on top of the original architecture rather than as part of the Internet architecture, which leads to many of the security problems discussed in Section 3.
2. Content delivery mechanisms [Section 4].
3. Delay/disruption tolerant networks [Section 5].
4. Network monitoring and control architectures [Section 6].
5. Service architectures [Section 7].
3. Security
The original Internet was designed in a trust-all operating environment of universities and research laboratories. However, this
assumption has long since been invalidated with the commercialization of the Internet. Security has become one of the most important areas in Internet research. With more and more businesses online and a plethora of applications finding new uses for the Internet, security is surely going to be a major concern for the next generation. In the next generation Internet, security will be part of the architecture rather than being overlaid on top of it, as in the current Internet. Years of experience in security research have established that security is not a singular function of any particular layer of the protocol stack, but a combined responsibility of every principal communication function that participates in the overall communication process. In this section, we present several next generation proposals that address the problem of security from different angles, including security policies, trust relationships, names, identities, cryptography, anti-spam, anti-attack, and privacy mechanisms.
3.1. Relationship-Oriented Networking
The basic goal of the Relationship-Oriented Networking project
[5] is to build a network architecture that makes use of secure
cryptographic identities to establish relationships among people,
entities, and organizations in the Internet. It tries to provide better
security, usability, and trust in the system, and to allow different
users and institutions to build trust relationships within networks
similar to those in the real world.
Relationship-Oriented Networking will mainly:
1. Consider how to pervasively incorporate cryptographic identities into the future network architecture.
2. Use these strong identities to establish relationships as first-class citizens within the architecture.
3. Develop an architectural framework and its constituent components that allow users and institutions to build trust relationships within the context of digital communications. These can be viewed and utilized in a similar fashion to relationships outside the realm of digital communications.
3.1.1. Identities
The traditional Internet uses unique names to identify various
resources. These names can be email addresses, account names
or instant messaging IDs. For example, an email address serves as the identifier for the email service. However, these identities offer little security since they can be easily spoofed. Moreover, they are invalidated after a change of service provider. In Relationship-Oriented Networking, these problems are solved by cryptographic identities that are used throughout the architecture. These identities are more secure than plain, name-based schemes because security features are integrated in the form of keys or certificates.
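To make the idea concrete, the following minimal sketch (our illustration, not an API prescribed by the project) builds a cryptographic identity from an Ed25519 key pair using the Python `cryptography` package: the identifier is derived from the public key, so it survives a change of service provider, and messages signed with the private key cannot be spoofed.

```python
# Minimal sketch: a cryptographic identity as an Ed25519 key pair.
# The public key (or its hash) serves as a spoof-resistant identifier.
# Illustrative only; Relationship-Oriented Networking does not
# prescribe this particular API or encoding.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
import hashlib

# Each principal generates a key pair once; the private key never leaves it.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A provider-independent identifier: hash of the raw public key bytes.
pub_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
identity = hashlib.sha256(pub_bytes).hexdigest()

# Messages are signed, so the identity cannot be spoofed without the key.
message = b"introduction: I vouch for peer X"
signature = private_key.sign(message)
public_key.verify(signature, message)  # raises InvalidSignature if forged
print("identity:", identity[:16], "... signature verified")
```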
3.1.2. Building and sharing relationships
The Relationship-Oriented Network architecture permits relationships to be established implicitly or explicitly. Allman et al.
[5] provide an example in support of this requirement. For sensitive applications with tight access control, such as banking, the
relationship between a bank and a patron, and the patron with
their account, would need explicit configuration. In comparison, less sensitive services may be able to rely on less formal, opportunistic relationships; for example, a public enterprise printer may not need tight access control. Relationships between people can also be built implicitly or explicitly. As with trust relationship formation in our society, a relationship can also be set up by user introductions. Also, the sharing of a relationship among different people or entities is allowed, which represents some degree of transitivity in the relationship. Moreover, a relationship can be leveraged as a vote of confidence when trying to decide whether an unknown service provider is legitimate or malicious. However, the sharing of relationships should be limited in view of the potential downsides and privacy implications.
3.1.3. Relationship applications
Access control is one of the relationship applications. It spans from low-level access controls on the physical network infrastructure to high-level, application-specific control. The first level of enhanced access control comes from having stronger notions of identity due to the adoption of cryptography-based schemes. Thus, access control can be implemented based on the users or the actors rather than on rough approximations such as MAC addresses, IP addresses, and DNS names. Typical examples are "Allow the employees in the human resources department to access the disk share that holds the personnel files" and "Allow Bob, Jane, and Alice access to the shared music on my laptop."
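A minimal sketch of such identity- and role-based access control follows. The resource names, role directory, and principals are hypothetical; in a real deployment the principals would be verified cryptographic identities rather than plain strings.

```python
# Sketch: access control keyed on identities and roles rather than
# MAC/IP addresses. All names below are hypothetical examples.
ACL = {
    "disk://personnel-files": {"role:human-resources"},
    "share://my-laptop/music": {"user:bob", "user:jane", "user:alice"},
}

ROLES = {"user:carol": {"role:human-resources"}}  # directory of role bindings

def allowed(principal: str, resource: str) -> bool:
    """Grant access if the principal, or one of its roles, is listed."""
    entries = ACL.get(resource, set())
    return principal in entries or bool(ROLES.get(principal, set()) & entries)

assert allowed("user:carol", "disk://personnel-files")   # via HR role
assert allowed("user:bob", "share://my-laptop/music")    # direct grant
assert not allowed("user:mallory", "share://my-laptop/music")
```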
Relationships can also be used for service validation. In practice,
users need to know that they are communicating with the expected service provider and not a malicious attacker.
Relationship-Oriented Networking also tries to build a naming system that follows the social graph to alias resources. A named resource can be aliased in a context-sensitive manner by users, who can expose names to their social networks, which in turn provides ways to share information. For example, the name "babysitter" can be set in a personal namespace and the resource exposed to a friend who is in need of child care; the name will be mapped to the unique email address of a babysitter.
Fig. 1. SANE model: Step 0: Client A authenticates with the domain controller (DC); Step 1: Server B publishes B.http and allows A access; Step 2: A requests a capability to B.http; Step 3: A uses the returned capability to communicate with B.
Several additional issues need to be solved. For example, SANE requires the switches to perform per-packet cryptographic operations to decrypt the source route. This requires modification and redesign of the switches and may slow down the data plane. Moreover, the end-hosts also need to be modified to address malicious attacks. Mechanisms to integrate middle-boxes and proxies into the SANE architecture pose important research challenges. More detailed mechanisms and designs to address these challenges need to be presented and validated before they can be applied to the real world.
3.3. Enabling defense and deterrence through private attribution
Current network security depends mainly on defenses, that is, mechanisms that impede malicious activity. However, deterrence is also necessary to reduce threats and attacks on the Internet. Thus, there is a need to balance defense and deterrence in the future Internet. Deterrence is usually achieved by means of attribution, that is, tying an action to an individual. However, compared to the physical world, it is much more difficult to establish such attribution on the Internet.
Two main design goals of this research project [192] are preserving privacy and per-packet attribution. Moreover, the security
architecture provides content-based privacy assurance and tries to prevent private information from leaking across the network.
This proposal requires every packet to be self-identifying. Each
packet is tagged with a unique, non-forgeable label identifying
the source host. The private attribution based on group signatures
allows the network elements to verify that a packet was sent by a
member of a given group. Through the participation of a set of
trusted authorities, the privacy of the individual senders can be
ensured.
The per-packet attribution and the privacy preservation ensure
that all of the packets are authenticated and traceable. This reduces
potential attacks and offers deterrence to some extent, while at the
same time maintaining sender privacy by the use of a shared-secret key mechanism.
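The sketch below illustrates the flavor of per-packet attribution. True group signatures (which let members sign unlinkably while a trusted authority retains the ability to identify the signer) are not available in the Python standard library, so this stand-in tags packets with an HMAC under a per-group shared secret, echoing the shared-secret mechanism mentioned above. Unlike real group signatures, members of a group could imitate one another here; the example is entirely illustrative.

```python
# Sketch of per-packet attribution: every packet carries a label that
# lets network elements verify it was sent by a member of a known
# group, while individual identity stays hidden from the verifier.
import hmac, hashlib, os

GROUP_KEYS = {"campus-net": os.urandom(32)}  # held by members and verifiers

def tag_packet(group: str, payload: bytes) -> bytes:
    """Sender: prepend a label binding the packet to its source group."""
    mac = hmac.new(GROUP_KEYS[group], payload, hashlib.sha256).digest()
    header = group.encode()
    return len(header).to_bytes(1, "big") + header + mac + payload

def verify_packet(packet: bytes) -> bytes:
    """Network element: drop packets that carry no valid group label."""
    glen = packet[0]
    group = packet[1:1 + glen].decode()
    mac, payload = packet[1 + glen:33 + glen], packet[33 + glen:]
    expected = hmac.new(GROUP_KEYS[group], payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("unattributable packet dropped")
    return payload

pkt = tag_packet("campus-net", b"hello")
print(verify_packet(pkt))  # b'hello'
```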
Some of the challenges that need to be addressed are: (1) determining the source of the traffic in situations where traffic may be relayed by an intermediate host on behalf of the source host, (2) the tradeoff between the need for attribution security and the user's privacy, and (3) technical details of packet transformation, overhead reduction, and guaranteeing minimal changes
and impact on the existing software.
3.4. Protecting user privacy in a network with ubiquitous computing
devices
The ubiquitous presence and use of wireless computing devices have magnified privacy concerns [185]. These concerns are inherent in the design of the link-layer and lower-layer protocols and are not well addressed by currently available approaches. In the next generation Internet, the proliferation of these wireless computing devices is expected to worsen the privacy problem.
The central problem is to devise a mechanism that conceals the end-points' information from all parties that do not need to know it for the network to function. For example, IP addresses only need to be revealed to the immediate provider, not to all providers along the network path. It is assumed that sources trust their immediate provider more than any other transit provider on the network path. In this way, the users' privacy can be guaranteed while remaining manageable and accountable.
In this proposal, encrypted addresses are used to provide privacy. Entire packets, including their addresses, can be encrypted
over links, hiding identities from other users of the network.
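A minimal sketch of this idea, assuming per-link keys agreed out of band and using the `cryptography` package's Fernet construction, is shown below: the entire packet, headers included, is opaque on the wire, and only the immediate provider can recover the addresses needed for forwarding.

```python
# Sketch: hiding end-point addresses from everyone except the
# immediate provider by encrypting whole packets (headers included)
# on each link. Key agreement is assumed to happen out of band;
# the packet format is a placeholder for illustration.
from cryptography.fernet import Fernet

link_key_host_to_isp = Fernet(Fernet.generate_key())  # host <-> its ISP

packet = b"src=10.0.0.7 dst=203.0.113.9 payload=..."

# On the wire between host and ISP, addresses are not observable.
ciphertext = link_key_host_to_isp.encrypt(packet)

# The immediate provider (and only it) recovers the header to forward.
assert link_key_host_to_isp.decrypt(ciphertext) == packet
```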
Most of these projects are still in their initial phases, so only initial proposals and task goals are available at this point.
The trustworthy network infrastructure research is dedicated to finding new architecture designs for future heterogeneous networks and systems. These are designed with built-in security, reliability, and privacy; with secure policies across multiple domains and networks; and with trustworthy operation and management of the billions of devices or "things" connected to the Internet. It also includes the research and development of trustworthy platforms for monitoring and managing malicious network threats across multiple domains or organizations. Typical E.U. FP7 projects on this topic include ECRYPT II [44] (on future encryption technologies), INTERSECTION [70] (on the vulnerabilities at the interaction points of different service providers), and AWISSENET [13] (on security and error resilience in wireless ad hoc networks and sensor networks),
the traffic engineering policies of the underlying provider IP networks. This leads to the selection of more expensive routes, endangering peering policies between ISPs. The P4P [147] group is investigating methods for the beneficial co-existence of P2P networks and ISPs [216,217]. One possible solution is to develop P2P mechanisms that are aware of the underlying topology and location of peers [212]. An oracle mechanism wherein the ISPs assist the P2P networks in selecting peers has been described in [1].
P2P could serve as the next generation content delivery mechanism, mostly because of its scalability, resilience, self-configuration, and self-healing properties. Research groups such as P2P-Next [148] are working towards solutions for topology-aware P2P carrying legal and licensed content for media channels such as IPTV and video on demand. We think these research efforts are important for P2P networks to alleviate the huge data dissemination needs of the future Internet.
4.3. Swarming architecture
Uswarm [207] proposes a data dissemination architecture for the future Internet based on some established techniques of the P2P world. A "swarm" (as used in the context of P2P systems) is a set of loosely connected hosts that act in a selfish and highly decentralized manner to provide local and system-level robustness through active adaptation. BitTorrent is an extremely successful swarming P2P system. BitTorrent solves the traditional P2P problems of leeching (clients downloading files without sharing them with other peers) and low upload capacity of peers. To counter leeching, BitTorrent employs a tit-for-tat mechanism wherein the download speed of a peer depends on the amount of data it shares. Also, BitTorrent implements a multi-point-to-point mechanism wherein a file is downloaded in pieces from multiple locations, so a peer's aggregate download rate can be much higher than any single peer's upload capacity.
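The following sketch illustrates tit-for-tat unchoking in the BitTorrent style: upload slots go to the neighbors that have recently uploaded the most to us, with one slot reserved for optimistic unchoking. It is a simplification of BitTorrent's actual choking algorithm.

```python
# Sketch of BitTorrent-style tit-for-tat: a peer grants its few
# upload slots to the neighbors that have uploaded the most to it,
# so leechers that share nothing get little download bandwidth.
# Real BitTorrent rotates these decisions every few seconds.
import random

def choose_unchoked(upload_rate_to_us: dict, slots: int = 4) -> set:
    by_rate = sorted(upload_rate_to_us, key=upload_rate_to_us.get,
                     reverse=True)
    unchoked = set(by_rate[:slots - 1])
    others = [p for p in upload_rate_to_us if p not in unchoked]
    if others:  # one optimistic slot for a random, possibly new, peer
        unchoked.add(random.choice(others))
    return unchoked

rates = {"p1": 120, "p2": 80, "p3": 5, "p4": 0, "p5": 64}
print(choose_unchoked(rates))  # p1, p2, p5 plus one random other peer
```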
However, it is argued [207] that although BitTorrent solves the problem of flash crowds (sudden high popularity of a piece of content) through its swarming model, it does not have good support for post-popularity downloads, when only a few seeds for the content may exist and the demand for the content is not very high. Also, BitTorrent uses a centralized architecture for its tracker, which introduces a single point of failure. Thus, in scenarios such as delay tolerant networks (DTNs), if the tracker is unreachable from the peer, then the peer cannot download data even though all the peers uploading the file may be within communication reach of the DTN peer. The mechanisms introduced to counter this situation are the use of replicated trackers or Distributed Hash Table (DHT) tracking mechanisms. However, replicated trackers result in un-unified swarms (multiple swarms for a single file), while DHT mechanisms introduce additional latency and burden on the peers. Despite some of these drawbacks, as of 2004, BitTorrent was reported to be carrying one-third of the total Internet traffic
[142,199]. Motivated by the huge success of swarming systems
such as BitTorrent, Uswarm [207] proposes to investigate the
feasibility of a swarming architecture as the basis for content delivery in the future Internet. Some of the key modifications needed to define an architecture based on swarming rather than an isolated
service are: (1) a generic naming and resolution service, (2) a massively distributed tracking system, (3) economic and social incentive models, and (4) support for in-network caches to be a part
of the swarm architecture.
Uswarm needs to devise a generic naming and resolution mechanism as the basis for its content distribution architecture. The objective of this mechanism, called the Intent Resolution Service (IRS), is to translate the intent specified in an application-specific form (URL, CSS, etc.) into standardized meta-data, and then to resolve the meta-data (via the Meta-data Resolution Service, or MRS) to a set of peers that can serve the data. The MRS service is devised using a combination of highly replicated tracking using a logically centralized tracking system (such as DNS), in-network tracking where a gateway may intercept the request and process it, and peer-to-peer tracking using gossip mechanisms (as in KaZaA [149], Gnutella [150], etc.). All of these tracking mechanisms are highly distributed and are expected to significantly improve the availability of the system.
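The two-step resolution can be pictured as follows. The tables are stand-ins for the replicated, in-network, and peer-to-peer trackers described above, and all names are illustrative.

```python
# Sketch of uswarm's two-step resolution: the Intent Resolution
# Service (IRS) maps an application-specific name to standardized
# meta-data, and the Meta-data Resolution Service (MRS) maps that
# meta-data to peers currently able to serve the content.
IRS_TABLE = {  # application-specific intent -> content meta-data
    "http://example.org/movie": "sha256:1a2b-placeholder",
}
MRS_TABLE = {  # content meta-data -> set of serving peers
    "sha256:1a2b-placeholder": {"peer-a:6881", "peer-b:6881"},
}

def resolve(intent: str) -> set:
    meta = IRS_TABLE[intent]            # step 1: intent -> meta-data
    return MRS_TABLE.get(meta, set())   # step 2: meta-data -> peers

print(resolve("http://example.org/movie"))
```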
Uswarm is a unified swarming model. Unlike BitTorrent-like models, where each file is associated with its own swarm, uswarm advocates a unified swarm. In a unified swarm, peers are not loosely connected based on a particular piece of content; rather, they are all part of the system and help each other attain their objectives. For example, suppose there are two files, A and B, each with an associated swarm, A_swarm and B_swarm, respectively. Also suppose that the peers of B_swarm already have file A and, similarly, the peers of A_swarm already have file B. In such a situation, A_swarm could contribute to B_swarm by providing a pre-formed swarm for file B, and vice versa.
The co-operative swarming mechanism requires some fundamental extensions to BitTorrent-like incentive mechanisms. Uswarm uses the same tit-for-tat principle as the BitTorrent incentive mechanism but also provides incentives for a peer to upload blocks from multiple files (rather than only the file it is presently downloading) to support the co-operative swarming paradigm of uswarm. A control plane incentive mechanism also needs to be developed for uswarm, since it depends on a distributed P2P mechanism for the MRS control messages. The control plane incentive mechanism includes tit-for-tat (keeping track of peers that are most helpful in resolving control messages) and dynamic topology adaptation (in which peers select their neighbors dynamically based on how helpful they are).
Uswarm looks to solve some very relevant problems of P2P networks. Menasche et al. [99] have presented a generalized model to quantify the availability of content in swarming systems such as BitTorrent. This supplements previous studies on the robustness, performance, and availability of BitTorrent-like swarming systems [103,173] and is expected to advance the feasibility analysis of such systems as a candidate data dissemination mechanism for the next generation Internet. Research in this area addresses some general issues relevant to other research areas as well. For example, by leveraging in-network caches, uswarm addresses some of the concerns of the P2P-ISP tussle and also has some similarities to the Content Centric Networking architecture mechanisms discussed in Section 4.4.
4.4. Content Centric Networking
Although classified as a content delivery mechanism, Content Centric Networking (CCN) [30–34,75,76,79] offers much more than that. It proposes a paradigm shift from the traditional host-centric design of the current Internet to a content-centric view of the future Internet. CCN is motivated by the observation that the Internet was designed around 40 years ago and has lost its relevance in the present context of its use. While originally designed as a mechanism to share distributed resources (for example, access to a printer attached to a single remote host in an organization), today the Internet is used more for content delivery. Since resource access and data access are fundamentally different, with completely different properties, the Internet needs to be re-designed to accommodate the present context. Although the ideas of a content centric network have existed for quite some time through a series of papers on this topic at the University of Colorado [30–34], they have gained momentum only recently in the context of the next generation Internet design initiatives [75,76,79]. In this subsection we discuss the specifics of two of the most recent efforts in this area.
works and rural connectivity networks have almost exact knowledge about available storage resources and mobility patterns. Such information is not available to DANs. Also, being a service-host paradigm limited in topological and service diversity, a DAN is able to optimize its routing using anycasting. Such role-based methods are generally not employed in traditional DTNs. Apart from these, DANs also have to (1) deal with a higher degree of diversity in the underlying communication technology, (2) offer better optimization in the use of redundancy for resilience, (3) make better use of resources such as storage and communication opportunities, (4) define stricter prioritization of traffic to ensure timely dissemination of critical life-saving data, (5) formulate incentive schemes for sharing personal resources for the common good, and (6) define security mechanisms to protect against potential abuse of resources, compared to most classical DTN scenarios.
The architectural elements of Phoenix incorporate all available resources, including personal wireless devices such as cellular phones and home WLANs, external energy resources such as car batteries, wide-area broadcast channels, and dedicated short-range communication systems (DSRC). These resources are incorporated into one cohesive host-service network that provides a unified communication channel for disaster recovery and rescue operations until the original communication infrastructure is reinstated. To achieve this convergence and the stated objectives of DANs in general, Phoenix relies on two underlying communication protocols: (1) the Phoenix Interconnectivity Protocol (PIP) and (2) the Phoenix Transport Protocol (PTP).
1. Phoenix Interconnectivity Protocol (PIP): In a DAN scenario, the communication nodes are expected to be partitioned into a number of temporarily disconnected clusters, each cluster comprising one or more network segments using different communication technologies. A multi-interface node supporting multiple access technologies can bridge two or more network segments. Also, node mobility, disaster recovery activities, and topology changes may initiate connections between clusters. In Phoenix, the PIP layer provides a role-based routing service between nodes belonging to connected clusters. Each node advertises its specific roles. The forwarding table of PIP maintains entries mapping routes to specific roles with an associated cost metric (see the sketch after this list). Thus, PIP provides an abstract view of a fully connected cluster of nodes to the upper layers while managing all the heterogeneity of access technologies, role-based naming of nodes, and energy-efficient neighbor and resource discovery mechanisms within itself. An energy-aware routing protocol for disaster scenarios has more recently been proposed by the same group [205].
2. Phoenix Transport Protocol (PTP): A DAN operates in an environment of intermittent connectivity, like DTNs. Also, negotiation-based control signaling to optimize bandwidth utilization is not possible in such scenarios. Thus, the Phoenix Transport Protocol (PTP) is responsible for optimizing storage resources to guarantee eventual delivery of the message. This store-and-forward paradigm of Phoenix is quite similar to DTNs, except that in DANs like Phoenix, storage resources are highly constrained and congestion control issues are more important than in other types of DTNs. In an attempt to optimize storage resources at forwarding nodes, PTP follows strict prioritization in data forwarding during contact opportunities. To deliver data between PTP neighbors (logically connected nodes, similar to the concept of neighbors in the end-to-end paradigm) belonging to the same connected cluster, PIP routing may be used. However, for PTP neighbors in disconnected clusters, opportunistic dissemination techniques need to be used. PTP tries to optimize this dissemination process through selective dissemination, deciding what data should be given to whom to maximize the eventual delivery probability of the data. However, the lack of pre-estimated knowledge about node mobility and capability makes it challenging for PTP to optimize selective dissemination. A mechanism of diffusion filters based on the exchange of context information (neighbors encountered in a time window, current neighbors, degree of connectivity of nodes, etc.) between PTP peers has been suggested as a solution for such situations.
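The role-based forwarding of PIP referenced in item 1 might look like the following sketch, in which routes are kept per advertised role with a cost metric. The roles, node names, and table layout are our illustration, not the Phoenix wire format.

```python
# Sketch of PIP-style role-based forwarding: routes are kept per role
# rather than per address, each with a cost, and a packet addressed
# to a role is sent toward the cheapest node advertising that role.
FORWARDING = {  # role -> list of (next_hop, cost)
    "medic":   [("node7", 2), ("node3", 5)],
    "shelter": [("node2", 1)],
}

def next_hop(role: str) -> str:
    routes = FORWARDING.get(role)
    if not routes:
        raise LookupError(f"no route advertised for role {role!r}")
    return min(routes, key=lambda r: r[1])[0]

print(next_hop("medic"))  # node7, the lowest-cost route to any medic
```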
Other architectural considerations of Phoenix include those of
security, role management, context sensing and localization, and
accounting and anomaly detection issues.
Phoenix is, thus, an instantiation of a more general class of disaster "Day After Networks" (DANs) that is expected to use established concepts and techniques of DTNs and to spawn an important area for future networking research.
5.5. Selectively Connected Networking (SCN)
Most future system designs will need to be energy efficient, and networking systems are no exception. The original design of the Internet assumed an always-on mode for every architectural element of the system: routers, switches, end-hosts, etc. Sleep modes defined in modern operating systems are capable of preserving the local state of end-hosts, but not their network state. This limitation can be attributed to the design of the networking protocols. Most protocols implicitly assume prolonged non-responsiveness from a particular end-host to be a sign of failure and thus discard all communication state associated with that end-host. Obviously, a new paradigm of energy-efficient protocol design is required to build energy-efficient networking systems.
Methods for developing a selectively connected, energy-efficient network architecture are proposed for study by Allman et al. [3,4]. Although not particularly similar to DTNs, research in designing selectively connected systems could benefit from existing ideas in DTNs, particularly when sleep modes of end-hosts create an environment of intermittent connectivity. The key ideas in the design of selectively connected systems are: (1) delegation of proxy-able state to assistants that help the end system sleep, (2) policy specifications by the end system to specify the particular events for which it should be woken, (3) application primitives allowing the assistant to participate in an application (e.g., peer-to-peer searches) on behalf of the host and wake up the host only when required, and (4) security mechanisms to prevent unauthorized access to the system's state through its patterns of communication. A small sketch of the assistant idea follows.
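In the sketch below, the event names and the wake-on-LAN mechanism are assumptions for illustration only.

```python
# Sketch of a selectively connected end-host delegating "proxy-able"
# state to an assistant: the assistant answers routine keep-alives on
# the sleeping host's behalf and wakes it only for events matching
# the host's stated policy.
WAKE_POLICY = {"incoming_ssh", "file_transfer_request"}  # host-specified

def assistant_handle(event: str) -> str:
    if event == "keepalive":
        return "answered by assistant; host stays asleep"
    if event in WAKE_POLICY:
        return "wake host (e.g., via wake-on-LAN) and hand over"
    return "ignored; host stays asleep"

for ev in ("keepalive", "p2p_search", "incoming_ssh"):
    print(ev, "->", assistant_handle(ev))
```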
The delegation of proxy-able state to the assistant, and the delegation of application responsibilities to it on behalf of the host, bear some resemblance to the custody transfer mechanisms of DTNs. Nonetheless, custody transfer implies a paradigm wherein the end-to-end principle is not strictly adhered to, while the assistant mechanism simply acts as a proxy for the host for the control messages of distributed protocols (thus maintaining selective connectivity) and is authorized to wake up the host whenever actual end-to-end data communication is required. We believe that the design of assistants can be further extended using the concepts of custody transfer and store-and-forward networks such as DTNs.
6. Network monitoring and control architectures
The Internet has scaled extremely well. From its modest beginnings with a few hundred nodes, the current Internet has evolved into a massive distributed system consisting of millions of nodes geographically diversified across the whole globe.
As an example from the original FIND proposal on the 4D architecture [156], Fig. 2 further illustrates the motivation. Fig. 2 presents a simple enterprise scenario wherein AF1 and BF1 are the front office hosts of an enterprise while AD1 and BD1 are the data centers. The enterprise-level policy allows front office hosts to access each other (AF1 may access BF1 and vice versa) but allows only local access to the data centers (AF1 can access AD1 but not BD1). To implement this policy, routers R1 and R3 place packet filters at interfaces i1.1 and i3.1, respectively, to prevent any non-local packets from accessing the data centers. Now, suppose a redundant or backup link is added between routers R1 and R3. Such a small change requires positioning additional packet filters at interfaces i1.2 and i3.2 of routers R1 and R3, respectively. However, such packet filters prevent the flow of packets between AF1 and BF1 through R2-R1-R3-R4 in case of failure of the link between R2 and R4, even though a backup route exists.
The four Ds of the 4D architecture are: data, discovery, dissemination, and decision. These four planes are related to each other as shown in Fig. 3 and define a centralized control architecture based on network-wide views (views of the whole network) that can exercise direct control over the various distributed entities to meet network-level objectives of policy enforcement. The individual functions of each plane are as follows (a small sketch of this division of labor follows the list):
1. Discovery plane: Responsible for automatic discovery of the network entities. This involves box-level discovery (router characteristics), neighbor discovery, and link-layer discovery (link characteristics). The discovery plane is responsible for creating the network-level views.
2. Dissemination plane: Based on the discovery plane data, a dissemination channel is created between each network node and the decision elements.
3. Decision plane: The centralized decision elements form the decision plane. This plane computes the state of each individual network entity (e.g., routing tables for routers) based on the view of the whole network topology and the network-level policies to be enforced.
4. Data plane: The data plane is responsible for handling individual packets and processing them according to the state output by the decision plane. This state may be routing tables, placement of packet filters, tunnel configurations, address translations, etc.
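In the sketch below, the discovery plane's network-wide view is a static dictionary, a centralized decision element computes per-router next hops from it by breadth-first search, and the dissemination plane would install the result into the data plane. The topology and the decision logic are our invention for illustration.

```python
# Sketch of the 4D division of labor: discovery supplies a global
# view, the decision plane computes each router's state from that
# view, and dissemination would push the state down to the data plane.
from collections import deque

TOPOLOGY = {  # discovery-plane view: router -> neighbors
    "R1": ["R2", "R3"], "R2": ["R1", "R4"],
    "R3": ["R1", "R4"], "R4": ["R2", "R3"],
}

def decide_next_hops(dst: str) -> dict:
    """Decision plane: per-router next hop toward dst, from the global view."""
    hops, frontier = {dst: None}, deque([dst])
    while frontier:  # BFS outward from the destination
        node = frontier.popleft()
        for nbr in TOPOLOGY[node]:
            if nbr not in hops:
                hops[nbr] = node  # nbr reaches dst via node
                frontier.append(nbr)
    return {r: h for r, h in hops.items() if h}

# Dissemination plane would install this state into each router.
print(decide_next_hops("R4"))  # {'R2': 'R4', 'R3': 'R4', 'R1': 'R2'}
```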
Thus, the 4D architecture sets up a separate dissemination channel for control and management activities at the link layer.
Fig. 2. Example enterprise network motivating the 4D architecture [156]: front office hosts (AF1, AF2, BF1, BF2) and data centers (AD1, AD2, BD1, BD2) at locations A and B, connected by routers R1-R5 (interfaces i1.1, i1.2, i3.1, i3.2, etc.; all link metrics = 1).
Apart from these, network troubleshooting, debugging, problem isolation, etc. are extremely complicated for large enterprise networks and are additional motivations for the design of a more autonomic management framework for the next generation Internet. An interesting observation made by Yan et al. [219] regarding the state of the art of management protocols in the current Internet is that problems in the data plane cannot be addressed through the management plane (when it is most required) because the management plane typically rides over the data plane itself. It is further observed that the lack of a proper interface for cooperation between distributed algorithms, for example between inter-domain and intra-domain routing protocols, leads to instabilities.
Fig. 4. CONMan: module abstraction and dependency graph.
self-virtualized in lower layers, putting services at the IP layer. In comparison, the EU FP7 projects are more concerned with the relationships among the different interested parties, and with how to set up service agreements and achieve service integration from the business level down to the infrastructure level.
Fig. 16. SILO examples: (a) TCP/IP emulation and (b) MPEG video transmission over wireless [190].
Internet 3.0 is a next generation architecture designed to overcome the limitations of the current Internet. The top features are: strong security, energy efficiency, mobility, and organizational policies. The architecture explicitly recognizes new trends in the separate ownership of infrastructure (carriers), hosts (clouds), users, and content, and their economic relationships. This will shape the services that the network can provide, enabling new business models and applications.
As shown in Fig. 21, Internet 1.0 (ca. 1969) had no concept of ownership, since the entire network was operated by one organization. Thus, protocols were designed for algorithmic optimization with complete knowledge of link speeds, hosts, and connectivity. Commercialization of the Internet in 1989 led to multiple ownership of networking infrastructure in what we call Internet 2.0. A key impact of ownership is that communication is based on policies (rather than algorithmic optimization), as is seen in inter-domain (BGP) routing. The internals of the autonomous systems are not exposed. We see this trend of multiple ownership continuing from infrastructure to hosts/devices (clouds), users, and content. Internet 3.0's goal is to allow policy-based secure communication that is aware of the different policies at the granularity of users, content, hosts, or infrastructure.
Cloud computing is an example of an application that benefits from this inherent diversity in the network design. Hosts belonging to different cloud computing platforms can be leased for the duration of experiments requiring data (e.g., genome data) to be analyzed by scientists from different institutions. The users, data, hosts, and infrastructures belong to different organizations, each of which needs to enforce its respective policies, including security. Numerous other examples exist, related to P2P computing, national security, distributed services, and cellular services.
"Organization" is a general term that includes not only employers (of users) and owners (of devices, infrastructure, and content) but also logical groups such as governments, virtual interest groups, and user communities. Real security can be achieved only if such organizational policies are taken into account and if we design means of monitoring, measurement, and independent validation and enforcement.
Internet 1.0 was designed for host systems that had multiple
users and data. Therefore, the hosts were the end systems for communication. Today, each user has multiple communication devices.
Content is replicated over many systems and can be retrieved in parallel from multiple systems. Future user-to-user, user-to-content, and machine-to-machine communications need a new communication paradigm that recognizes this new reality and allows mobility/multihoming for users and content as easily as it does for devices. In this new paradigm, the devices (hosts) are intermediate systems, while the users and content are the end-systems.
The inclusion of content as an end-system requires the Internet to provide new services (e.g., storage, disruption tolerance, etc.) for developing application-specific networking contexts. There will be more intelligence in the network, which will also allow it to be used easily by billions of networking-unaware users.
primitives that shall allow the next generation Internet to be diversified. It significantly improves upon the one-suite-fits-all paradigm of the current Internet and allows each application context to fully program and optimize its specific context [167].
The current state of the art of the routing function at the internetworking layer of the Internet is marred by numerous problems. The biggest and most immediate concern is scalability. With the huge growth in networkable devices participating in the Internet, the routing infrastructure is finding it difficult to provide unique locators to each of these devices (the address depletion problem), and the routing nodes are unable to cope with the exponential growth in routing table sizes, the number of update messages, and the churn due to the dynamic nature of networks [100]. The Border Gateway Protocol (BGP) [20], the de facto inter-domain routing protocol of the current Internet, takes on the order of minutes to re-converge after a failure [84,86]. The basic advantage of packet-switched networks in providing higher resilience is hardly realized in practice. Also, basic services such as mobility, quality of service, multicasting, policy enforcement, and security are extremely hard to realize, if at all. New innovations proposed to mitigate some of these ills, such as IPv6 [71], have hardly seen any wide-scale deployment. Another issue is the tussle between the users' need to control the end-to-end path and the providers' policies to optimize their commercial interests. These and other weaknesses of the routing mechanism in the current Internet have resulted in a flurry of activity aimed at designing a better routing architecture for the next generation Internet. While some of the schemes are clean-slate, thus requiring a complete architectural overhaul, others are more incremental and can be implemented over the present underlying system to procure partial benefits. In this section, we discuss some of these proposals.
To counter congested paths, a mechanism is devised wherein the routers artificially suppress acknowledgments with a probability dependent on the current congestion condition. This artificial suppression of acknowledgments feeds the loss-metric view of the network for each flow; flows try to route along the least-cost path over this metric, via a flow control mechanism that adaptively re-routes them.
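A sketch of the suppression mechanism follows. The linear probability ramp between two queue-occupancy thresholds is our assumption (in the spirit of RED), since the proposal's exact function is not specified here.

```python
# Sketch of the acknowledgment-suppression idea: a router drops an
# ACK with a probability that grows with its congestion, so flows
# perceive a higher loss metric on congested paths and the flow
# control layer re-routes toward less loaded ones.
import random

def suppress_ack(queue_occupancy: float, lo: float = 0.3,
                 hi: float = 0.9) -> bool:
    """Return True if the router should artificially suppress this ACK."""
    if queue_occupancy <= lo:
        p = 0.0
    elif queue_occupancy >= hi:
        p = 1.0
    else:
        p = (queue_occupancy - lo) / (hi - lo)  # assumed linear ramp
    return random.random() < p

drops = sum(suppress_ack(0.75) for _ in range(10_000))
print(f"~{drops / 100:.0f}% of ACKs suppressed at 75% queue occupancy")
```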
The dynamic metric discussed thus far needs to be supported over large network topologies in a scalable manner. The topological hierarchy aids aggregation (and thus scalability) in the current Internet. Such aggregation schemes, designed for a static metric, become ineffective in a network based on a dynamic metric. Thus, instead of aggregating based on pre-determined fixed identifiers, a new aggregation scheme based on physical location is defined. The proposal is to devise a new locality-preserving, peer-to-peer directory service rather than a fixed-infrastructure DNS service. Thus, a newer algorithmic basis for Internet protocols holds the potential to free the current Internet from most of the constraints it faces, especially in the routing plane. The contributions of this proposal, if implemented, shall lay the basis of a highly dynamic and hence more robust routing function for the next generation Internet.
8.2. Greedy routing on hidden metrics (GROH Model)
One of the biggest problems with routing in the current Internet is scalability. The scalability problem is due not so much to the large space requirements at routers as to the churn resulting from network dynamics, which causes table updates, control messages, and route recalculations. The problem is expected to be exacerbated further by the introduction of IPv6. This problem seems unsolvable in the context of the present design of routing protocols, hinting at the need for some truly disruptive ideas to break this impasse.
The GROH model [83] proposes a routing architecture devoid of control messages. It is based on the "small world" phenomenon exhibited in Milgram's social network experiment [101] and later depicted in the famous 1990 play "Six Degrees of Separation" [66]. This experiment demonstrated the effectiveness of greedy routing in a social network scenario, which can be established as a basis for routing in the Internet, since the Internet shows scale-free behavior similar to that of social networks, biological networks, etc. The idea of greedy routing on hidden metrics is based on the proposition that behind every network, including the Internet, there exists a hidden metric space, and that the observable scale-free structure of the network is a consequence of natural network evolution that maximizes the efficiency of greedy routing in this metric space. The objective of the GROH model is to investigate this proposition, to try to define the hidden metric space underlying the Internet topology, and to develop a greedy routing scheme that maximizes the efficiency of routing in this metric space. Such a greedy routing algorithm belongs to the class of routing algorithms called "compact routing", which aim to reduce the routing table size, the node addresses, and the routing stretch (the ratio of the distance between source and destination under a given routing algorithm to the actual shortest-path distance). However, existing compact routing algorithms do not address the dynamic nature of networks such as the Internet.
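Greedy routing itself is simple to state: each node forwards a packet to whichever neighbor is closest to the destination in the metric space, consulting no routing tables and exchanging no control messages. The sketch below uses two-dimensional Euclidean coordinates as a stand-in for positions in the conjectured hidden metric space; the topology is invented for illustration.

```python
# Sketch of greedy routing over a (here, 2-D Euclidean) metric space.
# Forwarding needs only local neighbor coordinates, which is what
# eliminates routing tables and control messages.
import math

COORDS = {"a": (0, 0), "b": (2, 1), "c": (4, 0), "d": (5, 3)}
NEIGHBORS = {"a": ["b"], "b": ["a", "c", "d"],
             "c": ["b", "d"], "d": ["b", "c"]}

def dist(u: str, v: str) -> float:
    return math.dist(COORDS[u], COORDS[v])

def greedy_route(src: str, dst: str) -> list:
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(NEIGHBORS[here], key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(here, dst):
            raise RuntimeError("stuck in a local minimum; greedy fails")
        path.append(nxt)
    return path

print(greedy_route("a", "d"))  # ['a', 'b', 'd']
```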
Three metric spaces are being considered initially as part of the investigation to model the Internet's scale-free topology: (1) normed spaces, (2) random metric spaces, and (3) expanding metrics. Using a concretely measured topology of some network (in this case, the Internet) G, and these metric spaces, their combinations, or additional metric spaces as a candidate hidden metric space H, a fit of G into H is sought. If a fit of G into H is found successfully, two tasks are undertaken, beginning with label size determination.
LISP enables site multihoming without any changes to the end-hosts. The mapping from identifier to RLOC (Routing Locator) is performed by the edge routers. LISP also does not introduce a new namespace, and changes to the routers are confined to the edge routers: the high-end site or provider core routers do not have to be changed. All these characteristics of LISP allow rapid deployment at low cost. There is also no centralized ID-to-locator mapping database; all the databases can be distributed, which enables high mapping-data update rates. Since LISP does not require current end-hosts, with their different hardware, OS platforms, applications, and network technologies, to change their implementations, the transition is easier compared to HIP. The requirements for hardware changes are also small, which allows fast product delivery and deployment.
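A sketch of LISP-style map-and-encap at an ingress edge router follows. The EID prefix, the RLOC addresses (drawn from documentation prefixes), and the mapping-database layout are illustrative, not real LISP message formats.

```python
# Sketch of map-and-encap: end-hosts keep using their identifiers
# (EIDs) unchanged; the ingress tunnel router looks up the destination
# EID in a (distributed) mapping database and wraps the packet in an
# outer header addressed to a routing locator (RLOC).
import ipaddress

MAPPING_DB = {  # EID prefix -> RLOCs of the destination site's edge routers
    "198.51.100.0/24": ["203.0.113.1", "203.0.113.2"],
}

def encapsulate(packet: dict) -> dict:
    """Ingress edge router: add an outer RLOC header; hosts see nothing."""
    dst = ipaddress.ip_address(packet["dst_eid"])
    for prefix, rlocs in MAPPING_DB.items():
        if dst in ipaddress.ip_network(prefix):
            return {"outer_dst_rloc": rlocs[0], "inner": packet}
    raise LookupError("no mapping; forward natively or drop")

pkt = {"src_eid": "192.0.2.5", "dst_eid": "198.51.100.9", "data": b"..."}
print(encapsulate(pkt)["outer_dst_rloc"])  # 203.0.113.1
```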
However, LISP uses Provider Independent (PI) addresses as routable IDs, which potentially leads to some problems. In the future, it will be necessary to create economic incentives not to use PI addresses, or to create an automatic method for renumbering with Provider Aggregatable (PA) addresses.
Obviously, there is a tradeoff between compatibility with current applications and enabling more powerful functions. Since LISP does not introduce any changes to the end-host network stack, by design it cannot support the same level of mobility as HIP. The host multihoming issue is similar. Specifically, from a design perspective, LISP lacks support for host mobility, host multihoming, and traffic engineering. Some researchers argue that LISP is a short-term solution for routing scalability rather than a long-term solution for all the challenges listed at the beginning of this section.
8.6.3. MILSA
MILSA [161–164] is basically an evolutionary hybrid design that combines features of HIP and LISP while avoiding the disadvantages of the two individual solutions. Since there is still a debate over whether the ID-locator split should happen on the end-host side, as in HIP, or on the network side, as in LISP, it is hard to decide which is the right way to go at this point in time. Thus, MILSA is designed to be adaptive: it supports both directions and allows the architecture to evolve in either direction in the future. By doing this, deployment risk is minimized.
Specifically, MILSA introduces a new ID sublayer into the network layer of the current network stack, i.e., it separates the ID from the locator in the end-host and uses a separate distributed mapping system to deliver fast and efficient mapping lookups and updates across the whole Internet. MILSA also separates trust relationships (administrative realms) from connectivity (infrastructure realms). The detailed mechanisms for setting up and maintaining this trust relationship are presented in [161]. A new hierarchical ID space is introduced that combines the features of flat IDs and hierarchical IDs. It allows a scalable bridging function that is placed between the host realms and the infrastructure realms. The new ID space can be used to facilitate the setup and maintenance of trust relationships and the enforcement of policies among different organizations. Moreover, MILSA implements signaling and data separation to improve system performance and efficiency and to support mobility. Detailed trust relationship setup and maintenance policies and processes are also presented in MILSA.
Through this hybrid combination, the two approaches are integrated into one solution that addresses all the problems identified by the IRTF RRG design goals [90], which include mobility, multihoming, routing scalability, traffic engineering, and incremental deployability. It prevents Provider Independent (PI) address usage for global routing and implements the identifier-locator split in the host to provide routing scalability, mobility, multihoming, and traffic engineering. Also, the global routing table size can be reduced step by step through MILSA's incremental deployment strategy, which is one of its biggest advantages.
Node owners retaining control over the nodes that they own, and users running experiments on these nodes, warrant a trust-based security model that can scale. To avoid an N × N blow-up of the trust relationship model, the PLC acts as a trusted intermediary that manages the nodes on behalf of their owners according to a set of policies specified by the owners, creates slices by combining resources from these nodes, and manages the allocation of slices to experimenters.
PLC supports two methods of actual slice instantiation at each node: direct and delegated. PLC runs a slice creation service called pl_conf at each node. In the direct method, the PLC front-end directly interacts with the pl_conf service to create a corresponding virtual machine and allocate resources to it. In the delegated method, a slice creation agent acting on behalf of a user contacts the PLC for a ticket. This ticket encapsulates the rights to instantiate a virtual machine at a node and to have specified resources allocated to it. The agent then contacts the pl_conf of each node to redeem this ticket and create a slice for the user. Currently, two slice creation services are supported on PlanetLab: (1) PLC, implementing the direct method, and (2) Emulab, implementing the delegated method.
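The delegated method can be sketched as a ticket scheme: the PLC signs a statement of the rights granted, and each node's pl_conf verifies the ticket before instantiating the sliver. The HMAC-signed JSON blob below is our simplification; the real ticket format and interfaces differ.

```python
# Sketch of delegated slice creation: the PLC issues a signed ticket,
# and a node-side pl_conf redeems it. A shared-secret HMAC stands in
# for the real signature scheme; all names are illustrative.
import hmac, hashlib, json

PLC_KEY = b"plc-secret"  # assumed shared with every node's pl_conf

def plc_issue_ticket(user: str, slice_name: str, resources: dict) -> bytes:
    body = json.dumps({"user": user, "slice": slice_name,
                       "resources": resources}).encode()
    sig = hmac.new(PLC_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + sig

def pl_conf_redeem(ticket: bytes) -> str:
    body, sig = ticket.rsplit(b".", 1)
    good = hmac.new(PLC_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, good):
        raise PermissionError("ticket not issued by the trusted PLC")
    req = json.loads(body)
    return f"created vm for slice {req['slice']} with {req['resources']}"

t = plc_issue_ticket("alice", "wustl_exp1", {"cpu": "10%", "disk_mb": 512})
print(pl_conf_redeem(t))  # node-side pl_conf instantiates the sliver
```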
Over time, the PlanetLab design has been extended and modified to provide better and more efficient control and support. One such extension, within the PlanetLab control framework itself, is to allow federation of separate and independent PlanetLab instances. Federation of this nature requires separate PLC instances to be able to communicate and coordinate with each other through well-defined interfaces. It can easily be observed that the PLC performs two distinct functions: (1) node management on behalf of node owners and (2) slice creation on behalf of users, allowing the PLC to export two distinct interfaces. Also, adopting a hierarchical naming system for slices, establishing a hierarchy of slice authorities, eases trust and delegation related issues in federation. These extensions, combined with an added facility at pl_conf to create slices on behalf of multiple slice authorities, have led to the development of regional and private PlanetLab instances that may peer with the public PlanetLab instance.
An instance of the PlanetLab federation extension is the PlanetLab-Europe testbed, supported by the OneLab project [112], which is the European contribution to the world-wide publicly available PlanetLab testbed. The OneLab project is also contributing to enhancing the monitoring infrastructure of PlanetLab [180], extending PlanetLab to newer contexts such as wireless testbeds [41,28,29], adding capability for IPv6-based multihoming of sites [107,108], dealing with unstable connectivity [97], integrating and mixing emulation tools [35], and providing a framework for network measurements.
Being organized as an overlay over IP, PlanetLab is not necessarily a realistic experimental substrate for network layer protocols. As such, actual routing protocols and router-level code cannot be run effectively on a PlanetLab slice. The VINI [113,17] "Internet In A Slice" (IIAS) effort was aimed at filling this void by leveraging the existing widely distributed PlanetLab network, User Mode Linux [43], and advances in open-source router code. Fig. 26 presents the PlanetLab VINI slice organization. Router code requires root-level kernel access, so running router code directly over a PlanetLab slice is not possible. VINI installs User Mode Linux (UML) [114,43] over the PlanetLab slice and installs open-source router code, XORP [115], over it. UML provides a virtual Linux kernel implementation at the user level. This sets up a distributed set of routers over a PlanetLab slice, allowing network-level experimentation. However, being part of the PlanetLab overlay network, VINI routers are not directly connected to each other. Thus, any network-level experimentation is hindered by the interfering effects of the actual path routers and the corresponding routing protocols implemented on them.
Another extension of PlanetLab concerns extending the core mechanism of the overlay hosting facility.
9.2.1. Federation
Networking testbeds strive to provide a realistic testing and experimentation facility to researchers. The prime goal is to provide a platform that is as close to the production environment as possible. Federation helps realize this goal by [159]: (1) providing larger testbed scenarios, (2) providing a diverse testbed with specialized or diverse resources such as access technologies, (3) creating scientific communities with diverse research backgrounds and inspiring cross-discipline research, and (4) amortizing costs through more efficient sharing.
However, there exists a lot of challenges that make federation
an interesting research problem. These challenges can be categorized into technical challenges and political or socio-economic
challenges.
The technical challenges involve problems such as (1) homogenization of diverse contexts to facilitate easy deployment of experiments, (2) fair and efficient sharing of scarce resources, and (3) interoperability of security protocols.
The political or socio-economic challenges are based more on the implications of economic and organizational policies of sharing, such as policies of governments, conflicts between research agencies, conflicts between commercial and non-commercial interests, and conflicts over intellectual property rights.
Thus, the problem of federation of testbeds has different contexts, and the solution for a specific federation scenario varies with the context. We shall discuss three approaches to federation that are currently under research in the European network community.
9.2.2. Virtualization
In spite of the tremendous success of the Internet, it is often made to deliver services that it was not designed for (e.g., mobility, multihoming, multicasting, anycasting, etc.). However, the IP based one-suite-fits-all model of the Internet does not allow innovative new architectural ideas to be seamlessly incorporated into the architecture. Innovative and disruptive proposals either never get deployed or are forced to resort to inefficient roundabout means. The huge investments in the deployed infrastructure base of today's networks add to this ossification by preventing newer paradigms of networking from being tested and deployed. Virtualization seems to be the only possible solution to break this impasse [7].
Turner et al. [204] propose a diversified Internet architecture that advocates virtualization of the substrate elements (routers) of the network infrastructure. Such an approach would allow researchers to implement and test diverse (non-IP based) routing protocols and service paradigms. The argument is that multiple competing technologies would be able to co-exist in large scale experimentation, considerably reducing the barrier to entry from experimentation to production environments. Such a testbed would also be free from the intrinsic assumptions that commonly undermine the credibility of conventional experimental testbeds.
CABO (Concurrent Architectures are Better than One) by Feamster et al. [51] is a design for the next generation Internet that allows concurrent architectures to co-exist. The key idea is to decouple the infrastructure from the services running over it. Infrastructure providers in CABO are expected to lease out infrastructure entities such as backbone routers, backbone links, and switches, over which service providers can deploy their own specific protocols and run their own network services optimized for specific service parameters such as quality of service, low latency, and real-time support. The infrastructure providers may virtualize their infrastructure substrate and thus allow the isolated co-existence of multiple service providers.
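A rough illustration of this decoupling follows; the classes and method names are invented for the example and are not CABO's actual interfaces.

# Sketch of CABO's role separation: one party owns and virtualizes the
# substrate, others deploy services on the leased slivers.
class InfrastructureProvider:
    def __init__(self, routers):
        self.routers = routers          # physical substrate
        self.leases = {}

    def lease(self, tenant):
        # Virtualize each router and hand isolated slivers to the tenant.
        slivers = [f"{r}/vm-{tenant}" for r in self.routers]
        self.leases[tenant] = slivers
        return slivers

class ServiceProvider:
    def __init__(self, name, protocol):
        self.name, self.protocol = name, protocol

    def deploy(self, slivers):
        # The tenant runs its own protocol stack inside its slivers.
        return [f"{self.protocol} running on {s}" for s in slivers]

infra = InfrastructureProvider(["bb-router-1", "bb-router-2"])
qos_sp = ServiceProvider("sp-qos", protocol="low-latency routing")
print(qos_sp.deploy(infra.lease(qos_sp.name)))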
[Fig. 27 appears here: isolated virtual networks in the AKARI architecture; panels labeled VN 1 and VN 2.]
The AKARI Project [116,2] of Japan also advocates the use of virtualization as the basis of the next generation Internet architecture [67]. As shown in Fig. 27, the AKARI project extends the idea of isolated virtual networks to: (1) transitive virtual networks: cooperation and/or communication between virtual networks, and (2) overlaid virtual networks: one virtual network running over another.
Though Internet-scale deployment of virtualization as the basis of the Internet architecture may not be possible in the near future, network testbed designs may benefit immensely from it. The isolation and flexibility properties of virtualization suit the needs of next generation testbeds, which must support diverse architecture experiments on a shared substrate without the experiments interfering with each other. Also, the feasibility of the core idea of virtualization as the basis of an Internet-scale network can be tested through experience in deploying testbeds based on virtualization.
Virtualization in testbed design. The idea of using virtualization to isolate network experiments running on a shared substrate is not new. However, existing networking testbeds operate as overlays above IP based networks, seriously constraining the realism of network level experiments. To overcome this impasse, future networking testbeds will have to be designed for end-to-end isolation, requiring the virtualization of end-hosts, substrate links, and substrate nodes.
Turner [202] proposes a GENI substrate design that allows multiple meta-networks to co-exist. Each meta-network consists of meta-routers (virtualized slices of physical routers) and the meta-links joining them. The design of substrate routers that support co-existence of several meta-routers has to cope with the challenge of flexibly allocating bandwidth and generic processing resources among the meta-routers while maintaining isolation. The three main components of a router are: (1) line cards, which terminate physical links and process packets, (2) the switching fabric, which transfers data from the line cards where packets arrive to the line cards connected to outgoing links, and (3) the control processor, a general purpose microprocessor for control and management functions of the router, such as running routing protocols and updating tables on the line cards.
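A toy sketch of the isolation constraint this entails is given below; the capacity accounting is illustrative (real substrate routers would typically enforce such shares in hardware schedulers).

# Dividing one line card's bandwidth among meta-routers so that each
# holds a guaranteed, isolated share; numbers are invented.
class LineCard:
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.reserved = {}              # meta-router -> guaranteed Gb/s

    def allocate(self, meta_router, gbps):
        if sum(self.reserved.values()) + gbps > self.capacity:
            raise ValueError("allocation would violate isolation")
        self.reserved[meta_router] = gbps   # a meta-line-card share

card = LineCard(capacity_gbps=10)
card.allocate("meta-router-A", 4)
card.allocate("meta-router-B", 5)
# card.allocate("meta-router-C", 2) would raise: only 1 Gb/s remains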
A natural design choice for virtualizing such hardware would be to virtualize the line cards to derive meta-line cards.
The GENI testbed incorporates a range of infrastructural facilities to add to its diversity and support for realism. In the rest of this subsection on GENI, we first discuss the key GENI requirements, then the generalized GENI control framework, and finally the five different cluster projects, each developing a prototype control framework for GENI that instantiates the components of the generalized control framework.
GENI requirements. GENI comprises a set of hardware components including computer nodes, access links, customizable routers, switches, backbone links, tail links, wireless subnets, etc. Experiments on GENI run on a subset of these resources called a slice. In general, two types of activities are supported over the GENI testbed: (1) deployment of prototype network systems and observation of them under real usage, and (2) running controlled experiments. Some of the key requirements for the GENI infrastructure are:
1. Sliceability: In order for GENI to be cost-effective and cater to as many experimental requirements as possible, GENI needs to support massive sharing of resources while ensuring isolation between experiments.
2. Programmability: GENI is a testing environment needing generality. All GENI components need to be programmable so that researchers are able to implement and deploy their own set of protocols at the component level.
3. Virtualization and resource sharing: Sliceability entails sharing of resources. A common form of resource sharing is through virtualization techniques, wherever possible. However, for some resources, owing to inherent properties of the resource (e.g., a UMTS link can support only one active connection at a time), other methods such as time-shared multiplexing may be employed (see the sketch after this list).
4. Federation: The GENI suite is expected to be a federated whole of many different parts owned and managed by different organizations. Federation also adds diversity to the underlying resource pool, thus allowing experiments to run closer to real production systems.
5. Observability: One of GENI's core goals is to provide a highly instrumented infrastructure to support accurate measurements of experiments. Hence, the GENI design should allow an efficient, flexible, robust, and easily specifiable measurement framework.
6. Security: GENI is expected to run many disruptive and innovative protocols and algorithms. Also, GENI experiments may be allowed to interact with existing Internet functionality. Hence, security concerns require that GENI nodes cannot harm the production Internet environment, either maliciously or accidentally.
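The sketch referenced in requirement 3 above contrasts the two sharing modes; the classes, slot lengths, and resource names are invented for the illustration.

# Virtualizable resources host concurrent slivers; a resource that
# supports only one active user at a time (e.g. a UMTS link) must be
# time-multiplexed between slices instead.
import itertools

class VirtualizableNode:
    def share(self, slices):
        return {s: f"vm-for-{s}" for s in slices}   # concurrent slivers

class TimeSharedLink:
    def __init__(self, slot_seconds):
        self.slot = slot_seconds

    def schedule(self, slices, horizon_slots):
        # Round-robin: exactly one slice owns the link in each slot.
        order = itertools.cycle(slices)
        return [(i * self.slot, next(order)) for i in range(horizon_slots)]

print(VirtualizableNode().share(["sliceA", "sliceB"]))
print(TimeSharedLink(slot_seconds=60).schedule(["sliceA", "sliceB"], 4))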
Several other requirements and detailed discussions can be found in the GENI design documents [8,186,38,175,22,18,80]. However, the key value propositions of GENI that separate it from smaller scale or more specialized testbeds are:
1. Wide scale deployment: access is not restricted to those who provide backbone resources to GENI.
2. A diverse and extensible set of network technologies.
3. Support for real user traffic.
In the rest of this discussion on GENI, we focus specifically on the control architectural framework of GENI and also look at some of the protocol designs being undertaken in the first phase of prototyping.
GENI generalized control framework. Before looking at the specific prototype designs, we need to look at the generic GENI control framework as defined in [58] and illustrated in Fig. 29, which consists of several cooperating subsystems.
Fig. 29. GENI: generalized control framework and constituent subsystems. (Shown: a GENI clearinghouse with principal, slice, and component registries; administrative, accounting, operations, and management tools for administrators and operators; aggregate, slice, component, and services managers fronting the GENI components, hosts, and services on which slices are instantiated as slivers; and GENI end users reached over the GENI access network, organized into experimental, measurement, control, and management and operations planes.)
[Figure: DETER-based (TIED) GENI prototype — SEER is integrated to control the prototype GENI environment, DRAGON is integrated to control the GMPLS-based physical network infrastructure, and the DETER testbed is extended with a GMC and a user credential repository to create a GENI federation framework spanning federants across the Internet.]
[Figure: ProtoGENI-style federation — a clearinghouse keeps registries of users, slices, and component managers and certifies federates; each federate runs its own slice authority and aggregate manager and holds the root certificates.]
ProtoGENI implements its federation mechanisms as part of the Emulab software. Thus, any site running the latest version of the Emulab code can join the federation quite easily. It may be noted that ProtoGENI's federation concept is in contrast to that of PlanetLab, which allows federation between any two testbeds that implement a slice based architecture.
Cluster D: Open Resource Control Architecture (ORCA) control framework. The Cluster D GENI prototype development plan involves the extension of ORCA (a candidate control framework for GENI) [128] to include the optical resources available in BEN (Breakable Experimental Network). ORCA is a control plane approach for secure and efficient management of heterogeneous resources [129]. ORCA differs from traditional resource management schemes based on middleware that operates between the host operating system supplying resources and the applications requesting them. Instead, ORCA defines a paradigm wherein resource management runs as an "underware" [37] below the host operating system. ORCA uses virtualization to allocate containers over which a resource requester may install its own environment. Hence, as shown in Fig. 37, the ORCA control plane may be viewed as an Internet operating system supporting a diverse set of user environments on a common set of hardware resources.
Also, as shown in Fig. 38, the ORCA underware control plane allows federation of various heterogeneous underlying resource pools, each with its own set of resource allocation policies.
Federation architecture: The implementation of federation over diverse resource pools is architected through the Shirako [73] resource leasing architecture, which builds on the SHARP [56] secure resource peering framework.
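A compressed sketch of this ticket-then-lease handshake follows; the class and method names are illustrative stand-ins, not Shirako's or SHARP's actual interfaces.

# SHARP/Shirako-style leasing in miniature: a broker issues tickets
# against resources delegated to it, and the site authority converts a
# redeemed ticket into a lease on concrete hosts.
class Broker:
    def __init__(self, delegated_units):
        self.available = delegated_units

    def issue_ticket(self, holder, units, term):
        # Plug-in policies for selection, provisioning, and admission
        # control would be consulted here.
        if units > self.available:
            raise ValueError("request exceeds delegated resources")
        self.available -= units
        return {"holder": holder, "units": units, "term": term}

class SiteAuthority:
    def __init__(self, hosts):
        self.hosts = hosts

    def redeem(self, ticket):
        # Assignment policy: bind the ticket to concrete hosts for the
        # lease term; setup handlers would then configure the containers.
        granted = self.hosts[:ticket["units"]]
        return {"holder": ticket["holder"], "hosts": granted,
                "expires": ticket["term"]}

broker = Broker(delegated_units=4)
site = SiteAuthority(["ben-node1", "ben-node2", "ben-node3"])
lease = site.redeem(broker.issue_ticket("service-manager-A", 2, "2h"))
print(lease)   # the ticket has become a lease on real hosts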
[Figure: experiment control in the ORCA-based prototype — a user's experiment description drives an experiment controller, which obtains resources for the experiment through an aggregate manager, per-experiment resource controllers (RCs), and resource managers (RMs) over resources 1 ... N, with measurements collected into a per-experiment database.]
[Figure: Shirako leasing interactions — a service manager issues ticket requests on behalf of an application to a broker whose plug-in policies cover resource selection, provisioning, and admission control; a site authority exports tickets to the broker and, upon redemption, its assignment policy and setup/teardown handlers turn tickets into leases, with join/leave lease handlers, lease status notifications, and event monitoring on the service manager side.]
[Figure: landscape of European research infrastructures and testbed federation projects — EUQOS (QoS in multi-domain IP networks), ANEMONE (measurements, tomography), RING (new routing), GEANT and FEDERICA (research infrastructures), OPENNET, ONELAB and Onelab2 (testbeds), PANLAB and PII (federation of testbeds), VITAL and Vital++ (convergence services over IP: IMS), and WISEBED.]
[Figure: PANLAB federation — a PANLAB customer uses Teagle, overlay software running on the portal server at the PANLAB office, to discover, configure, and manipulate resources for test configurations across the partners' testbeds (Test-bed 1 ... n); resource providers create virtual resources and set policies through a resource management system, and a PANLAB database server holds configurations, services, features, and results.]
[Figure: WISEBED federation overview — partner sites (TUBS, TUD, UZL, UPC, UBERN, ULANC, UNIGE, FUB, CTI) each maintain their own testbed with their own equipment and management; users connect either directly to a single testbed using web services defined by the OFA standard, or to the federated testbed using web services via an overlay network.]
9.3.3. WISEBED
The WISEBED project [213] aims at federating large scale wireless sensor testbeds to provide a large, diversified, multi-level infrastructure of small-scale heterogeneous devices. An Open Federation Alliance (OFA) is defined that develops open standards for accessing and controlling the federation. WISEBED classifies the federated testbeds along two dimensions. By integration: (1) fully integrated: the testbed provides the full range of services defined by the OFA, and (2) semi integrated: the testbed provides a subset of the services defined by the OFA. By access: (1) fully accessible: users can access the testbed data and also re-program the testbed devices, and (2) semi accessible: users are only permitted to extract experimental data from the testbed.
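These two classification axes can be restated compactly as data; the structure below is a made-up illustration, not the OFA schema.

# WISEBED testbed categories and what they permit a user to do.
from dataclasses import dataclass

@dataclass
class FederatedTestbed:
    name: str
    fully_integrated: bool   # full OFA service range vs. a subset
    fully_accessible: bool   # re-programmable devices vs. data-only

    def allowed_operations(self):
        ops = ["extract experimental data"]
        if self.fully_accessible:
            ops.append("re-program testbed devices")
        return ops

print(FederatedTestbed("site-A", True, False).allowed_operations())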
Federation mechanism: As shown in Fig. 44, WISEBED federates multiple wireless sensor node testbeds comprising a diverse set of heterogeneous devices.
[Fig. 44: WISEBED federation mechanism — each site attaches its sensor hardware (802.15.4, 802.15.1, RS232) to a portal server and a web services server, with AAA and operation support, and stores experimental data as XML in per-site databases (DB #1 ... DB #k).]
10. Conclusions
A number of industry and government funding agencies throughout the world are funding research on architectures for future networks that are clean-slate and not bound by the constraints of the current TCP/IP protocol suite. In this paper, we have provided an overview of several such projects. The National Science Foundation (NSF) in the United States started the Future Internet Design (FIND) program, which has funded a number of architectural studies on clean-slate solutions for virtualization, high-speed routing, naming, security, management, and control. It also started the Global Environment for Network Innovations (GENI) program, which is experimenting with various testbed designs to allow the new architectural ideas to be tested.
The Future Internet Research and Experimentation (FIRE) program in Europe is also looking at future networks as part of the 7th Framework Programme of the European Union (FP7). Another similar effort is the AKARI program in Japan.
In addition to the above, Internet 3.0 is an industry funded program that takes a holistic view of the present security, routing, and naming problems rather than treating each of them in isolation. Isolated clean-slate solutions do not necessarily fit together, since their assumptions may not match. Internet 3.0, while clean-slate, is also looking at transition issues to ensure that there will be a path from today's Internet to the next generation Internet.
NSF has realized the need for a coherent architecture to solve many related issues and has recently announced a new program that will encourage combining many separate solutions into complete architectural proposals.
It remains to be seen whether the testbeds being developed today, which use TCP/IP protocol stacks extensively, will be usable for future Internet architectures that have yet to be developed.
We hope that the brief descriptions of the numerous research projects provided here will be a useful starting point for those wishing to do future network research or simply to keep abreast of the latest developments in this field.
List of acronyms
DNS: Domain Name System
DONA: Data-Oriented Network Architecture
DTN: Delay/Disruption Tolerant Network
FEDERICA: Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures
FIND: Future Internet Design
FIRE: Future Internet Research and Experimentation
FP6: Sixth Framework Programme (EU)
FP7: Seventh Framework Programme (EU)
GENI: Global Environment for Network Innovations
GROH: Greedy Routing On Hidden metrics
HIP: Host Identity Protocol
HLP: Hybrid Link-state Path-vector protocol
ID: Identifier
IIAS: Internet In A Slice
INM: In-Network Management
IP: Internet Protocol
IRTF: Internet Research Task Force
ISP: Internet Service Provider
LISP: Locator/Identifier Separation Protocol
MILSA: Mobility and Multihoming supporting Identifier Locator Split Architecture
NGI: Next Generation Internet
NGN: Next Generation Network
NNC: Networking Named Content
NSF: National Science Foundation
OMF: ORBIT Management Framework
ORBIT: Open-Access Research Testbed for Next-Generation Wireless Networks
ORCA: Open Resource Control Architecture
PANLAB: Pan-European Laboratory
PI: Provider Independent
PIP
PLC: PlanetLab Central
PONA: Policy Oriented Naming Architecture
PTP
RANGI: Routing Architecture for the Next Generation Internet
RCP: Routing Control Platform
RTS
SANE: Secure Architecture for the Networked Enterprise
SCN
SLA: Service Level Agreement
SLA@SOI: Service Level Agreements within a Service Oriented Infrastructure
SMTP: Simple Mail Transfer Protocol
SOA: Service Oriented Architecture
SOA4ALL: Service Oriented Architectures for ALL
SPP: Supercharged PlanetLab Platform
TIED: Trial Integration Environment with DETER
UML: User Mode Linux
WISEBED: Wireless Sensor Network Testbeds
References
[1] V. Aggarwal, O. Akonjang, A. Feldmann, Improving user and ISP experience through ISP-aided P2P locality, in: Proceedings of INFOCOM Workshops 2008, New York, April 13-18, 2008, pp. 1-6.
[2] New Generation Network Architecture AKARI Conceptual Design (ver2.0),
AKARI Architecture Design Project, May, 2010, <http://akari-project.
nict.go.jp/eng/concept-design/AKARI_fulltext_e_preliminary_ver2.pdf>.
[3] M. Allman, V. Paxson, K. Christensen, et al., Architectural support for selectively-connected end systems: enabling an energy-efficient future Internet, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/ArchtSupport.php>.
[180] OneLab Deliverable D3A2: Passive Monitoring Component. <http://www.onelab.eu/index.php/results/deliverables/252-d3a2-passivemonitoring-component.html>.
[181] (Online) Internet Research Task Force Routing Research Group Wiki page, 2008. <http://trac.tools.ietf.org/group/irtf/trac/wiki/RoutingResearchGroup>.
[182] S. Schwab, B. Wilson, C. Ko, et al., SEER: a security experimentation environment for DETER, in: Proceedings of the DETER Community Workshop on Cyber Security Experimentation and Test, August 2007.
[183] K. Scott, S. Burleigh, Bundle Protocol Specification, IETF RFC 5050, November 2007.
[184] T. Wolf, Service-Centric End-to-End Abstractions for Network Architecture, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/ServiceCentric.php>.
[185] S. Seshan, D. Wetherall, T. Kohno, Protecting User Privacy in a Network with Ubiquitous Computing Devices, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/Protecting.php>.
[186] L. Sha, A. Agrawala, T. Abdelzaher, et al., GDD-06-32: Report of NSF Workshop on Distributed Real-time and Embedded Systems Research in the Context of GENI, GENI Design Document 06-32, September 2006. <http://groups.geni.net/geni/attachment/wiki/OldGPGDesignDocuments/GDD-06-32.pdf>.
[187] N. Shenoy, V. Perotti, Switched Internet Architecture, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/SWA.php>.
[188] E. Nordmark, M. Bagnulo, Shim6: Level 3 Multihoming Shim Protocol for IPv6, IETF RFC 5533, June 2009.
[189] SHIELDS: Detecting Known Security Vulnerabilities from within Design and Development Tools, European Union 7th Framework Programme. <http://www.shieldsproject.eu>.
[190] G. Rouskas, R. Dutta, I. Baldine, et al., The SILO Architecture for Services Integration, Control, and Optimization for the Future Internet, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/Silo.php>.
[191] SLA@SOI: Empowering the Service Economy with SLA-aware Infrastructures, European Union 7th Framework Programme. <http://sla-at-soi.eu>.
[192] A.C. Snoeren, T. Kohno, S. Savage, et al., Enabling Defense and Deterrence through Private Attribution, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/EnablingDefense.php>.
[193] SOA4ALL: Service Oriented Architectures for All, European Union 7th Framework Programme. <http://www.soa4all.eu>.
[194] I. Stoica, D. Adkins, et al., Internet indirection infrastructure, in: Proceedings of ACM SIGCOMM 2002, Pittsburgh, Pennsylvania, USA, 2002.
[195] L. Subramanian, M. Caesar, C.T. Ee, et al., HLP: a next generation inter-domain routing protocol, in: Proceedings of ACM SIGCOMM 2005, Philadelphia, Pennsylvania, August 22-26, 2005.
[196] SWIFT: Secure Widespread Identities for Federated Telecommunications, European Union 7th Framework Programme. <http://www.ist-swift.org>.
[197] TAS3: Trusted Architecture for Securely Shared Services, European Union 7th Framework Programme. <http://www.tas3.eu>.
[198] TECOM: Trusted Embedded Computing, Information Technology for European Advancement (ITEA2) Programme. <http://www.tecom-itea.org>.
[199] C. Thompson, The BitTorrent Effect, WIRED, Issue 13.01, January 2005.
[200] J. Touch, Y. Wang, V. Pingali, A Recursive Network Architecture, ISI Technical Report ISI-TR-2006-626, October 2006.
[201] J. Touch, V.K. Pingali, The RNA metaprotocol, in: Proceedings of IEEE ICCCN (Future Internet Architectures and Protocols track), St. Thomas, Virgin Islands, August 2008.
[202] J. Turner, GDD-06-09: A Proposed Architecture for the GENI Backbone Platform, Washington University Technical Report WUCSE-2006-14, March 2006. <http://groups.geni.net/geni/attachment/wiki/OldGPGDesignDocuments/GDD-06-09.pdf>.
[203] J. Turner, P. Crowley, J. DeHart, et al., Supercharging PlanetLab: a high performance, multi-application, overlay network platform, in: Proceedings of ACM SIGCOMM, Kyoto, Japan, August 2007.
[204] J. Turner, P. Crowley, S. Gorinsky, et al., An Architecture for a Diversified Internet, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/DiversifiedInternet.php>.
[205] M.Y.S. Uddin, H. Ahmadi, T. Abdelzaher, R.H. Kravets, A low-energy, multi-copy inter-contact routing protocol for disaster response networks, in: Proceedings of the 6th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON '09), June 2009, pp. 1-9.
[206] K. Varadhan, R. Govindan, D. Estrin, Persistent route oscillations in inter-domain routing, Computer Networks 32 (1) (2000) 1-16.
[207] A. Venkataramani, D. Towsley, A Swarming Architecture for Internet Data Transfer, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/Swarming.php>.
[208] VMware. <http://www.vmware.com/>.
[209] VServer. <http://linux-vserver.org/Welcome_to_Linux-VServer.org>.
[211] Y. Wang, H. Wu, F. Lin, et al., Cross-layer protocol design and optimization for delay/fault-tolerant mobile sensor networks (DFT-MSNs), IEEE Journal on Selected Areas in Communications 26 (5) (2008) 809-819.
[212] J.W. Han, F.D. Jahanian, Topology aware overlay networks, in: Proceedings of IEEE INFOCOM, vol. 4, March 13-17, 2005, pp. 2554-2565.
[213] WISEBED: Grant Agreement, Deliverables D1.1, D2.1 and D3.1: Design of the Hardware Infrastructure, Architecture of the Software Infrastructure and Design of Library of Algorithms, Seventh Framework Programme Theme 3, November 30, 2008. <http://www.wisebed.eu/images/stories/deliverables/d1.1-d3.1.pdf>.
[214] L. Wood, W. Eddy, P. Holliday, A bundle of problems, in: IEEE Aerospace Conference, Big Sky, Montana, 2009.
[215] Xen. <http://www.xen.org/>.
[216] H. Xie, Y.R. Yang, A. Krishnamurthy, Y.G. Liu, A. Silberschatz, P4P: provider portal for applications, SIGCOMM Computer Communication Review 38 (4) (2008) 351-362.
[217] H. Xie, Y.R. Yang, A. Krishnamurthy, Y.G. Liu, A. Silberschatz, Towards an ISP-compliant, peer-friendly design for peer-to-peer networks, in: Proceedings of the 7th International IFIP-TC6 Networking Conference on Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet, Singapore, May 5-9, 2008. Also available in: A. Das, F.B. Lee, H.K. Pung, L.W. Wong (Eds.), Lecture Notes in Computer Science, Springer-Verlag, Berlin, Heidelberg, pp. 375-384.
[219] H. Yan, D.A. Maltz, T.S. Eugene Ng, et al., Tesseract: a 4D network control plane, in: Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (NSDI '07), April 2007.
[220] R. Yates, D. Raychaudhuri, S. Paul, et al., Postcards from the Edge: A Cache-and-Forward Architecture for the Future Internet, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/Postcards.php>.
[221] X. Yang, An Internet Architecture for User-Controlled Routes, NSF NeTS FIND Initiative. <http://www.nets-find.net/Funded/InternetArchitecture.php>.
[222] Z. Zhang, Routing in intermittently connected mobile ad hoc networks and delay tolerant networks: overview and challenges, IEEE Communications Surveys and Tutorials 8 (1) (2006).
[223] J. Zien, The Technology Behind Napster, About.com, 2000. <http://Internet.about.com/library/weekly/2000/aa052800b.htm>.