
Software-Defined Networking: State of the Art and Research Challenges

Manar Jammal^a,1, Taranpreet Singh^a, Abdallah Shami^a, Rasool Asal^b, and Yiming Li^c

^a Department of Electrical and Computer Engineering, Western University, Canada
^b British Telecom, UK
^c StarTech.com, Canada

Abstract—Plug-and-play information technology (IT) infrastructure has been expanding very rapidly in recent years. With the advent of cloud computing, many ecosystem and business paradigms are encountering potential changes and may be able to eliminate their IT infrastructure maintenance processes. Real-time performance and high-availability requirements have induced telecom networks to adopt the new concepts of the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged and have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments such as in data centers, data-center networks, and Network as a Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges.

Keywords—Software-Defined Networking, OpenFlow, Datacenters, Network as a Service, Network Function Virtualization.

1. INTRODUCTION
Today's Internet applications require the underlying networks to be fast, carry large amounts of traffic, and deploy a number of distinct, dynamic applications and services. Adoption of the concepts of "inter-connected data centers" and "server virtualization" has increased network demand tremendously. In addition to various proprietary network hardware, distributed protocols, and software components, legacy networks are inundated with switching devices that decide on the route taken by each packet individually; moreover, the data paths and the decision-making processes for switching or routing are collocated on the same device. This situation is elucidated in Fig. 1. The decision-making capability, or network intelligence, is distributed across the various network hardware components. This makes the introduction of any new network device or service a tedious job because it requires reconfiguration of each of the numerous network nodes.

Figure 1: Inflexible Legacy Infrastructure

Legacy networks have become difficult to automate [1, 2]. Networks today depend on IP addresses to identify and locate servers and applications. This approach works fine for static networks where each physical device is recognizable by an IP address, but is extremely laborious for large virtual networks. Managing such complex environments using traditional networks is time-consuming and expensive, especially in the case of virtual machine (VM) migration and network configuration. To simplify the task of managing large virtualized networks, administrators must resolve the physical infrastructure concerns that increase management complexity. In addition, most modern-day vendors use control-plane software to optimize data flow to achieve high performance and competitive advantage [2]. This switch-based control-plane paradigm gives network administrators very little opportunity to increase data-flow efficiency across the network as a whole. The rigid structure of legacy networks prohibits programmability to meet the variety of client requirements, sometimes forcing vendors into deploying complex and fragile programmable management systems. In addition, vast teams of network administrators are employed to make thousands of changes manually to network components [2, 3].

The demand for services and network usage is growing rapidly. Although growth drivers such as video traffic, big data, and mobile usage augment revenues, they pose significant challenges for network operators [4]. Mobile and telco operators are encountering spectrum congestion, the shift to internet protocol (IP), and growing numbers of mobile users. Concurrently, data-center operators are facing tremendous growth in the number of servers and virtual machines, increasing server-to-server communication traffic. In order to tackle these challenges, operators require a network that is efficient, flexible, agile, and scalable.

^1 Submitted for review and possible publication in Elsevier's Journal of Computer Networks.
Inspired by the words of Marc Andreessen, "software is eating the world", software-defined networking (SDN) and virtualization are poised to be the solutions that overcome the challenges described above. SDN operates on an aggregated and centralized control plane that might be a promising solution for network management and control problems. The main idea behind SDN is to separate the forwarding/data plane from the control plane while providing programmability on the control plane, as illustrated in Fig. 2.

Figure 2: SDN Architecture

Despite its obvious advantages and its ability to simplify networks, SDN encounters some technical challenges that can restrict its functionality and performance in cloud computing, information technology (IT) organizations, and networking enterprises. Compared to recent surveys [5, 6], this paper tackles most of the SDN challenges with their causes and existing solutions in a comprehensive and detailed manner. It addresses reliability, scalability, latency, controller placement, recent hardware shortages, and security issues. Overcoming these challenges might assist IT organizations and network enterprises in exploring and improving the various opportunities and functionalities of SDN.

In this paper, Section II defines SDN and discusses its architecture and its protocol, OpenFlow. The concept of network virtualization (NV) is elucidated in Section III, with a discussion of how NV has emerged as a potential solution to the current ossified network architecture and offers benefits that can rapidly alter both the networking and cloud-computing industries. Section IV discusses various SDN applications in data-center networks and Network as a Service. Section V analyzes the various challenges facing SDN, their causes, and their recent solutions. Finally, the last section summarizes various research initiatives in the SDN field, starting from SDN prototypes, development tools and languages, and virtualization implementations using SDN, and ending with the various SDN vendors.

2. SOFTWARE-DEFINED NETWORKING AND OPENFLOW ARCHITECTURE

Most current network devices have control and data-flow functionalities operating on the same device. The only control available to a network administrator is from the network management plane, which is used to configure each network node separately. The static nature of current network devices does not permit detailed control-plane configuration. This is exactly where software-defined networking comes into the picture. The ultimate goal of SDN as defined in [7] is to "provide open user-controlled management of the forwarding hardware of a network element." SDN operates on the idea of centralizing control-plane intelligence, but keeping the data plane separate. Thus, the network hardware devices keep their switching fabric (data plane), but hand over their intelligence (switching and routing functionalities) to the controller. This enables the administrator to configure the network hardware directly from the controller. This centralized control of the entire network makes the network highly flexible [8, 9].

2.1 SDN Architecture
Compared to legacy networks, there are four additional components in SDN [8, 9, 10]:

1) Control Plane
The control plane/controller presents an abstract view of the complete network infrastructure, enabling the administrator to apply custom policies/protocols across the network hardware. The network operating system (NOX) controller is the most widely deployed controller.

2) Northbound Application Interfaces
The "northbound" application programming interfaces (APIs) represent the software interfaces between the software modules of the controller platform and the SDN applications running atop the network platform. These APIs expose universal network abstraction data models and functionality for use by network applications. The "northbound APIs" are open source-based. (An illustrative northbound sketch follows at the end of this component list.)

3) East-West Protocols
In the case of a multi-controller-based architecture, the East-West interface protocol manages interactions between the various controllers.

4) Data Plane and Southbound Protocols
The data plane represents the forwarding hardware in the SDN network architecture. Because the controller needs to communicate with the network infrastructure, it requires certain protocols to control and manage the interface between various pieces of network equipment.
The most popular "southbound protocol" is the OpenFlow protocol. The following section explains OpenFlow and its architecture.
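Before moving on, the sketch below illustrates the northbound side from component 2 above. It assumes the REST interface bundled with the open-source Ryu controller (ryu.app.ofctl_rest); this is one concrete northbound API among many, and nothing in this paper mandates it. Other controllers expose similar but incompatible interfaces.

```python
# Northbound sketch: an application asks the controller (over REST) to install
# a flow; the controller translates this into southbound OpenFlow messages.
import requests

CONTROLLER = "http://127.0.0.1:8080"  # assumed local Ryu instance with ofctl_rest

flow = {
    "dpid": 1,                                   # datapath id of the target switch
    "priority": 100,
    "match": {"in_port": 1, "eth_type": 0x0800,  # IPv4 traffic entering port 1
              "ipv4_dst": "10.0.0.2"},
    "actions": [{"type": "OUTPUT", "port": 2}],  # forward it out of port 2
}

resp = requests.post(f"{CONTROLLER}/stats/flowentry/add", json=flow)
print(resp.status_code)  # 200 indicates the controller accepted the request
```

Note that the application never speaks OpenFlow itself; the controller mediates all access to the forwarding hardware.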
Figure 3: Basic SDN-Based Network Architecture

Figure 4: API Directionality in SDN Architecture

2.2 SDN Benefits
SDN provides several benefits to address the challenges facing legacy network architectures.

1) Programmability of the Network: By implementing a new orchestration level, SDN can tackle the inflexibility and complexity of the traditional network. SDN provides enterprises with the ability to control their networks programmatically and to scale them without affecting performance, reliability, or the user experience [4]. The data- and control-plane abstractions constitute the immense worth of SDN. By eliminating the complexity of the infrastructure layer and adding visibility for applications and services, SDN simplifies network management and brings virtualization to the network. It abstracts flow control from individual devices to the network level. Network-wide data-flow control gives administrators the power to define network flows that meet connectivity requirements and address the specific needs of discrete user communities.
With the SDN approach, network administrators no longer need to implement custom policies and protocols on each device in the network separately. In the general SDN architecture, control-plane functions are separated from physical devices and are performed by an external controller (e.g., a standard server running SDN software). SDN provides programmability on the control plane itself, through which changes can be implemented and disseminated either to a specific device or throughout the network hardware on a secure channel. This approach promises to facilitate the integration of new devices into the existing architecture. The SDN controller also improves the traffic-engineering capabilities of network operators carrying video traffic: it enables them to control their congestion hot spots and reduces the complexity of traffic engineering [4].

2) The Rise of Virtualization: SDN is a promising opportunity for managing hyper-scale data centers (DCs). Data centers raise significant scalability issues, especially with the growth of virtual machines (VMs) and their migration. Moving a VM and updating the media access control (MAC) address table using a traditional network architecture may interrupt the user experience and applications.
Therefore, network virtualization, which can be seen as an SDN application, offers a prominent opportunity for hyper-scale data centers. It provides tunnels that can abstract the MAC address from the infrastructure layer, enabling Layer 2 traffic to run over Layer 3 overlays and simplifying VM deployment and migration in the network [4]. (An encapsulation sketch follows at the end of this list of benefits.)
Furthermore, SDN enables multi-tenant hosting providers to link their physical and virtual servers, local and remote facilities, and public and private clouds into a single logical network. As a result, each customer will have an isolated view of the network provider. SDN adds a virtualization layer to the fabric architecture of the cloud providers. This enables their tenants to obtain various views over the data-center network (DCN) according to their demands.
SDN is also a promising approach for offering Network as a Service (NaaS), which will enable flexible service models and virtual network operators and endow enterprises with the ability to control DCs and their traffic. This paper introduces the benefits of NaaS and its consolidation with SDN using different cloud models.

3) Device Configuration and Troubleshooting: With SDN, device configuration and troubleshooting can be done from a single point on the network, which pushes us closer to realizing the ultimate goal of "a dynamic network" that can be configured and made adaptable according to needs. SDN also provides the capability to encourage innovation in the
networking field by offering a programmable platform for experiments on novel protocols and policies using production traffic. Separating data flows from test flows facilitates the adoption of newer protocols and ideas into the networking domain [2, 3].
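As promised in benefit 2 above, the following sketch makes the Layer 2-over-Layer 3 overlay idea concrete. It uses the Scapy packet library (our illustration; the paper prescribes no particular tooling): a tenant's Ethernet frame is wrapped in a VXLAN/UDP/IP envelope, so the physical network routes only the outer IP header while the tenant's MAC addresses travel hidden inside the tunnel.

```python
# Layer 2-in-Layer 3 overlay sketch using Scapy (VXLAN support ships with
# recent Scapy releases). All addresses below are made up for illustration.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: what the tenant VM actually sent (tenant MAC/IP addressing).
inner = Ether(src="00:00:00:aa:00:01", dst="00:00:00:aa:00:02") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# Outer envelope: hypervisor-to-hypervisor IP transport. The VXLAN network
# identifier (VNI) 5001 isolates this tenant's virtual network from others.
overlay = Ether() / \
          IP(src="192.168.1.10", dst="192.168.2.20") / \
          UDP(dport=4789) / \
          VXLAN(vni=5001) / \
          inner

overlay.show()  # prints the layered encapsulation
```

When the VM migrates, only the outer destination IP changes; the tenant's Layer 2 view is untouched, which is precisely what simplifies VM migration in the overlay model.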

From a broader perspective, SDN offers a form of networking in which packet-routing control can be separated from switching hardware [3]. As a result, when SDN and Ethernet fabrics are consolidated, real network intelligence is achieved [4].

Since OpenFlow is the industry-standard interface for SDN between the control and data layers, the following subsection defines it and its architecture.

2.3 OpenFlow Definition
OpenFlow is the protocol used for managing the southbound interface of the generalized SDN architecture. It is the first standard interface defined to facilitate interaction between the control and data planes of the SDN architecture. OpenFlow provides software-based access to the flow tables that instruct switches and routers how to direct network traffic. Using these flow tables, administrators can quickly change network layout and traffic flow. In addition, the OpenFlow protocol provides a basic set of management tools which can be used to control features such as topology changes and packet filtering. The OpenFlow specification is controlled and defined by the non-profit Open Networking Foundation (ONF), which is led by a board of directors from seven companies that own and operate some of the largest networks in the world (Deutsche Telekom, Facebook, Google, Microsoft, Verizon, Yahoo, and NTT). Most networking hardware vendors, such as HP, IBM, and Cisco, offer switches and routers that use the OpenFlow protocol [10]. OpenFlow shares much common ground with the architectures proposed by ForCES and SoftRouter; however, the difference lies in inserting the concept of flows and leveraging the existence of flow tables in commercial switches [11].

OpenFlow-compliant switches come in two main types: OpenFlow-only and OpenFlow-hybrid. OpenFlow-only switches support only OpenFlow operations, i.e., all packets are processed by the OpenFlow pipeline. OpenFlow-hybrid switches support both OpenFlow operations and normal Ethernet switching operations, i.e., traditional L2 and L3 switching and routing. These hybrid switches support a classification mechanism outside of OpenFlow that routes traffic to either of the packet-processing pipelines [11].

2.3.1 OpenFlow Architecture
Basically, the OpenFlow architecture consists of numerous pieces of OpenFlow-enabled switching equipment which are managed by one or more OpenFlow controllers, as shown in Fig. 5.

Figure 5: Basic Architecture of OpenFlow

a) Defining a Flow
Network traffic can be partitioned into flows, where a flow could be a transmission control protocol (TCP) connection, packets with the same MAC address or IP address, packets with the same virtual local area network (VLAN) tag, or packets arriving from the same switch port [9].

b) OpenFlow Switch
An OpenFlow switch consists of one or more flow tables and a group table. It performs packet look-ups and forwarding. The controller manages the OpenFlow-enabled switch using the OpenFlow protocol over a secure channel. Each flow table in the switch is made up of a set of flow entries, in which each flow entry consists of match header fields, counters, and a set of instructions to apply to matching packets [11].

c) OpenFlow Channel
The OpenFlow channel is the interface that connects each OpenFlow switch to a controller. Using this interface, the controller configures and manages the switch. The OpenFlow protocol supports three message types, all of which are sent over a secure channel. These messages can be categorized as controller-to-switch, asynchronous, and symmetric, each having multiple sub-types. Controller-to-switch messages are initiated by the controller and are used to manage or derive information directly about the state of the switch. Asynchronous messages are initiated by the switch and are used to update the controller with network events and changes to the switch state. Symmetric messages are initiated by either the switch or the controller and are sent without solicitation. The OpenFlow channel is usually encrypted using transport layer security (TLS), but can also operate directly over TCP [11].
d) OpenFlow Controller
The controller is responsible for maintaining all the network protocols and policies and distributing appropriate instructions to the network devices. In other words, the OpenFlow controller is responsible for determining how to handle packets without valid flow entries. It manages the switch flow table by adding and removing flow entries over the secure channel using the OpenFlow protocol. The controller essentially centralizes network intelligence. The switch must be able to establish communication with a controller at a user-configurable (but otherwise fixed) IP address, using a user-specified port. The switch initiates a standard TLS or TCP connection to the controller when it knows its IP address. Traffic to and from the OpenFlow channel does not travel through the OpenFlow pipeline. Therefore, the switch must identify incoming traffic as local before checking it against the flow tables. The switch may establish communication with a single controller or with multiple controllers.
Having multiple controllers improves reliability because the switch can continue to operate in OpenFlow mode if one controller connection fails. The hand-over between controllers is entirely managed by the controllers themselves, which enables load balancing and fast recovery from failure. The controllers coordinate the management of the switch among themselves; the multiple-controller functionality only helps synchronize controller hand-offs and addresses controller fail-over and load balancing. When OpenFlow operation is initiated, the switch must connect to all controllers with which it is configured and try to maintain connectivity with all of them concurrently. Many controllers may send controller-to-switch commands to the switch; the reply or error messages related to these commands must be sent only on the controller connection associated with that command. Typically, the controller runs on a network-attached server [11].
SDN controllers can be implemented in the following three structures [12]:
i. Centralized structure
ii. Distributed structure
iii. Multi-layer structure
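Returning to the multiple-controller mechanism described above: in OpenFlow 1.3, a controller claims its role toward a switch with a role-request message. The sketch below (using the open-source Ryu framework's protocol bindings, our choice of tooling) shows how a backup controller would take over as master; the generation_id lets the switch reject stale claims from a previously failed master.

```python
# Controller fail-over sketch with OpenFlow 1.3 role messages (Ryu bindings).
def claim_master(datapath, generation_id):
    """Ask the switch to treat this controller as MASTER.

    `datapath` is Ryu's handle for one connected switch; `generation_id`
    must increase with every new claim so the switch can discard stale ones.
    """
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    # On accepting this, the switch demotes the previous master to SLAVE.
    req = parser.OFPRoleRequest(datapath,
                                ofproto.OFPCR_ROLE_MASTER,
                                generation_id)
    datapath.send_msg(req)
```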
2.3.2 Flow & Group Tables
Each entry in the flow table has three fields [11]:
• A packet header that is specific to the flow and defines it. This header is approximately a ten-tuple; its fields contain information such as the VLAN ID, source and destination ports, IP addresses, and Ethernet source and destination.
• An action that specifies how the packets in a flow will be processed. An action can be any one of the following: i) forward the packet to a given port or ports; ii) drop the packet; iii) forward the packet to the controller.
• Statistics, which include information such as the number of packets, the number of bytes, the time since the last packet matched the flow, and so on for each type of flow [11]. Most of the time, counters are used to keep track of the number of packets and bytes for each flow and the elapsed time since flow initiation.
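The following sketch shows how a controller would install one such entry. It again uses the Ryu OpenFlow 1.3 bindings (our assumption, not something the paper mandates): the match object is the packet-header field of the entry, the action list becomes its instruction set, and the switch maintains the counters automatically.

```python
# Sketch: install a flow entry (match header + actions) on a switch.
def install_flow(datapath, in_port, dst_mac, out_port):
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto
    # Match header: traffic entering `in_port` destined for `dst_mac`.
    match = parser.OFPMatch(in_port=in_port, eth_dst=dst_mac)
    # Action: forward matching packets out of `out_port`.
    actions = [parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                         actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=10,
                            match=match, instructions=inst,
                            idle_timeout=30)  # entry expires after 30 s idle
    datapath.send_msg(mod)
```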
2.3.3 OpenFlow Protocol
An OpenFlow switch contains multiple flow and group tables. Each flow table consists of many flow entries. These entries are specific to a particular flow and are used to perform packet look-up and forwarding. The flow entries can be manipulated as desired through OpenFlow messages exchanged between the switch and the controller on a secure channel. By maintaining a flow table, the switch can make forwarding decisions for incoming packets by a simple look-up on its flow-table entries. OpenFlow switches perform an exact-match check on specific fields of the incoming packets. For every incoming packet, the switch goes through its flow table to find a matching entry. The flow tables are sequentially numbered, starting at 0, and the packet-processing pipeline always starts at the first flow table: the packet is first matched against the entries of flow table 0. If the packet matches a flow entry in a flow table, the corresponding instruction set is executed. Instructions associated with each flow entry describe packet forwarding, packet modification, group-table processing, and pipeline processing.
Pipeline-processing instructions enable packets to be sent to subsequent tables for further processing and enable aggregated information (metadata) to be communicated between tables. Flow entries may also forward to a port. This is usually a physical port, but may also be a virtual port.
Flow entries may also point to a group, which specifies additional processing. A group table consisting of group entries offers additional methods of forwarding (multicast, broadcast, fast reroute, link aggregation, etc.). A group entry consists of a group identifier, a group type, counters, and a list of action buckets, where each action bucket contains a set of actions to be executed and associated parameters. Groups also enable multiple flows to be forwarded to a single identifier, e.g., IP forwarding to a common next hop. Sometimes a packet may not match a flow entry in any of the flow tables; this is called a "table miss". The action taken in case of a miss depends on the table configuration. By default, the packet is sent to the controller over the secure channel. Another option is to drop the packet [11].
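The table-miss path just described is where a controller application actually sees traffic. As a minimal end-to-end illustration (again a sketch built on the Ryu framework, not part of the paper), the application below receives the packet-in message generated by a table miss and answers with the simplest possible forwarding decision, flooding:

```python
# Minimal controller application: handle table-miss packet-ins by flooding.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FloodingApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # The switch had no matching flow entry, so it sent the packet here.
        msg = ev.msg
        datapath = msg.datapath            # the OpenFlow switch
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Forwarding decision made in the controller: flood out of all ports.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        # If the switch did not buffer the packet, send the raw bytes back.
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        datapath.send_msg(out)
```

A production application would instead learn MAC-to-port mappings and install flow entries (as in the earlier install_flow sketch) so that subsequent packets never leave the data plane.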
In summary, SDN provides a new concept and architecture for managing and configuring networks using a dynamic and agile infrastructure. However, the networking area is experiencing not only the emergence of SDN, but also network virtualization and network function virtualization. The three solutions together build an automated, scalable, virtualized, and agile networking and cloud environment. Therefore, the following section introduces network virtualization, network function virtualization, and their relationship with SDN.

3. NETWORK VIRTUALIZATION
The value of SDN in the enterprise lies specifically in its ability to provide network virtualization and automated configuration across the entire network fabric, enabling rapid deployment of new services and end systems in addition to minimizing operating cost [13, 14].

3.1 Definition
Conceptually, network virtualization decouples and isolates virtual networks from the underlying network hardware, as shown in Fig. 6. The isolated networks share the same physical infrastructure. Once virtualized, the underlying physical network is used only for packet forwarding. Multiple independent virtual networks are then overlaid on the existing network hardware, offering the same features and guarantees as a physical network, but with the operating benefits and hardware independence of virtual machines [13]. To achieve virtualization at the network level, a network virtualization platform (NVP) is needed to transform the physical network components into a generalized pool of network capacity, similar to how a server hypervisor transforms physical servers into a pool of compute capacity. Decoupling virtual networks from physical hardware enables network capacity to scale without affecting virtual network operations [13, 14].

Figure 6: The Concept of Network Virtualization

Network virtualization projects the network hardware as a business platform capable of delivering a wide range of IT services and corporate value. It delivers increased application performance by dynamically maximizing network asset utilization while reducing operating requirements [7].
With the emergence of SDN, network virtualization has become engaged in cloud-computing applications. NV provides network management for the interconnection between servers in DCs. It allows cloud services to be dynamically allocated and extends the limits of the DC into the network infrastructure [15].
Network virtualization has many aspects, including virtualized dual backbones, network service virtualization, virtual service orchestration, network I/O virtualization, and network-hosted storage virtualization. Figure 7 presents a general network-virtualization architecture consisting of well-defined layers [16, 17]. Starting from the bottom:

e) Infrastructure Provider (InP)
The infrastructure provider (InP) is responsible for maintaining the underlying physical equipment. Each organization taking on this role must offer its resources virtually to build a virtual network.
f) Virtual Network Provider (VNP)
The virtual network provider (VNP) is responsible for requesting virtual resources and assembling the virtual network for a virtual network operator (VNO). The virtual network provider can use a number of infrastructure providers to provide virtual network resources.
g) Virtual Network Operator (VNO)
VNOs must assess the network requirements for the VNP to assemble the virtual resources. VNOs are also responsible for managing and granting access to virtual networks.
h) Service Provider
Service providers use virtual network resources and services to tailor specialized services for end users.
i) Virtual Network User/End User
End users consume the resources of the virtual network through services provided by the virtual network directly or through services provided by a service provider.

Three components are essential for a virtual network to function properly: virtual servers, virtual nodes, and virtual links. Virtual servers provide end users with a means to access virtual network services by implementing virtual machines. The virtual servers can also switch transparently between virtual machines to enable dynamic service changes. This feature is particularly helpful in the face of ever-changing client needs. Virtual nodes represent physical nodes such as routers and switches. A virtual node operates in both the data and control planes. The node is configured by VNOs to forward data appropriately. Virtual links provide a means of dividing and sharing physical links. The concept of virtual links ensures flexibility in network topology [16, 17].
Figure 7: General Network Virtualization Architecture

3.2 Benefits of Network Virtualization
Some of the key benefits offered by network virtualization are mentioned below [18, 19]:

1) Co-existence of Dissimilar Networks
Network virtualization makes it possible to create multiple virtual networks on the same physical hardware, while keeping each virtual network isolated from the others. This isolation can be used as a tool in the deployment of networks using different or even incompatible routing protocols.
2) Encouraging Network Innovation
Like SDN, network virtualization can be used to encourage innovation in the networking domain. The isolation between two virtual networks can be used to create separate domains for production traffic and test traffic. This isolation guarantees that a malfunctioning experiment will not affect production traffic.
3) Provisioning of Independent and Diverse Networks
NV deploys packet-handling, quality-of-service (QoS), and security policies to configure network operations and behaviors. This configuration allows the categorization of different networks based on their services, users, and applications.
4) Deployment of Agile Network Capabilities
The inclusion of agile facilities into the current network improves data-transport efficiency and provides a robust network. NV allows agile integration between legacy and advanced networks, and it enables migration from legacy systems to advanced ones in an agile manner.
5) Resource Optimization
The dynamic mapping of multiple virtual network nodes to the physical substrate ensures that the network hardware is utilized up to capacity. This approach cuts down on hardware costs and delivers additional profit to the infrastructure provider.
6) Deployment of Distinct Network Services
Network services such as wireless local area networks (WLANs) and intranets require specific network architectures. In addition, a multi-national corporation might need to offer distinct services to its employees. This can add complexity to the existing overlay network. Network virtualization can help alleviate these problems by deploying such services in separate virtual networks.

3.3 Network Function Virtualization
To resolve the ambiguity between the concepts of network function virtualization (NFV) and SDN, it is necessary to examine the definitions and benefits of both technologies.

3.3.1 Definition of NFV
Expansion of the deployment of various applications and network services induced service providers to come up with the concept of NFV. Therefore, they established the European Telecommunications Standards Institute (ETSI) Industry Specification Group for NFV. The group defined the real concept of NFV together with its requirements and architecture.
NFV decouples network functions, e.g., firewalls, domain name service (DNS), and caching, from dedicated hardware appliances and entrusts them to software-based applications running on a standardized IT infrastructure of high-volume servers, switches, and storage devices. An interesting feature of NFV is its availability for both wired and wireless network platforms. NFV reduces capital expenditures (CAPEX) and operating expenditures (OPEX) by minimizing the purchase of dedicated hardware appliances, as well as their power and cooling requirements. Virtualization of network functions enables fast scale-up or scale-down of various network services and provides agile delivery of these services using software applications running on commercial off-the-shelf (COTS) servers.

3.3.2 NFV and NV
At the flow level, NV partitions the network logically and creates logical segments in it. It can be viewed as a tunnel connecting any two domains in a network. Therefore, it eliminates the physical wiring for each domain connection and replaces it with a virtualized infrastructure.
While NV can be viewed as a tunnel, NFV deploys services on it. NFV virtualizes functions from layer 4 (L4) to layer 7 (L7), such as load balancing and firewalls. If an administrator can enable modifications at the top of the infrastructure layer, NFV can provide changes for L4-L7 functions virtually.
Both NV and NFV may run on high-performance x86 platforms. The NV tunnel facilitates VM migration independently of the underlying network. NFV enables the functions on NV, providing an abstraction and virtual services on it. As NV
eliminates the need for network reconfiguration, NFV saves time on manual training and provisioning.

3.3.3 NFV and SDN
SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged to potentially mitigate the challenges of legacy networks. The functions of the SDN controller can be deployed as virtual functions, meaning that the OpenFlow switches will be controlled using NFV software. The multi-tenancy requirements of the cloud pushed NFV to support the use of a software overlay network. This software network is created by SDN. It consists of a set of tunnels and virtual switches that prevent unintended interactions between different virtual network functions. These functions will be managed using the SDN model.
Merging NFV and SDN enables replacement of expensive, dedicated hardware equipment by software and generic hardware. The control plane is transferred from dedicated platforms to optimized locations in DCs. Abstraction of this plane eliminates the need to upgrade network appliances simultaneously and thus accelerates the evolution and deployment of network services and applications. Table I provides a comparison between the SDN and NFV concepts.

Table I: Comparison between SDN and NFV.

| | SDN | NFV |
|---|---|---|
| Motivation | Decoupling of the control and data planes; providing a centralized controller and network programmability | Abstraction of network functions from dedicated hardware appliances to COTS servers |
| Network location | Data centers | Service provider networks |
| Network devices | Servers and switches | Servers and switches |
| Protocols | OpenFlow | N/A |
| Applications | Cloud orchestration and networking | Firewalls, gateways, content delivery networks |
| Standardization committee | Open Networking Foundation (ONF) | ETSI NFV group |

In summary, while NV and NFV create virtual tunnels and virtual functions over the underlying physical network, respectively, SDN modifies the physical network itself. Moreover, while NV and NFV can reside on the servers of the existing network, SDN requires the construction of a new network in which the control and data layers are decoupled.

4. SDN APPLICATIONS
SDN is a promising approach that can overcome the challenges facing cloud-computing services, specifically NaaS and DCNs. Therefore, the following section highlights the importance of SDN in these fields and describes its various applications in DCNs and NaaS.

4.1 Data-Center Networks

4.1.1 Motivation
The scale and complexity of data-center networks (DCNs) are approaching the limit of traditional networking equipment and IT operations [20]. Currently, the infrastructure of data-center networks is undergoing tremendous and rapid changes. The Enterprise Strategy Group (ESG) has identified the reasons that have provoked these changes and summarizes them as follows:

• Aggressive Alliances in Data Centers
ESG's research statistics show that 63% of the enterprises polled are planning the fusion of their data centers [20]. A large expansion may occur in these data centers as they harbour extra applications, network traffic, and devices. Therefore, many associations might consolidate their data centers into multi-tenant facilities.

• Progressive Use of Virtualization Technology
Large enterprises such as Citrix, Microsoft, and VMware are deploying server virtualization technologies. In addition, other organizations are now willing to introduce new initiatives to their infrastructure that use virtualization technology concepts. Consequently, compact integration among physical infrastructure, virtual servers, and networks is required.

• Wide Deployment of Web Applications
Web-based applications are widely used in many organizations [20]. Moreover, these applications use server-to-server communication because they are based on x86 server tiers and horizontal scaling. Therefore, data centers need to brace themselves for an increase in internal traffic due to massive deployment of these Web applications.
Because dynamic scaling in data-center networks is based on static network devices (Ethernet and IP packet connections), IT teams encounter a discontinuity gap during the implementation of scalable data-center networks. However, it appears that the flood waters are about to overrun the tactical network sandbags [20]. The ESG describes the main network challenges as follows:

a) Network Segmentation and Security
Nowadays, DCN segmentation is based on a mix of VLANs, IP subnets, device-based access-control lists (ACLs), and firewall rules that have been maintained for years. However, these hard-wired segmentation and security controls are not compatible with data centers that are populated by VM workloads and cloud-computing platforms.

b) Traffic Engineering
Any traffic congestion or hardware failure will affect the performance and latency of all other devices because network traffic follows fixed paths and multiple hops. In addition, the deployment of VMs and virtual servers in recent DCNs adds a supplementary burden to network performance [21].

c) Network Provisioning and Configuration
Although virtual servers are provisioned by cloud orchestration tools, the policies of the data-center equipment and control paths must be set up on a device-to-device or flow-to-flow basis, and heterogeneous networks must be managed by multiple management systems. Even though network management software can help at this stage, network configuration changes remain "a tedious link-level slog" [20].

Further information concerning DCN challenges can be found in [20, 21]. Ultimately, DCN discontinuity will be a threat to business operations because it may induce degradations in service level, delays in business initiatives, and increases in IT operating costs [20]. Although networking vendors have launched some innovations, such as network fabric and convergence architectures, to fix the fractures in the DCN infrastructure, these solutions do not address the problems in heterogeneous networks. Nevertheless, the software-defined network paradigm is a promising solution to solve these challenges in DCN setups.


4.1.2 SDN Deployment in DCNs
In SDN OpenFlow-based networks, the virtual network segments are centrally configured, and network security is simplified by directing flows to security-policy services. Moreover, the central controller transforms the core and aggregation devices into a "high-speed transport backplane" [20]. The controller can provision a new device that is added to the network and allow it to receive the configuration policy when it appears online. Finally, SDN improves the DCN infrastructure, its power consumption, and its various metrics. Owing to these improvements and modifications, different SDN applications in DCNs have been proposed.
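The zero-touch provisioning just mentioned (a new device receiving its configuration policy as soon as it appears online) maps directly onto OpenFlow's switch-features handshake. The sketch below (once more assuming the Ryu framework) pushes a baseline table-miss policy to every switch the moment it connects:

```python
# Sketch: push a baseline policy to each newly connected switch.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ZeroTouchProvisioner(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        # A switch that has just come online announces its features; reply
        # with its initial configuration: a lowest-priority table-miss entry
        # that sends unmatched packets to the controller.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch()          # wildcard: matches everything
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))
```

In a real deployment, this handler would also install the security-policy redirection rules described above, so that a device is never online without its policy.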
a. Changes in DCN Infrastructure
Automation and virtualization of data-center LANs and WANs have resulted in a flexible and dynamic infrastructure that can accommodate operating-cost challenges. As a result, Vello Systems [22] has proposed an open and scalable virtualization solution that connects the storage and computation resources of the data center to private and public cloud platforms. To facilitate the migration of VMs from their Layer 2 network, Layer 2 was extended across multiple DCs using Layer 3 routing. However, Layer 3 routing introduces challenges in intra-data-center connectivity and cannot meet the requirements for VM migration across DCs. Therefore, the proposed solution is based on a cloud-switching system that enables cloud providers and enterprises to overcome the traditional Layer 2 domains, the direct server-to-server connection, and virtual server migration. Because the switching system supports integration of end-to-end network attributes, its operating system can provide a
framework for SDN. Thus, OpenFlow-based systems allow the cloud to migrate performance, QoS, and security policies concurrently with VM migration. Finally, the SDN-based Vello systems permit a unified view and control of the global cloud for WAN resource optimization.
In [23], an OpenFlow-based test-bed implementation, switching with in-packet Bloom filters (SiBF), has been proposed as a data-center architecture. The suggested architecture was inspired by the onset of SDN, which transforms the DCN into a software problem while leaving the hardware vendors responsible for device implementation. SiBF introduces an army of rack managers that act as distributed controllers, contain all the flow-setup configurations, and require only topology information. Intrinsically, SiBF uses IP addresses for VM identification and provides load-balanced services based on encoding strategies. The architecture is implemented on a multi-rooted tree (CORE, AGGR, and ToR) because this is a common DCN topology; however, other topologies can be considered in a SiBF data-center architecture. The OpenFlow controller, e.g., the rack manager, installs the flow mappings into the ToR switches and consists of directory services, topology services, and topology discovery. With these modules, the controller can be implemented as an application in the NOX controller. Flow requests are handled by neighbouring rack managers in case of any failure in the master controller. However, when an OpenFlow switch fails, its traffic is interrupted until SiBF installs new mappings (new flow routes) in the ToR switches. The proposed data-center architecture, based on distributed OpenFlow controllers, guarantees better scalability and fault-tolerant performance in the DCN. Table II summarizes various approaches for implementing SDN in a DCN infrastructure.

Table II: SDN in DCN Infrastructure.

| Proposed Solution | Objective | Functionality |
|---|---|---|
| SDN-based Vello Systems [22] | Override the traditional Layer 2 domains and Layer 3 routing challenges; facilitate live VM migration within and across DCNs | Enable migration of performance, QoS, and security policies along with VM migration; provide a unified view and control of the global cloud for WAN resource optimization; provide network automation and virtualization of LAN and WAN connectivity and resource allocation; override the single-point-of-failure problem by using a distributed controller system |
| Switching with in-packet Bloom filters (SiBF) [23] | Transform the DCN into a software problem; leave the responsibility for device implementation to hardware vendors | Provide load-balancing services; guarantee better scalability and fault tolerance in the DCN by using rack managers; no evaluation of the proposed routing approach; lack of traffic-engineering studies of different flow sizes |

b. The Green DCN
Implementing an energy-efficient data-center network is an important step towards a "green" cloud. An energy-aware data-center architecture based on an OpenFlow platform has been proposed in [24]. Because designing an energy-efficient data center requires an experimental environment, the authors in [24] analyzed the proposed architecture based on the Reducing Energy Consumption in Data-Center Networks and Traffic Engineering (ECODANE) project. The platform provides guidelines for measuring and analyzing energy consumption in DCN elements (ports, links, and switches) based on realistic measurements from NetFPGA-based OpenFlow switches. The NetFPGA energy model was extracted from several energy measurements using the Xilinx power-estimation tool (XPower), which takes the Verilog source code of the OpenFlow switch as its input and estimates the power measurements.
The power-estimation model was tested using the Mininet [25] emulator, a simple testbed for developing OpenFlow applications. The Elastic Tree topology was used to test the proposed data-center architecture. The OpenFlow switches are controlled by the NOX controller, which consists of an optimizer, a power-control module, and a routing module. The optimizer finds the minimum power of the network subset that satisfies the traffic conditions and QoS requirements; the minimum power estimate is deduced from the number of links or switches that are turned off or put into sleep mode. The power-control module determines the power state of the network elements based on the OpenFlow messages and Mininet APIs and notifies switches to enter the appropriate power-saving mode. The routing module is used to find the optimal routing path in the DCN. This study is a first stage in building a green data center based on the new SDN paradigm. Table III summarizes various SDN approaches in a green DCN.

Table III: SDN in a Green DCN.

| Proposed Solution | Objective | Functionality |
|---|---|---|
| OpenFlow platform for energy-aware data center [24] | Provide guidelines for studying energy consumption in DCN elements | Estimate the minimum power for a given network topology; satisfy the traffic conditions and QoS requirements; provide a power module in the controller that determines the power state of network elements; no evaluation of the proposed approach on different network topologies |
| OpenFlow switch controller (OSC) [26] | Decrease the influence of carbon emissions in DCs | Reduce the configuration time of network elements; enable flexible power-management operations based on the programmable controller |
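To make the optimizer/power-control interplay concrete, the sketch below shows the kind of decision such a power module might take. This is our simplification (using the networkx graph library), not the ECODANE code: it greedily puts lightly used links to sleep while keeping every switch reachable, and it ignores the QoS constraints the real optimizer would also enforce.

```python
# Simplified green-DCN power module: sleep under-utilized links safely.
import networkx as nx


def links_to_sleep(topo: nx.Graph, utilization: dict, threshold: float = 0.05):
    """Return links whose utilization is below `threshold` and whose
    removal keeps the topology connected (QoS constraints ignored here).

    `utilization` maps (switch_a, switch_b) link tuples to a load in [0, 1].
    """
    sleeping = []
    g = topo.copy()
    # Consider the least-utilized links first.
    for link in sorted(utilization, key=utilization.get):
        if utilization[link] >= threshold:
            break                      # all remaining links are too busy
        g.remove_edge(*link)
        if nx.is_connected(g):
            sleeping.append(link)      # safe to power down
        else:
            g.add_edge(*link)          # removal would partition the network
    return sleeping
```

A controller would then translate the returned list into the per-switch power-state messages discussed next.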
Based on the results and the proposed data-center architecture described in [24], an extension to OpenFlow switches for saving energy in data centers has been proposed in [26] and can be used later on as a reference. The authors presented a solution to decrease the environmental impact of massive carbon emissions in data centers. This solution consists of controlling power consumption in data-center switches based on an extension to OpenFlow switches. This extension adds new messages to the OpenFlow protocol which enable the controller to move the switch between different power-saving modes. More detailed information about the new power-control messages and the design of the OpenFlow Switch Controller (OSC) can be found in [26]. OpenFlow can reduce configuration time and enable flexible, programmable controller-based management operations and is therefore recommended for use in cloud data centers.

c. Improving DCN Metrics
[27] describes an experimental study for improving the performance, scalability, and agility of a cloud data center using the OpenFlow protocol. The authors built a prototype cloud data center in which the route traffic was controlled by an OpenFlow controller, and different metrics were tested on the prototype. The proposed algorithms and the prototype design are discussed in detail in [27]. Testing of performance, throughput, and bandwidth for various network sizes and topologies was done using the Mininet emulator with its numerous tools. The results show that bandwidth performance and the number of flow modifications per second were better with the kernel switches, a test image of OpenFlow, than with user-space switches. However, replacement of data-center switches with OpenFlow switches is not recommended until standardization of the software platform has been achieved.
Furthermore, SDN has often been mentioned as an approach to implementing and improving the metrics of data-center networks. In [28], a loss-free multipathing (MP) congestion-control (CC) approach for a DCN was proposed. The authors introduced an integration between MP and CC to provide lossless delivery and better throughput for the DCN. The integration mechanism was based on a dynamic load-balancing multipathing approach [29]. The proposed mechanism uses OpenFlow switches and a central controller to reduce network overhead (path-load updates) and enable the switches to deal with any network situation, even during traffic bursts [28]. OpenFlow is enabled only in the access switches. The controller collects information about the network from their routing tables and updates the switches with any change in the "path load" on the associated routes with a short delay. Although the MP-CC integration mechanism achieves lossless delivery due to the fast reaction of the switches to network changes, the proposed algorithm considers path load as the only parameter for handling DCN traffic.
Recent applications have imposed many requirements on cloud service providers; cloud data-center networks therefore have to be multi-tenant, low-cost, flexible, and reconfigurable on demand. Current DCN strategies cannot meet all these requirements, and therefore [30] proposed an SDN-based network solution. The proposed prototype consists of a central controller that manages multiple OpenFlow switch instances and packet filtering. The controller stores the database of the management information of the L2 virtual network, called the slice. The proposed prototype removes the limitations on the number of VLANs and responds to on-demand network updates based on APIs that simplify these configuration updates. However, the flow-setup process in the switch introduces a longer flow-setup time than in legacy networks [30].
Another study proposed an approach to evaluate DCN performance by implementing an OpenFlow re-routing control mechanism to manage DCN flows [31]. Performance is represented by load-distribution, throughput, and link-utilization metrics. The proposed re-routing scheme initially uses the least-loaded route; in case of congestion, large flows are re-routed onto alternative paths, while small flows pursue their track. The re-routing framework consists of a NOX controller, a monitor to store switch statistics, a host tracker that tracks the entire set of detected hosts in the network, and finally a routing engine which is responsible for the routing and re-routing functions.
A comparison between the single-path, equal-cost multi-path, and OpenFlow re-routing mechanisms showed that the proposed framework has better load distribution, throughput, and link utilization. Table IV summarizes the improvements in DCN metrics using the SDN approach.

Table IV: Improvements in DCN Metrics with SDN.

| Proposed Solution | Objective | Functionality |
|---|---|---|
| OpenFlow platform for scalable and agile data center [25] | Improve performance, scalability, and agility in a cloud data center | Improve bandwidth performance and the number of flow modifications per second in the kernel switches; reduce the cost of operations and switch configuration time |
| Loss-free multipathing and congestion control in the DCN [28] | Provide lossless delivery and better throughput for the DCN using OpenFlow switches and a central controller | Reduce the path-load update overhead of the network; handle any network status and traffic-burst states; use the path load as the only parameter to evaluate traffic in the DCN |
| SDN-based DCN solution [30] | Meet the requirements of different applications in a cloud DCN | Remove the limitation on the number of VLANs; respond to on-demand network updates; introduce longer flow-setup times compared to legacy networks |
| OpenFlow re-routing control mechanism in the DCN [32] | Evaluate DCN performance and manage its flows | Use the least-loaded route and alternative paths for flow congestion; provide storage of switch statistics, tracking of all detected hosts in the network, and various routing and re-routing functions; provide better load distribution, throughput, and link utilization compared to other routing mechanisms; combat severe packet loss and high packet sojourn times in case of low processing time for private networks |
| Scissor [33] | Modify packet headers to minimize DC traffic and network power consumption | Replace redundant header information with a short identifier, the Flow ID; combine packets of the same flow under the same ID; improve latency and introduce slight improvements in power gains; absence of Scissor operations within the rack that is responsible for 75% of DCN traffic |
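The re-routing policy just described (start on the least-loaded route, move only large flows away from congestion) can be sketched in a few lines. The helper names (flows_on_link, paths_avoiding, reroute) are hypothetical placeholders for controller state, and the elephant-flow threshold is our assumption, not a figure from [31]:

```python
# Illustrative re-routing policy: move only elephant flows off congested links.
ELEPHANT_BYTES = 10 * 1024 * 1024      # assumed threshold for a "large" flow


def least_loaded_path(paths, link_load):
    """Pick the path whose busiest link carries the least load (min-max)."""
    return min(paths, key=lambda path: max(link_load[l] for l in path))


def on_congestion(link, flows_on_link, paths_avoiding, link_load, reroute):
    for flow in flows_on_link(link):
        if flow.byte_count >= ELEPHANT_BYTES:
            # Re-route the elephant flow onto the best alternative path...
            alternatives = paths_avoiding(flow, link)
            if alternatives:
                reroute(flow, least_loaded_path(alternatives, link_load))
        # ...while small ("mice") flows pursue their current track.
```

The flow byte counts come for free from the per-entry counters that OpenFlow switches already maintain (Section 2.3.2).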

In spite of the benefits provided by introducing SDN into DCNs, [32] concluded that building an OpenFlow system requires observation of the relative load on the OpenFlow controller. The authors studied the performance of this controller in educational and private networks and concluded that a processing time of 240 µs is sufficient for an educational network, but that private networks require a more powerful controller with a better processing time, or distributed controllers; otherwise, severe packet loss and high packet sojourn times may occur [32]. The OMNeT++ simulation environment was used to evaluate system performance by measuring the relative packet loss and the mean packet sojourn time.
Packet headers are responsible for 30%-40% of DC traffic [33] and network power consumption. Therefore, the authors of [33] proposed a new framework, "Scissor", which replaces redundant header information with a short identity, the "Flow ID". The Flow ID identifies all the packets belonging to the same flow. Trimming of header information is done by micro-architectural hardware that consists of multiplexers to select the fields that will be retained by Scissor, a buffer to hold the complete header temporarily, ternary content-addressable memory (TCAM), and a controller that generates the Flow IDs. Experimental simulations were carried out to test the performance of the proposed framework. The results showed that Scissor introduced substantial latency improvements, as high as 30%. The evaluated power gains were only 20% in the best-case scenario because no Scissor operations were performed within the rack that is responsible for 75% of the DCN traffic [33].
d. Virtualization in DCNs
The SDN approach mitigates the interconnection challenges of cloud DCNs [32]. The characteristics of heterogeneous DCN architectures (VL2, Portland, and Elastic Tree) are represented by OpenFlow rules. These rules are passed to all DCN elements to implement "inter-DCN connectivity" [34]. These rules support VM migration between different DCN schemes without connectivity interruption, based on re-routing mechanisms.
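The essence of such a migration-aware re-routing mechanism is that flow entries pointing at a VM are re-targeted the moment the VM's location changes. The sketch below is our illustration of that idea (not the mechanism of [34]); every helper on the controller object is hypothetical:

```python
# Illustrative sketch: keep a migrating VM reachable by re-pointing flows.
def on_vm_moved(controller, vm_mac, old_loc, new_loc):
    """`old_loc` and `new_loc` are (switch, port) pairs tracked elsewhere."""
    for switch in controller.switches_forwarding_to(vm_mac):
        out_port = controller.next_hop_port(switch, new_loc)
        # Install the updated entry first so no packets are black-holed...
        switch.add_flow(match={"eth_dst": vm_mac}, out_port=out_port)
    # ...then retire the stale entry toward the old hypervisor.
    old_switch, _old_port = old_loc
    old_switch.delete_flow(match={"eth_dst": vm_mac})
```

Installing the new path before deleting the old one is what keeps the migration free of connectivity interruption.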
Live VM migration in a DCN is crucial for disaster recovery, fault tolerance, high availability, dynamic workload balancing, and server consolidation [35]. This reference proposed a network fabric based on OpenFlow, "CrossRoads", that enables both live and offline VM migration across data centers. CrossRoads supports East-West traffic for VM migration within data centers and North-South traffic for VM migration to external clients. The framework consists of a centralized controller in each data center, thus extending the controller-placement problem. Table V presents a couple of implemented SDN approaches to virtualized DCNs.

Table V: Virtualized DCN using SDN.

| Proposed Solution | Objective | Functionality |
|---|---|---|
| Inter-DCN connectivity based on OpenFlow [34] | Mitigate the interconnection challenges in a cloud DCN | Insert new OpenFlow rules to implement inter-DCN connectivity in the cloud; support live VM migration between different DCNs; minimize the connectivity interruption of VMs during the migration process |
| CrossRoads [35] | Facilitate live and offline VM migration across data centers | Support East-West traffic for migration within DCs; support North-South traffic for VM migration to external clients; provide negligible overhead with respect to that of legacy networks |

Experimental results showed that the proposed network fabric has negligible overhead with respect to the default network and outperforms the default network by 30% [35]. In summary, SDN is a promising solution that alleviates most of the challenges faced by cloud DCNs. However, recent research studies have been based on small topologies or emulators. Therefore, coupling SDN to a DCN and a cloud-resource environment and testing the performance of the scheme on a real large network is needed to achieve a better understanding of the performance of SDN-based DCN setups.
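For completeness, the following sketch shows how small such an emulated testbed typically is. It uses Mininet's Python API (Mininet is the emulator cited as [25]; the topology size and controller address are our assumptions) to build a two-level tree of OpenFlow switches attached to an external controller:

```python
# Minimal Mininet testbed: a depth-2 tree of OpenFlow switches driven by a
# remote controller (e.g., one of the Ryu sketches from Section 2).
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

topo = TreeTopo(depth=2, fanout=2)                   # 3 switches, 4 hosts
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(
                  name, ip="127.0.0.1", port=6653))  # assumed controller
net.start()
net.pingAll()    # quick reachability check across all emulated hosts
net.stop()
```

Such emulations are convenient, but as the summary above notes, they leave open how SDN-based DCNs behave at the scale of a real production network.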
4.2 Network as a Service
4.2.1 Service Oriented Architecture
The service oriented architecture (SOA) is the concept of
building a software system based on multiple integrated
logical units. These units known as services allow better
construction and management to solve large problems in
different domains. The basic components of SOA are
elucidated in Fig.8. The architecture depends on the Services
used by the Service User entity. The Service Provider hands
over these services and the Service Registry coordinates the
services’ information and publishes them for the Service User
[36].
SOA satisfies the requirements of different applications by balancing the computational resources. It virtualizes and integrates these resources in the form of service entities. Therefore, the basic aspect of SOA is the "coupling" between different systems: every system has information about the behavior and implementation of its partners, and this information-gathering procedure facilitates the coupling feature in SOA. SOA eliminates the tight coupling and lack of interoperability between diverse middleware in a network. It has been endorsed by the cloud computing (CC) services Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Fig. 9 elucidates SOA in a CC environment. CC implements SOA in its different fields to exploit the resource-virtualization feature. This in turn allows SOA to introduce Network as a Service (NaaS) into CC [7].

Figure 8: Components of SOA.
Figure 9: SOA in a cloud computing environment.

4.2.2 Motivation
Cloud computing offers on-demand provisioning of computational resources for tenants using a pay-as-you-go model and outsources hardware maintenance and purchases [37]. However, these tenants have limited visibility and control over network resources and must resort to overlay networks to complete their tasks. The separation of computation from end-to-end routing in traditional networks in the cloud-computing environment could affect data-plane performance and control-plane flexibility.
These drawbacks can be addressed by NaaS, which provides secure and direct access for tenants to cloud resources and offerings and enables efficient use of the network infrastructure in the data center [37]. NaaS is a new Internet-based model that enables a communication service provider to provide network resources on demand to the user according to a service-level agreement (SLA). NaaS can also be seen, from the service point of view, as an abstraction between network functions and protocols [38].
The top abstraction layers deal with NaaS as a service that uses the network and customizes its capacity. Customization in the lower layers is replaced by resource-management policies. [13] defines NaaS as Telco as a Service (TaaS), which offers a "common network and IT substrate that can be virtualized and combined as a slice".
NaaS is also defined as a Web 2.0 model that provides a software-as-a-service utility by exposing network capabilities (billing, charging, location, etc.) as APIs to third-party application service providers [39]. In NaaS, the owners of the underlying network infrastructure offer virtual network services to a third party. There is a clear demarcation between the roles of infrastructure providers (InPs) and virtual network operators (VNOs): the InP is responsible for the operating processes in the underlying network infrastructure, and the VNO is responsible for the operating processes in the virtual networks that run on top of the physical infrastructure.
The NaaS scenario offers many business incentives, such as higher revenues for InPs and lower capital and operating expenditures for VNOs, because it enables a number of virtual networks to run on the same underlying network infrastructure. Detailed information on the interaction between InPs and VNOs is available in [13].
In summary, NaaS provides the following benefits to operators [40]:
• Faster time to market for NaaS offerings.
• Self-service provisioning.
• Flexibility in upgrading NaaS resources without long-term constraints.
• Payment only for resources used.
• Repair and maintenance as part of the service.
• Greater control in adding, changing, and deleting services.
4.2.3 NaaS and SDN Integration
NaaS is one of the promising opportunities for SDN. NaaS providers can use SDN orchestration systems to obtain a powerful user interface for controlling and viewing network layers. A variety of research studies have proposed NaaS platforms in an SDN environment.

a. Cloud-NaaS Model
[38] introduced a cloud-based network architecture which evaluates the provision, delivery, and consumption of NaaS. The proposed cloud-based network consists of four layers: the network resource pool (NRP), the network operation interface (NOI), the network run-time environment (NRE), and the network protocol service (NPS). The NRP consists of network resources: the bandwidth, queues, and addresses for packet forwarding. The NOI is a standardized API for managing and configuring the NRP. The NRE is the environment that performs billing, resource allocation, interconnection, and reliability assurance for protocol-service instances through service migration in cases of network failure and high load [38]. Finally, the NPS is responsible for describing, managing, and composing the newly implemented network protocols.
The proposed architecture is implemented using the OpenFlow protocol. The implementation consists of two layers: the controller control plane and the network data plane. The first layer is responsible for NRE and NPS functions. It consists of a master controller that distributes the data stream to the slave servers and slave controllers that perform switching, routing, and firewall functions. The data-plane layer contains the OpenFlow switches that perform packet-forwarding services based on controller instructions. The authors in [38] presented NaaS in a cloud-based network, but performance and evaluation studies of the suggested implementation were not carried out.
The limitations on tenants in controlling and configuring networks in current cloud environments provided the motivation for the authors of [41] to implement the CloudNaaS model. The proposed networking framework enables the tenant to access functions for virtual network isolation, addressing, and deployment of middlebox appliances [42] for caching and application acceleration. CloudNaaS consists of a cloud controller and a network controller. The cloud controller manages virtual resources and physical hosts and supports the APIs which set network policies. It also specifies user requirements and transforms them into a communication matrix that resides on the OpenNebula framework. These matrices are compiled into network-level rules by the network controller (a NOX controller). The network controller installs these rules in virtual switches, monitors and manages the configuration of network devices, and decides on the placement of VMs in the cloud. The authors proposed optimization techniques to mitigate the hardware limitations mentioned in [41]. These techniques were implemented in the network controller and were designed to optimize traffic during VM placement and to aggregate forwarding entries that use the same output ports. The implemented CloudNaaS exhibited good performance with an increasing number of provisioning requests and used cloud resources in an effective manner.
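The compilation step can be illustrated with a simplified Python sketch (ours, not the CloudNaaS implementation from [41]): a tenant's communication matrix and a VM placement are turned into network-level match/action rules.

    # Hypothetical simplification of the CloudNaaS compilation step: a
    # tenant communication matrix becomes (match, action) network rules.
    def compile_matrix(comm_matrix, placement):
        """comm_matrix: {(vm_a, vm_b): policy}; placement: vm -> host."""
        rules = []
        for (src, dst), policy in comm_matrix.items():
            if policy == "deny":
                continue                 # no rule installed: traffic dropped
            rules.append({
                "match":  {"src_host": placement[src],
                           "dst_host": placement[dst]},
                "action": "forward",
            })
        return rules

    matrix = {("web", "db"): "allow", ("web", "backup"): "deny"}
    rules = compile_matrix(matrix, {"web": "h1", "db": "h2", "backup": "h3"})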
b. Network Management in NaaS
[43] presents a scalable graph-query design, NetGraph, which supports network-management operations in NaaS modules. NetGraph is implemented as a software module on an SDN platform. The network controller consists of multiple service modules that collect information about the physical and virtual network infrastructure. The NetGraph module resides in the centralized controller, collects information about the network topology to compute a graph of the existing topology, and supports the service modules (NaaS modules) in their query missions. Details on the implementation design and the algorithms used (Dijkstra, TEDI, and APSP) for finding shortest paths in a weighted graph are addressed in [43]. The authors showed that the proposed algorithms have practical compute times and are suitable for centralized architectures.
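The kind of query NetGraph answers can be sketched in a few lines of Python with the networkx library; this is our illustration of a Dijkstra-based path query, not the implementation from [43].

    # Sketch of a NetGraph-style query: service modules ask the
    # controller's topology graph for weighted shortest paths.
    import networkx as nx

    topology = nx.Graph()
    topology.add_weighted_edges_from([
        ("s1", "s2", 1.0), ("s2", "s3", 2.0), ("s1", "s3", 5.0),
    ])  # weights could represent link latency or load

    def shortest_path_query(graph, src, dst):
        # What a NaaS module would call when routing a tenant's traffic.
        return nx.dijkstra_path(graph, src, dst, weight="weight")

    print(shortest_path_query(topology, "s1", "s3"))  # ['s1', 's2', 's3']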
NaaS can be seen as the ultimate connection between SDN and cloud computing. NaaS is a supplementary scheme for SDN: while SDN is responsible for packet forwarding and network administration, NaaS provides application-specific packet processing for cloud tenants [42]. With NaaS schemes, operators can control the bandwidth, routing, and QoS requirements of their data. Eventually, with SDN, operators can leverage current NaaS initiatives and build their own SDN infrastructure [44]. However, integration with existing hardware and software systems and the provision of diverse and efficient APIs are crucial requirements for adopting the SDN and NaaS concepts [40].

Figure 10: SDN challenges (reliability, scalability, low-level interfaces, ASIC and CPU limitations, performance, controller placement, and security).
Although the SDN concept is attracting the attention of IT organizations and networking enterprises and has various applications in DCNs and NaaS, the overall adoption of SDN has encountered various obstacles, such as reliability, scalability, latency, and security challenges. Section V describes these challenges and presents some of the recent solutions proposed in the literature. Overcoming these challenges might assist IT organizations and network enterprises to improve the various opportunities and services offered by SDN.

5. SDN CHALLENGES AND EXISTING SOLUTIONS
Although SDN is a promising solution for IT and cloud providers and enterprises, it faces certain challenges that could hinder its performance and implementation in cloud and wireless networks [45]. Below, a list of SDN challenges and some of their existing solutions are discussed and illustrated in Fig. 10.

5.1 Reliability
The SDN controller must intelligently configure and validate network topologies to prevent manual errors and increase network availability [46]. However, this intelligence can be inhibited by the split-brain problem, which makes the controller liable to become a single point of failure [47, 48].
In legacy networks, when one or more network devices fail, network traffic is routed through alternative or nearby nodes or devices to maintain flow continuity. However, in a centralized controller architecture (SDN) and in the absence of a stand-by controller, only one central controller is in charge of the whole network. If this controller fails, the whole network may collapse. To address this challenge, IT organizations should concentrate on exploiting the main controller functions that can increase network reliability [46]. In case of path/link failure, the SDN controller should have the ability to support multiple-path solutions or fast traffic rerouting onto active links.
If the controller supports technologies such as the Virtual Router Redundancy Protocol (VRRP) and Multi-Chassis Link Aggregation Groups (MC-LAG), these might contribute to increasing network availability. In case of controller failure, it is important that the controller can enable clustering of two or more SDN controllers in an active/stand-by mode; however, memory synchronization between the active and stand-by controllers must be maintained [46].
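A minimal sketch of this active/stand-by pattern, assuming periodic heartbeats and full state replication (our illustration, not a production design), is shown below.

    # Toy active/stand-by controller pair: the standby mirrors the master's
    # state and promotes itself when heartbeats stop arriving.
    import time

    class Controller:
        def __init__(self, name):
            self.name = name
            self.flow_state = {}              # state that must stay in sync
            self.last_heartbeat = time.time()

    def replicate(master, standby):
        # Memory synchronization between active and stand-by controllers.
        standby.flow_state = dict(master.flow_state)

    def elect(master, standby, timeout=3.0):
        # Standby takes over if the master has been silent too long.
        if time.time() - master.last_heartbeat > timeout:
            return standby
        return master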
The authors in [23] showed that a centralized controller architecture will interrupt network traffic and flow requests in case of controller failure. Specifically, they proposed a distributed architecture, SiBF, which consists of an army of rack managers (RMs), one per rack, acting as controllers. Consequently, when the master controller fails, flow requests are handled by another stand-by controller (RM) until the master controller comes back up. In case of switch failure, SiBF installs new mappings (new back-up flow entries) in the ToR switches for each active entry. The packets in the ToR will be routed to their destinations on the alternative paths indicated by the back-up entries. Another suggested solution that can counteract reliability limitations in a centralized architecture is described in [28]. The integration between loss-free multipathing and congestion control is based on a dynamic load-balancing multipathing approach which runs a distributed algorithm in case of controller failure. The algorithm updates the switches with any changes in "path load" on the associated routes in cases of traffic congestion and load imbalance.

5.2 Scalability
The decoupling of the data and control planes distinguishes SDN from a traditional network. In SDN, both planes can "evolve independently" as long as APIs connect them [49], and this centralized view of the network accelerates changes in the control plane. However, decoupling has its own drawbacks. Besides the complexity of defining standard APIs between both planes, scalability limitations may arise. Voellmy et al. [50] concluded that "when the network scales up in the number of switches and the number of end hosts, the SDN controller can become a key bottleneck".
As the bandwidth and the number of switches and flows increase, more requests will be queued to the controller, which may not be able to handle them all. Studies on an SDN controller (NOX) have shown that it can handle 30K requests/s [51]. This may be sufficient for enterprise and campus networks, but it is a bottleneck for data-center networks with high flow rates. In addition, [51] estimates that a large data center consisting of 2 million virtual machines may generate 20 million flows per second. However, current controllers can support approximately 10^5 flows per second in the optimal case [52, 23]. In addition to controller overload, the flow-setup process may impose limitations on network scalability.

Flow setup consists of four steps (sketched in the code example after this list):
1- A packet arrives at a switch and does not match any flow entry.
2- The switch sends a request to the controller to get instructions on how to forward the packet.
3- The controller sends a new flow entry with new forwarding rules back to the switch.
4- The switch updates its entries in the flow table.
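The following framework-neutral Python sketch (ours; the class and method names are hypothetical, not those of any real controller platform) walks through these four steps for a single table miss.

    # Reactive flow setup: a table miss triggers a request to the
    # controller, whose answer is cached in the switch's flow table.
    class Switch:
        def __init__(self, sid):
            self.id = sid
            self.flow_table = {}    # (src, dst) -> output port

    class Controller:
        def __init__(self, routes):
            self.routes = routes    # global view: (src, dst) -> output port

        def decide(self, switch_id, key):
            return self.routes[key]

    def on_packet_in(switch, key, controller):
        """The four flow-setup steps from the list above."""
        if key not in switch.flow_table:                 # 1. table miss
            action = controller.decide(switch.id, key)   # 2-3. ask controller
            switch.flow_table[key] = action              # 4. update table
        return switch.flow_table[key]

    sw = Switch("s1")
    ctl = Controller({("h1", "h2"): "port2"})
    print(on_packet_in(sw, ("h1", "h2"), ctl))  # first packet misses, then cached

The cost visible in this sketch, one controller round trip per new flow, is exactly what the benchmarks discussed next measure.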
The performance of the setup process depends on switch resources (CPU, memory, etc.) and controller (software) performance. The update time of the switch's forwarding information base (FIB) introduces delay in setting up any new flow. Early benchmarks on SDN controllers and switches showed that the controller could respond to a flow-setup request within one millisecond, while hardware switches could "support a few thousand installations per second with a sub-10 ms latency at best" [49].
Flow-setup delays may pose a challenge to network scalability. Furthermore, network broadcast overhead and the proliferation of flow-table entries limit SDN scalability [46]. The SDN platform may cause limited visibility of network traffic, making troubleshooting nearly impossible. Prior to SDN, a network team could quickly spot, for example, that a backup was slowing the network down. The solution would then be to reschedule the backup to a less busy time.
Unfortunately, with SDN, only a tunnel source and a tunnel endpoint with User Datagram Protocol (UDP) traffic are visible, but crucially, one cannot see who is using the tunnel. There is no way to determine whether the problem is the replication process, the email system, or something else. The true top talker is shielded from view by the UDP tunnels, which means that when traffic slows and users complain, pinpointing the problem area in the network is a challenge. With this loss of visibility, troubleshooting is hindered, scalability limitations emerge, and delays in resolution could become detrimental to the business [48, 53]. In order to minimize the proliferation of flow entries, the controller should use header rewrites in the network core; flow entries are then needed only at the ingress and egress switches.
Improved network scalability can also be ensured by enabling VM and virtual-storage migration between sites, as in the IaaS software middleware based on OpenFlow and in "CrossRoads", the OpenFlow-based network fabric, both discussed in previous sections [34, 35]. Another solution to scalability concerns is proposed in "DIFANE" [54]. This is a distributed flow-management architecture that can scale up to meet the requirements (large numbers of hosts, flows, and rules) of large networks.
A viable solution to scalability challenges is proposed in the "CORONET" fault-tolerant SDN architecture, which is scalable to large networks because of the VLAN mechanism installed in local switches [55]. CORONET has fast recovery from switch or link failures, supports scalable networks, uses alternative multipath routing techniques, works with any network topology, and uses a centralized controller to forward packets. It consists of modules responsible for topology discovery, route planning, traffic assignment, and shortest-route path calculation (the Dijkstra algorithm). The main feature of CORONET is the use of VLANs, which can simplify packet forwarding, minimize the number of flow rules, and support scalability.
In another solution, "DevoFlow" [56, 57], micro-flows are managed in the data plane and more massive flows in the controller, meaning that controller load will decrease and network scalability will be maximized. This approach minimizes the cost of controller visibility associated with every flow setup and reduces the effect of flow-scheduling overhead, thus enhancing network performance and scalability.
Finally, [50] describes a scalable SDN control framework, McNettle, which is executed on shared-memory multicore servers and is based on Nettle [58]. Experiments showed that McNettle could serve 5000 switches with a single controller with 46 cores and could handle 14 million flows per second with latency below 200 μs for light loads and 10 ms for loads consisting of up to 5000 switches [50].

5.3 Performance under Latency Constraints
SDN is a flow-based technique, and therefore its performance is measured by two metrics: flow-setup time and the number of flows per second that the controller can handle [46]. There are two ways to set up a flow: proactive and reactive. In proactive mode, flow setup takes place before packet arrival at the switch, and therefore, when a packet arrives, the switch already knows how to deal with it. This mode has negligible delay and removes the limit on the number of flows per second that can be handled by the controller. In general, the SDN controller fills the flow table with the maximum number of possible flows.
In reactive mode, flow setup is performed when a packet arriving at the switch does not match any of the switch entries. The controller then decides how to process that packet, and the instructions are cached onto the switch. As a result, reactive flow-setup time is the sum of the processing time in the controller and the time for updating the switch as the flow changes. Therefore, flow initiation adds overhead that limits network scalability and introduces reactive flow-setup delay [59, 60].
In other words, a new flow setup requires a controller to agree on the flow of traffic, which means that every flow now needs to go through the controller, which in turn instantiates the flow on the switch [61, 62, 63]. However, a controller is an application running on a server OS over a 10 Gb/s link (with a latency of tens of milliseconds), and it is in charge of controlling a switch that could be switching 1.2 Tb/s of traffic at an average latency of 1 μs. Moreover, the switch may deal with 100K flows, with an average of 30K being dropped. Therefore, a controller may take tens of milliseconds to set up a flow, while the life of a flow transferring 10 MB of data (a typical Web page) is 10 ms [64, 52].
The authors in [57] carried out various setup experiments to test the throughput and latency of various controllers. They varied the number of switches, the number of threads, and the controller workload. Based on these experiments and simulations, they concluded that adding more threads beyond the number of switches does not improve latency and that serving a number of switches larger than the number of available CPUs increases controller response time [60]. The experiments also showed that controller response time varies between 4 and 30 ms for different numbers of switches with 4 threads and 2^12 requests on the fly. However, the experimental setup and assumptions described in [60] need to be verified in realistic network environments.
Dealing with 100K flows requires switch ASICs with this kind of flow capability. Current ASICs do not have this capability, and therefore the flow table must be used as a cache [64]. In conclusion, the flow-setup rate is anemic at best on existing hardware [64], and therefore only a limited number of flows per second are possible. The O(n) linear lookup of software tables cannot approach the O(1) lookup of a hardware-accelerated TCAM in a switch, causing a drop in the packet-forwarding rate for large wildcard table sizes [62].
To overcome performance limitations, the key factors that affect flow-setup time should be considered. As mentioned in [46], these key factors are the processing and I/O performance of the controller. Early benchmarks suggested that controller performance can be increased considerably by well-known optimization techniques such as I/O batching [60]. Another viable solution to alleviate the performance challenge was proposed under the name Maestro [65, 66].
Maestro uses two basic parameters: the "input batching threshold" (IBT), a tunable threshold value that determines the stage for creating a flow-task process to handle the flow request, and the "pending raw-packet threshold" (PRT), which determines the allowable number of pending packets in the queue to be processed. Calibration of these parameters will identify suitable values that decrease latency and maximize network throughput according to the network state. As the values of PRT and IBT increase, throughput increases and delays decrease [65]. Optimization techniques should be used to find the optimal range of values for PRT and IBT.
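A toy illustration of how these two thresholds might interact (our Python sketch, not Maestro's code; the default values are arbitrary) follows.

    # Toy model of Maestro's two tuning knobs [65]: requests accumulate
    # until the input-batching threshold (IBT) is reached, and are shed
    # once the pending raw-packet threshold (PRT) is exceeded.
    class BatchingController:
        def __init__(self, ibt=32, prt=1024):
            self.ibt, self.prt = ibt, prt
            self.pending = []

        def on_request(self, request):
            if len(self.pending) >= self.prt:
                return "drop"                   # queue full: shed load
            self.pending.append(request)
            if len(self.pending) >= self.ibt:   # enough work for one task
                batch, self.pending = self.pending, []
                self.process(batch)
            return "queued"

        def process(self, batch):
            pass  # compute and install flow entries for the whole batch

Batching amortizes the per-request overhead across the batch, which is why throughput rises with IBT, at the cost of the queuing delay that the calibration mentioned above must bound.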
Finally, the DevoFlow and McNettle architectures described previously can be considered as feasible solutions to reduce network latency. McNettle implementations have shown that its improvements result in a 50-fold reduction in controller latency [50].

5.4 Controlling the Data Path between the ASIC and the CPU
Although the control data path in a line-card ASIC is fast, the data path between the ASIC and the CPU is not used in the frequent operations of a traditional switch, and therefore it is considered a slow path. The ProCurve 5406zl Ethernet switch has a bandwidth of 300 Gb/s, but the measured loopback bandwidth between the ASIC and the CPU is 35 Mb/s [57]. Note also that the slow switch CPU limits the bandwidth between the switch and the controller. For instance, the bandwidth of the flow-setup payload measured between the 5406zl switch and the OpenFlow controller appears to be 10 Mb/s [57]. However, the DIFANE [54] architecture leverages these limitations by distributing the OpenFlow wildcard rules among various switches to ensure that forwarding decisions occur in the data plane.
Controlling the data path between the ASIC and the CPU is not a traditional operation [61]. OpenFlow specifies three counters for each flow-table entry: the number of matches, the number of packet bytes in these matches, and the flow duration. Each counter is specified as 64 bits, which adds 192 bits (24 bytes) of extra storage per table entry [67]. OpenFlow counters and the logic to support them add significant ASIC complexity and area and place more burden on the CPU [63, 67, 68]. If counters are implemented in the ASIC hardware, it may be very difficult to change their function as the SDN protocol evolves, because this would require re-designing the ASIC or deploying new switch hardware [67]. Moreover, transferring the local counters from the ASIC to the controller can dramatically limit SDN performance.
In addition, adding SDN support to switch ASICs means finding space for structures not typically found on an ASIC; the per-flow byte counters used by OpenFlow could be the largest such structures. In other words, the counters take space from the ASIC area, and this area is considered precious because designing an ASIC costs a great deal of money and time. Because the cost of switch ASICs depends on their area, there is an upper limit on the area of a cost-effective ASIC [67], and this places limits on the sizes of on-chip memory structures such as TCAMs that support flow-table entries and per-entry counters. Any silicon area allocated to counters will not be available for look-up tables [67].
As is well known, switches have a CPU to manage the ASICs, but the bandwidth between the two is limited [67]. Therefore, storing the counters in the CPU and DRAM instead of in the ASIC would simplify the path from the counters to the controller and minimize the overhead on the controller to access these counters. A feasible solution along these lines, software-defined counters (SDCs), was suggested in [67]; implementing counters in software does not require re-designing the ASIC and can support more innovation. In the proposed SDC, the ASIC does not contain any counters, but it does generate event records that are added to a buffer. Whenever a buffer block is full, the ASIC moves it to the CPU. The CPU extracts the records and updates its counters, which are stored on the attached DRAM. SDC proposes two system designs:
i) An SDC switch in which the counter is moved out of the ASIC and replaced by buffer blocks.
ii) An SDC switch in which the CPU is installed on the ASIC.
Although the second design requires additional ASIC space, it minimizes the bandwidth between the data plane and the CPU.
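The first SDC design can be modeled roughly as follows (our Python sketch of the buffer-block hand-off, not the authors' implementation; the block size is arbitrarily small for illustration).

    # Rough model of a software-defined counter (SDC) [67]: the "ASIC"
    # emits event records into buffer blocks; the switch "CPU" drains full
    # blocks and updates per-flow counters kept in DRAM.
    from collections import defaultdict

    BLOCK_SIZE = 4        # records per buffer block (tiny, for illustration)
    buffer_block = []     # filled by the ASIC side
    dram_counters = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def asic_emit(flow_id, nbytes):
        buffer_block.append((flow_id, nbytes))
        if len(buffer_block) == BLOCK_SIZE:   # full block: hand off to CPU
            cpu_drain(buffer_block[:])
            buffer_block.clear()

    def cpu_drain(block):
        for flow_id, nbytes in block:
            c = dram_counters[flow_id]
            c["packets"] += 1
            c["bytes"] += nbytes

    for _ in range(8):
        asic_emit("flow-1", 1500)
    print(dram_counters["flow-1"])   # {'packets': 8, 'bytes': 12000}

Moving the counters into CPU-attached DRAM trades ASIC area for a modest, batched load on the ASIC-to-CPU path, which is the bandwidth argument made above.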
5.5 Use of Low-Level Interfaces between the Controller and the Network Device
Although SDN simplifies network management by developing control applications with simple interfaces to determine high-level network policies, the underlying SDN framework needs to translate these policies into low-level switch configurations [69]. The controllers available today provide a programming interface that supports a low-level, imperative, event-driven model. The interface reacts to network events such as packet arrivals and link-status updates by installing and uninstalling individual low-level packet-processing rules, rule by rule and switch by switch [70]. In such a situation, programmers must constantly consider whether uninstalling switch policies will affect other future events monitored by the controller. They must also coordinate multiple asynchronous events at the switches to perform even simple tasks.
In addition, this interface generates a time-absorption problem and requires detailed knowledge of the software module or hardware device that is performing the required services. Many researchers are therefore developing programming languages that enable the programmer to describe network behaviour using high-level abstractions, leaving the run-time system and compiler to take care of implementation details. [71] proposes FML, a high-level programming language consisting of operators that allow or deny flows while coordinating the flows through firewalls and maintaining QoS. However, it is an inflexible language because it cannot redirect or move flows as they are processed [72].
Finally, Flog, an event-driven logic programming language, was proposed in [70]. Introducing logic programming to SDN is useful for processing network statistics and incremental controller-state updates. The main feature that differentiates Flog from other languages is its Ethernet learning switch. The learning process consists of monitoring, grouping, and storing the packets that arrive at a switch and then transferring this information to a learning database. Afterwards, the policy generator creates low-level rules that flood all arriving packets, and then, based on the information learned, the policy creates a precise high-level forwarding rule.

5.6 Controller Placement Problem
The controller placement problem influences every aspect of a decoupled control plane, from flow-setup latencies to network reliability, fault tolerance, and performance metrics. For example, long-propagation-delay wide-area networks (WANs) limit availability and convergence time. This has practical implications for software design, affecting whether controllers can respond to events in real time or whether they must push forwarding actions to forwarding elements in advance [73]. The problem includes controller placement with respect to the available network topology and the number of controllers needed. The user defines the metrics (latency, increase in the number of nodes, etc.) that control the placement of the controller in a network.
Random placement for a small k value in the k-median problem, a clustering-analysis algorithm, will result in an average latency between 1.4x and 1.7x larger than that of the optimal placement [73]. Finding the optimal controller placement is a hot SDN research topic, especially for wide-area SDN deployments, because they require multiple controllers and their placement affects every metric in the network. Improving reliability is important because network failures cause disconnections between the control and forwarding planes and could disable some of the switches.
A reliability-aware controller placement problem has been proposed in [74]. The main objective of the problem can be understood through the following question: how should a given number of controllers be placed in a certain physical network such that a predefined objective function is optimized? The authors in [74] considered reliability as a placement metric, reflected by the percentage of valid control paths. They developed an optimization model that maximized the expected percentage of valid control paths. This percentage is affected by the location of the controller on one of the candidate nodes, the number of controller-to-controller adjacencies, the available number of controllers, and the reservation of the switches on the controller. Finally, a random placement algorithm and greedy algorithms have been suggested as heuristics to solve the reliable controller placement problem.
[75] states that any failure that disconnects the forwarding plane from the controller may lead to serious performance degradation. Based on this observation, the authors in [75] described a resiliency (path-protection)-aware controller placement problem. They considered connection resiliency between the controller and the switch as a placement metric, reflected by the ability of the switches to protect their paths to the controller. The proposed heuristics aimed to maximize the possibility of fast failover based on resilience-aware controller placement and control-traffic routing in the network. These heuristics consisted of two algorithms for choosing the best controller location and maximizing the connection-resiliency metric: the optimized placement algorithm and the approximation (greedy) placement algorithm.
Finally, the authors of [73] developed a latency-aware controller placement problem. Their objective was not to find the optimal solution for the latency-aware controller placement problem, but to provide an initial analysis for further study of the formulation of fundamental design problems. Therefore, the problem aimed to minimize the average propagation latency based on suitable controller placement. The minimization was based on an optimization model generated on the basis of the minimum k-median problem [73].
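For small topologies, the latency-aware formulation can be explored with a brute-force search over the k-median objective. The following Python sketch (ours, using networkx; not the evaluation code from [73]) returns the placement that minimizes the average switch-to-controller shortest-path distance; it is feasible only for small graphs, which is precisely why [73] and its successors study heuristics.

    # Brute-force average-latency controller placement (k-median spirit).
    from itertools import combinations
    import networkx as nx

    def place_controllers(graph, k):
        dist = dict(nx.all_pairs_dijkstra_path_length(graph, weight="weight"))
        nodes = list(graph.nodes)
        best, best_cost = None, float("inf")
        for candidate in combinations(nodes, k):
            # each switch talks to its nearest controller in the set
            cost = sum(min(dist[n][c] for c in candidate) for n in nodes)
            if cost < best_cost:
                best, best_cost = candidate, cost
        return best, best_cost / len(nodes)

    g = nx.path_graph(6)   # toy WAN: 6 nodes in a line, unit-latency links
    print(place_controllers(g, 2))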
5.7 Security
Based on statistical studies carried out by IT organizations [76], 12% of respondents in IT business technologies stated that SDN has security challenges, and 31% were undecided whether SDN is a less secure or a more secure network paradigm than others. Clearly, IT organizations believe that SDN may pose certain security challenges. According to these studies, SDN security risks emerge from the absence of integration with existing security technologies and the inability to inspect every packet. Furthermore, improving the intelligence of the controller software may increase the controller's vulnerability to hackers and enlarge its attack surface. If hackers access the controller, they will damage every aspect of the network, and it will be "game over" [76].
Increasing SDN security requires the controller to support authentication and authorization classes for the network's administrators. In addition, leveraging the impact of security requires administrators to apply the same traffic-management policies to prevent access to SDN control traffic. Additional security-aware solutions are the implementation of an intelligent access control list (ACL) in the controller to filter packets and complete isolation between the tenants sharing the infrastructure. Finally, the controller should be able to alert the administrators in case of any sudden attack and to limit control communication during an attack [46].
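A controller-side ACL check of the kind suggested in [46] could look like the following Python sketch (ours; field names, tenants, and subnets are illustrative), with a default-deny policy providing tenant isolation.

    # Sketch of an access-control list enforced at the controller:
    # control-plane requests are checked before any flow is installed.
    ACL = [
        {"tenant": "tenant-a", "dst_subnet": "10.0.1.",  "action": "allow"},
        {"tenant": "*",        "dst_subnet": "10.0.99.", "action": "deny"},
    ]

    def check_acl(tenant, dst_ip):
        for entry in ACL:
            if entry["tenant"] in (tenant, "*") and \
                    dst_ip.startswith(entry["dst_subnet"]):
                return entry["action"]
        return "deny"   # default-deny keeps tenants isolated

    print(check_acl("tenant-a", "10.0.1.5"))    # allow
    print(check_acl("tenant-b", "10.0.99.7"))   # deny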
SDN is a promising technology for computer networks and data-center networks, but it still lacks standardization policies. The current SDN architecture does not include standards for understanding topology, delay, or loss. Other features that are not available include loop detection and the ability to fix errors in a state. SDN does not support horizontal communication between network nodes to enable collaboration between devices [77].

As SDN gains in popularity, several researchers and enterprises have developed various SDN initiatives. They have proposed SDN prototypes, development tools, and languages for OpenFlow and SDN controllers and SDN cloud-computing networks [78]. Section VI covers some recent SDN implementations and tools.
6. RESEARCH INITIATIVES FOR SDN
SDN enables network owners and operators to build a simpler, customizable, programmable, and manageable network. According to the network research community, SDN will alter the future of networking and will import new innovations into the market [79]. With this in mind, a number of research initiatives have proposed SDN prototypes and applied them to DCNs, wireless networking, software-defined radio, enterprises, and campus networks.

6.1 SDN Prototypes
The concept of SDN emerged in 2005, when the authors of [80] proposed a 4D approach to network control and management. Afterwards, a new network architecture, Ethane, which provides network control using centralized policies, was described in [81]. Ethane uses a centralized controller that holds network policies to control flow routing. It also uses Ethane switches which receive instructions from the controller to forward packets to their destinations. Policies are programmed using a flow-based security language based on DATALOG. Ethane was deployed in the Stanford computer science department to serve 300 hosts and in a small business to serve 30 hosts. Its deployment was an experiment to evaluate central network management, and it showed that a single controller could support 10,000 new flow requests per second for small network designs and that a distributed set of controllers could be deployed for large network topologies. Ethane has two limitations that prevent it from being implemented using current traditional network techniques: it requires knowledge about network users and nodes, and it demands control over routing at the flow level [82]. These limitations were addressed by NOX, a network operating-system framework. Under NOX, applications can access the source and destination of each event, and routing modules can perform constrained routing computations. NOX makes it possible to build a scalable network with flexible control because it uses flows as its intermediate granularity [82].

6.2 Cloud Computing and Virtualization in SDN
Other recent studies [83] have developed an SDN-based controller framework, Meridian, for cloud-computing networks. Meridian provides a network services model that enables users to construct and manage a suitable logical topology for their cloud workloads. In addition, it allows virtual implementations on the underlying physical networks. Inspired by SDN, Meridian is composed of three logical layers: the network model and API layer, network orchestration, and interfaces to network devices. The first layer provides interaction with the network through declarative and query APIs; the declarative API creates the shape of the multi-virtual-machine application, while the query API supports requests for topology views and network statistics. The orchestration layer provides services such as a global view of the data-center topology, routing algorithms, and scheduling of network configuration and control functions. The lowest layer is responsible for creating virtual networks. In addition to the importance of Meridian in supporting a service-level model, it is considered an initial prototype of SDN in the cloud. Researchers would like to explore the performance of Meridian for sensitive workloads, the scalability of this framework to support large networks, and its ability to recover failed plans [83].

6.3 SDN Tools and Languages
Various tools and languages are used to monitor and implement SDN. Certain SDN initiatives have focused on Onix, a platform for implementing SDN controllers as a distributed system for flexible network management [84]. Other studies have presented a network debugging tool, VeriFlow [85], which is capable of discovering faults in SDN application rules and hence preventing them from disrupting network performance. Additional initiatives [86] have developed a routing architecture, RouteFlow, which is inspired by SDN concepts and provides interaction between commercial hardware performance and flexible open-source routing stacks. Hence, it opens the door to migration from traditional IP deployments to SDN.
In addition to recent studies that developed physical SDN prototypes, other researchers [62] have provided an efficient SDN innovation, Mininet. Mininet is a virtual emulator which provides an environment for prototyping any SDN idea. Whenever the prototype evaluation is acceptable, it can be deployed in research networks and for general use [62]. However, Mininet's services are hindered by certain limitations: poor performance at high loads and its lightweight virtualization approach.
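As an example of Mininet's Python API, the following short script (based on the usage documented by the Mininet project [25, 62]) emulates one OpenFlow switch with three hosts and verifies connectivity; it assumes a machine with Mininet installed.

    # Minimal Mininet topology: one switch, three hosts, one ping test.
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo

    net = Mininet(topo=SingleSwitchTopo(k=3))  # 1 switch, 3 hosts
    net.start()
    h1, h2 = net.get('h1', 'h2')
    print(h1.cmd('ping -c 1', h2.IP()))        # h1 pings h2 via the switch
    net.stop()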
Research has also been directed toward developing control support for SDN and describing new language approaches to programming OpenFlow networks. [72] proposes a design for Frenetic, a high-level language for programming OpenFlow architectures. Frenetic consists of a query language based on SQL syntax, a stream-processing language, and a specification language for packet forwarding. With the combination of these three languages, Frenetic simplifies the programmer's task by enabling him/her to produce forwarding policies as high-level abstractions. It addresses some of OpenFlow's shortcomings, which are due to the lack of consistency between installing a rule in the switches and allowing other packets to be processed, in addition to the lack of synchronization between the packet arrival time and the rule installation time. It consists of two abstraction levels: the source-level operators that deal with network traffic, and the run-time system responsible for installing rules into switches.
In addition to the Frenetic language that can program OpenFlow networks, a number of other OpenFlow programming languages have been proposed, such as Procera [87, 88] and Nettle [58]. These languages are based on functional reactive programming, facilitate network management, and support event-driven networks.
6.4 SDN Vendors
[89] describes the Floodlight controller platform. It is an enterprise-class, Apache-licensed, Java-based OpenFlow controller that supports OpenStack orchestration and virtual and physical switches and manages OpenFlow and non-OpenFlow networks. In addition, NEC has designed a network virtualization architecture encapsulated as NEC ProgrammableFlow. The ProgrammableFlow technology provides management of the networking fabric. NEC has created custom physical switches, the PF5240 and PF5820, to facilitate the ProgrammableFlow network architecture. The ProgrammableFlow controller can control any ProgrammableFlow or OpenFlow switch in a virtual network [90]. [91] provides an option list of existing OpenFlow controllers (NOX, Beacon [92], Helios, etc.) and switches (software and hardware options such as Open vSwitch, Pronto, etc.) for designing SDN prototypes.
Besides these initiatives, researchers and enterprises have designed virtualization platforms for SDN [93]. NICIRA has created a complete SDN solution: the Network Virtualization Platform (NVP). It can be injected over existing network infrastructure or designed into emerging network fabrics. The NVP system works in collaboration with Open vSwitches that are configured in the hypervisor or used as gateways to legacy VLANs. Network virtualization is tasked to the Controller Cluster, an array of control structures running on servers separate from the network infrastructure. Control is separated not only from the network devices, but also from the network itself. Each cluster is capable of controlling thousands of Open vSwitch devices. The NVP architecture combines control and switching abstractions to provide a versatile network solution [94].
Finally, IT organizations and enterprises are focusing on applying SDN not only to data-center networks (LANs), but also to wireless local-area networks (WLANs) and wide-area networks (WANs), where OpenFlow will function as an overlay over L2 and L3 virtual private networks (VPNs). HP has announced that an SDN-centralized controller can minimize the cost and complexity of implementing WAN optimization schemes. A prototype of SDN, Odin, was described in [95] and was intended to enable network operators to deploy WLAN services as network applications. Odin consists of a master, agents, and applications. The master runs as an application on the OpenFlow controller, controls the agents, and updates the forwarding tables of access points (APs) and switches; the agents run on the APs and collect information about the clients.

7. CONCLUSIONS
SDN aims to simplify network architecture by centralizing the control-plane intelligence of L2 switching and L3 routing equipment. It also markets network hardware as a product service and forms the basis of network virtualization. The generalized SDN architecture consists of the SDN controller and SDN-compatible switches. Because SDN makes it possible to build programmable and agile networks, academic researchers and network engineers are exploiting its flexibility and programmability to generate strategies that simplify the management of data-center LANs and WANs and make them more secure. Besides, SDN supports NaaS, the new Internet-based model that acts as a link between cloud computing and SDN. While SDN manages forwarding decisions and network administration, NaaS provides packet-processing applications for cloud tenants. In addition, researchers are proposing various SDN prototypes that will serve DCNs, wireless networks, enterprises, and campus networks. Despite all the promising opportunities that accompany SDN, it encounters certain technical challenges that might hinder its functionality in cloud computing and enterprises. Therefore, IT organizations and network enterprises should be aware of these challenges and explore the functionality of the SDN architecture to counter these criticisms.

8. REFERENCES
[1] Anderson, T., Peterson, L., Shenker, S., Turner, J., "Overcoming the Internet Impasse through Virtualization," Computer, vol. 38, no. 4, pp. 34–41, 2005.
[2] HP, "Deliver HP Virtual Application Networks," http://h17007.www1.hp.com/docs/interopny/4AA4-3872ENW.pdf, 2012.
[3] HP, "Realizing the Power of SDN with HP Virtual Application Networks," http://h17007.www1.hp.com/docs/interopny/4AA4-3871ENW.pdf, 2012.
[4] Brocade Communications Systems, "Network Transformation with Software-Defined Networking and Ethernet Fabrics," California, USA, http://www.brocade.com/downloads/documents/positioning-papers/network-transformation-sdn-wp.pdf, 2012.
[5] Xia, W.F., Foh, C.H., Xie, H.Y., Niyato, D., Wen, Y.G., "A Survey on Software-Defined Networking," (In Submission) IEEE Journal of Communications Surveys and Tutorials, May 2013.
[6] Mendonça, M., Nunes Astuto, B., Nguyen, X.N., Obraczka, K., Turletti, T., "A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks," (In Submission) Networking and Telecommunication, HAL, INRIA, June 2013.
[7] Duan, Q., Yan, Y.H., Vasilakos, A.V., "A Survey on Service-Oriented Network Virtualization toward Convergence of Networking and Cloud Computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, December 2012.
[8] Big Switch Networks, The Open SDN Architecture, http://www.bigswitch.com/sites/default/files/sdn_overview.pdf, 2012.
[9] Shin, M.K., Nam, K.H., Kim, H.J., "Software-Defined Networking (SDN): A Reference Architecture and Open APIs," Proceedings, 2012 International Conference on ICT Convergence (ICTC), pp. 360–361, 15–17 October 2012.
[10] IBM, Software-Defined Networking: A New Paradigm for Virtual, Dynamic, Flexible Networking, October 2012.
[11] OpenFlow Switch Consortium, OpenFlow Specification v1.3.0, https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.3.0.pdf, 2012.
[12] Ferro, G., "OpenFlow and Software-Defined Networking," http://etherealmind.com/software-defined-networking-openflow-so-far-and-so-future/, November 2012.
[13] NICIRA, "It's Time to Virtualize the Network," http://www.netfos.com.tw/PDF/Nicira/It%20is%20Time%20To%20Virtualize%20the%20Network%20White%20Paper.pdf, 2012.
[14] Lippis, N.J., "Network Virtualization: The New Building Blocks of Network Design," https://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns431/ns725/net_implementation_white_paper0900aecd80707cb6.pdf, 2007.
[15] Wen, H., Tiwary, P.K., Le-Ngoc, T., "Network Virtualization: Overview," Wireless Virtualization, SpringerBriefs, pp. 5–10, 2013.
[16] Carapinha, J., Feil, P., Weissmann, P., Thorsteinsson, S., Etemoglu, Ç., Ingþórsson, Ó., Çiftçi, S., Melo, M., "Network Virtualization: Opportunities and Challenges for Operators," Future Internet-FIS 2010, pp. 138–147, 2012.
[17] Khan, A., Zugenmaier, A., Jurca, D., Kellerer, W., "Network Virtualization: A Hypervisor for the Internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, January 2012.
[18] Leon-Garcia, A., Mason, L.G., "Virtual Network Resource Management for Next-Generation Networks," IEEE Communications Magazine, vol. 41, no. 7, pp. 102–109, July 2003.
[19] Network Virtualization Study Group, "Advanced Network Virtualization: Definition, Benefits, Applications, and Technical Challenges," https://nvlab.nakao-lab.org/nv-study-group-white-paper.v1.0.pdf, 2012.
[20] Oltsik, J., Laliberte, B., IBM and NEC Bring SDN/OpenFlow to Enterprise Data Center Networks, Enterprise Strategy Group Tech Brief, 2012.
[21] Murphy, A., Keeping Your Head Above the Cloud: Seven Data Center Challenges to Consider Before Going Virtual, F5 Networks, U.S.A., 2008.
[22] Vello Systems, "Optimizing Cloud Infrastructure with Software-Defined Networking," http://www.margallacomm.com/downloads/VSI_11Q4_OPN_GA_WP_01_1012_Booklet.pdf, 2012.
[23] Macapuna, C.A.B., Rothenberg, C.E., Magalhaes, M.F., "In-Packet Bloom Filter-Based Data-Center Networking with Distributed OpenFlow Controllers," IEEE 2010 GLOBECOM Workshops, pp. 584–588, 6–10 December 2010.
[24] Thanh, N.H., Nam, P.N., Truong, T.-H., Hung, N.T., Doanh, L.K., Pries, R., "Enabling Experiments for Energy-Efficient Data-Center Networks on an OpenFlow-Based Platform," Proceedings, 2012 Fourth International Conference on Communications and Electronics (ICCE), pp. 239–244, 1–3 August 2012.
[25] Stanford University, "Mininet: Rapid Prototyping for Software-Defined Networks," http://yuba.stanford.edu/foswiki/bin/view/OpenFlow/Mininet, 2012.
[26] Vu, T.H., Nam, P.N., Thanh, T., Hung, L.T., Van, L.A., Linh, N.D., Thien, T.D., Thanh, N.H., "Power-Aware OpenFlow Switch Extension for Energy Saving in Data Centers," Proceedings, 2012 International Conference on Advanced Technologies for Communications (ATC), pp. 309–313, 10–12 October 2012.
[27] Baker, C., Anjum, A., Hill, R., Bessis, N., Kiani, S.L., "Improving Cloud Datacenter Scalability, Agility and Performance Using OpenFlow," Proceedings, 2012 Fourth International Conference on Intelligent Networking and Collaborative Systems (INCoS), pp. 20–27, 19–21 September 2012.
[28] Fang, S., Yu, Y., Foh, C.H., Aung, K.M.M., "A Loss-Free Multipathing Solution for Data Center Network Using Software-Defined Networking Approach," IEEE Transactions on Magnetics, vol. 49, no. 6, pp. 2723–2730, June 2013.
[29] Yu, Y., Aung, K.M.M., Tong, E.K.K., Foh, C.H., "Dynamic Load Balancing Multipathing for Converged Enhanced Ethernet," 2010 IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), pp. 403–406, 17–19 August 2010.
[30] Sun, L., Suzuki, K., Yasunobu, C., Hatano, Y., Shimonishi, H., "A Network Management Solution Based on OpenFlow Towards New Challenges of Multitenant Data Centers," Proceedings, 2012 Ninth Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT), pp. 1–6, 5–9 November 2012.
[31] Kanagavelu, R., Mingjie, L.N., Khin, M.M., Lee, F.B.-S., Heryandi, H., "OpenFlow-Based Control for Re-Routing with Differentiated Flows in Data Center Networks," Proceedings, 2012 18th IEEE International Conference on Networks (ICON), pp. 228–233, 12–14 December 2012.
[32] Pries, R., Jarschel, M., Goll, S., "On the Usability of OpenFlow in Data Center Environments," Proceedings, 2012 IEEE International Conference on Communications (ICC), pp. 5533–5537, 10–15 June 2012.
[33] Kannan, K., Banerjee, S., "Scissors: Dealing with Header Redundancies in Data Centers through SDN," Proceedings, 2012 Eighth International Conference and 2012 Workshop on Systems Virtualization Management (SVM), Network and Service Management (CNSM), pp. 295–301, 22–26 October 2012.
[34] Boughzala, B., Ben Ali, R., Lemay, M., Lemieux, Y., Cherkaoui, O., "OpenFlow Supporting Inter-Domain Virtual Machine Migration," Proceedings, 2011 Eighth International Conference on Wireless and Optical Communications Networks (WOCN), pp. 1–7, 24–26 May 2011.
[35] Mann, V., Vishnoi, A., Kannan, K., Kalyanaraman, S., "CrossRoads: Seamless VM Mobility across Data Centers through Software-Defined Networking," 2012 IEEE Network Operations and Management Symposium (NOMS), pp. 88–96, 16–20 April 2012.
[36] Santhi, R.K., "A Service Based Approach for Future Internet Architectures," PhD Thesis, University of Agder, 2010.
[37] Costa, P., Migliavacca, M., Pietzuch, P., Wolf, A., "NaaS: Network-as-a-Service in the Cloud," 2012 USENIX Workshop on Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE '12), San Jose, CA, April 2012.
[38] Feng, T., Bi, J., Hu, H.Y., Cao, H., "Networking as a Service: A Cloud-Based Network Architecture," Journal of Networks, vol. 6, no. 7, pp. 1084–1090, July 2011.
[39] Gutierrez, M.A.F., Ventura, N., "Mobile Cloud Computing Based on Service-Oriented Architecture: Embracing Network as a Service for Third-Party Application Service Providers," Proceedings, ITU Kaleidoscope 2011: The Fully Networked Human—Innovations for Future Networks and Services (K-2011), pp. 1–7, 12–14 December 2011.
[40] TERENA Network Architects Workshop, Network as a Service Principle: Virtual CPE as a Service, http://www.terena.org/activities/netarch/ws1/slides/reijs-NaaS-221112-NAW.pdf, November 2012.
[41] Benson, T., Akella, A., Shaikh, A., Sahu, S., "CloudNaaS: A Cloud Networking Platform for Enterprise Applications," Proceedings, Second ACM Symposium on Cloud Computing (SOCC '11), New York, NY, 2011.
[42] Costa, P., "Bridging the Gap between Applications and Networks in Data Centers," ACM SIGOPS Operating Systems Review, vol. 47, no. 1, pp. 3–8, January 2013.
[43] Raghavendra, R., Lobo, J., Lee, K.-W., "Dynamic Graph Query Primitives for SDN-Based Cloud Network Management," Proceedings, First Workshop on Hot Topics in Software-Defined Networks (HotSDN '12), New York, NY, pp. 97–102, 2012.
[44] Khan, F., NaaS as Step towards SDN, http://www.telecomlighthouse.com/naas-as-step-towards-sdn/, March 2013.
[45] Sharkh, M.A., Jammal, M., Shami, A., Ouda, A., "Resource Allocation in a Network-Based Cloud Computing Environment: Design Challenges," IEEE Communications Magazine, vol. 51, no. 11, pp. 46–52, November 2013.
[46] Ashton, Metzler, and Associates, Ten Things to Look for in an SDN Controller, Technical Report, 2013.
[47] Ferro, G., OpenFlow and Software-Defined Networking, http://etherealmind.com/software-defined-networking-openflow-so-far-and-so-future/, November 2012.
[48] Yazıcı, V., Sunay, O., Ercan, A.O., "Controlling a Software-Defined Network via Distributed Controllers," NEM Summit, Istanbul, Turkey, http://faculty.ozyegin.edu.tr/aliercan/files/2012/10/YaziciNEM12.pdf, October 2012.
[49] Yeganeh, S.H., Tootoonchian, A., Ganjali, Y., "On Scalability of Software-Defined Networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, February 2013.
[50] Voellmy, A., Wang, J.C., "Scalable Software-Defined Network Controllers," Proceedings, ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pp. 289–290, 2012.
[51] Tavakoli, A., Casado, M., Koponen, T., Shenker, S., "Applying NOX to the Data Center," Proceedings, Ninth ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets-IX), October 2009.
[52] Enterasys Networks, "Software-Defined Networking (SDNs) in the Enterprise," http://www.enterasys.com/company/literature/SDN_tsbrief.pdf, October 2012.
[53] Bae, H., "SDN Promises Revolutionary Benefits, but Watch Out for the Traffic Visibility Challenge," http://www.networkworld.com/news/tech/2013/010413-sdn-traffic-visibility-265515.html, January 2013.
[54] Yu, M.L., Rexford, J., Freedman, M.J., Wang, J., "Scalable Flow-Based Networking with DIFANE," Proceedings, ACM SIGCOMM 2010 Conference (SIGCOMM '10), New York, NY, pp. 351–362, 2010.
[55] Kim, H.J., Santos, J.R., Turner, Y., Schlansker, M., Tourrilhes, J., Feamster, N., "CORONET: Fault Tolerance for Software-Defined Networks," Proceedings, 2012 20th IEEE International Conference on Network Protocols (ICNP), pp. 1–2, October 30–November 2, 2012.
[56] Curtis, A., Mogul, J., et al., "DevoFlow: Scaling Flow Management for High-Performance Networks," Proceedings, ACM SIGCOMM 2011 Conference (SIGCOMM '11), New York, NY, pp. 254–265, 2011.
[57] Mogul, J.C., Tourrilhes, J., et al., "DevoFlow: Cost-Effective Flow Management for High-Performance Enterprise Networks," Proceedings, Ninth ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets-IX), New York, NY, 2010.
[58] Voellmy, A., Hudak, P., "Nettle: Functional Reactive Programming of OpenFlow Networks," Practical Aspects of Declarative Languages (PADL) Symposium, January 2011.
[59] Rexford, J., "Software-Defined Networking," COS 461: Computer Networks Lecture, http://www.cs.princeton.edu/courses/archive/spring12/cos461/docs/lec24-sdn.pdf, 2012.
[60] Tootoonchian, A., Gorbunov, S., et al., "On Controller Performance in Software-Defined Networks," Proceedings, 2nd USENIX Conference on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services, p. 10, 2012.
[61] Chu, J., Malik, M.S., Software-Defined Networks, http://www.cs.illinois.edu/~pbg/courses/cs538fa11/lectures/25-Jonathan-Salman.pdf, September 2012.
[62] Lantz, B., Heller, B., McKeown, N., "A Network in a Laptop: Rapid Prototyping for Software-Defined Networks," Proceedings, Ninth ACM SIGCOMM Workshop on Hot Topics in Networks, New York, NY, 2010.
[63] Foster, N., Freedman, M., et al., "Language Abstractions for Software-Defined Networks (LADA)," Philadelphia, PA, January 2012.
[64] Pluribus Networks, "Of Controllers and Why NICIRA Had to Do a Deal (Part III: SDN and OpenFlow Enabling Network Virtualization in the Cloud)," https://www.pluribusnetworks.com/blog/item/5-of-controllers-and-why-nicira-had-to-do-a-deal-part-iii-sdn-and-openflow-enabling-network-virtualization-in-the-cloud, August 2012.
[65] Cai, Z., Cox, A.L., Ng, T.S.E., Maestro: A System for Scalable OpenFlow Control, Rice University Technical Report TR10-08, December 2010.
[66] Cai, Z., Cox, A.L., Ng, T.S.E., Maestro: Balancing Fairness, Latency, and Throughput in the OpenFlow Control Plane, Rice University Technical Report TR11-07, December 2011.
[67] Mogul, J.C., Congdon, P., "Hey, You Darned Counters! Get Off My ASIC!" First Workshop on Hot Topics in Software-Defined Networks, pp. 25–30, 2012.
[68] Wolf, W., "A Decade of Hardware/Software Codesign," Computer, vol. 36, no. 4, pp. 38–43, April 2003.
[69] Scott, R.C., Wundsam, A., et al., What, Where, and When: Software Fault Localization for SDN, Technical Report No. UCB/EECS-2012-178, July 2012.
[70] Katta, N.P., Rexford, J., Walker, D., "Logic Programming for Software-Defined Networks," http://www.cs.princeton.edu/~jrex/papers/xldi12.pdf, July 2012.
[71] Hinrichs, T.L., Gude, N.S., Casado, M., Mitchell, J.C., Shenker, S., "Practical Declarative Network Management," First ACM Workshop on Research in Enterprise Networking (WREN '09), pp. 1–10, 2009.
[72] Foster, N., Harrison, R., et al., "Frenetic: A Network Programming Language," Proceedings, 16th ACM SIGPLAN International Conference on Functional Programming (ICFP '11), New York, NY, pp. 279–291, 2011.
[73] Heller, B., Sherwood, R., McKeown, N., "The Controller Placement Problem," First Workshop on Hot Topics in Software-Defined Networks, pp. 7–12, 2012.
[74] Hu, Y.-N., Wang, W.-D., et al., "On the Placement of Controllers in Software-Defined Networks," http://www.sciencedirect.com/science/article/pii/S100588851160438X, October 2012.
[75] Beheshti, N., Zhang, Y., "Fast Failover for Control Traffic in Software-Defined Networks," Next-Generation Networking and Internet Symposium (GLOBECOM), Ericsson Research, 2012.
[76] Metzler, J., "Understanding Software-Defined Networks," InformationWeek Reports, pp. 1–25, http://reports.informationweek.com/abstract/6/9044/Data-Center/research-understanding-software-defined-networks.html, October 2012.
[77] Marsan, C.D., "IAB Panel Debates Management Benefits, Security Challenges of Software-Defined Networking," IETF Journal, October 2012.
[78] Abu Sharkh, M., Ouda, A., Shami, A., "A Resource Scheduling Model for Cloud Computing Data Centers," 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 213–218, 1–5 July 2013.
[79] Kobayashi, M., Seetharaman, S., et al., "Maturing of OpenFlow and Software-Defined Networking through Deployments," Elsevier, pp. 1–50, August 2012.
[80] Greenberg, A., Hjalmtysson, G., et al., "A Clean-Slate 4D Approach to Network Control and Management," ACM SIGCOMM Computer Communication Review, vol. 35, no. 5, pp. 41–54, October 2005.
[81] Casado, M., Freedman, M.J., Pettit, J., Luo, J.Y., Gude, N., McKeown, N., Shenker, S., "Rethinking Enterprise Network Control," IEEE/ACM Transactions on Networking, vol. 17, no. 4, pp. 1270–1283, August 2009.
[82] Gude, N., Koponen, T., et al., "NOX: Towards an Operating System for Networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, July 2008.
[83] Banikazemi, M., Olshefski, D., Shaikh, A., Tracey, J., Wang, G.H., “Meridian: An SDN Platform for Cloud Network Services,” IEEE Communications Magazine, vol. 51, no. 2, pp. 120–127, February 2013.
[84] Koponen, T., Casado, M., et al., “Onix: A Distributed Control Platform for Large-Scale Production Networks,” Proceedings, Ninth USENIX Conference on Operating Systems Design and Implementation (OSDI '10), Berkeley, CA, 2010.
[85] Khurshid, A., Zhou, W.X., Caesar, M., Godfrey, P.B., “VeriFlow: Verifying Network-Wide Invariants in Real Time,” First Workshop on Hot Topics in Software-Defined Networks (HotSDN '12), New York, NY, pp. 49–54, 2012.
[86] Nascimento, M.R., Rothenberg, C.E., et al., “Virtual Routers as a Service: The RouteFlow Approach Leveraging Software-Defined Networks,” Proceedings, Sixth International Conference on Future Internet Technologies (CFI '11), New York, NY, pp. 34–37, 2011.
[87] Kim, H.J., Feamster, N., “Improving Network Management with Software-Defined Networking,” IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, February 2013.
[88] Voellmy, A., Kim, H.J., Feamster, N., “Procera: A Language for High-Level Reactive Network Control,” First Workshop on Hot Topics in Software-Defined Networks (HotSDN '12), New York, NY, pp. 43–48, 2012.
[89] Big Switch Networks, FloodLight OpenFlow Controller, http://www.projectfloodlight.org/floodlight/, 2013.
[90] Mehra, R., Designing and Building a Data Center Network: An Alternative Approach with OpenFlow, http://www.nec.com/en/global/prod/pflow/images_documents/Designing_and_Building_a_Datacenter_Network.pdf, January 2012.
[91] OpenFlow Components, http://archive.openflow.org/wp/openflow-components/, 2011.
[92] Erickson, D., Beacon, https://openflow.stanford.edu/display/Beacon/Home, February 2013.
[93] Zarifis, K., Kontesidou, G., “OpenFlow Virtual Networking: A Flow-Based Network Virtualization Architecture,” Telecommunication Systems Laboratory and School of Information and Communication Technology, Master of Science Thesis, Stockholm, Sweden, 2009.
[94] Nicira, Network Virtualization Platform, https://www.vmware.com/products/nsx/, February 2013.
[95] Suresh, L., Schulz-Zander, J., et al., “Towards Programmable Enterprise WLANs with Odin,” First Workshop on Hot Topics in Software-Defined Networks (HotSDN '12), New York, NY, pp. 115–120, 2012.
MANAR JAMMAL received her M.S. degree in electrical and electronics engineering in 2012 from the Lebanese University, Beirut, Lebanon, in cooperation with the University of Technology of Compiègne. She is currently working towards the Ph.D. degree in cloud computing and virtualization technology at Western University, Canada. Her research interests include cloud computing, virtualization, software-defined networking, and virtual machine migration.

TARANPREET SINGH received his Master of Engineering (M.Eng.) degree in Communications and Data Networking from the University of Western Ontario, London, Canada, in September 2013. He has worked as a Consultant with Accenture Technology Services and holds a special interest in the Cisco networking domain. His research interests include software-defined networking, network function virtualization, and network security.

ABDALLAH SHAMI received the B.E. degree in Electrical and Computer Engineering from the Lebanese University, Beirut, Lebanon, in 1997, and the Ph.D. degree in Electrical Engineering from the Graduate School and University Center, City University of New York, New York, NY, in September 2002. In September 2002, he joined the Department of Electrical Engineering at Lakehead University, Thunder Bay, ON, Canada, as an Assistant Professor. Since July 2004, he has been with Western University, Canada, where he is currently an Associate Professor in the Department of Electrical and Computer Engineering. His current research interests are in the areas of network optimization, cloud computing, and wireless networks. Dr. Shami is an Editor for IEEE Communications Surveys and Tutorials and has served on the editorial board of IEEE Communications Letters (2008–2013). He has chaired key symposia for IEEE GLOBECOM, IEEE ICC, IEEE ICNC, and ICCIT. Dr. Shami is a Senior Member of the IEEE.

RASOOL ASAL received his Ph.D. degree in physics from the University of Leicester (Leicester, UK). He then moved to the University of Southampton to take up a post-doctoral position within the Electronics and Computer Science Department. Dr. Asal is a Chief Researcher at the Etisalat BT Innovation Center (EBTIC), leading EBTIC research and innovation activities in the area of cloud computing. For the past fifteen years, he has been working with the British Telecommunications Group at Adastral Park (Ipswich, UK), designing and developing a considerable volume of high-performance enterprise applications, mostly in the area of telecommunications. Dr. Asal is a speaker at many international conferences and events, most recently at the IEEE 8th International World Congress on Services (Cloud & Services 2012), Hawaii, USA. He has edited two books and published research papers in leading international conferences and journals. He is currently acting as Senior Guest Editor for the Journal of Mobile Multimedia Special Issue on Cloud Computing and Operation. His current interests focus primarily on cloud technologies, cloud security architectures, and the design of wide-area distributed cloud compliance and enterprise systems that scale to millions of users.

YIMING LI received a B.Eng. degree in Electrical Engineering from Western University, London, Ontario, Canada. He is an Assistant Product Manager at StarTech.com, responsible for product planning and product development. His research interests are in the areas of cloud computing, software-defined networking, and network virtualization.