It has been found useful in many installations to use an operating system to simulate the existence of several
machines on a single physical set of hardware. This technique allows an installation to multiprogram several
different operating systems (or different versions of the same operating system) on a single physical machine.
The dynamic-address-translation hardware allows such a simulator to be efficient enough to be used, in many
cases, in production mode.
Virtualization encompasses a variety of technologies for managing computing resources, providing a software
translation layer, known as an abstraction layer, between the software and the physical hardware. Virtualization
turns physical resources into logical, or virtual, resources. Virtualization enables users, applications, and
management software operating above the abstraction layer to manage and use resources without needing to be
aware of the physical details of the underlying resources.
7.1-VIRTUAL MACHINES
Traditionally, applications have run directly on an operating system (OS) on a personal computer (PC) or on a
server. Each PC or server would run only one OS at a time. Therefore, application vendors had to rewrite parts of
their applications for each OS/platform they would run on and support, which increased time to market for new
features/functions, increased the likelihood of defects, increased quality testing efforts, and usually led to
increased price. To support multiple operating systems, application vendors needed to create, manage, and support
multiple hardware and operating system infrastructures, a costly and resource-intensive process. One effective
strategy for dealing with this problem is known as hardware virtualization. Virtualization technology enables a
single PC or server to simultaneously run multiple operating systems or multiple sessions of a single OS. A
machine running virtualization software can host numerous applications, including those that run on different
operating systems, on a single hardware platform. In essence, the host operating system can support a number of
virtual machines (VMs), each of which has the characteristics of a particular OS and, in some versions of
virtualization, the characteristics of a particular hardware platform.
The Virtual Machine Monitor
The solution that enables virtualization is a virtual machine monitor (VMM), or hypervisor. This software sits
between the hardware and the VMs acting as a resource broker (see Figure 7.1). Simply put, the hypervisor allows
multiple VMs to safely coexist on a single physical server host and share that host’s resources. The number of
guests that can exist on a single host is measured as a consolidation ratio. For example, a host that is supporting
six VMs is said to have a consolidation ratio of 6 to 1, also written as 6:1 (see Figure 7.2). If a company virtualized all of its servers, it could remove 75 percent of the servers from its data centers. More important, it could remove the associated cost as well, which often runs into the millions or tens of millions of dollars annually. With fewer physical servers, less power and less cooling are needed, along with fewer cables, fewer network switches, and less floor space.
The VM approach is a common way for businesses and individuals to deal with legacy applications and to
optimize their hardware usage by maximizing the various kinds of applications that a single computer can handle.
Commercial hypervisor offerings by companies such as VMware and Microsoft are widely used, with millions
of copies having been sold. A key aspect of server virtualization is that, in addition to the capability of running
multiple VMs on one machine, VMs can be viewed as network resources. Server virtualization has become a
central element in dealing with big data applications and in implementing cloud computing infrastructures.
Architectural Approaches
Virtualization is all about abstraction. Virtualization abstracts the physical hardware from the VMs it supports.
A VM is a software construct that mimics the characteristics of a physical server. It is configured with some
number of processors, some amount of RAM, storage resources, and connectivity through the network ports.
Once that VM is created, it can be powered on like a physical server, loaded with an operating system and
applications, and used in the manner of a physical server. The hypervisor facilitates the translation of I/O from
the VM to the physical server devices, and back again to the correct VM. To achieve this, certain privileged
instructions that a “native” operating system would be executing on its host’s hardware now trigger a hardware
trap and are run by the hypervisor as a proxy for the VM. This creates some performance degradation in the virtualization process, though over time both hardware and software improvements have minimized this overhead.
VMs are made up of files. There is a configuration file that describes the attributes of the VM. It contains the
server definition, how many virtual processors (vCPUs) are allocated to this VM, how much RAM is allocated,
which I/O devices the VM has access to, how many network interface cards (NICs) are in the virtual server, and
more. When a VM is powered on, or instantiated, additional files are created for logging, for memory paging, and
other functions.
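To make this concrete, the kind of information such a configuration file records can be sketched as a small Python structure. This is a simplified illustration only; real hypervisors use their own formats (for example, VMware .vmx files or libvirt XML), and the attribute names below are hypothetical.

# Hypothetical sketch of the attributes a VM configuration file records.
vm_config = {
    "name": "web-server-01",      # server definition
    "vcpus": 4,                   # virtual processors allocated to this VM
    "memory_mb": 8192,            # RAM allocated
    "disks": ["disk0.img"],       # I/O devices the VM has access to
    "nics": ["vnic0", "vnic1"],   # network interface cards in the virtual server
}

# Additional files, such as these, are created when the VM is powered on.
runtime_files = [vm_config["name"] + ".log", vm_config["name"] + ".vswp"]
print(vm_config, runtime_files)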
To create a copy of a physical server, additional hardware needs to be acquired, installed, configured, loaded with
an operating system, applications, and data, and then patched to the latest revisions, before being turned over to
the users.
Because a VM consists of files, duplicating those files in a virtual environment yields a perfect copy of the server in a matter of minutes. There are a few configuration changes to make (server name and IP address, to name two), but administrators routinely stand up new VMs in minutes or hours, as opposed to months.
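A minimal Python sketch of this cloning idea, assuming for illustration that a VM is stored as a directory of files with a simple JSON configuration (the directory layout, file name, and keys here are hypothetical):

import json
import shutil
from pathlib import Path

def clone_vm(src_dir: str, dst_dir: str, new_name: str, new_ip: str) -> None:
    """Duplicate a VM's files, then adjust the few settings that must differ."""
    shutil.copytree(src_dir, dst_dir)           # copy all of the VM's files
    cfg_path = Path(dst_dir) / "vm.json"        # hypothetical configuration file
    cfg = json.loads(cfg_path.read_text())
    cfg["name"] = new_name                      # the server name must be unique
    cfg["ip_address"] = new_ip                  # as must its IP address
    cfg_path.write_text(json.dumps(cfg, indent=2))

# Example (hypothetical paths): clone_vm("vms/web-01", "vms/web-02", "web-02", "10.0.0.12")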
In addition to consolidation and rapid provisioning, virtual environments have become the new model for data
center infrastructures for many reasons. One of these is increased availability. VM hosts are clustered together to form pools of compute resources. Multiple VMs are hosted on each of these servers, and in the case of a physical server failure, the VMs on the failed host can be quickly and automatically restarted on another host in the cluster.
Compared with providing this type of availability for a physical server, virtual environments can provide higher
availability at significantly lower cost and less complexity.
There are two types of hypervisors, distinguished by whether there is another operating system between the
hypervisor and the host. A Type 1 hypervisor (see part a of Figure 7.3) is loaded as a thin software layer directly
into a physical server, much like an operating system is loaded. Once it is installed and configured, usually within
a matter of minutes, the server can then support VMs as guests. Some examples of Type 1 hypervisors are VMware ESXi, Microsoft Hyper-V, and the various open source Xen variants. Some users are more comfortable with a solution that works as a traditional application: program code that is loaded on top of a Microsoft Windows or UNIX/Linux operating system environment. This is exactly how a Type 2 hypervisor (see part b of Figure 7.3) is
deployed. Some examples of Type 2 hypervisors are VMware Workstation and Oracle VM Virtual Box.
There are some important differences between the Type 1 and the Type 2 hypervisors. A Type 1 hypervisor is
deployed on a physical host and can directly control the physical resources of that host, whereas a Type 2
hypervisor has an operating system between itself and those resources and relies on the operating system to handle
all the hardware interactions on the hypervisor’s behalf. Typically, Type 1 hypervisors perform better than Type
2 because Type 1 hypervisors do not have that extra layer. Because a Type 1 hypervisor doesn’t compete for
resources with an operating system, there are more resources available on the host, and by extension, more VMs
can be hosted on a virtualization server using a Type 1 hypervisor.
Container Virtualization
A relatively recent approach to virtualization is known as container virtualization. In this approach, software,
known as a virtualization container, runs on top of the host OS kernel and provides an execution environment
for applications (Figure 7.4). Unlike hypervisor-based VMs, containers do not aim to emulate physical servers.
Instead, all containerized applications on a host share a common OS kernel. This eliminates the resources needed
to run a separate OS for each application and can greatly reduce overhead.
Because the containers execute on the same kernel, thus sharing most of the base OS, containers are much smaller
and lighter weight compared to a hypervisor/guest OS VM arrangement.
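The shared-kernel point can be demonstrated with a short Python sketch, assuming a Linux host with Docker running and the docker SDK for Python installed; the container reports the same kernel release as the host because no guest OS is booted.

import platform
import docker

client = docker.from_env()
# Run a throwaway container and capture its view of the kernel.
container_kernel = client.containers.run("alpine", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", container_kernel.decode().strip())   # same kernel on a Linux host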
7.3-NFV CONCEPTS
Network functions virtualization (NFV) is the virtualization of network functions by implementing these
functions in software and running them on VMs. NFV decouples network functions, such as Network Address
Translation (NAT), firewalling, intrusion detection, Domain Name Service (DNS), and caching, from proprietary
hardware appliances so that they can run in software on VMs. NFV builds on standard VM technologies,
extending their use into the networking domain. NFV involves the following types of network-related devices:
Network function devices: Such as switches, routers, network access points, customer premises
equipment (CPE), and deep packet inspectors
Network-related compute devices: Such as firewalls, intrusion detection systems, and network
management systems.
Network-attached storage: File and database servers attached to the network.
In traditional networks, all devices are deployed on proprietary/closed platforms. All network elements are
enclosed boxes, and hardware cannot be shared. Each device requires additional hardware for increased capacity,
but this hardware is idle when the system is running below capacity. With NFV, however, network elements are
independent applications that are flexibly deployed on a unified platform comprising standard servers, storage
devices, and switches. In this way, software and hardware are decoupled, and capacity for each application is
increased or decreased by adding or reducing virtual resources (Figure 7.5).
Part a of Figure 7.6 highlights the network functions that are relevant to the service provider and customer.
The interconnections among the NFs and endpoints are depicted by dashed lines, representing logical links.
These logical links are supported by physical paths through infrastructure networks (wired or wireless).
Part b of Figure 7.6 illustrates a virtualized network service configuration that could be implemented on the
physical configuration of part a of Figure 7.6. VNF-1 provides network access for endpoint A, and VNF-2
provides network access for B. The figure also depicts the case of a nested VNF forwarding graph (VNF-FG-2)
constructed from other VNFs (that is, VNF-2A, VNF-2B and VNF-2C). All of these VNFs run as VMs on
physical machines, called points of presence (PoPs). VNF-FG-2 consists of three VNFs even though ultimately
all the traffic transiting VNF-FG-2 is between VNF-1 and VNF-3. The reason for this is that three separate and
distinct network functions are being performed.
NFV Principles
Three key NFV principles are involved in creating practical network services:
Service chaining: VNFs are modular and each VNF provides limited functionality on its own. For a given
traffic flow within a given application, the service provider steers the flow through multiple VNFs to
achieve the desired network functionality. This is referred to as service chaining (see the sketch following this list).
Management and orchestration (MANO): This involves deploying and managing the lifecycle of VNF
instances. Examples include VNF instance creation, VNF service chaining, monitoring, relocation,
shutdown, and billing. MANO also manages the NFV infrastructure elements.
Distributed architecture: A VNF may be made up of one or more VNF components (VNFC), each of
which implements a subset of the VNF’s functionality. Each VNFC may be deployed in one or multiple
instances. These instances may be deployed on separate, distributed hosts to provide scalability and
redundancy.
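The following minimal Python sketch illustrates the service chaining principle: a packet (represented here as a dictionary) is steered through an ordered list of VNFs, each providing one limited function. The VNFs are hypothetical stand-ins, not real network code.

def firewall(packet):
    if packet.get("dst_port") == 23:      # e.g., drop telnet traffic
        return None
    return packet

def nat(packet):
    packet["src_ip"] = "203.0.113.10"     # rewrite to a public source address
    return packet

def monitor(packet):
    print("observed:", packet)
    return packet

service_chain = [firewall, nat, monitor]  # the order defines the chain

def steer(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:                # a VNF may drop the flow
            return None
    return packet

steer({"src_ip": "10.0.0.5", "dst_port": 80}, service_chain)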
High-Level NFV Framework
Figure 7.7 shows a high-level view of the NFV framework defined by ISG NFV. This framework supports the
implementation of network functions as software-only VNFs. We use this to provide an overview of the NFV
architecture.
The NFV framework consists of three domains of operation:
Virtualized network functions: The collection of VNFs, implemented in software, that run over the
NFVI.
NFV infrastructure (NFVI): The NFVI performs a virtualization function on the three main categories
of devices in the network service environment: compute devices, storage devices, and network devices.
NFV management and orchestration: Encompasses the orchestration and lifecycle management of
physical/software resources that support the infrastructure virtualization, and the lifecycle management
of VNFs.
Two types of relations between VNFs are supported:
VNF forwarding graph (VNF FG): Covers the case where network connectivity between VNFs is
specified, such as a chain of VNFs on the path to a web server tier (for example, firewall, network
address translator, load balancer).
VNF set: Covers the case where the connectivity between VNFs is not specified, such as a web
server pool.
NFV Requirements
NFV must be designed and implemented to meet a number of requirements and technical challenges, including
the following:
Portability/interoperability: The capability to load and execute VNFs provided by different vendors on
a variety of standardized hardware platforms. The challenge is to define a unified interface that clearly
decouples the software instances from the underlying hardware, as represented by VMs and their
hypervisors.
Performance trade-off: Because the NFV approach is based on industry standard hardware (that is,
avoiding any proprietary hardware such as acceleration engines), a probable decrease in performance has
to be taken into account. The challenge is how to keep the performance degradation as small as possible
by using appropriate hypervisors and modern software technologies, so that the effects on latency,
throughput, and processing overhead are minimized.
Migration and coexistence with respect to legacy equipment: The NFV architecture must support a
migration path from today’s proprietary physical network appliance-based solutions to more open
standards-based virtual network appliance solutions. In other words, NFV must work in a hybrid network
composed of classical physical network appliances and virtual network appliances.
Management and orchestration: A consistent management and orchestration architecture is required.
NFV presents an opportunity, through the flexibility afforded by software network appliances operating
in an open and standardized infrastructure, to rapidly align management and orchestration northbound
interfaces to well defined standards and abstract specifications.
Automation: NFV will scale only if all of the functions can be automated. Automation of processes is paramount to success.
Security and resilience: The security, resilience, and availability of operators' networks should not be impaired when VNFs are introduced.
Network stability: Ensuring that the stability of the network is not impacted when managing and orchestrating a large number of virtual appliances across different hardware vendors and hypervisors. This is particularly important when, for example, virtual functions are relocated, during reconfiguration events (for example, because of hardware or software failures), or in the event of a cyberattack.
Simplicity: Ensuring that virtualized network platforms will be simpler to operate than those that exist
today. A significant focus for network operators is simplification of the plethora of complex network
platforms and support systems that have evolved over decades of network technology evolution, while
maintaining continuity to support important revenue generating services.
Integration: Network operators need to be able to “mix and match” servers from different vendors,
hypervisors from different vendors, and virtual appliances from different vendors without incurring
significant integration costs and avoiding lock-in. The ecosystem must offer integration services and
maintenance and third-party support; it must be possible to resolve integration issues between several
parties.
It is also useful to view the architecture as consisting of three layers. The NFVI, together with the virtualized infrastructure manager, provides and manages the virtual resource environment and its underlying physical
resources. The VNF layer provides the software implementation of network functions, together with element
management systems and one or more VNF managers. Finally, there is a management, orchestration, and control
layer consisting of OSS/BSS and the NFV orchestrator.
NFV Management and Orchestration
The NFV management and orchestration facility includes the following functional blocks:
NFV orchestrator: Responsible for installing and configuring new network services (NS) and virtual
network function (VNF) packages, NS lifecycle management, global resource management, and
validation and authorization of NFVI resource requests.
VNF manager: Oversees lifecycle management of VNF instances.
Virtualized infrastructure manager: Controls and manages the interaction of a VNF with computing,
storage, and network resources under its authority, in addition to their virtualization.
Reference Points
The main reference points include the following:
Vi-Ha: Marks interfaces to the physical hardware. A well-defined interface specification makes it easier for operators to share physical resources for different purposes, reassign resources for different purposes, evolve software and hardware independently, and obtain software and hardware components from different vendors.
Vn-Nf: These interfaces are APIs used by VNFs to execute on the virtual infrastructure. Application
developers, whether migrating existing network functions or developing new VNFs, require a consistent
interface that provides functionality and the ability to specify performance, reliability, and scalability
requirements.
Nf-Vi: Marks interfaces between the NFVI and the virtualized infrastructure manager (VIM). This
interface can facilitate specification of the capabilities that the NFVI provides for the VIM. The VIM must
be able to manage all the NFVI virtual resources, including allocation, monitoring of system utilization,
and fault management.
Or-Vnfm: This reference point is used for sending configuration information to the VNF manager and
collecting state information of the VNFs necessary for network service lifecycle management.
Vi-Vnfm: Used for resource allocation requests by the VNF manager and the exchange of resource
configuration and state information.
Or-Vi: Used for resource allocation requests by the NFV orchestrator and the exchange of resource
configuration and state information.
Os-Ma: Used for interaction between the orchestrator and the OSS/BSS systems.
Ve-Vnfm: Used for requests for VNF lifecycle management and exchange of configuration and state
information.
Se-Ma: Interface between the orchestrator and a data set that provides information regarding the VNF
deployment template, VNF forwarding graph, service-related information, and NFV infrastructure
information models.
Implementation
A key open source implementation effort is the Open Platform for NFV (OPNFV) project. The key objectives of OPNFV are as follows:
Develop an integrated and tested open source platform that can be used to investigate and demonstrate
core NFV functionality.
Secure proactive participation of leading end users to validate that OPNFV releases address participating
operators’ needs.
Influence and contribute to the relevant open source projects that will be adopted in the OPNFV reference
platform.
Establish an open ecosystem for NFV solutions based on open standards and open source software.
Promote OPNFV as the preferred open reference platform to avoid unnecessary and costly duplication
of effort.
The initial scope of OPNFV is to build the NFVI and VIM, including application programming interfaces (APIs) to other NFV elements, which together form the basic infrastructure required for VNFs and MANO components. This scope is highlighted in Figure 7.9 as consisting of the NFVI and VIM. With this platform as a
common base, vendors can add value by developing VNF software packages and associated VNF manager and
orchestrator software.
8- NFV Functionality
8.1-NFV INFRASTRUCTURE
The heart of the NFV architecture is a collection of resources and functions known as the NFV infrastructure
(NFVI). The NFVI encompasses three domains, as illustrated in Figure 8.1 and described in the list that follows:
Compute domain: Provides commercial off-the-shelf (COTS) high-volume servers and storage.
Hypervisor domain: Mediates the resources of the compute domain to the VMs of the software
appliances, providing an abstraction of the hardware.
Infrastructure network domain (IND): Comprises all the generic high volume switches interconnected
into a network that can be configured to supply infrastructure network services.
Container Interface
The ETSI documents make a distinction between a functional block interface and a container interface, as follows:
Functional block interface: An interface between two blocks of software that perform separate (perhaps
identical) functions. The interface allows communication between the two blocks. The two functional blocks
may or may not be on the same physical host.
Container interface: An execution environment on a host system within which a functional block executes.
The functional block is on the same physical host as the container that provides the container interface.
Figure 8.2 relates container and functional block interfaces to the domain structure of NFVI.
The ETSI NFVI Architecture Overview document makes the following points concerning this figure:
The architecture of the VNFs is separated from the architecture hosting the VNFs (that is, the NFVI).
The architecture of the VNFs may be divided into a number of domains with consequences for the NFVI
and vice versa.
Given the current technology and industrial structure, compute (including storage), hypervisors, and
infrastructure networking are already largely separate domains and are maintained as separate domains
within the NFVI.
Management and orchestration tends to be sufficiently distinct from the NFVI as to warrant being defined
as its own domain; however, the boundary between the two is often only loosely defined with functions
such as element management functions in an area of overlap.
The interface between the VNF domains and the NFVI is a container interface and not a functional block
interface.
The management and orchestration functions are also likely to be hosted in the NFVI (as VMs) and
therefore also likely to sit on a container interface.
When a VNF is composed of multiple VNFCs, it is not necessary that all the VNFCs execute in the same host.
As shown in part b of Figure 8.3, the VNFCs can be distributed across multiple compute nodes, interconnected through the infrastructure network domain.
Logical Structure of NFVI Domains
The NFVI domain logical structure provides a framework for such development and identifies the interfaces
between the main components, as shown in Figure 8.4.
Compute Domain
The principal elements in a typical compute domain may include the following:
CPU/memory: A COTS processor, with main memory, that executes the code of the VNFC.
Internal storage: Nonvolatile storage housed in the same physical structure as the processor, such as
flash memory.
Accelerator: Accelerator functions for security, networking, and packet processing may also be included.
External storage with storage controller: Access to secondary memory devices.
Network interface card (NIC): Provides the physical interconnection with the infrastructure network
domain.
Control and admin agent: Connects to the virtualized infrastructure manager (VIM).
Eswitch: Server embedded switch, implemented in the compute domain. Functionally, however, it forms an integral part of the infrastructure network domain.
Compute/storage execution environment: This is the execution environment presented to the hypervisor
software by the server or storage device.
Control plane workloads: Concerned with signaling and control plane protocols such as BGP. Typically, these workloads are more processor intensive than I/O intensive and do not place a significant burden on the I/O system.
Data plane workloads: Concerned with the routing, switching, relaying or processing of network traffic
payloads. Such workloads can require high I/O throughput.
Hypervisor Domain
The hypervisor domain is a software environment that abstracts hardware and implements services, such as
starting a VM, terminating a VM, acting on policies, scaling, live migration, and high availability. The principal
elements in the hypervisor domain are the following:
Compute/storage resource sharing/management: Manages these resources and provides virtualized
resource access for VMs.
Network resource sharing/management: Manages these resources and provides virtualized resource
access for VMs.
Virtual machine management and API: This provides the execution environment of a single VNFC
instance.
Control and admin agent: Connects to the virtualized infrastructure manager (VIM).
Vswitch: The vswitch function, described in the next paragraph, is implemented in the hypervisor
domain. However, functionally it forms an integral part of the infrastructure network domain.
VNF Interfaces
As discussed earlier, a VNF consists of one or more VNF components (VNFCs). The VNFCs of a single VNF are connected internally within the VNF. This internal structure is not visible to other VNFs or to the VNF user.
Figure 8.6 shows the interfaces relevant to a discussion of VNFs as described in the list that follows.
SWA-1: This interface enables communication between a VNF and other VNFs, PNFs, and endpoints.
Note that the interface is to the VNF as a whole and not to individual VNFCs. SWA-1 interfaces are
logical interfaces that primarily make use of the network connectivity services available at the SWA-5
interface.
SWA-2: This interface enables communications between VNFCs within a VNF. This interface is vendor
specific and therefore not a subject for standardization. This interface may also make use of the network
connectivity services available at the SWA-5 interface. However, if two VNFCs within a VNF are
deployed on the same host, other technologies may be used to minimize latency and enhance throughput,
as described below.
SWA-3: This is the interface to the VNF manager within the NFV management and orchestration module.
The VNF manager is responsible for lifecycle management (creation, scaling, termination, and so on).
The interface typically is implemented as a network connection using IP.
SWA-4: This is the interface for runtime management of the VNF by the element manager.
SWA-5: This interface describes the execution environment for a deployable instance of a VNF. Each
VNFC maps to a virtualized container interface of a VM.
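As a rough data-structure sketch of the distinction between VNF-level connectivity (SWA-1) and internal VNFC connectivity (SWA-2), consider the following Python fragment; the class and attribute names are hypothetical and do not follow the ETSI information model.

from dataclasses import dataclass, field

@dataclass
class VNFC:
    name: str

@dataclass
class VNF:
    name: str
    components: list = field(default_factory=list)      # VNFCs inside the VNF
    internal_links: list = field(default_factory=list)  # SWA-2 style: VNFC to VNFC, vendor specific
    external_peers: list = field(default_factory=list)  # SWA-1 style: other VNFs, PNFs, endpoints

vnf2 = VNF("VNF-2", components=[VNFC("VNF-2A"), VNFC("VNF-2B"), VNFC("VNF-2C")])
vnf2.internal_links.append(("VNF-2A", "VNF-2B"))  # not visible outside the VNF
vnf2.external_peers.append("VNF-1")               # the interface is to the VNF as a whole
print(vnf2)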
NFV Orchestrator
The NFV orchestrator (NFVO) is responsible for resource orchestration and network service orchestration.
Resource orchestration manages and coordinates the resources under the management of different VIMs.
The NFVO coordinates, authorizes, releases, and engages NFVI resources among different PoPs or within one PoP. It does so by engaging with the VIMs through their northbound APIs instead of engaging with the NFVI resources directly.
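The idea of engaging a VIM through its northbound API, rather than touching NFVI resources directly, can be sketched as follows. The endpoint URL and request body are hypothetical and do not correspond to any real VIM API; the sketch assumes the requests package is installed.

import requests

def allocate_vnf_resources(vim_url: str, vnf_name: str, vcpus: int, memory_mb: int):
    """Ask a VIM (hypothetical REST endpoint) to allocate resources for a VNF."""
    request_body = {"vnf": vnf_name, "vcpus": vcpus, "memory_mb": memory_mb}
    # The VIM validates and authorizes the request against the NFVI resources it manages.
    response = requests.post(vim_url + "/resource-allocations", json=request_body, timeout=10)
    response.raise_for_status()
    return response.json()

# Example (hypothetical endpoint):
# allocate_vnf_resources("https://vim.pop1.example.net", "firewall-vnf", vcpus=2, memory_mb=4096)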
Network service orchestration manages/coordinates the creation of an end-to-end service that involves VNFs from different VNFM domains. Service orchestration does this in the following ways:
It creates end-to-end services between different VNFs. It achieves this by coordinating with the respective VNFMs so that it does not need to talk to VNFs directly. An example is creating a service between the base station VNFs of one vendor and the core node VNFs of another vendor.
It can instantiate VNFMs, where applicable.
It does the topology management of the network services instances (also called VNF forwarding
graphs).
Repositories
Associated with NFVO are four repositories of information needed for the management and orchestration
functions:
Network services catalog: List of the usable network services. A deployment template for a network
service in terms of VNFs and a description of their connectivity through virtual links is stored in the NS catalog for future use.
VNF catalog: Database of all usable VNF descriptors. A VNF descriptor (VNFD) describes a VNF in
terms of its deployment and operational behavior requirements. It is primarily used by VNFM in the
process of VNF instantiation and lifecycle management of a VNF instance. The information provided in
the VNFD is also used by the NFVO to manage and orchestrate network services and virtualized resources
on NFVI.
NFV instances: List containing details about network services instances and related VNF instances.
NFVI resources: List of NFVI resources utilized for the purpose of establishing NFV services.
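To give a feel for the descriptors held in these repositories, the following greatly simplified Python structure sketches what a VNF descriptor and a VNF catalog entry might contain. The field names are illustrative only and do not follow the actual ETSI VNFD schema.

# Illustrative, simplified VNFD-like record; the real ETSI VNFD schema is far richer.
vnf_descriptor = {
    "vnf_id": "vFirewall",
    "vendor": "ExampleVendor",
    "version": "1.0",
    "deployment": {                                   # used by the VNFM at instantiation
        "vnfcs": ["fw-dataplane", "fw-mgmt"],
        "vcpus_per_vnfc": 2,
        "memory_mb_per_vnfc": 4096,
    },
    "operational": {                                  # lifecycle and monitoring behavior
        "scaling": {"min_instances": 1, "max_instances": 4},
        "monitoring": ["cpu_utilization", "throughput_mbps"],
    },
}

# The VNF catalog maps descriptor IDs to descriptors for use by the VNFM and NFVO.
vnf_catalog = {vnf_descriptor["vnf_id"]: vnf_descriptor}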
Element Management
The element management is responsible for fault, configuration, accounting, performance, and security (FCAPS)
management functionality for a VNF. These management functions are also the responsibility of the VNFM, but in contrast to the VNFM, the EM can perform them through a proprietary interface with the VNF. The EM must, however, exchange information with the VNFM through the open Ve-Vnfm reference point. The EM may be aware of virtualization and collaborate with the VNFM to perform those functions that require an exchange of information regarding the NFVI resources associated with the VNF. EM functions include the following:
Configuration for the network functions provided by the VNF
Fault management for the network functions provided by the VNF
Accounting for the usage of VNF functions
Collecting performance measurement results for the functions provided by the VNF
Security management for the VNF functions
OSS/BSS
The OSS/BSS are the combination of the operator’s other operations and business support functions that are not
otherwise explicitly captured in the present architectural framework, but are expected to have information
exchanges with functional blocks in the NFV-MANO architectural framework. OSS/BSS functions may provide
management and orchestration of legacy systems and may have full end-to-end visibility of services provided by
legacy network functions in an operator’s network.
In principle, it would be possible to extend the functionality of an existing OSS/BSS to manage VNFs and the NFVI directly, but that would likely be a vendor's proprietary implementation. Because NFV is an open platform, managing NFV entities through open interfaces (such as those in MANO) makes more sense. The existing OSS/BSS, however, can add value to NFV MANO by offering additional functions if they are not supported by a particular implementation of NFV MANO. This is done through an open reference point (Os-Ma) between NFV MANO and the existing OSS/BSS.
i- NFVI as a Service
NFVIaaS is a scenario in which a service provider implements and deploys an NFVI that may be used to
support VNFs both by the NFVIaaS provider and by other network service providers. For the NFVIaaS
provider, this service provides for economies of scale. The infrastructure is sized to support the provider’s
own needs for deploying VNFs and extra capacity that can be sold to other providers. The NFVIaaS
customer can offer services using the NFVI of another service provider. The NFVIaaS customer has
flexibility in rapidly deploying VNFs, either for new services or to scale out existing services. Cloud
computing providers may find this service particularly attractive.
ii- VNF as a Service
Whereas NFVIaaS is similar to the cloud model of Infrastructure as a Service (IaaS), VNFaaS corresponds
to the cloud model of Software as a Service (SaaS). NFVIaaS provides the virtualization infrastructure to
enable a network service provider to develop and deploy VNFs with reduced cost and time compared to
implementing the NFVI and the VNFs. With VNFaaS, a provider develops VNFs that are then available off
the shelf to customers. This model is well suited to virtualizing customer premises equipment such as routers
and firewalls.
iii- Virtual Network Platform as a Service
VNPaaS is similar to an NFVIaaS that includes VNFs as components of the virtual network infrastructure.
The primary differences are the programmability and development tools of the VNPaaS that allow the
subscriber to create and configure custom ETSI NFV-compliant VNFs to augment the catalog of VNFs
offered by the service provider.
iv- VNF Forwarding Graphs
VNF FG allows virtual appliances to be chained together in a flexible manner. This technique is called
service chaining. For example, a flow may pass through a network monitoring VNF, a load-balancing VNF,
and finally a firewall VNF in passing from one endpoint to another.
9.1-VIRTUAL LANS
Figure 9.1 shows a relatively common type of hierarchical LAN configuration. In this example, the devices on
the LAN are organized into four segments, each served by a LAN switch. The LAN switch is a store-and-forward packet-forwarding device used to interconnect a number of end systems to form a LAN segment. The switch can forward a media access control (MAC) frame from a source-attached device to a destination-attached device. It can also broadcast a frame from a source-attached device to all other attached devices. Multiple switches can be interconnected so that multiple LAN segments form a larger LAN. A LAN switch can also connect to a transmission link, a router, or another network device to provide connectivity to the Internet or other WANs.
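The switch behavior just described can be modeled as a simple learning switch in Python: the switch learns which port each source MAC address arrived on, forwards frames out the known port for the destination, and floods when the destination is unknown. This is a conceptual sketch, not production switch code.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                     # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port       # learn/refresh the source location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # forward out the single known port
        # Unknown destination (or broadcast): flood to all other ports.
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))   # floods: bb:bb not yet learned
print(sw.receive(2, "bb:bb", "aa:aa"))   # forwards to port 1 only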
2- MPLS VPNs
An alternative, and popular, means of constructing VPNs is using MPLS. Multiprotocol Label Switching (MPLS)
is a set of Internet Engineering Task Force (IETF) specifications for including routing and traffic engineering
information in packets. MPLS comprises a number of interrelated protocols, which can be referred to as the MPLS
protocol suite. It can be used in IP networks but also in other types of packet-switching networks. MPLS is used
to ensure that all packets in a particular flow take the same route over a backbone.
In essence, MPLS is an efficient technique for forwarding and routing packets. MPLS was designed with IP
networks in mind, but the technology can be used without IP to construct a network with any link-level protocol.
In an MPLS network, a fixed-length label encapsulates an IP packet or a data link frame. The MPLS label contains
all the information needed by an MPLS-enabled router to perform routing, delivery, QoS, and traffic management
functions. Unlike IP, MPLS is connection oriented.
An MPLS network or internet consists of a set of nodes, called label-switching routers (LSRs), capable of switching and routing packets on the basis of a label appended to each packet. Labels define a flow of packets
between two endpoints or, in the case of multicast, between a source endpoint and a multicast group of destination
endpoints. For each distinct flow, called a forwarding equivalence class (FEC), a specific path through the
network of LSRs is defined, called a label-switched path (LSP). All packets in an FEC receive the same
treatment en route to the destination. These packets follow the same path and receive the same QoS treatment at
each hop. In contrast to forwarding in ordinary IP networks, the assignment of a particular packet to a particular
FEC is done just once, when the packet enters the network of MPLS routers.
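A single LSR's forwarding decision can be sketched as a label lookup: the incoming label alone selects the outgoing label and next hop, so every packet of an FEC follows the same LSP. The label values and next-hop names in this Python sketch are made up for illustration.

label_forwarding_table = {
    # incoming label: (outgoing label, next hop)
    100: (200, "LSR-B"),
    101: (201, "LSR-C"),
}

def switch_packet(incoming_label, payload):
    outgoing_label, next_hop = label_forwarding_table[incoming_label]
    # Swap the label and forward; the IP header is not examined at this hop.
    return {"label": outgoing_label, "next_hop": next_hop, "payload": payload}

print(switch_packet(100, "ip-packet-bytes"))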
9.4-NETWORK VIRTUALIZATION
Network virtualization (NV) is a far broader concept than VPNs, which only provide traffic isolation, or VLANs, which provide a basic
form of topology management. NV implies full administrative control for customizing virtual networks both in
terms of the physical resources used and the functionalities provided by the virtual networks.
The virtual network presents an abstracted network view whose virtual resources provide users with services
similar to those provided by physical networks. Because the virtual resources are software defined, the manager
or administrator of a virtual network potentially has a great deal of flexibility in altering topologies, moving
resources, and changing the properties and service of various resources. In addition, virtual network users can
include not only users of services or applications but also service providers. For example, a cloud service provider can quickly add new services or expand coverage by leasing virtual networks as needed.
A Simplified Example
Figure 9.9 shows a network consisting of three servers and five switches. One server is a trusted platform with a
secure operating system that hosts firewall software. All the servers run a hypervisor (virtual machine monitor)
enabling them to support multiple VMs. The resources for one enterprise (Enterprise 1) are hosted across the
servers and consist of three VMs (VM1a, VM1b, and VM1c) on physical server 1, two VMs (VM1d and VM1e)
on physical server 2, and firewall 1 on physical server 3. The virtual switches are used to set up any desired
connectivity between the VMs across the servers through the physical switches. The physical switches provide
the connectivity between the physical servers. Each enterprise network is layered as a separate virtual network on
top of the physical network. Thus, the virtual network for Enterprise 1 is indicated in Figure 9.9 by a dashed circle
and labeled VN1. The labeled circle VN2 indicates another virtual network.
Network Virtualization Architecture
An excellent overview of the many elements that contribute to an NV environment is provided by the conceptual
architecture defined in Y.3011 and shown in Figure 9.11.
The architecture depicts NV as consisting of four levels:
I. Physical resources
II. Virtual resources
III. Virtual networks
IV. Services
A single physical resource can be shared among multiple virtual resources. In turn, each logically isolated network partition (LINP), that is, each virtual network, consists of multiple virtual resources and provides a set of services to users.
Various management and control functions are performed at each level, not necessarily by the same provider.
There are management functions associated with each physical network and its associated resources. A virtual
resource manager (VRM) manages a pool of virtual resources created from the physical resources. A VRM
interacts with physical network managers (PNMs) to obtain resource commitments. The VRM constructs LINPs,
and an LINP manager is allocated to each LINP.
The upper part of Figure 9.14 is a virtual network example. VRT is defined as the vRouter, and BR1 and BR2 are
defined as vBridges. Interfaces of the vRouter and vBridges are connected using vLinks. Once a user of the VTN Manager has defined a virtual network, the VTN Coordinator maps physical network resources to the constructed
virtual network. Mapping identifies which virtual network each packet transmitted or received by an OpenFlow
switch belongs to, as well as which interface in the OpenFlow switch transmits or receives that packet. There are
two mapping methods:
Port mapping: This method maps a physical port to an interface of a virtual node (vBridge/vTerminal). Port-map is used when the network topology is known in advance.
VLAN mapping: This method maps the VLAN ID in the VLAN tag of an incoming Layer 2 frame to a vBridge. Vlan-map is used when the affiliated network and its VLAN tag are known, and it can reduce the number of commands that must be set.
Figure 9.14 shows a mapping example. An interface of BR1 is mapped to a port on OpenFlow switch SW1.
Packets received from that SW1 port are regarded as those from the corresponding interface of BR1. The interface
if1 of vBridge BR1 is mapped to the port GBE0/1 of switch1 using port-map. Packets received or transmitted by GBE0/1 of switch1 are considered as those from or to the interface if1 of BR1. vBridge BR2 is mapped to
VLAN 200 using vlan-map. Packets having the VLAN ID of 200 received or transmitted by the port of any
switch in the network are mapped to the vBridge BR2.
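The two mapping methods can be modeled as simple lookups, as in the Python sketch below. This is a conceptual model of the classification step, not the actual VTN Coordinator API; the names are taken from the example above.

port_map = {("switch1", "GBE0/1"): ("BR1", "if1")}   # physical port -> (vBridge, interface)
vlan_map = {200: "BR2"}                              # VLAN ID -> vBridge

def classify(switch, port, vlan_id):
    """Decide which virtual node a packet belongs to, if any."""
    if (switch, port) in port_map:
        return port_map[(switch, port)]              # port-map matches a specific port
    if vlan_id in vlan_map:
        return (vlan_map[vlan_id], None)             # vlan-map matches the VLAN ID on any switch
    return None                                      # not part of any virtual network

print(classify("switch1", "GBE0/1", None))   # -> ('BR1', 'if1')
print(classify("switch7", "GBE0/3", 200))    # -> ('BR2', None)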
Figure 9.15 shows the overall architecture of VTN. The VTN Manager is part of the OpenDaylight controller and
uses base network service functions to learn the topology and statistics of the underlying network. A user or
application creates virtual networks and specifies network behavior to the VTN Coordinator across a web or
REST interface. The VTN Coordinator translates these commands into detailed instructions to the VTN Manager,
which in turn uses OpenFlow to map virtual networks to the physical network infrastructure.
9.6-SOFTWARE-DEFINED INFRASTRUCTURE
With SDI, a data center or network infrastructure can autoconfigure itself at run time based on
application/business requirements and operator constraints. Automation in SDIs enables infrastructure operators
to achieve higher conformance to SLAs, avoid overprovisioning, and automate security and other network-related
functions.
Another key characteristic of SDI is that it is highly application driven. Applications tend to change much more
slowly than the ecosystem (hardware, system software, networks) that supports them. Individuals and enterprises
stay with chosen applications for long periods of time, whereas they replace the hardware and other infrastructure
elements at a fast pace. So, providers are at an advantage if the entire infrastructure is software defined and thus
able to cope with rapid changes in infrastructure technology.
SDN and NFV are the key enabling technologies for SDI. SDN provides network control systems with the
flexibility to steer and provision network resources dynamically. NFV virtualizes network functions as
prepackaged software services that are easily deployable in a cloud or network infrastructure environment. So
instead of hard-coding a service deployment and its network services, these can now be dynamically provisioned;
traffic is then steered through the software services, significantly increasing the agility with which these are
provisioned.
Software-Defined Storage
As mentioned, SDN and NFV are key elements of SDI. A third, equally important element is the emerging
technology known as software-defined storage (SDS). SDS is a framework for managing a variety of storage
systems in the data center that are traditionally not unified. SDS provides the ability to manage these storage
assets to meet specific SLAs and to support a variety of applications.
Figure 9.16 illustrates the main elements of a typical SDS architecture. Physical storage consists of a number of
magnetic and solid-state disk arrays, possibly from multiple vendors. Separate from this physical storage plane is a unified set of control software. This must include adaptation logic that can interface with a variety of vendor equipment to control and monitor that equipment. On top of this adaptation layer are a number of basic
storage services. An application interface provides an abstracted view of data storage so that applications need
not be concerned with the location, attributes, or capacity of individual storage systems. There is also an
administrative interface to enable the SDS administrator to manage the distributed storage suite.
FIGURE 9.16 Software-Defined Storage Architecture
SDS puts the emphasis on storage services instead of storage hardware. By decoupling the storage control
software from the hardware, a storage resource can be used more efficiently and its administration simplified.
When additional resources are needed by an application, the storage control software automatically adds the resources. Conversely, resources are freed up when not in use. The storage control software also automatically removes failed components and systems.
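The adaptation-layer and application-interface ideas described above can be sketched as a thin abstraction over vendor-specific backends. All class names in this Python sketch are hypothetical; it illustrates the pattern, not any particular SDS product.

from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Adaptation logic: one adapter per vendor's equipment."""
    @abstractmethod
    def provision(self, size_gb: int) -> str:
        """Create a volume on this vendor's array and return its identifier."""

class VendorAAdapter(StorageAdapter):
    def provision(self, size_gb: int) -> str:
        return "vendorA-vol-%dgb" % size_gb

class VendorBAdapter(StorageAdapter):
    def provision(self, size_gb: int) -> str:
        return "vendorB-lun-%dgb" % size_gb

class StorageService:
    """Application interface: callers state what they need, not where it lives."""
    def __init__(self, adapters):
        self.adapters = adapters
    def allocate(self, size_gb: int, tier: str = "standard") -> str:
        adapter = self.adapters[0] if tier == "standard" else self.adapters[-1]
        return adapter.provision(size_gb)

sds = StorageService([VendorAAdapter(), VendorBAdapter()])
print(sds.allocate(100))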
SDI Architecture
There is no standardized specification for SDI, and there are numerous differences in the different initiatives. A
typical example is the SDI architecture defined by Intel. This architecture is organized into three layers, as
illustrated in Figure 9.17 and described in the list that follows.
Orchestration: A policy engine that allows higher level frameworks to manage composition dynamically
without interrupting ongoing operations.
Composition: A low-level layer of system software that continually and automatically manages the pool
of hardware resources.
Hardware pool: An abstracted pool of modular hardware resources.
The orchestration layer drives the architecture. Intel’s initial focus appears to be on cloud providers, but other
application areas, such as big data and other data center applications, lend themselves to the SDI approach. This
layer continually monitors status data, enabling it to solve service issues faster and to continually optimize
hardware resource assignment.
The composition layer is a control layer that manages VMs, storage, and network assets. In this architecture, the
VM is seen as a dynamic federation of compute, storage, and network resources assembled to run an application
instance. With software-defined allocation of resources, more flexibility is available in creating, provisioning,
managing, moving, and retiring VMs. Similarly, SDS provides the opportunity to use storage more efficiently.
Composition enables the logical disaggregation of compute, network, and storage resources, so that each VM
provides exactly what an application needs. Supporting this at the level of the hardware is Intel’s rack scale
architecture (RSA). RSA exploits extremely high data rate optical connection components to redesign the way
computer rack systems are implemented. In an RSA design, the speed of the silicon interconnects means that
individual components (processors, memory, storage, and network) no longer need to reside in the same box.