
Network Functions Virtualization (NFV): Concepts and Architecture

7-Introduction

It has been found useful in many installations to use an operating system to simulate the existence of several
machines on a single physical set of hardware. This technique allows an installation to multiprogram several
different operating systems (or different versions of the same operating system) on a single physical machine.
The dynamic-address-translation hardware allows such a simulator to be efficient enough to be used, in many
cases, in production mode.
Virtualization encompasses a variety of technologies for managing computing resources, providing a software
translation layer, known as an abstraction layer, between the software and the physical hardware. Virtualization
turns physical resources into logical, or virtual, resources. Virtualization enables users, applications, and
management software operating above the abstraction layer to manage and use resources without needing to be
aware of the physical details of the underlying resources.

7.1-BACKGROUND AND MOTIVATION FOR NFV


NFV originated from discussions among major network operators and carriers about how to improve network
operations in the high-volume multimedia era. These discussions resulted in the publication of the original NFV
white paper, Network Functions Virtualization: An Introduction, Benefits, Enablers, Challenges & Call for
Action. The white paper highlights that the source of the need for this new approach is that networks include a
large and growing variety of proprietary hardware appliances, leading to the following negative consequences:
 New network services may require additional different types of hardware appliances, and finding the
space and power to accommodate these boxes is becoming increasingly difficult.
 New hardware means additional capital expenditures.
 Once new types of hardware appliances are acquired, operators are faced with the rarity of skills
necessary to design, integrate, and operate increasingly complex hardware-based appliances.
 Hardware-based appliances rapidly reach end of life, requiring much of the procure-design-integrate-
deploy cycle to be repeated with little or no revenue benefit.
 As technology and services innovation accelerates to meet the demands of an increasingly network-
centric IT environment, the need for an increasing variety of hardware platforms inhibits the
introduction of new revenue-earning network services.

7.2-VIRTUAL MACHINES
Traditionally, applications have run directly on an operating system (OS) on a personal computer (PC) or on a
server. Each PC or server would run only one OS at a time. Therefore, application vendors had to rewrite parts of
their applications for each OS/platform they would run on and support, which increased time to market for new
features/functions, increased the likelihood of defects, increased quality testing efforts, and usually led to
increased price. To support multiple operating systems, application vendors needed to create, manage, and support
multiple hardware and operating system infrastructures, a costly and resource-intensive process. One effective
strategy for dealing with this problem is known as hardware virtualization. Virtualization technology enables a
single PC or server to simultaneously run multiple operating systems or multiple sessions of a single OS. A
machine running virtualization software can host numerous applications, including those that run on different
operating systems, on a single hardware platform. In essence, the host operating system can support a number of
virtual machines (VMs), each of which has the characteristics of a particular OS and, in some versions of
virtualization, the characteristics of a particular hardware platform.
The Virtual Machine Monitor
The solution that enables virtualization is a virtual machine monitor (VMM), or hypervisor. This software sits
between the hardware and the VMs acting as a resource broker (see Figure 7.1). Simply put, the hypervisor allows
multiple VMs to safely coexist on a single physical server host and share that host’s resources. The number of
guests that can exist on a single host is measured as a consolidation ratio. For example, a host that is supporting
six VMs is said to have a consolidation ratio of 6 to 1, also written as 6:1 (see Figure 7.2). A company that
virtualized all of its servers could remove on the order of 75 percent of the physical servers from its data centers.
More important, it could also remove the associated cost, which often runs into millions or tens of millions of
dollars annually. With fewer physical servers, less power and less cooling are needed, along with fewer cables,
fewer network switches, and less floor space.
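As a concrete illustration of the consolidation arithmetic described above, the following sketch (not from the source material; the server count and cost figures are hypothetical) computes the hosts remaining and the rough savings at a given consolidation ratio.

```python
import math

def hosts_after_consolidation(physical_servers: int, consolidation_ratio: int) -> int:
    """Number of virtualization hosts needed if each host runs
    `consolidation_ratio` VMs (one VM per former physical server)."""
    return math.ceil(physical_servers / consolidation_ratio)

# Hypothetical figures for illustration only.
servers = 200                      # physical servers before virtualization
ratio = 6                          # 6:1 consolidation ratio, as in the text
cost_per_server_per_year = 4_000   # assumed power, cooling, space, support cost

hosts = hosts_after_consolidation(servers, ratio)
removed = servers - hosts
savings = removed * cost_per_server_per_year

print(f"{hosts} hosts remain; {removed} servers removed "
      f"({removed / servers:.0%} reduction), saving ${savings:,} per year")
```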

The VM approach is a common way for businesses and individuals to deal with legacy applications and to
optimize their hardware usage by maximizing the various kinds of applications that a single computer can handle.
Commercial hypervisor offerings by companies such as VMware and Microsoft are widely used, with millions
of copies having been sold. A key aspect of server virtualization is that, in addition to the capability of running
multiple VMs on one machine, VMs can be viewed as network resources. Server virtualization has become a
central element in dealing with big data applications and in implementing cloud computing infrastructures.

Architectural Approaches
Virtualization is all about abstraction. Virtualization abstracts the physical hardware from the VMs it supports.
A VM is a software construct that mimics the characteristics of a physical server. It is configured with some
number of processors, some amount of RAM, storage resources, and connectivity through the network ports.
Once that VM is created, it can be powered on like a physical server, loaded with an operating system and
applications, and used in the manner of a physical server. The hypervisor facilitates the translation of I/O from
the VM to the physical server devices, and back again to the correct VM. To achieve this, certain privileged
instructions that a “native” operating system would be executing on its host’s hardware now trigger a hardware
trap and are run by the hypervisor as a proxy for the VM. This creates some performance degradation in the
virtualization process, though over time both hardware and software improvements have minimized this
overhead.
VMs are made up of files. There is a configuration file that describes the attributes of the VM. It contains the
server definition, how many virtual processors (vCPUs) are allocated to this VM, how much RAM is allocated,
which I/O devices the VM has access to, how many network interface cards (NICs) are in the virtual server, and
more. When a VM is powered on, or instantiated, additional files are created for logging, for memory paging, and
other functions.
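To make the idea of a VM configuration file concrete, the following sketch models such a record as a Python dataclass. The field names are illustrative only and do not follow the format of any particular hypervisor.

```python
from dataclasses import dataclass, field, replace
from typing import List

@dataclass
class VMConfig:
    """Illustrative model of the attributes a hypervisor stores for a VM."""
    name: str
    vcpus: int                                        # number of virtual processors
    ram_mb: int                                       # allocated RAM in megabytes
    disks: List[str] = field(default_factory=list)    # virtual disk files
    nics: List[str] = field(default_factory=list)     # virtual NICs / attached networks

# Duplicating a VM is largely a matter of copying its files and changing
# a few attributes, such as the name and the disk file it points to.
web01 = VMConfig(name="web01", vcpus=2, ram_mb=4096,
                 disks=["web01-disk0.img"], nics=["prod-net"])
web02 = replace(web01, name="web02", disks=["web02-disk0.img"])
print(web02)
```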
By contrast, to create a copy of a physical server, additional hardware needs to be acquired, installed, configured, loaded with
an operating system, applications, and data, and then patched to the latest revisions, before being turned over to
the users.
Because a VM consists of files, duplicating those files in a virtual environment yields a perfect copy of the
server in a matter of minutes. There are a few configuration changes to make (the server name and IP
address, to name two), but administrators routinely stand up new VMs in minutes or hours, as opposed to months.
In addition to consolidation and rapid provisioning, virtual environments have become the new model for data
center infrastructures for many reasons. One of these is increased availability. VM hosts are clustered together to
form pools of compute resources. Multiple VMs are hosted on each of these servers, and in the case of a physical
server failure, the VMs on the failed host can be quickly and automatically restarted on another host in the cluster.
Compared with providing this type of availability for a physical server, virtual environments can provide higher
availability at significantly lower cost and less complexity.
There are two types of hypervisors, distinguished by whether there is another operating system between the
hypervisor and the host. A Type 1 hypervisor (see part a of Figure 7.3) is loaded as a thin software layer directly
into a physical server, much like an operating system is loaded. Once it is installed and configured, usually within
a matter of minutes, the server can then support VMs as guests. Some examples of Type 1 hypervisors are
VMware ESXi, Microsoft Hyper-V, and the various open source Xen variants. Some organizations, however, are more
comfortable with a solution that runs as a traditional application: program code that is loaded on top of a Microsoft
Windows or UNIX/Linux operating system environment. This is exactly how a Type 2 hypervisor (see part b of Figure 7.3) is
deployed. Some examples of Type 2 hypervisors are VMware Workstation and Oracle VM Virtual Box.
There are some important differences between the Type 1 and the Type 2 hypervisors. A Type 1 hypervisor is
deployed on a physical host and can directly control the physical resources of that host, whereas a Type 2
hypervisor has an operating system between itself and those resources and relies on the operating system to handle
all the hardware interactions on the hypervisor’s behalf. Typically, Type 1 hypervisors perform better than Type
2 because Type 1 hypervisors do not have that extra layer. Because a Type 1 hypervisor doesn’t compete for
resources with an operating system, there are more resources available on the host, and by extension, more VMs
can be hosted on a virtualization server using a Type 1 hypervisor.
Container Virtualization
A relatively recent approach to virtualization is known as container virtualization. In this approach, software,
known as a virtualization container, runs on top of the host OS kernel and provides an execution environment
for applications (Figure 7.4). Unlike hypervisor-based VMs, containers do not aim to emulate physical servers.
Instead, all containerized applications on a host share a common OS kernel. This eliminates the resources needed
to run a separate OS for each application and can greatly reduce overhead.

Because the containers execute on the same kernel, thus sharing most of the base OS, containers are much smaller
and lighter weight compared to a hypervisor/guest OS VM arrangement.
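As a hedged illustration of container virtualization in practice, the following sketch launches a containerized application with the Docker SDK for Python. It assumes a local Docker daemon and the docker package are available; the image name, container name, and port mapping are examples only.

```python
import docker  # Docker SDK for Python (pip install docker); assumes a local Docker daemon

client = docker.from_env()

# Containers started this way all share the host's OS kernel; only the
# application and its libraries are packaged in the image.
container = client.containers.run(
    "nginx:alpine",              # example image; any containerized app would do
    detach=True,
    name="demo-nginx",
    ports={"80/tcp": 8080},      # map container port 80 to host port 8080
)

print(container.short_id, container.status)

# Clean up the example container.
container.stop()
container.remove()
```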
7.3-NFV CONCEPTS
Network functions virtualization (NFV) is the virtualization of network functions by implementing these
functions in software and running them on VMs. NFV decouples network functions, such as Network Address
Translation (NAT), firewalling, intrusion detection, Domain Name Service (DNS), and caching, from proprietary
hardware appliances so that they can run in software on VMs. NFV builds on standard VM technologies,
extending their use into the networking domain. NFV is relevant to the following types of network equipment:
 Network function devices: Such as switches, routers, network access points, customer premises
equipment (CPE), and deep packet inspectors
 Network-related compute devices: Such as firewalls, intrusion detection systems, and network
management systems.
 Network-attached storage: File and database servers attached to the network.
In traditional networks, all devices are deployed on proprietary/closed platforms. All network elements are
enclosed boxes, and hardware cannot be shared. Each device requires additional hardware for increased capacity,
but this hardware is idle when the system is running below capacity. With NFV, however, network elements are
independent applications that are flexibly deployed on a unified platform comprising standard servers, storage
devices, and switches. In this way, software and hardware are decoupled, and capacity for each application is
increased or decreased by adding or reducing virtual resources (Figure 7.5).

FIGURE 7.5 Vision for Network Functions Virtualization


Simple Example of the Use of NFV
This section considers a simple example from the NFV Architectural Framework document. Part a of Figure 7.6
shows a physical realization of a network service. At a top level, the network service consists of endpoints
connected by a forwarding graph of network functional blocks, called network functions (NFs). Examples of NFs
are firewalls, load balancers, and wireless network access points. In the Architectural Framework, NFs are viewed
as distinct physical nodes. The endpoints are beyond the scope of the NFV specifications and include all customer-
owned devices. So, in the figure, endpoint A could be a smartphone and endpoint B a content delivery network
(CDN) server.

Part a of Figure 7.6 highlights the network functions that are relevant to the service provider and customer.
The interconnections among the NFs and endpoints are depicted by dashed lines, representing logical links.
These logical links are supported by physical paths through infrastructure networks (wired or wireless).
Part b of Figure 7.6 illustrates a virtualized network service configuration that could be implemented on the
physical configuration of part a of Figure 7.6. VNF-1 provides network access for endpoint A, and VNF-2
provides network access for B. The figure also depicts the case of a nested VNF forwarding graph (VNF-FG-2)
constructed from other VNFs (that is, VNF-2A, VNF-2B and VNF-2C). All of these VNFs run as VMs on
physical machines, called points of presence (PoPs). VNF-FG-2 consists of three VNFs even though ultimately
all the traffic transiting VNF-FG-2 is between VNF-1 and VNF-3. The reason for this is that three separate and
distinct network functions are being performed.
NFV Principles
Three key NFV principles are involved in creating practical network services:
 Service chaining: VNFs are modular and each VNF provides limited functionality on its own. For a given
traffic flow within a given application, the service provider steers the flow through multiple VNFs to
achieve the desired network functionality. This is referred to as service chaining.
 Management and orchestration (MANO): This involves deploying and managing the lifecycle of VNF
instances. Examples include VNF instance creation, VNF service chaining, monitoring, relocation,
shutdown, and billing. MANO also manages the NFV infrastructure elements.
 Distributed architecture: A VNF may be made up of one or more VNF components (VNFC), each of
which implements a subset of the VNF’s functionality. Each VNFC may be deployed in one or multiple
instances. These instances may be deployed on separate, distributed hosts to provide scalability and
redundancy.
High-Level NFV Framework
Figure 7.7 shows a high-level view of the NFV framework defined by ISG NFV. This framework supports the
implementation of network functions as software-only VNFs. We use this to provide an overview of the NFV
architecture.
The NFV framework consists of three domains of operation:
 Virtualized network functions: The collection of VNFs, implemented in software, that run over the
NFVI.
 NFV infrastructure (NFVI): The NFVI performs a virtualization function on the three main categories
of devices in the network service environment: compute devices, storage devices, and network devices.
 NFV management and orchestration: Encompasses the orchestration and lifecycle management of
physical/software resources that support the infrastructure virtualization, and the lifecycle management
of VNFs.
Two types of relations between VNFs are supported (see the sketch following this list):
 VNF forwarding graph (VNF FG): Covers the case where network connectivity between VNFs is
specified, such as a chain of VNFs on the path to a web server tier (for example, firewall, network
address translator, load balancer).
 VNF set: Covers the case where the connectivity between VNFs is not specified, such as a web
server pool.
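The sketch below is a purely illustrative way of representing the two relation types just listed: a VNF forwarding graph as an ordered chain through which a flow is steered (service chaining), and a VNF set as an unordered pool. The VNF names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass(frozen=True)
class VNF:
    name: str  # e.g., "firewall", "nat", "load-balancer"

# VNF forwarding graph: connectivity (and therefore order) is specified,
# so traffic is steered through the VNFs in sequence.
vnf_forwarding_graph: List[VNF] = [VNF("firewall"), VNF("nat"), VNF("load-balancer")]

# VNF set: connectivity between members is not specified, e.g., a web server pool.
web_server_pool: Set[VNF] = {VNF("web-1"), VNF("web-2"), VNF("web-3")}

def steer(flow: str, chain: List[VNF]) -> None:
    """Conceptual service chaining: pass the flow through each VNF in order."""
    for vnf in chain:
        print(f"{flow} -> {vnf.name}")

steer("flow(endpoint-A -> web tier)", vnf_forwarding_graph)
```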

7.4-NFV BENEFITS AND REQUIREMENTS


NFV Benefits
The following are the most important potential benefits:
 Reduced CapEx, by using commodity servers and switches, consolidating equipment, exploiting
economies of scale, and supporting pay-as-you-grow models to eliminate wasteful overprovisioning.
 Reduced OpEx, in terms of power consumption and space usage, by using commodity servers and
switches, consolidating equipment, and exploiting economies of scale, and reduced network
management and control expenses. Reduced CapEx and OpEx are perhaps the main drivers for NFV.
 The ability to innovate and roll out services quickly, reducing the time to deploy new networking
services to support changing business requirements, seize new market opportunities, and improve return
on investment of new services. Also lowers the risks associated with rolling out new services, allowing
providers to easily trial and evolve services to determine what best meets the needs of customers.
 Ease of interoperability because of standardized and open interfaces.
 Use of a single platform for different applications, users and tenants. This allows network operators to
share resources across services and across different customer bases.
 Agility and flexibility, achieved by quickly scaling services up or down to address changing demands.
 Targeted service introduction based on geography or customer sets is possible. Services can be rapidly
scaled up/down as required.
 Support for a wide variety of ecosystems and encouragement of openness. NFV opens the virtual appliance market to
pure software entrants, small players, and academia, encouraging more innovation to bring new services and
new revenue streams quickly at much lower risk.

NFV Requirements
NFV must be designed and implemented to meet a number of requirements and technical challenges, including
the following:
 Portability/interoperability: The capability to load and execute VNFs provided by different vendors on
a variety of standardized hardware platforms. The challenge is to define a unified interface that clearly
decouples the software instances from the underlying hardware, as represented by VMs and their
hypervisors.
 Performance trade-off: Because the NFV approach is based on industry standard hardware (that is,
avoiding any proprietary hardware such as acceleration engines), a probable decrease in performance has
to be taken into account. The challenge is how to keep the performance degradation as small as possible
by using appropriate hypervisors and modern software technologies, so that the effects on latency,
throughput, and processing overhead are minimized.
 Migration and coexistence with respect to legacy equipment: The NFV architecture must support a
migration path from today’s proprietary physical network appliance-based solutions to more open
standards-based virtual network appliance solutions. In other words, NFV must work in a hybrid network
composed of classical physical network appliances and virtual network appliances.
 Management and orchestration: A consistent management and orchestration architecture is required.
NFV presents an opportunity, through the flexibility afforded by software network appliances operating
in an open and standardized infrastructure, to rapidly align management and orchestration northbound
interfaces to well defined standards and abstract specifications.
 Automation: NFV will scale only if all the functions can be automated. Automation of process is
paramount to success.
 Security and resilience: The security, resilience, and availability of operators' networks should not be impaired
when VNFs are introduced.
 Network stability: Ensuring that the stability of the network is not impacted when managing and orchestrating
a large number of virtual appliances across different hardware vendors and hypervisors. This is
particularly important when, for example, virtual functions are relocated or during reconfiguration
events (for example, because of hardware and software failures) or cyber-attacks.
 Simplicity: Ensuring that virtualized network platforms will be simpler to operate than those that exist
today. A significant focus for network operators is simplification of the plethora of complex network
platforms and support systems that have evolved over decades of network technology evolution, while
maintaining continuity to support important revenue generating services.
 Integration: Network operators need to be able to “mix and match” servers from different vendors,
hypervisors from different vendors, and virtual appliances from different vendors without incurring
significant integration costs and avoiding lock-in. The ecosystem must offer integration services and
maintenance and third-party support; it must be possible to resolve integration issues between several
parties.

7.5-NFV REFERENCE ARCHITECTURE


Figure 7.8 shows a more detailed look at the ISG NFV reference architectural framework. You can view this
architecture as consisting of four major blocks.
 NFV infrastructure (NFVI): Comprises the hardware and software resources that create the environment
in which VNFs are deployed. NFVI virtualizes physical computing, storage, and networking and places
them into resource pools.
 VNF/EMS: The collection of VNFs implemented in software to run on virtual computing, storage, and
networking resources, together with a collection of element management systems (EMS) that manage the
VNFs.
 NFV management and orchestration (NFV-MANO): Framework for the management and
orchestration of all resources in the NFV environment. This includes computing, networking, storage, and
VM resources.
 OSS/BSS: Operational and business support systems implemented by the VNF service provider.
It is also useful to view the architecture as consisting of three layers. The NFVI together with the virtualized
infrastructure manager provide and manage the virtual resource environment and its underlying physical
resources. The VNF layer provides the software implementation of network functions, together with element
management systems and one or more VNF managers. Finally, there is a management, orchestration, and control
layer consisting of OSS/BSS and the NFV orchestrator.
NFV Management and Orchestration
The NFV management and orchestration facility includes the following functional blocks:
 NFV orchestrator: Responsible for installing and configuring new network services (NS) and virtual
network function (VNF) packages, NS lifecycle management, global resource management, and
validation and authorization of NFVI resource requests.
 VNF manager: Oversees lifecycle management of VNF instances.
 Virtualized infrastructure manager: Controls and manages the interaction of a VNF with computing,
storage, and network resources under its authority, in addition to their virtualization.
Reference Points
The main reference points include the following considerations:
 Vi-Ha: Marks interfaces to the physical hardware. A well-defined interface specification makes it easier
for operators to share physical resources for different purposes, reassign resources for different
purposes, evolve software and hardware independently, and obtain software and hardware
components from different vendors.
 Vn-Nf: These interfaces are APIs used by VNFs to execute on the virtual infrastructure. Application
developers, whether migrating existing network functions or developing new VNFs, require a consistent
interface that provides functionality and the ability to specify performance, reliability, and scalability
requirements.
 Nf-Vi: Marks interfaces between the NFVI and the virtualized infrastructure manager (VIM). This
interface can facilitate specification of the capabilities that the NFVI provides for the VIM. The VIM must
be able to manage all the NFVI virtual resources, including allocation, monitoring of system utilization,
and fault management.
 Or-Vnfm: This reference point is used for sending configuration information to the VNF manager and
collecting state information of the VNFs necessary for network service lifecycle management.
 Vi-Vnfm: Used for resource allocation requests by the VNF manager and the exchange of resource
configuration and state information.
 Or-Vi: Used for resource allocation requests by the NFV orchestrator and the exchange of resource
configuration and state information.
 Os-Ma: Used for interaction between the orchestrator and the OSS/BSS systems.
 Ve-Vnfm: Used for requests for VNF lifecycle management and exchange of configuration and state
information.
 Se-Ma: Interface between the orchestrator and a data set that provides information regarding the VNF
deployment template, VNF forwarding graph, service-related information, and NFV infrastructure
information models.
Implementation
A prominent open source implementation effort is the Open Platform for NFV (OPNFV) project, hosted by the Linux Foundation. The key objectives of OPNFV are as follows:
 Develop an integrated and tested open source platform that can be used to investigate and demonstrate
core NFV functionality.
 Secure proactive participation of leading end users to validate that OPNFV releases address participating
operators’ needs.
 Influence and contribute to the relevant open source projects that will be adopted in the OPNFV reference
platform.
 Establish an open ecosystem for NFV solutions based on open standards and open source software.
 Promote OPNFV as the preferred open reference platform to avoid unnecessary and costly duplication
of effort.
The initial scope of OPNFV is building the NFVI and VIM, including application programming interfaces
(APIs) to other NFV elements, which together form the basic infrastructure required for VNFs and MANO
components. This scope is highlighted in Figure 7.9 as consisting of the NFVI and the VIM. With this platform as a
common base, vendors can add value by developing VNF software packages and associated VNF manager and
orchestrator software.

8- NFV Functionality
8.1-NFV INFRASTRUCTURE
The heart of the NFV architecture is a collection of resources and functions known as the NFV infrastructure
(NFVI). The NFVI encompasses three domains, as illustrated in Figure 8.1 and described in the list that follows:
 Compute domain: Provides commercial off-the-shelf (COTS) high-volume servers and storage.
 Hypervisor domain: Mediates the resources of the compute domain to the VMs of the software
appliances, providing an abstraction of the hardware.
 Infrastructure network domain (IND): Comprises all the generic high volume switches interconnected
into a network that can be configured to supply infrastructure network services.
Container Interface
The ETSI documents make a distinction between a functional block interface and a container interface, as follows:
 Functional block interface: An interface between two blocks of software that perform separate (perhaps
identical) functions. The interface allows communication between the two blocks. The two functional blocks
may or may not be on the same physical host.
 Container interface: An execution environment on a host system within which a functional block executes.
The functional block is on the same physical host as the container that provides the container interface.
Figure 8.2 relates container and functional block interfaces to the domain structure of NFVI.
The ETSI NFVI Architecture Overview document makes the following points concerning this figure:
 The architecture of the VNFs is separated from the architecture hosting the VNFs (that is, the NFVI).
 The architecture of the VNFs may be divided into a number of domains with consequences for the NFVI
and vice versa.
 Given the current technology and industrial structure, compute (including storage), hypervisors, and
infrastructure networking are already largely separate domains and are maintained as separate domains
within the NFVI.
 Management and orchestration tends to be sufficiently distinct from the NFVI as to warrant being defined
as its own domain; however, the boundary between the two is often only loosely defined with functions
such as element management functions in an area of overlap.
 The interface between the VNF domains and the NFVI is a container interface and not a functional block
interface.
 The management and orchestration functions are also likely to be hosted in the NFVI (as VMs) and
therefore also likely to sit on a container interface.

Deployment of NFVI Containers


A single compute or network host can host multiple virtual machines (VMs), each of which can host a single
VNF component. The portion of a VNF hosted on a single VM is referred to as a VNF component (VNFC). A network function may be
virtualized by a single VNFC, or multiple VNFCs may be combined to form a single VNF. Part a of Figure 8.3
shows the organization of VNFCs on a single compute node. The compute container interface hosts a hypervisor,
which in turn can host multiple VMs, each hosting a VNFC.

When a VNF is composed of multiple VNFCs, it is not necessary that all the VNFCs execute in the same host.
As shown in part b of Figure 8.3, the VNFCs can be distributed across multiple compute nodes interconnected by
network hosts forming the infrastructure network domain.
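The following sketch (illustrative only; the node and VNFC names are hypothetical) shows one simple way to picture the placement of the VNFCs of a single VNF across multiple compute nodes.

```python
from itertools import cycle
from typing import Dict, List

def place_round_robin(vnfcs: List[str], compute_nodes: List[str]) -> Dict[str, str]:
    """Assign each VNFC of a VNF to a compute node in round-robin fashion,
    illustrating that the VNFCs of one VNF need not share a host."""
    assignment: Dict[str, str] = {}
    nodes = cycle(compute_nodes)
    for vnfc in vnfcs:
        assignment[vnfc] = next(nodes)
    return assignment

placement = place_round_robin(
    vnfcs=["vnfc-1", "vnfc-2", "vnfc-3"],            # components of one VNF
    compute_nodes=["compute-node-A", "compute-node-B"],
)
for vnfc, node in placement.items():
    print(f"{vnfc} runs in a VM on {node}")
```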
Logical Structure of NFVI Domains
The NFVI domain logical structure provides a framework for such development and identifies the interfaces
between the main components, as shown in Figure 8.4.
Compute Domain
The principal elements in a typical compute domain may include the following:
 CPU/memory: A COTS processor, with main memory, that executes the code of the VNFC.
 Internal storage: Nonvolatile storage housed in the same physical structure as the processor, such as
flash memory.
 Accelerator: Accelerator functions for security, networking, and packet processing may also be included.
 External storage with storage controller: Access to secondary memory devices.
 Network interface card (NIC): Provides the physical interconnection with the infrastructure network
domain.
 Control and admin agent: Connects to the virtualized infrastructure manager (VIM).
 Eswitch: Server embedded switch. However, functionally it forms an integral part of the infrastructure
network domain.
 Compute/storage execution environment: This is the execution environment presented to the hypervisor
software by the server or storage device.
 Control plane workloads: Concerned with signaling and control plane protocols such as BGP. Typically,
these workloads are more processor rather than I/O intensive and do not place a significant burden on the
I/O system.
 Data plane workloads: Concerned with the routing, switching, relaying or processing of network traffic
payloads. Such workloads can require high I/O throughput.

NFVI Implementation Using Compute Domain Nodes


The VNFCs run as software on hypervisor domain containers that in turn run on hardware in the compute domain.
Although virtual links and networks are defined through the infrastructure network domain, the actual
implementation of network functions at the VNF level consists of software on compute domain nodes. The IND
interfaces with the compute domain and not directly with the hypervisor domain or the VNFs.
An NFVI node is a collection of physical devices deployed and managed as a single entity, providing the NFVI
functions required to support the execution environment for VNFs. NFVI nodes are in the compute domain and
encompass the following types of compute domain nodes:
 Compute node: A functional entity that is capable of executing a generic computational instruction
set (each instruction being fully atomic and deterministic) in such a way that the execution cycle
time is on the order of units to tens of nanoseconds, irrespective of what specific state is required for
cycle execution. In practical terms, this defines a compute node in terms of memory access time.
 Gateway node: A single identifiable, addressable, and manageable element within an NFVI-Node that
implements gateway functions. Gateway functions provide the interconnection between NFVI-PoPs
and the transport networks. A gateway may operate at the transport level, dealing with IP and data-link
packets, or at the application level.
 Storage node: A single identifiable, addressable, and manageable element within an NFVI-Node that
provides storage resource using compute, storage, and networking functions. Storage may be physically
implemented in a variety of ways. An example of such a storage node may be a physical device
accessible via a remote storage technology, such as Network File System (NFS) and Fibre Channel.
 Network node: A single identifiable, addressable, and manageable element within an NFVI-Node that
provides networking (switching/routing) resources using compute, storage, and network forwarding
functions.
A compute domain within an NFVI node will often be deployed as a number of interconnected physical devices.
Physical compute domain nodes may include a number of physical resources, such as a multicore processor,
memory subsystems, and NICs. An interconnected set of these nodes comprise one NFVI-Node and constitutes
one NFVI point of presence (NFVI-PoP).
The deployment scenarios include the following:
 Monolithic operator: One organization owns and houses the hardware equipment and deploys and
operates the VNFs and the hypervisors they run on. A private cloud or a data center are examples of this
deployment model.
 Network operator hosting virtual network operators: Based on the monolithic operator scenario, with
the addition that the monolithic operator hosts other virtual network operators within the same facility. A
hybrid cloud is an example of this deployment model.
 Hosted network operator: An IT services organization (for example, HP, Fujitsu) operates the compute
hardware, infrastructure network, and hypervisors on which a separate network operator (for example,
BT, Verizon) runs VNFs. These are physically secured by the IT services organization.
 Hosted communications providers: Similar to the hosted network operator scenario, but in this case
multiple communications providers are hosted. A community cloud is an example of this deployment
model.
 Hosted communications and application providers: Similar to the previous scenario. In addition to hosting
network and communications providers, servers in a data center facility are offered to the public for
deploying virtualized applications. A public cloud is an example of this deployment model.
 Managed network service on customer premises: Similar to the monolithic operator scenario. In this
case, the NFV provider’s equipment is housed on the customer’s premises. One example of this model is
a remotely managed gateway in a residential or enterprise location.
 Managed network service on customer equipment: Similar to the monolithic operator scenario. In this
case, the equipment is housed on the customer’s premises on customer equipment. This scenario could be
used for managing an enterprise network.

Hypervisor Domain
The hypervisor domain is a software environment that abstracts hardware and implements services, such as
starting a VM, terminating a VM, acting on policies, scaling, live migration, and high availability. The principal
elements in the hypervisor domain are the following:
 Compute/storage resource sharing/management: Manages these resources and provides virtualized
resource access for VMs.
 Network resource sharing/management: Manages these resources and provides virtualized resource
access for VMs.
 Virtual machine management and API: This provides the execution environment of a single VNFC
instance.
 Control and admin agent: Connects to the virtualized infrastructure manager (VIM).
 Vswitch: The vswitch function, described in the next paragraph, is implemented in the hypervisor
domain. However, functionally it forms an integral part of the infrastructure network domain.

Infrastructure Network Domain


The infrastructure network domain (IND) performs a number of roles. It provides
 The communication channel between the VNFCs of a distributed VNF
 The communications channel between different VNFs
 The communication channel between VNFs and their orchestration and management
 The communication channel between components of the NFVI and their orchestration and management
 The means of remote deployment of VNFCs
 The means of interconnection with the existing carrier network
Virtualization in IND creates virtual networks for interconnecting VNFCs with each other and with network nodes
outside the NFV ecosystem.
Virtual Networks
In general terms, a virtual network is an abstraction of physical network resources as seen by some upper software
layer. Virtual network technology enables a network provider to support multiple virtual networks that are isolated
from one another. Users of a single virtual network are not aware of the details of the underlying physical network
or of the other virtual network traffic sharing the physical network resources. Two common approaches for
creating virtual networks are (1) protocol-based methods that define virtual networks based on fields in protocol
headers, and (2) virtual-machine-based methods, in which networks are created among a set of VMs by the
hypervisor. The NFVI network virtualization combines both these forms.
L2 Versus L3 Virtual Networks
Protocol-based virtual networks can be classified by whether they are defined at protocol Layer 2 (L2), which is
typically the LAN media access control (MAC) layer, or Layer 3 (L3), which is typically the Internet Protocol
(IP). With an L2 VN, a virtual LAN is identified by a field in the MAC header, such as the MAC address or a
virtual LAN ID field inserted into the header. So, for example, within a data center, all the servers and end systems
connected to a single Ethernet switch could support virtual LANs among the connected devices. Now suppose
there are IP routers connecting segments of the data center, as illustrated in Figure 8.5. Normally, an IP router
will strip off the MAC header of incoming Ethernet frames and insert a new MAC header when forwarding the
packet to the next network. The L2 VN could be extended across this router only if the router had additional
capability to support the L2 VN, such as being able to reinsert the virtual LAN ID field in the outgoing MAC
frame. Similarly, if an enterprise had two data centers connected by a router and a dedicated line, that router
would need the L2 VN capability to extend a VN.
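As a hedged illustration of a protocol-based L2 virtual network, the sketch below constructs an Ethernet frame carrying an 802.1Q VLAN tag using the scapy library (assuming scapy is installed; the MAC/IP addresses and VLAN ID are examples). The VLAN ID carried in the tag is the field that identifies the virtual LAN.

```python
from scapy.all import Ether, Dot1Q, IP  # pip install scapy

# An L2 virtual LAN is identified by a field carried at the MAC layer;
# here, the 802.1Q tag inserted between the Ethernet and IP headers.
frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1Q(vlan=100)                       # VLAN ID identifying the virtual LAN
    / IP(src="10.0.0.1", dst="10.0.0.2")
)

frame.show()                  # display the layered headers
print(bytes(frame).hex())     # raw frame bytes, including the VLAN tag
```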
8.2-VIRTUALIZED NETWORK FUNCTIONS
A VNF is a virtualized implementation of a traditional network function. The table below contains examples of
functions that could be virtualized.

VNF Interfaces
As discussed earlier, a VNF consists of one or more VNF components (VNFCs). The VNFCs of a single VNF
are connected internal to the VNF. This internal structure is not visible to other VNFs or to the VNF user.

Figure 8.6 shows the interfaces relevant to a discussion of VNFs as described in the list that follows.
 SWA-1: This interface enables communication between a VNF and other VNFs, PNFs, and endpoints.
Note that the interface is to the VNF as a whole and not to individual VNFCs. SWA-1 interfaces are
logical interfaces that primarily make use of the network connectivity services available at the SWA-5
interface.
 SWA-2: This interface enables communications between VNFCs within a VNF. This interface is vendor
specific and therefore not a subject for standardization. This interface may also make use of the network
connectivity services available at the SWA-5 interface. However, if two VNFCs within a VNF are
deployed on the same host, other technologies may be used to minimize latency and enhance throughput,
as described below.
 SWA-3: This is the interface to the VNF manager within the NFV management and orchestration module.
The VNF manager is responsible for lifecycle management (creation, scaling, termination, and so on).
The interface typically is implemented as a network connection using IP.
 SWA-4: This is the interface for runtime management of the VNF by the element manager.
 SWA-5: This interface describes the execution environment for a deployable instance of a VNF. Each
VNFC maps to a virtualized container interface to a VM.

VNFC to VNFC Communication


The VNF appears as a single functional system in the network it supports. However, internal connectivity between
VNFCs within the same VNF or across co-located VNFs needs to be specified by the VNF provider, supported
by the NFVI, and managed by the VNF manager. The VNF Architecture document describes a number of
architecture design models that are intended to provide desired performance and quality of service (QoS), such
as access to storage or compute resources.
Figure 8.7, from the ETSI VNF Architecture document, illustrates six scenarios using different network
technologies to support communication between VNFCs.
1. Communication through a hardware switch. In this case, the VMs supporting the VNFCs bypass the
hypervisor to directly access the physical NIC. This provides enhanced performance for VNFCs on different
physical hosts.
2. Communication through the vswitch in the hypervisor. This is the basic method of communication
between co-located VNFCs but does not provide the QoS or performance that may be required for some
VNFs.
3. Greater performance can be achieved by using appropriate data processing acceleration libraries and
drivers compatible with the CPU being used. The library is called from the vswitch. An example of a suitable
commercial product is the Data Plane Development Kit (DPDK), which is a set of data plane libraries and
network interface controller drivers for fast packet processing on Intel architecture platforms. Scenario 3
assumes a Type 1 hypervisor (see Figure 7.3).
4. Communication through an embedded switch (eswitch) deployed in the NIC with Single Root I/O
Virtualization (SR-IOV). SR-IOV is a PCI-SIG specification that defines a method to split a device into
multiple PCI express requester IDs (virtual functions) in a fashion that allows an I/O memory management
unit (MMU) to distinguish different traffic streams and apply memory and interrupt translations so that these
traffic streams can be delivered directly to the appropriate VM, and in a way that prevents nonprivileged
traffic flows from impacting other VMs.
5. Embedded switch deployed in the NIC hardware with SR-IOV, and with data plane acceleration software
deployed in the VNFC.
6. A serial bus connects directly two VNFCs that have extreme workloads or very low-latency requirements.
This is essentially an I/O channel means of communication rather than a NIC means.

FIGURE 8.7 VNFC to VNFC Communication


VNF Scaling
An important property of VNFs is referred to as elasticity, which simply means the ability to scale up/down or
scale out/in. Every VNF has associated with it an elasticity parameter of no elasticity, scale up/down only, scale
out/in only, or both scale up/down and scale out/in.
A VNF is scaled by scaling one or more of its constituent VNFCs. Scale out/in is implemented by
adding/removing VNFC instances that belong to the VNF being scaled. Scale up/down is implemented by
adding/removing resources from existing VNFC instances that belong to the VNF being scaled.
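The following sketch is a minimal, non-normative model of the two scaling styles: scale out/in changes the number of VNFC instances, while scale up/down changes the resources of existing instances. The class names and resource figures are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VNFCInstance:
    vcpus: int
    ram_mb: int

@dataclass
class ScalableVNF:
    name: str
    instances: List[VNFCInstance] = field(default_factory=list)

    def scale_out(self, template: VNFCInstance) -> None:
        """Scale out: add another VNFC instance to the VNF."""
        self.instances.append(VNFCInstance(template.vcpus, template.ram_mb))

    def scale_in(self) -> None:
        """Scale in: remove a VNFC instance (keeping at least one)."""
        if len(self.instances) > 1:
            self.instances.pop()

    def scale_up(self, extra_vcpus: int = 0, extra_ram_mb: int = 0) -> None:
        """Scale up: add resources to the existing VNFC instances."""
        for inst in self.instances:
            inst.vcpus += extra_vcpus
            inst.ram_mb += extra_ram_mb

firewall = ScalableVNF("virtual-firewall", [VNFCInstance(vcpus=2, ram_mb=2048)])
firewall.scale_out(VNFCInstance(vcpus=2, ram_mb=2048))   # out/in: more instances
firewall.scale_up(extra_vcpus=2)                         # up/down: bigger instances
print(firewall)
```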

8.3-NFV MANAGEMENT AND ORCHESTRATION


The NFV management and orchestration (MANO) component of NFV has as its primary function the
management and orchestration of an NFV environment. Figure 8.8, from the ETSI MANO document, shows the
basic structure of NFV-MANO and its key interfaces. As can be seen, there are five management blocks: three
within NFV-MANO, EMS associated with VNFs, and OSS/BSS.
Virtualized Infrastructure Manager
Virtualized infrastructure management (VIM) comprises the functions that are used to control and manage the
interaction of a VNF with computing, storage, and network resources under its authority, as well as their
virtualization.
A VIM performs the following:
 Resource management, in charge of the
o Inventory of software (for example, hypervisors), computing, storage and network resources
dedicated to NFV infrastructure.
o Allocation of virtualization enablers, for example, VMs onto hypervisors, compute resources,
storage, and relevant network connectivity
o Management of infrastructure resources and their allocation, for example, increasing resources to VMs,
improving energy efficiency, and reclaiming resources
 Operations, for
o Visibility into and management of the NFV infrastructure
o Root cause analysis of performance issues from the NFV infrastructure perspective
o Collection of infrastructure fault information
o Collection of information for capacity planning, monitoring, and optimization
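OpenStack is one widely used VIM implementation. The hedged sketch below uses the openstacksdk library to ask a VIM to allocate a VM (as in the resource management functions listed above) and to list the VMs it manages. The cloud, image, flavor, and network names are placeholders, and the sketch assumes an accessible OpenStack deployment.

```python
import openstack  # pip install openstacksdk; "mycloud" below is a placeholder cloud name

# Connect to a VIM (here, an OpenStack cloud defined in clouds.yaml).
conn = openstack.connect(cloud="mycloud")

# Resource allocation: ask the VIM to instantiate a VM that could host a VNFC.
# The image, flavor, and network names are placeholders for this sketch.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("vnf-net")

server = conn.compute.create_server(
    name="vnfc-firewall-1",                 # hypothetical VNFC instance name
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Operations/visibility: list the VMs the VIM is currently managing.
for s in conn.compute.servers():
    print(s.name, s.status)
```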

Virtual Network Function Manager


A VNF manager (VNFM) is responsible for the lifecycle management of VNFs. Multiple VNFMs may be
deployed for each VNF, or a VNFM may serve multiple VNFs. Among the functions that a VNFM performs are
the following:
 VNF instantiation, including VNF configuration if required by the VNF deployment template (for
example, VNF initial configuration with IP addresses before completion of the VNF instantiation
operation)
 VNF instantiation feasibility checking, if required
 VNF instance software update/upgrade
 VNF instance modification
 VNF instance scaling out/in and up/down
 VNF instance-related collection of NFVI performance measurement results and faults/events information,
and correlation to VNF instance-related events/faults
 VNF instance assisted or automated healing
 VNF instance termination
 VNF lifecycle management change notification
 Management of the integrity of the VNF instance through its lifecycle
 Overall coordination and adaptation role for configuration and event reporting between the VIM and the
EM.
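The sketch below is an illustrative (non-normative) model of the lifecycle responsibilities listed above; the class, method names, and example values are hypothetical and do not correspond to any standardized VNFM interface.

```python
class VNFManager:
    """Illustrative VNF manager: tracks and drives VNF instance lifecycles."""

    def __init__(self) -> None:
        self.instances = {}  # instance id -> state and configuration

    def instantiate(self, vnf_id: str, config: dict) -> None:
        # Feasibility checking and initial configuration would happen here.
        self.instances[vnf_id] = {"state": "INSTANTIATED", "config": config}

    def scale(self, vnf_id: str, direction: str) -> None:
        # direction is one of "out", "in", "up", "down".
        self.instances[vnf_id]["last_scaling"] = direction

    def heal(self, vnf_id: str) -> None:
        # Assisted or automated healing after fault/event correlation.
        self.instances[vnf_id]["state"] = "HEALED"

    def terminate(self, vnf_id: str) -> None:
        self.instances.pop(vnf_id, None)

vnfm = VNFManager()
vnfm.instantiate("vFW-1", {"mgmt_ip": "192.0.2.10"})   # example values
vnfm.scale("vFW-1", "out")
vnfm.terminate("vFW-1")
```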

NFV Orchestrator
The NFV orchestrator (NFVO) is responsible for resource orchestration and network service orchestration.
Resource orchestration manages and coordinates the resources under the management of different VIMs.
The NFVO coordinates, authorizes, releases, and engages NFVI resources among different PoPs or within one PoP.
It does so by engaging with the VIMs directly through their northbound APIs rather than engaging with the
NFVI resources directly.
Network services orchestration manages/coordinates the creation of an end-to-end service that involves VNFs
from different VNFM domains. Service orchestration does this in the following way:
 It creates an end-to-end service between different VNFs. It achieves this by coordinating with the
respective VNFMs so that it does not need to talk to VNFs directly. An example is creating a service
between the base station VNFs of one vendor and core node VNFs of another vendor.
 It can instantiate VNFMs, where applicable.
 It manages the topology of the network service instances (also called VNF forwarding
graphs).

Repositories
Associated with NFVO are four repositories of information needed for the management and orchestration
functions:
 Network services catalog: List of the usable network services. A deployment template for a network
service, in terms of VNFs and a description of their connectivity through virtual links, is stored in the NS catalog
for future use.
 VNF catalog: Database of all usable VNF descriptors. A VNF descriptor (VNFD) describes a VNF in
terms of its deployment and operational behavior requirements. It is primarily used by VNFM in the
process of VNF instantiation and lifecycle management of a VNF instance. The information provided in
the VNFD is also used by the NFVO to manage and orchestrate network services and virtualized resources
on NFVI. (A simplified, hypothetical descriptor is sketched after this list.)
 NFV instances: List containing details about network services instances and related VNF instances.
 NFVI resources: List of NFVI resources utilized for the purpose of establishing NFV services.
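A VNF descriptor is normally written in a structured, declarative template language; the sketch below is a deliberately simplified, hypothetical descriptor expressed as a Python dictionary (not the ETSI template format) to show the kind of deployment and operational information a VNF catalog entry carries.

```python
# Hypothetical, simplified VNF descriptor (VNFD) for illustration only.
vnfd = {
    "vnf_name": "virtual-firewall",
    "vendor": "example-vendor",
    "version": "1.0",
    "vnfcs": [
        {"name": "fw-dataplane", "vcpus": 4, "ram_mb": 8192,
         "instances": {"min": 1, "max": 4}},
        {"name": "fw-control", "vcpus": 2, "ram_mb": 4096,
         "instances": {"min": 1, "max": 1}},
    ],
    "connection_points": ["mgmt", "inside", "outside"],
    "elasticity": "scale_out_in",   # see the earlier VNF scaling discussion
}

# The VNF catalog is then simply a collection of such descriptors keyed by name/version.
vnf_catalog = {(vnfd["vnf_name"], vnfd["version"]): vnfd}
print(list(vnf_catalog.keys()))
```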

Element Management
The element manager (EM) is responsible for fault, configuration, accounting, performance, and security (FCAPS)
management functionality for a VNF. These management functions are also the responsibility of the VNFM, but the
EM can perform them through a proprietary interface with the VNF, in contrast to the VNFM. The EM must, however,
exchange information with the VNFM through the open reference point (VeEm-Vnfm). The EM may be
aware of virtualization and collaborate with VNFM to perform those functions that require exchange of
information regarding the NFVI resources associated with VNF. EM functions include the following:
 Configuration for the network functions provided by the VNF
 Fault management for the network functions provided by the VNF
 Accounting for the usage of VNF functions
 Collecting performance measurement results for the functions provided by the VNF
 Security management for the VNF functions

OSS/BSS
The OSS/BSS are the combination of the operator’s other operations and business support functions that are not
otherwise explicitly captured in the present architectural framework, but are expected to have information
exchanges with functional blocks in the NFV-MANO architectural framework. OSS/BSS functions may provide
management and orchestration of legacy systems and may have full end-to-end visibility of services provided by
legacy network functions in an operator’s network.

In principle, it would be possible to extend the functionalities of existing OSS/BSS to manage VNFs and NFVI
directly, but that may be a proprietary implementation of a vendor. Because NFV is an open platform, managing
NFV entities through open interfaces (as in MANO) makes more sense. The existing OSS/BSS, however,
can add value to the NFV MANO by offering additional functions if they are not supported by a certain
implementation of NFV MANO. This is done through an open reference point (Os-Ma) between NFV MANO
and existing OSS/BSS.

8.4-NFV USE CASES


There are currently nine use cases, which can be divided into the categories of architectural use cases and service-
oriented use cases.

1- Architectural Use Cases


The four architectural use cases focus on providing general-purpose services and applications based on the NFVI
architecture.

i- NFVI as a Service
NFVIaaS is a scenario in which a service provider implements and deploys an NFVI that may be used to
support VNFs both by the NFVIaaS provider and by other network service providers. For the NFVIaaS
provider, this service provides for economies of scale. The infrastructure is sized to support the provider’s
own needs for deploying VNFs, plus extra capacity that can be sold to other providers. The NFVIaaS
customer can offer services using the NFVI of another service provider. The NFVIaaS customer has
flexibility in rapidly deploying VNFs, either for new services or to scale out existing services. Cloud
computing providers may find this service particularly attractive.
ii- VNF as a Service
Whereas NFVIaaS is similar to the cloud model of Infrastructure as a Service (IaaS), VNFaaS corresponds
to the cloud model of Software as a Service (SaaS). NFVIaaS provides the virtualization infrastructure to
enable a network service provider to develop and deploy VNFs with reduced cost and time compared to
implementing the NFVI and the VNFs. With VNFaaS, a provider develops VNFs that are then available off
the shelf to customers. This model is well suited to virtualizing customer premises equipment such as routers
and firewalls.
iii- Virtual Network Platform as a Service
VNPaaS is similar to an NFVIaaS that includes VNFs as components of the virtual network infrastructure.
The primary differences are the programmability and development tools of the VNPaaS that allow the
subscriber to create and configure custom ETSI NFV-compliant VNFs to augment the catalog of VNFs
offered by the service provider.
iv- VNF Forwarding Graphs
VNF FG allows virtual appliances to be chained together in a flexible manner. This technique is called
service chaining. For example, a flow may pass through a network monitoring VNF, a load-balancing VNF,
and finally a firewall VNF in passing from one endpoint to another.

2- Service-Oriented Use Cases


These use cases focus on the provision of services to end customers, in which the underlying infrastructure is
transparent.
i- Virtualization of Mobile Core Network and IP Multimedia Subsystem
Mobile cellular networks have evolved to contain a variety of interconnected network function elements,
typically involving a large variety of proprietary hardware appliances. NFV aims at reducing the network
complexity and related operational issues by leveraging standard IT virtualization technologies to
consolidate different types of network equipment onto industry standard high-volume servers, switches, and
storage, located in NFVI-PoPs.
ii- Virtualization of Mobile Base Station
The focus of this use case is radio access network (RAN) equipment in mobile networks. RAN is the part
of a telecommunications system that implements a wireless technology to access the core network of the
mobile network service provider. At minimum, it involves hardware on the customer premises or in the
mobile device and equipment forming a base station for access to the mobile network.
iii- Virtualization of the Home Environment
This use case deals with network provider equipment located as customer premises equipment (CPE) in a
residential location. These CPE devices mark the operator/service provider presence at the customer
premises and usually include a residential gateway (RGW) for Internet and Voice over IP (VoIP) services
(for example, a modem/router for digital subscriber line [DSL] or cable), and a set-top box (STB) for media
services normally supporting local storage for personal video recording (PVR) services.
iv- Virtualization of CDNs
Delivery of content, especially of video, is one of the major challenges of all operator networks because of
the massive growing amount of traffic to be delivered to end customers of the network. The growth of video
traffic is driven by the shift from broadcast to unicast delivery via IP, by the variety of devices used for
video consumption and by increasing quality of video delivered via IP networks in resolution and frame
rate. Some Internet service providers (ISPs) are deploying proprietary Content Delivery Network (CDN)
cache nodes in their networks to improve delivery of video and other high-bandwidth services to their
customers.
v- Fixed Access Network Functions Virtualization
NFV offers the potential to virtualize remote functions in the hybrid fiber/copper access network and passive
optical network (PON) fiber to the home and hybrid fiber/wireless access networks. This use case has the
potential for cost savings by moving complex processing closer to the network. An additional benefit is that
virtualization supports multiple tenancy, in which more than one organizational entity can either be
allocated, or given direct control of, a dedicated partition of a virtual access node.

8.5-SDN AND NFV


The relationship between SDN and NFV is perhaps best viewed as SDN functioning as an enabler of NFV. A major
challenge with NFV is to best enable the user to configure a network so that VNFs running on servers are
connected to the network at the appropriate place, with the appropriate connectivity to other VNFs, and with
desired QoS. With SDN, users and orchestration software can dynamically configure the network and the
distribution and connectivity of VNFs. Without SDN, NFV requires much more manual intervention, especially
when resources beyond the scope of NFVI are part of the environment.
Some of the ways that ETSI believes that NFV and SDN complement each other include the following:
 The SDN controller fits well into the broader concept of a network controller in an NFVI network
domain.
 SDN can play a significant role in the orchestration of the NFVI resources, both physical and virtual,
enabling functionality such as provisioning, configuration of network connectivity, bandwidth
allocation, automation of operations, monitoring, security, and policy control.
 SDN can provide the network virtualization required to support multitenant NFVIs.
 Forwarding graphs can be implemented using the SDN controller to provide automated provisioning of
service chains, while ensuring strong and consistent implementation of security and other policies.
 The SDN controller can be run as a VNF, possibly as part of a service chain including other VNFs. For
example, applications and services originally developed to run on the SDN controller could also be
implemented as separate VNFs.
Figure 8.10, from the ETSI VNF Architecture document, indicates the potential relationship between SDN and
NFV. The arrows can be described as follows:
 SDN enabled switch/NEs include physical switches, hypervisor virtual switches, and embedded switches
on the NICs.
 Virtual networks created using an infrastructure network SDN controller provide connectivity services
between VNFC instances.
 SDN controller can be virtualized, running as a VNF with its EM and VNF manager. Note that there may
be SDN controllers for the physical infrastructure, the virtual infrastructure, and the virtual and physical
network functions. As such, some of these SDN controllers may reside in the NFVI or management and
orchestration (MANO) functional blocks (not shown in figure).
 SDN enabled VNF includes any VNF that may be under the control of an SDN controller (for example,
virtual router, virtual firewall).
 SDN applications, for example service chaining applications, can be VNFs themselves.
 Nf-Vi interface allows management of the SDN enabled infrastructure.
 Ve-Vnfm interface is used between the SDN VNF (SDN controller VNF, SDN network functions VNF,
SDN applications VNF) and their respective VNF Manager for lifecycle management.
 Vn-Nf allows SDN VNFs to access connectivity services between VNFC instances.
9-Network Virtualization
Virtual networks have two important benefits:
 They enable the user to construct and manage networks independent of the underlying physical network
and with assurance of isolation from other virtual networks using the same physical network.
 They enable network providers to efficiently use network resources to support a wide range of user
requirements.

9.1-VIRTUAL LANS
Figure 9.1 shows a relatively common type of hierarchical LAN configuration. In this example, the devices on
the LAN are organized into four segments, each served by a LAN switch. The LAN switch is a store-and-forward packet-forwarding device used to interconnect a number of end systems to form a LAN segment. The switch can forward a media access control (MAC) frame from a source-attached device to a destination-attached device. It can also broadcast a frame from a source-attached device to all other attached devices. Multiple switches can
be interconnected so that multiple LAN segments form a larger LAN. A LAN switch can also connect to a
transmission link or a router or other network device to provide connectivity to the Internet or other WANs.

FIGURE 9.1 A LAN Configuration


Traditionally, a LAN switch operated exclusively at the MAC level. Contemporary LAN switches generally
provide greater functionality, including multilayer awareness (Layers 3, 4, application), quality of service (QoS)
support, and trunking for wide-area networking.
The three lower groups in Figure 9.1 might correspond to different departments, which are physically separated,
and the upper group could correspond to a centralized server farm that is used by all the departments.
Consider the transmission of a single MAC frame from workstation X. Suppose the destination MAC address in
the frame is workstation Y. This frame is transmitted from X to the local switch, which then directs the frame
along the link to Y. If X transmits a frame addressed to Z or W, its local switch forwards the MAC frame through
the appropriate switches to the intended destination. All these are examples of unicast addressing, in which the
destination address in the MAC frame designates a unique destination. A MAC frame may also contain a
broadcast address, in which case the destination MAC address indicates that all devices on the LAN should
receive a copy of the frame. Thus, if X transmits a frame with a broadcast destination address, all the devices on
all the switches in Figure 9.1 receive a copy of the frame. The total collection of devices that receive broadcast
frames from each other is referred to as a broadcast domain.
In many situations, a broadcast frame is used for a purpose, such as network management or the transmission of
some type of alert, with a relatively local significance. Thus, in Figure 9.1, if a broadcast frame has information
that is useful only to a particular department, transmission capacity is wasted on the other portions of the LAN
and on the other switches.
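The unicast/broadcast distinction can be made concrete with a short sketch. The following Python fragment (illustrative only; the MAC address, table contents, and port numbers are invented) classifies a destination address and shows how a switch either forwards a frame on a single port or floods it to every port, which is why a broadcast from X reaches every device in the broadcast domain.

```python
def is_broadcast(mac: str) -> bool:
    """A frame addressed to ff:ff:ff:ff:ff:ff must reach every device
    in the broadcast domain."""
    return mac.lower() == "ff:ff:ff:ff:ff:ff"

def forward(frame_dst: str, mac_table: dict, ports: list) -> list:
    """Return the output ports a switch would use for this frame.

    mac_table maps a learned destination MAC to the single port it was
    learned on; unknown unicasts are flooded just like broadcasts.
    """
    if is_broadcast(frame_dst) or frame_dst not in mac_table:
        return ports                      # flood: every attached device gets a copy
    return [mac_table[frame_dst]]         # unicast: exactly one output port

# Hypothetical example: Y's MAC is known, so a frame from X to Y uses one port,
# while a broadcast from X is copied to every port on the switch.
table = {"00:1b:44:11:3a:b7": 2}          # MAC of workstation Y -> port 2
print(forward("00:1b:44:11:3a:b7", table, [1, 2, 3, 4]))   # [2]
print(forward("ff:ff:ff:ff:ff:ff", table, [1, 2, 3, 4]))   # [1, 2, 3, 4]
```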

The Use of Virtual LANs


A more effective alternative is the creation of VLANs. In essence, a virtual local-area network (VLAN) is a
logical subgroup within a LAN that is created by software rather than by physically moving and separating
devices. It combines user stations and network devices into a single broadcast domain regardless of the physical
LAN segment they are attached to and allows traffic to flow more efficiently within populations of mutual interest.
The VLAN logic is implemented in LAN switches and functions at the MAC layer. Because the objective is to
isolate traffic within the VLAN, a router is required to link from one VLAN to another. Routers can be
implemented as separate devices, so that traffic from one VLAN to another is directed to a router, or the router
logic can be implemented as part of the LAN switch, as shown in Figure 9.3.
VLANs enable any organization to be physically dispersed throughout the company while maintaining its group
identity. For example, accounting personnel can be located on the shop floor, in the research and development
center, in the cash disbursement office, and in the corporate offices, while all members reside on the same virtual
network, sharing traffic only with each other.
Figure 9.3 shows five defined VLANs. A transmission from workstation X to server Z is within the same VLAN,
so it is efficiently switched at the MAC level. A broadcast MAC frame from X is transmitted to all devices in all
portions of the same VLAN. But a transmission from X to printer Y goes from one VLAN to another.
Accordingly, router logic at the IP level is required to move the IP packet from X to Y. Figure 9.3 shows that
logic integrated into the switch, so that the switch determines whether the incoming MAC frame is destined for
another device on the same VLAN. If not, the switch routes the enclosed IP packet at the IP level.
Defining VLANs
A VLAN is a broadcast domain consisting of a group of end stations, perhaps on multiple physical LAN segments,
that are not constrained by their physical location and can communicate as if they were on a common LAN. A
number of different approaches have been used for defining membership, including the following (a simple lookup combining these methods is sketched after the list):
 Membership by port group: Each switch in the LAN configuration contains two types of ports: a
trunk port, which connects two switches; and an end port, which connects the switch to an end system.
A VLAN can be defined by assigning each end port to a specific VLAN. This approach has the
advantage that it is relatively easy to configure. The principal disadvantage is that the network manager must reconfigure VLAN membership when an end system moves from one port to another.
 Membership by MAC address: Because MAC layer addresses are hardwired into the workstation’s
network interface card (NIC), VLANs based on MAC addresses enable network managers to move a
workstation to a different physical location on the network and have that workstation automatically
retain its VLAN membership. The main problem with this method is that VLAN membership must be
assigned initially. In networks with thousands of users, this is no easy task. Also, in environments where
notebook PCs are used, the MAC address is associated with the docking station and not with the
notebook PC. Consequently, when a notebook PC is moved to a different docking station, its VLAN
membership must be reconfigured.
 Membership based on protocol information: VLAN membership can be assigned based on IP
address, transport protocol information, or even higher-layer protocol information. This is a quite
flexible approach, but it does require switches to examine portions of the MAC frame above the MAC
layer, which may have a performance impact.
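As a rough illustration of how these three approaches differ in practice, the following sketch (all port numbers, addresses, and subnets are invented) combines them into a single membership lookup: port group first, then MAC address, then protocol information.

```python
import ipaddress
from typing import Optional

# Hypothetical membership databases for one switch.
PORT_VLANS = {1: 10, 2: 10, 3: 20}                       # end port -> VLAN
MAC_VLANS = {"00:1b:44:11:3a:b7": 30}                     # NIC address -> VLAN
SUBNET_VLANS = {ipaddress.ip_network("10.3.0.0/16"): 40}  # IP prefix -> VLAN

def classify(port: int, src_mac: str, src_ip: str) -> Optional[int]:
    """Return the VLAN ID for an incoming frame, or None if unassigned."""
    if port in PORT_VLANS:                 # membership by port group
        return PORT_VLANS[port]
    if src_mac.lower() in MAC_VLANS:       # membership by MAC address
        return MAC_VLANS[src_mac.lower()]
    addr = ipaddress.ip_address(src_ip)    # membership by protocol information
    for net, vlan in SUBNET_VLANS.items():
        if addr in net:
            return vlan
    return None

print(classify(2, "00:aa:bb:cc:dd:ee", "192.0.2.1"))   # 10, via port group
print(classify(9, "00:1b:44:11:3a:b7", "192.0.2.1"))   # 30, via MAC address
print(classify(9, "00:aa:bb:cc:dd:ee", "10.3.7.8"))    # 40, via IP subnet
```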

Communicating VLAN Membership


Switches must have a way of understanding VLAN membership (that is, which stations belong to which VLAN)
when network traffic arrives from other switches; otherwise, VLANs would be limited to a single switch. One
possibility is to configure the information manually or with some type of network management signaling protocol,
so that switches can associate incoming frames with the appropriate VLAN.
A more common approach is frame tagging, in which a header is typically inserted into each frame on interswitch
trunks to uniquely identify to which VLAN a particular MAC-layer frame belongs.

IEEE 802.1Q VLAN Standard


The IEEE 802.1Q standard defines the operation of VLAN bridges and switches, permitting the definition, operation, and administration of VLAN topologies within a bridged/switched LAN infrastructure.
Recall that a VLAN is an administratively configured broadcast domain, consisting of a subset of end stations
attached to a LAN. A VLAN is not limited to one switch but can span multiple interconnected switches. In that
case, traffic between switches must indicate VLAN membership. This is accomplished in 802.1Q by inserting a
tag with a VLAN identifier (VID) with a value in the range from 1 to 4094. Each VLAN in a LAN configuration
is assigned a globally unique VID. By assigning the same VID to end systems on many switches, one or more
VLAN broadcast domains can be extended across a large network.
Figure 9.4 shows the position and content of the 802.1Q tag, referred to as Tag Control Information (TCI). The presence of the two-octet TCI field is indicated by inserting a Length/Type field in the 802.3 MAC frame with a value of 8100 hex. The TCI consists of three subfields, described in the list that follows and illustrated in the sketch after the list.
 User priority (3 bits): The priority level for this frame.
 Canonical format indicator (1 bit): Is always set to 0 for Ethernet switches. CFI is used for
compatibility between Ethernet type networks and Token Ring type networks. If a frame received at an
Ethernet port has a CFI set to 1, that frame should not be forwarded as it is to an untagged port.
 VLAN identifier (12 bits): The identification of the VLAN. Of the 4096 possible VIDs, a VID of 0 is
used to identify that the TCI contains only a priority value, and 4095 (0xFFF) is reserved, so the
maximum possible number of VLAN configurations is 4094.
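As a concrete illustration of the field layout just described, the following Python sketch packs and unpacks the 16-bit TCI together with the 0x8100 Tag Protocol Identifier that signals its presence; the sample priority and VID values are arbitrary.

```python
import struct

TPID = 0x8100  # Length/Type value signaling that a TCI follows

def build_tag(priority: int, cfi: int, vid: int) -> bytes:
    """Return the 4 octets (TPID + TCI) inserted into an 802.3 frame."""
    assert 0 <= priority <= 7 and cfi in (0, 1) and 1 <= vid <= 4094
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag: bytes):
    """Split a 4-octet tag back into (priority, cfi, vid)."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

tag = build_tag(priority=5, cfi=0, vid=100)
print(tag.hex())        # 8100a064 -> priority 5, CFI 0, VID 100
print(parse_tag(tag))   # (5, 0, 100)
```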
Nested VLANs
The original 802.1Q specification allowed for a single VLAN tag field to be inserted into an Ethernet MAC frame.
More recent versions of the standard allow for the insertion of two VLAN tag fields, allowing the definition of
multiple sub-VLANs within a single VLAN.
For example, a single VLAN level suffices for an Ethernet configuration contained entirely on a single premises. However,
it is not uncommon for an enterprise to make use of a network service provider to interconnect multiple LAN
locations, and to use metropolitan area Ethernet links to connect to the provider. Multiple customers of the service
provider may wish to use the 802.1Q tagging facility across the service provider network (SPN).
One possible approach is for the customer’s VLANs to be visible to the service provider. In that case, the service
provider could support a total of only 4094 VLANs for all its customers. Instead, the service provider inserts a
second VLAN tag into Ethernet frames. For example, consider two customers with multiple sites, both of which
use the same SPN (see part a of Figure 9.6). Customer A has configured VLANs 1 to 100 at their sites, and
similarly Customer B has configured VLANs 1 to 50 at their sites. The tagged data frames belonging to the
customers must be kept separate while they traverse the service provider’s network. The customer’s data frame
can be identified and kept separate by associating an additional, provider-assigned VLAN with that customer's traffic. This results in the already-tagged customer data frame being tagged again when it traverses the SPN (see part b of Figure 9.6). The additional tag is removed at the edge of the SPN when the data enters the customer's network again. Stacked VLAN tagging is commonly known as VLAN stacking or Q-in-Q.
FIGURE 9.6 Use of Stacked VLAN Tags (part b of the figure shows the position of the tags in the Ethernet frame)
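Building on the tag layout above, the following sketch shows how a provider edge could prepend a second (service) tag to an already-tagged customer frame and strip it again on egress. The outer TPID value 0x88A8 follows IEEE 802.1ad; some deployments use 0x8100 or 0x9100 instead, so treat that constant, and the example VIDs, as assumptions.

```python
import struct

C_TPID, S_TPID = 0x8100, 0x88A8   # customer tag TPID and assumed service tag TPID

def tag(tpid: int, vid: int) -> bytes:
    """Build a 4-octet tag; priority and CFI bits are left at 0 for brevity."""
    return struct.pack("!HH", tpid, vid & 0x0FFF)

# Customer frame as it arrives at the provider edge (MAC addresses, which would
# precede the tags in a real Ethernet frame, are omitted from this sketch).
customer_frame = tag(C_TPID, 40) + b"<rest of customer frame>"   # C-tag, VID 40

# Ingress PE: push the provider's S-tag in front of the customer's C-tag.
in_provider = tag(S_TPID, 1005) + customer_frame

# Egress PE: pop the S-tag, restoring the frame the customer originally sent.
s_tpid, s_tci = struct.unpack("!HH", in_provider[:4])
assert s_tpid == S_TPID
restored = in_provider[4:]
assert restored == customer_frame
print(hex(s_tci & 0x0FFF))   # 0x3ed -> provider-assigned VID 1005
```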

9.2-OPENFLOW VLAN SUPPORT


A traditional 802.1Q VLAN requires that the network switches have a complete knowledge of the VLAN
mapping. Another drawback is related to the choice of one of three ways of defining group membership (port
group, MAC address, protocol information). The network administrator must evaluate the trade-offs according to
the type of network they wish to deploy and choose one of the possible approaches. It would be difficult to deploy
a more flexible definition of a VLAN or even a custom definition (for example, use a combination of IP addresses
and ports) with traditional networking devices. Reconfiguring VLANs is also a daunting task for administrators:
Multiple switches and routers have to be reconfigured whenever VMs are relocated.
SDN, and in particular OpenFlow, allows for much more flexible management and control of VLANs. It should
be clear how OpenFlow can set up flow table entries for forwarding based on one or both VLAN tags, and how
tags can be added, modified, and removed.
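To make this concrete, the following sketch models a minimal OpenFlow-style flow table in plain Python rather than with any particular controller library; the entries are invented for illustration. Each entry matches on the VLAN ID and applies actions such as set-field, push-tag, or output, which is essentially what a controller programs into a switch with flow-mod messages.

```python
# Each flow entry: match fields, list of (action, argument) pairs, priority.
# A vlan_vid of None in the match means "untagged traffic".
FLOW_TABLE = [
    {"match": {"vlan_vid": 100}, "priority": 10,
     "actions": [("set_vlan_vid", 200), ("output", 3)]},   # rewrite tag, forward
    {"match": {"vlan_vid": None}, "priority": 5,
     "actions": [("push_vlan", 300), ("output", 1)]},       # tag untagged traffic
]

def apply_actions(frame: dict, actions):
    for act, arg in actions:
        if act == "set_vlan_vid":
            frame["vlan_vid"] = arg
        elif act == "push_vlan":
            frame.setdefault("vlan_stack", []).insert(0, frame.get("vlan_vid"))
            frame["vlan_vid"] = arg
        elif act == "pop_vlan":
            stack = frame.get("vlan_stack", [])
            frame["vlan_vid"] = stack.pop(0) if stack else None
        elif act == "output":
            frame["out_port"] = arg
    return frame

def process(frame: dict):
    """Pick the highest-priority matching entry and apply its actions."""
    for entry in sorted(FLOW_TABLE, key=lambda e: -e["priority"]):
        if frame.get("vlan_vid") == entry["match"]["vlan_vid"]:
            return apply_actions(frame, entry["actions"])
    return frame   # table miss: a real switch would typically ask the controller

print(process({"vlan_vid": 100}))   # VID rewritten to 200, output port 3
print(process({"vlan_vid": None}))  # VID 300 pushed, output port 1
```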

9.3-VIRTUAL PRIVATE NETWORKS (VPN)


A VPN is a private network that is configured within a public network (a carrier’s network or the Internet) to take
advantage of the economies of scale and management facilities of large networks. VPNs are widely used by
enterprises to create WANs that span large geographic areas, to provide site-to-site connections to branch offices,
and to allow mobile users to dial up their company LANs. Traffic designated as VPN traffic can only go from a
VPN source to a destination in the same VPN. It is often the case that encryption and authentication facilities are
provided for the VPN.
A typical scenario for an enterprise that uses VPNs is the following. At each corporate site, one or more LANs
link workstations, servers, and databases. The LANs are under the control of the enterprise and can be configured
and tuned for cost-effective performance. VPNs over the Internet or some other public network can be used to
interconnect sites, providing a cost savings over the use of a private network and offloading the WAN
management task to the public network provider. That same public network provides an access path for
telecommuters and other mobile employees to log on to corporate systems from remote sites.
1- IPsec VPNs
Use of a shared network, such as the Internet or a public carrier network, as part of an enterprise network
architecture exposes corporate traffic to eavesdropping and provides an entry point for unauthorized users. To
counter this problem, IPsec can be used to construct VPNs. The principal feature of IPsec that enables it to support
these varied applications is that it can encrypt/authenticate traffic at the IP level. Therefore, all distributed
applications, including remote logon, client/server, e-mail, file transfer, web access, and so on, can be secured.
Part a of Figure 9.7 shows the packet format for an IPsec option known as tunnel mode. Tunnel mode makes use of the combined authentication/encryption function of IPsec, called Encapsulating Security Payload (ESP), together with a key exchange function. For VPNs, both authentication and encryption are generally desired, because it is important
both to (1) ensure that unauthorized users do not penetrate the VPN, and (2) ensure that eavesdroppers on the
Internet cannot read messages sent over the VPN.
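The essence of tunnel mode is that the entire original IP packet becomes the protected payload of a new outer IP packet addressed between the two IPsec gateways. The following sketch illustrates only that encapsulation order, with placeholder "encryption"; a real implementation would use ESP with negotiated keys and algorithms, and all names and addresses here are invented.

```python
from dataclasses import dataclass

@dataclass
class IPPacket:
    src: str
    dst: str
    payload: bytes

def fake_encrypt(data: bytes) -> bytes:
    """Stand-in for ESP encryption/authentication; NOT real cryptography."""
    return bytes(b ^ 0x5A for b in data)

def esp_tunnel_encapsulate(inner: IPPacket, gw_src: str, gw_dst: str) -> IPPacket:
    """Wrap the original packet: outer IP header | ESP | protected(inner packet)."""
    inner_bytes = f"{inner.src}>{inner.dst}|".encode() + inner.payload
    esp_payload = b"ESP-HDR" + fake_encrypt(inner_bytes) + b"ESP-AUTH"
    return IPPacket(src=gw_src, dst=gw_dst, payload=esp_payload)

# A workstation's packet is tunneled between two site gateways; routers in the
# WAN see only the gateway addresses, never the inner addresses or data.
original = IPPacket("10.1.0.5", "10.2.0.9", b"confidential application data")
tunneled = esp_tunnel_encapsulate(original, "203.0.113.1", "198.51.100.7")
print(tunneled.src, "->", tunneled.dst, len(tunneled.payload), "bytes of ESP")
```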
Part b of Figure 9.7 is a typical scenario of IPsec usage. An organization maintains LANs at dispersed locations.
Nonsecure IP traffic is conducted on each LAN. For traffic offsite, through some sort of private or public WAN,
IPsec protocols are used. These protocols operate in networking devices, such as a router or firewall, that connect
each LAN to the outside world. The IPsec networking device will typically encrypt all traffic going into the WAN,
and decrypt and authenticate traffic coming from the WAN; these operations are transparent to workstations and
servers on the LAN. Secure transmission is also possible with individual users who connect to the WAN.
Using IPsec to construct a VPN has the following benefits:
 When IPsec is implemented in a firewall or router, it provides strong security that can be applied to all
traffic crossing the perimeter. Traffic within a company or workgroup does not incur the overhead of
security-related processing.
 IPsec in a firewall is resistant to bypass if all traffic from the outside must use IP and the firewall is the
only means of entrance from the Internet into the organization.
 IPsec is below the transport layer (TCP, UDP) and so is transparent to applications. There is no need to
change software on a user or server system when IPsec is implemented in the firewall or router. Even
if IPsec is implemented in end systems, upper-layer software, including applications, is not affected.
 IPsec can be transparent to end users. There is no need to train users on security mechanisms, issue
keying material on a per-user basis, or revoke keying material when users leave the organization.
 IPsec can provide security for individual users if needed. This is useful for offsite workers and for
setting up a secure virtual subnetwork within an organization for sensitive applications.

2- MPLS VPNs
An alternative, and popular, means of constructing VPNs is using MPLS. Multiprotocol Label Switching (MPLS)
is a set of Internet Engineering Task Force (IETF) specifications for including routing and traffic engineering
information in packets. MPLS comprises a number of interrelated protocols, which can be referred to as the MPLS
protocol suite. It can be used in IP networks but also in other types of packet-switching networks. MPLS is used
to ensure that all packets in a particular flow take the same route over a backbone.
In essence, MPLS is an efficient technique for forwarding and routing packets. MPLS was designed with IP
networks in mind, but the technology can be used without IP to construct a network with any link-level protocol.
In an MPLS network, a fixed-length label encapsulates an IP packet or a data link frame. The MPLS label contains
all the information needed by an MPLS-enabled router to perform routing, delivery, QoS, and traffic management
functions. Unlike IP, MPLS is connection oriented.
An MPLS network or internet consists of a set of nodes, called label-switching routers (LSRs), that are capable of switching and routing packets on the basis of a label appended to each packet. Labels define a flow of packets
between two endpoints or, in the case of multicast, between a source endpoint and a multicast group of destination
endpoints. For each distinct flow, called a forwarding equivalence class (FEC), a specific path through the
network of LSRs is defined, called a label-switched path (LSP). All packets in an FEC receive the same
treatment en route to the destination. These packets follow the same path and receive the same QoS treatment at
each hop. In contrast to forwarding in ordinary IP networks, the assignment of a particular packet to a particular
FEC is done just once, when the packet enters the network of MPLS routers.
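At each hop, an LSR consults only the incoming label to determine the outgoing label and interface, which is what makes forwarding along an LSP fast and connection oriented. A minimal sketch of that per-hop operation follows; the table contents are invented.

```python
# Per-LSR label forwarding table: (in_port, in_label) -> (out_port, out_label).
LFIB = {
    (1, 17): (3, 42),   # packets of one FEC arriving on port 1 with label 17
    (2, 99): (3, 42),   # same FEC arriving from a different upstream LSR (label merging)
}

def switch_label(in_port: int, in_label: int, packet: bytes):
    """Swap the top label and forward; the IP header is never examined here."""
    out_port, out_label = LFIB[(in_port, in_label)]
    return out_port, out_label, packet

print(switch_label(1, 17, b"payload"))   # (3, 42, b'payload')
```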

Layer 2 MPLS VPN


With a Layer 2 MPLS VPN, there is mutual transparency between the customer network and the provider network.
In effect, the customer requests a mesh of unicast LSPs among customer switches that attach to the provider
network. Each LSP is viewed as a Layer 2 circuit by the customer. In an L2VPN, the provider’s equipment
forwards customer data based on information in the Layer 2 headers, such as an Ethernet MAC address.
Figure 9.8 depicts key elements in an L2VPN. Customers connect to the provider by means of a Layer 2 device,
such as an Ethernet switch; the customer device that connects to the MPLS network is generally referred to as a
customer edge (CE) device. The MPLS edge router is referred to as a provider edge (PE) device. The link between
the CE and the PE operates at the link layer (for example, Ethernet), and is referred to as an attachment circuit
(AC). The MPLS network then sets up an LSP that acts as a tunnel between two edge routers (that is, two PEs)
that attach to two networks of the same enterprise. This tunnel can carry multiple virtual channels (VCs) using
label stacking. In a manner very similar to VLAN stacking, the use of multiple MPLS labels enables the nesting
of VCs.
When a link-layer frame arrives at the PE from the CE, the PE creates an MPLS packet. The PE pushes a label
that corresponds to the VC assigned to this frame. Then the PE pushes a second label onto the label stack for this
packet that corresponds to the tunnel between the source and destination PE for this VC. The packet is then routed
across the LSP associated with this tunnel, using the top label for label switched routing. At the destination edge,
the destination PE pops the tunnel label and examines the VC label. This tells the PE how to construct a link-layer frame to deliver the payload across to the destination CE.
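The two-label operation at the provider edges can be viewed as simple stack manipulation. In the following sketch (labels and identifiers are invented), the ingress PE pushes the VC label and then the tunnel label, and the egress PE pops the tunnel label and uses the VC label to select the attachment circuit toward the destination CE.

```python
def ingress_pe(frame: bytes, vc_label: int, tunnel_label: int):
    """Encapsulate a customer link-layer frame for transport across the SPN."""
    label_stack = [tunnel_label, vc_label]     # top of stack first
    return {"labels": label_stack, "payload": frame}

def egress_pe(mpls_packet, vc_to_attachment_circuit: dict):
    """Pop the tunnel label, then map the VC label to the outgoing AC."""
    labels = list(mpls_packet["labels"])
    labels.pop(0)                               # tunnel label has served its purpose
    vc_label = labels.pop(0)
    return vc_to_attachment_circuit[vc_label], mpls_packet["payload"]

pkt = ingress_pe(b"<customer Ethernet frame>", vc_label=204, tunnel_label=57)
print(egress_pe(pkt, {204: "AC-to-CE-site-2"}))
# ('AC-to-CE-site-2', b'<customer Ethernet frame>')
```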

Layer 3 MPLS VPN


Whereas L2VPNs are constructed using link-level addresses (for example, MAC addresses), L3VPNs are constructed using routes between CEs based on IP addresses. As with an L2VPN, an MPLS-based L3VPN typically
uses a stack of two labels. The inner label identifies a specific VPN instance; the outer label identifies a tunnel or
route through the MPLS provider network. The tunnel label is associated with an LSP and is used for label
swapping and forwarding. At the egress PE, the tunnel label is stripped off, and the VPN label is used to direct
the packet to the proper CE and to the proper logical flow at that CE.
For an L3VPN, the CE implements IP and is thus a router. The CE routers advertise their networks to the provider.
The provider network can then use an enhanced version of Border Gateway Protocol (BGP) to establish VPNs
between CEs. Inside the provider network, MPLS tools are used to establish routes between edge PEs supporting
a VPN. Thus, the provider’s routers participate in the customer’s L3 routing function.

9.4-NETWORK VIRTUALIZATION
NV is a far broader concept than VPNs, which only provide traffic isolation, or VLANs, which provide a basic
form of topology management. NV implies full administrative control for customizing virtual networks both in
terms of the physical resources used and the functionalities provided by the virtual networks.
The virtual network presents an abstracted network view whose virtual resources provide users with services
similar to those provided by physical networks. Because the virtual resources are software defined, the manager
or administrator of a virtual network potentially has a great deal of flexibility in altering topologies, moving
resources, and changing the properties and service of various resources. In addition, virtual network users can
include not only users of services or applications but also service providers. For example, a cloud service provider
can quickly add new services or expanded coverage by leasing virtual networks as needed.

A Simplified Example
Figure 9.9 shows a network consisting of three servers and five switches. One server is a trusted platform with a
secure operating system that hosts firewall software. All the servers run a hypervisor (virtual machine monitor)
enabling them to support multiple VMs. The resources for one enterprise (Enterprise 1) are hosted across the
servers and consist of three VMs (VM1a, VM1b, and VM1c) on physical server 1, two VMs (VM1d and VM1e)
on physical server 2, and firewall 1 on physical server 3. The virtual switches are used to set up any desired
connectivity between the VMs across the servers through the physical switches. The physical switches provide
the connectivity between the physical servers. Each enterprise network is layered as a separate virtual network on
top of the physical network. Thus, the virtual network for Enterprise 1 is indicated in Figure 9.9 by a dashed circle
and labeled VN1. The labeled circle VN2 indicates another virtual network.
Network Virtualization Architecture
An excellent overview of the many elements that contribute to an NV environment is provided by the conceptual
architecture defined in Y.3011 and shown in Figure 9.11.
The architecture depicts NV as consisting of four levels:
I. Physical resources
II. Virtual resources
III. Virtual networks
IV. Services
A single physical resource can be shared among multiple virtual resources. In turn, each logically isolated network partition (LINP), that is, each virtual network, consists of multiple virtual resources and provides a set of services to users.
Various management and control functions are performed at each level, not necessarily by the same provider.
There are management functions associated with each physical network and its associated resources. A virtual
resource manager (VRM) manages a pool of virtual resources created from the physical resources. A VRM
interacts with physical network managers (PNMs) to obtain resource commitments. The VRM constructs LINPs,
and an LINP manager is allocated to each LINP.

Benefits of Network Virtualization


Following are the benefits of NV.
 Flexibility: NV enables the network to be quickly moved, provisioned, and scaled to meet the ever-
changing needs of virtualized compute and storage infrastructures.
 Operational cost savings: Virtualization of the infrastructure streamlines the operational processes and
equipment used to manage the network. Similarly, base software can be unified and more easily supported,
with a single unified infrastructure to manage services. This unified infrastructure also allows for automation
and orchestration within and between different services and components. From a single set of management
components, administrators can coordinate resource availability and automate the procedures necessary to
make services available, reducing the need for human operators to manage the process and reducing the
potential for error.
 Agility: Modifications to the network’s topology or how traffic is handled can be tried in different ways,
without needing to modify the existing physical networks.
 Scalability: A virtual network can be rapidly scaled to respond to shifting demands by adding or removing
physical resources from the pool of available resources.
 Capital cost savings: A virtualized deployment can reduce the number of devices needed, providing capital
as well as operational costs savings.
 Rapid service provisioning/time to market: Physical resources can be allocated to virtual networks on
demand, so that within an enterprise resources can be quickly shifted as demand by different users or
applications changes. From a user perspective, resources can be acquired and released to minimize
utilization demand on the system. New services require minimal training and can be deployed with minimal
disruption to the network infrastructure.
 Equipment consolidation: NV enables the more efficient use of network resources, thus allowing for
consolidating equipment purchases to fewer, more off-the-shelf products.

9.5-OPENDAYLIGHT’S VIRTUAL TENANT NETWORK


Virtual Tenant Network (VTN) is an OpenDaylight (ODL) plug-in developed by NEC. It provides multitenant
virtual networks on an SDN, using VLAN technology. The VTN abstraction functionality enables users to design
and deploy a virtual network without knowing the physical network topology or bandwidth restrictions. VTN
allows the users to define the network with a look and feel of a conventional L2/L3 (LAN switch/IP router)
network. Once the network is designed on VTN, it is automatically mapped onto the underlying physical network,
and then configured on the individual switches leveraging the SDN control protocol.
VTN consists of two components:
 VTN Manager: An ODL controller plug-in that interacts with other modules to implement the
components of the VTN model. It also provides a REST interface to configure VTN components in the
controller.
 VTN Coordinator: An external application that provides a REST interface to users for VTN
virtualization. It interacts with the VTN Manager plug-in to implement the user configuration. It is also capable of orchestrating multiple controllers.
The table below shows the elements that are the building blocks for constructing a virtual network.
Name of Element        Description
vBridge                The logical representation of an L2 switch function.
vRouter                The logical representation of a router function.
vTep                   The logical representation of a tunnel endpoint (TEP).
vTunnel                The logical representation of a tunnel.
vBypass                The logical representation of connectivity between controlled networks.
Virtual interface      The representation of an endpoint on a virtual node.
Virtual link (vLink)   The logical representation of L1 connectivity between virtual interfaces.

The upper part of Figure 9.14 is a virtual network example. VRT is defined as the vRouter, and BR1 and BR2 are
defined as vBridges. Interfaces of the vRouter and vBridges are connected using vLinks. Once a user of VTN
Manager has defined a virtual network, the VTN Coordinator maps physical network resources to the constructed
virtual network. Mapping identifies which virtual network each packet transmitted or received by an OpenFlow
switch belongs to, as well as which interface in the OpenFlow switch transmits or receives that packet. There are
two mapping methods:
 Port mapping: This mapping method is used to map a physical port as an interface of virtual node
(vBridge/vTerminal). Port-map is enabled when the network topology is known in advance.
 VLAN mapping: This mapping method is used to map the VLAN ID of the VLAN tag in an incoming Layer 2 frame to a vBridge. This mapping is used when the affiliated network and its VLAN tag are known in advance. Using this method can reduce the number of configuration commands required.

FIGURE 9.14 VTN Mapping Example

Figure 9.14 shows a mapping example. An interface of BR1 is mapped to a port on OpenFlow switch SW1.
Packets received from that SW1 port are regarded as those from the corresponding interface of BR1. The interface
if1 of vBridge (BR1) is mapped to the port GBE0/1 of switch1 using port-map. Packets received or transmitted
by GBE0/1 of switch1 are considered as those from or to the interface if1 of vBridge. vBridge BR2 is mapped to
VLAN 200 using vlan-map. Packets having the VLAN ID of 200 received or transmitted by the port of any
switch in the network are mapped to the vBridge BR2.
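Such mappings would normally be configured through the VTN Coordinator's REST interface. The fragment below is a hedged sketch using Python's requests library; the resource paths, port identifier format, and controller/domain names are assumptions modeled on typical VTN Coordinator deployments and should be verified against the OpenDaylight VTN documentation for the release in use.

```python
import requests

BASE = "http://coordinator.example.com:8083/vtn-webapi"   # hypothetical host
AUTH = ("admin", "adminpass")                              # hypothetical credentials
HDRS = {"Content-Type": "application/json"}

def post(path, body):
    r = requests.post(f"{BASE}{path}", json=body, auth=AUTH, headers=HDRS)
    r.raise_for_status()
    return r

# 1. Create the tenant network and two vBridges (BR1, BR2).
post("/vtns.json", {"vtn": {"vtn_name": "Tenant1"}})
for vbr in ("BR1", "BR2"):
    post("/vtns/Tenant1/vbridges.json",
         {"vbridge": {"vbr_name": vbr, "controller_id": "odc1",
                      "domain_id": "(DEFAULT)"}})

# 2. Port mapping: interface if1 of BR1 mapped to port GBE0/1 of switch1
#    (the logical_port_id format is an assumption).
post("/vtns/Tenant1/vbridges/BR1/interfaces.json", {"interface": {"if_name": "if1"}})
requests.put(f"{BASE}/vtns/Tenant1/vbridges/BR1/interfaces/if1/portmap.json",
             json={"portmap": {"logical_port_id": "PP-OF:openflow:1-GBE0/1"}},
             auth=AUTH, headers=HDRS).raise_for_status()

# 3. VLAN mapping: any frame tagged VLAN 200 is mapped to BR2.
post("/vtns/Tenant1/vbridges/BR2/vlanmaps.json", {"vlanmap": {"vlan_id": 200}})
```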
Figure 9.15 shows the overall architecture of VTN. The VTN Manager is part of the OpenDaylight controller and
uses base network service functions to learn the topology and statistics of the underlying network. A user or
application creates virtual networks and specifies network behavior to the VTN Coordinator across a web or
REST interface. The VTN Coordinator translates these commands into detailed instructions to the VTN Manager,
which in turn uses OpenFlow to map virtual networks to the physical network infrastructure.
9.6-SOFTWARE-DEFINED INFRASTRUCTURE
With SDI, a data center or network infrastructure can autoconfigure itself at run time based on
application/business requirements and operator constraints. Automation in SDIs enables infrastructure operators
to achieve higher conformance to SLAs, avoid overprovisioning, and automate security and other network-related
functions.
Another key characteristic of SDI is that it is highly application driven. Applications tend to change much more
slowly than the ecosystem (hardware, system software, networks) that supports them. Individuals and enterprises
stay with chosen applications for long periods of time, whereas they replace the hardware and other infrastructure
elements at a fast pace. So, providers are at an advantage if the entire infrastructure is software defined and thus
able to cope with rapid changes in infrastructure technology.
SDN and NFV are the key enabling technologies for SDI. SDN provides network control systems with the
flexibility to steer and provision network resources dynamically. NFV virtualizes network functions as
prepackaged software services that are easily deployable in a cloud or network infrastructure environment. So
instead of hard-coding a service deployment and its network services, these can now be dynamically provisioned;
traffic is then steered through the software services, significantly increasing the agility with which these are
provisioned.

Following are some of the key features of an SDI offering:


 Distributed storage resources with fully inline data deduplication and compression.
 Fully automated and integrated backups that are application aware, with autoconfiguring and autotesting.
 Fully automated and integrated disaster recovery that is application aware, with autoconfiguring and
autotesting.
 Fully integrated hybrid cloud computing, with resources in the public cloud consumed as easily as local.
The ability to move between multiple cloud providers, based on cost, data sovereignty requirements, or
latency/locality needs.
 WAN optimization technology.
 A hypervisor or hypervisor/container hybrid running on the metal.
 Management software to allow administrators to manage the hardware and the hypervisor.
 Adaptive monitoring software that will detect new applications and operating systems and automatically
monitor them properly.
 Predictive analytics software that will determine when resources will exceed capacity, when hardware is
likely to fail, or when licensing can no longer be worked around.
 Automation and load maximization software that will make sure the hardware and software components
are used to their maximum capacity, given the existing hardware and existing licensing bounds.
 Orchestration software that will not only spin up groups of applications on demand or as needed, but will
provide an “App Store”-like experience for selecting new workloads and getting them up and running on
your local infrastructure in just a couple of clicks.
 Autobursting, as an adjunct of orchestration, will intelligently decide between hot-adding capacity to
legacy workloads (CPU, RAM, and so on) or spinning up new instances of modern burstable applications
to handle load.
 Hybrid identity services that work across private infrastructure and public cloud spaces. They will not
only manage identity but also provide complete user experience management solutions that work
anywhere.
 Complete software-defined networking stack, including Layer 2 extension between data centers as well
as the public and private cloud. This means that spinning up a workload will automatically configure
networking, firewalls, intrusion detection, application layer gateways, mirroring, load balancing, content
distribution network registration, certificates, and so forth.
 Chaos creation in the form of randomized automated testing for failure of all nonlegacy workloads and
infrastructure elements to ensure that the network still meets requirements.

Software-Defined Storage
As mentioned, SDN and NFV are key elements of SDI. A third, equally important element is the emerging
technology known as software-defined storage (SDS). SDS is a framework for managing a variety of storage
systems in the data center that are traditionally not unified. SDS provides the ability to manage these storage
assets to meet specific SLAs and to support a variety of applications.
Figure 9.16 illustrates the main elements of a typical SDS architecture. Physical storage consists of a number of
magnetic and solid-state disk arrays, possibly from multiple vendors. Separate from this physical storage plane is
a unified set of control software. This must include adaptation logic that can interface with a variety of vendor equipment, controlling and monitoring that equipment. On top of this adaptation layer are a number of basic
storage services. An application interface provides an abstracted view of data storage so that applications need
not be concerned with the location, attributes, or capacity of individual storage systems. There is also an
administrative interface to enable the SDS administrator to manage the distributed storage suite.
FIGURE 9.16 Software-Defined Storage Architecture

SDS puts the emphasis on storage services instead of storage hardware. By decoupling the storage control
software from the hardware, a storage resource can be used more efficiently and its administration simplified.
When additional resources are needed by an application, the storage control software automatically adds the
resources. Conversely, resources are freed up when not in use. The storage control software also automatically removes failed components and systems from service.
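The split between the application-facing interface and the vendor adaptation layer described above can be sketched as follows; all class names, tiers, and capacity figures are invented for illustration. Applications request capacity against a service level, and the control software chooses a suitable backend without exposing any array-specific detail.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Adaptation layer: one driver per vendor array, hiding its specifics."""
    tier = "capacity"   # default performance tier

    @abstractmethod
    def provision(self, gigabytes: int) -> str: ...

    @abstractmethod
    def free_capacity(self) -> int: ...

class FlashArray(StorageBackend):
    tier = "performance"
    def __init__(self): self._free = 500
    def provision(self, gigabytes):
        self._free -= gigabytes
        return f"flash-vol-{gigabytes}GB"
    def free_capacity(self): return self._free

class DiskArray(StorageBackend):
    def __init__(self): self._free = 5000
    def provision(self, gigabytes):
        self._free -= gigabytes
        return f"disk-vol-{gigabytes}GB"
    def free_capacity(self): return self._free

class StorageController:
    """Application interface: satisfy an SLA without exposing array details."""
    def __init__(self, backends): self.backends = backends
    def request_volume(self, gigabytes: int, sla_tier: str) -> str:
        candidates = [b for b in self.backends
                      if b.tier == sla_tier and b.free_capacity() >= gigabytes]
        if not candidates:
            raise RuntimeError("no backend can satisfy the SLA")
        return candidates[0].provision(gigabytes)

ctl = StorageController([FlashArray(), DiskArray()])
print(ctl.request_volume(200, "performance"))   # flash-vol-200GB
print(ctl.request_volume(1000, "capacity"))     # disk-vol-1000GB
```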

SDI Architecture
There is no standardized specification for SDI, and there are numerous differences in the different initiatives. A
typical example is the SDI architecture defined by Intel. This architecture is organized into three layers, as
illustrated in Figure 9.17 and described in the list that follows.
 Orchestration: A policy engine that allows higher level frameworks to manage composition dynamically
without interrupting ongoing operations.
 Composition: A low-level layer of system software that continually and automatically manages the pool
of hardware resources.
 Hardware pool: An abstracted pool of modular hardware resources.
The orchestration layer drives the architecture. Intel’s initial focus appears to be on cloud providers, but other
application areas, such as big data and other data center applications, lend themselves to the SDI approach. This
layer continually monitors status data, enabling it to solve service issues faster and to continually optimize
hardware resource assignment.
The composition layer is a control layer that manages VMs, storage, and network assets. In this architecture, the
VM is seen as a dynamic federation of compute, storage, and network resources assembled to run an application
instance. With software-defined allocation of resources, more flexibility is available in creating, provisioning,
managing, moving, and retiring VMs. Similarly, SDS provides the opportunity to use storage more efficiently.
Composition enables the logical disaggregation of compute, network, and storage resources, so that each VM
provides exactly what an application needs. Supporting this at the level of the hardware is Intel’s rack scale
architecture (RSA). RSA exploits extremely high data rate optical connection components to redesign the way
computer rack systems are implemented. In an RSA design, the speed of the silicon interconnects means that
individual components (processors, memory, storage, and network) no longer need to reside in the same box.
