
Module: Virtual Layer

Upon completion of this module, you should be able to:


• Describe the virtual layer and virtualization software
• Describe a resource pool and virtual resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 1

This module focuses on the entities of the virtual layer of the cloud computing reference
model: virtualization software, resource pools, and virtual resources.

Cloud Computing Reference Model
Virtual Layer

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 2

The virtual layer is deployed on the physical layer. It specifies the entities that operate at this
layer, such as virtualization software, resource pools, and virtual resources. The key function of
this layer is to abstract physical resources, such as compute, storage, and network, and make
them appear as virtual resources. Other key functions of this layer include executing the
requests generated by the control layer and forwarding requests to the physical layer to get
them executed. Examples of requests generated by the control layer include creating pools of
resources and creating virtual resources.

Lesson: Virtual Layer Overview
This lesson covers the following topics:
• Virtual layer
• Virtualization software
• Resource pool
• Virtual resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 3

This lesson covers an overview of the virtual layer, virtualization software, resource pools, and
virtual resources.

Introduction to Virtualization
Virtualization

Refers to the logical abstraction of physical resources, such as compute, network, and storage,
that enables a single hardware resource to support multiple concurrent instances of systems,
or multiple hardware resources to support a single instance of a system.

• Enables a resource to appear larger or smaller than it actually is


• Enables a multitenant environment, improving utilization of physical resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 4

Virtualization refers to the logical abstraction of physical resources, such as compute, network,
and storage, that enables a single hardware resource to support multiple concurrent instances
of systems, or multiple hardware resources to support a single instance of a system. For
example, a single disk drive can be partitioned and presented as multiple disk drives to a
compute system. Similarly, multiple disk drives can be concatenated and presented as a single
disk drive to a compute system.

With virtualization, it is also possible to make a resource appear larger or smaller than it
actually is. Further, the abstraction of physical resources enables a multitenant environment,
which improves utilization of the physical resources.

Benefits of Virtualization
• Optimizes utilization of IT resources
• Reduces cost and management complexity
• Reduces deployment time
• Increases flexibility

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 5

Virtualization offers several benefits when deployed to build a cloud infrastructure. It enables
the consolidation of IT resources, which helps service providers optimize the utilization of
infrastructure resources. Improving the utilization of IT assets helps service providers reduce
the costs associated with purchasing new hardware; it also reduces the space and energy costs
associated with maintaining the resources. Moreover, fewer people are required to administer
these resources, which further lowers the cost. Virtual resources are created using software,
which enables service providers to deploy infrastructure faster than deploying physical
resources. Virtualization also increases flexibility by allowing logical resources to be created
and reclaimed based on business requirements.

Virtual Layer Overview
• Virtualized compute, network, and storage form the virtual layer
• Enables fulfilling two characteristics of cloud infrastructure
– Resource pooling
– Rapid elasticity
• Specifies the entities operating at this layer
– Virtualization software
– Resource pools
– Virtual resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 6

While building cloud infrastructure, the virtual layer is deployed on the physical layer. This
layer enables fulfilling two key characteristics of cloud infrastructure: resource pooling and
rapid elasticity.

The virtual layer specifies the entities that operate at this layer, such as virtualization software,
resource pools, and virtual resources. The virtual layer is built by deploying virtualization
software on compute systems, network devices, and storage devices.

Virtual Layer
Virtualization Process and Operations

Step 1: Deploy virtualization software on:
• Compute systems
• Network devices
• Storage devices

Step 2: Create resource pools:
• Processing power and memory
• Network bandwidth
• Storage

Step 3: Create virtual resources:
• Virtual machines
• Virtual networks
• LUNs

Virtual resources are packaged and offered as services.

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 7

There are three key steps involved in making resources available to consumers. They are:

1) Deploying virtualization software

2) Creating resource pools

3) Creating virtual resources

The virtualization software performs the abstraction of the physical resources and is deployed
on compute systems, network devices, and storage devices. The key functions of virtualization
software are to create resource pools and to create virtual resources.

A resource pool is an aggregation of computing resources, such as processing power, memory,
storage, and network bandwidth, which provides an aggregated view of these resources to the
control layer. Virtualization software, in collaboration with the control software, pools the
resources. For example, storage virtualization software pools the capacity of multiple storage
devices so that they appear as a single large storage capacity. Similarly, by using compute
virtualization software, the processing power and memory capacity of the pooled physical
compute systems can be viewed as an aggregation of the power of all processors (in
megahertz) and of all memory (in megabytes). Resource pools are discussed in the next
lesson.

Virtualization software in collaboration with the control layer creates virtual resources. These
virtual resources are created by allocating physical resources from the resource pool, and they
share the pooled physical resources. Examples of virtual resources include virtual machines,
LUNs, and virtual networks. Virtual resources are discussed later in the module.

Compute Virtualization Software
Hypervisor

Software that is installed on a compute system and enables multiple OSs to run concurrently
on a physical compute system.

• Hypervisor kernel
– Provides functionality similar to an OS kernel
– Designed to run multiple VMs concurrently
• Virtual machine monitor (VMM)
– Abstracts hardware
– Each VM is assigned a VMM
– Each VMM gets a share of physical resources
© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 8

The software used for compute virtualization is known as the hypervisor. The hypervisor is
software that is installed on a compute system and enables multiple operating systems to run
concurrently on a physical compute system. The hypervisor, along with hypervisor management
software (also known as control software, which is discussed in the 'Control Layer' module), is
the fundamental component for deploying a software-defined compute environment. The
hypervisor abstracts the physical compute hardware to create multiple virtual machines, which
to the operating systems look and behave like physical compute systems. The hypervisor
provides standardized hardware resources, such as processor, memory, network, and disk, to
all the virtual machines.

A hypervisor has two key components: the kernel and the virtual machine monitor (VMM). A
hypervisor kernel provides functionality similar to the kernel of any other operating system,
including process creation, file system management, and process scheduling. It is designed
and optimized to run multiple virtual machines concurrently. A VMM abstracts hardware and
appears as a physical compute system with processor, memory, I/O devices, and other
components that are essential for operating systems and applications to run. Each virtual
machine is assigned a VMM, which gets a share of the processor, memory, I/O devices, and
storage from the physical compute system to successfully run the virtual machine.

Compute Virtualization Software (Cont'd)
Types of Hypervisor

Bare-metal Hypervisor
• It is an operating system
• Installed on bare-metal hardware
• Requires certified hardware
• Suitable for enterprise data centers and cloud infrastructure

Hosted Hypervisor
• Installed as an application on an OS
• Relies on the OS running on the physical machine for device support
• Suitable for development, testing, and training purposes

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 9

Hypervisors can be categorized into two types: bare-metal hypervisor and hosted hypervisor. A
bare-metal hypervisor is directly installed on the hardware. It has direct access to the
hardware resources of the compute system. Therefore, it is more efficient than a hosted
hypervisor. However, this type of hypervisor may have limited device drivers built-in.
Therefore, hardware certified by the hypervisor vendor is usually required to run bare-metal
hypervisors. A bare-metal hypervisor is designed for enterprise data centers and cloud
infrastructure. It also supports advanced capabilities such as resource management, high
availability, security, and so on. In contrast to a bare-metal hypervisor, a hosted hypervisor is
installed as an application on an operating system. In this approach, the hypervisor does not
have direct access to the hardware and all requests must pass through the operating system
running on the physical compute system. Hosted hypervisors are compatible with all the
devices that are supported by the operating system on which they are installed. Using this type
of hypervisor adds overhead compared to a bare-metal hypervisor, because the many services
and processes running on the operating system consume compute system resources.
Therefore, a hosted hypervisor is most suitable for development, testing, and training
purposes.

Network Virtualization Software
• Abstracts physical network resources to create virtual
resources:
– Virtual LAN/virtual SAN
– Virtual Switch
• Network virtualization software can be:
– Built into the operating environment of a network device
– Installed on an independent compute system
• Fundamental component for deploying software defined network
– Hypervisor’s capability

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 10

The network virtualization software is either built into the operating environment of a network
device, installed on an independent compute system (discussed in the 'Control Layer' module),
or available as a hypervisor capability. The network virtualization software abstracts physical
network resources to create virtual resources such as virtual LANs or virtual SANs.

The network virtualization software built into the network device operating environment has
the ability to abstract the physical network. It can divide a physical network into multiple
virtual networks, such as virtual LANs and virtual SANs.

The network virtualization software installed on an independent compute system is the
fundamental component for deploying a software-defined network environment. This software
provides a single control point for the entire network infrastructure, enabling automated and
policy-based network management.

Network virtualization can also be available as a hypervisor capability, which emulates network
connectivity among VMs on a physical compute system. This software enables creating virtual
switches that appear to the VMs as physical switches.

Storage Virtualization Software
• Abstracts physical storage resources to create virtual resources:
– Virtual volumes
– Virtual disk files
– Virtual arrays
• Storage virtualization software can be:
– Built into the operating environment of a storage device
– Installed on an independent compute system
• Fundamental component for deploying software defined storage
– Hypervisor’s capability

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 11

The storage virtualization software is either built into the operating environment of a storage
device, installed on an independent compute system (discussed in the 'Control Layer' module),
or available as a hypervisor capability. The storage virtualization software abstracts physical
storage resources to create virtual resources, such as virtual volumes or virtual arrays.

The storage virtualization software built into the array operating environment has the ability to
pool and abstract the physical storage devices and present them as logical storage.

The storage virtualization software installed on an independent compute system is the
fundamental component for deploying a software-defined storage environment. The software
has the ability to pool and abstract the existing physical storage devices and present them as
an open storage platform. With the help of control software (discussed in the 'Control Layer'
module), the storage virtualization software can perform tasks such as virtual volume creation,
apart from creating virtual arrays. This software provides a single control point for the entire
storage infrastructure, enabling automated and policy-based management.

Storage virtualization can also be available as a hypervisor capability, which enables creating
virtual disks that appear to the operating systems as physical disk drives.

Lesson Summary
During this lesson the following topics were covered:
• Virtual layer
• Virtualization software
• Resource pool
• Virtual resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 12

This lesson covered an overview of the virtual layer, virtualization software (compute, network,
and storage), resource pools, and virtual resources.

Lesson: Resource Pool
This lesson covers the following topics:
• Resource pool
• Examples of resource pooling
• Identity pool

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 13

This lesson covers resource pool, examples of resource pooling, and identity pool.

Introduction to Resource Pool
Resource Pool
A logical abstraction of the aggregated computing resources, such as processing power,
memory capacity, storage, and network bandwidth, that are managed collectively.

• Cloud services obtain computing resources from resource pools
– Resources are dynamically allocated as per consumer demand
• Resource pools are sized according to service requirements

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 14

A resource pool is a logical abstraction of the aggregated computing resources, such as
processing power, memory capacity, storage, and network bandwidth, that are managed
collectively. Cloud services obtain computing resources from resource pools. Resources from
the resource pools are dynamically allocated according to consumer demand, up to a limit
defined for each cloud service. The allocated resources are returned to the pool when they are
released by consumers, making them available for reallocation. The figure on the slide shows
the allocation of resources from a resource pool to service A and service B, which are assigned
to consumer A and consumer B respectively.

Resource pools are designed and sized according to the service requirements. A cloud
administrator can create, remove, expand, or contract a resource pool as needed. In a cloud
infrastructure, multiple pools of the same or different resource types may be configured to
provide various cloud services. For example, two independent storage pools with different
performance characteristics can provide resources to a high-end and a mid-range storage
service. Another example is an application service, which can obtain processing power from a
processor pool and network bandwidth from a network bandwidth pool.
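The allocate-on-demand, release-and-reclaim behavior described above can be sketched in a
few lines of Python. This is a minimal, hypothetical model for illustration only (the class and
method names are not any vendor's API): capacity is aggregated into a pool, allocations are
checked against a per-service limit, and released capacity becomes available for reallocation.

class ResourcePool:
    """Minimal model of a pooled resource (e.g., MHz, GB, or Mbps)."""

    def __init__(self, capacity):
        self.capacity = capacity          # total aggregated capacity
        self.allocated = {}               # service name -> amount currently held

    def available(self):
        return self.capacity - sum(self.allocated.values())

    def allocate(self, service, amount, limit):
        held = self.allocated.get(service, 0)
        if held + amount > limit:
            raise ValueError("request exceeds the limit defined for this service")
        if amount > self.available():
            raise ValueError("pool has insufficient free capacity")
        self.allocated[service] = held + amount

    def release(self, service, amount):
        # Returned capacity is immediately available for reallocation
        self.allocated[service] = self.allocated.get(service, 0) - amount

# Two services draw from one pool, as in the figure
pool = ResourcePool(capacity=12000)           # e.g., 12000 MHz of processing power
pool.allocate("Service A", 3000, limit=6000)
pool.allocate("Service B", 4500, limit=6000)
print(pool.available())                       # 4500
pool.release("Service B", 1500)
print(pool.available())                       # 6000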

Example: Pooling Processing Power and Memory Capacity
© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 15

Cloud services comprising virtual machines (VMs) consume processing power and memory
capacity from the processor and memory pools from which the VMs are created. The figure on
the slide illustrates an example of pooling processing power and memory capacity, and of
allocating resources to VMs that are elements of service A and service B. These cloud services
are assigned to consumer A and consumer B.

In the figure, a processor pool aggregates the processing power of three physical compute
systems running hypervisors. Likewise, a memory pool aggregates the memory capacity of
these compute systems. Therefore, the processor pool has 12000 MHz of processing power and
the memory pool has 18 GB of memory capacity. Each VM is allocated 1500 MHz of processing
power and 2 GB of memory capacity at the time it is created. After the allocation of resources
to the VMs, the processor pool has 4500 MHz of processing power and the memory pool has
8 GB of memory capacity remaining, which can be allocated to new or existing VMs according
to service demand.
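The remaining-capacity figures quoted above follow directly from the pool sizes. A few lines of
Python reproduce the arithmetic (the VM count of five reflects the allocations shown in the
figure):

compute_systems = 3
per_system_mhz, per_system_gb = 4000, 6

pool_mhz = compute_systems * per_system_mhz   # 12000 MHz processor pool
pool_gb = compute_systems * per_system_gb     # 18 GB memory pool

vms = 5                                       # VMs across service A and service B
vm_mhz, vm_gb = 1500, 2                       # allocation per VM

print(pool_mhz - vms * vm_mhz)                # 4500 MHz left for new or existing VMs
print(pool_gb - vms * vm_gb)                  # 8 GB left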

Example: Pooling Storage in a Block-based
Storage System

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 16

A storage pool in a block-based storage system comprises the aggregated physical storage
space of a set of physical drives. Storage space is allocated from the storage pool to logical
unit numbers (LUNs) that are created from the pool. These LUNs are provisioned to consumers
upon receiving their storage requests. The figure on the slide illustrates an example where the
storage space of a set of physical drives is pooled and the required storage space is allocated
to LUNs from the pool.

In the figure, a storage pool in a block-based storage system aggregates the storage space of
four physical drives. Combining the usable storage space of these drives, the storage pool has
4000 GB of storage space. Three LUNs are provisioned from this pool; they are elements of
service A, service B, and service C. These services are assigned to three consumers –
consumer A, consumer B, and consumer C. The LUNs are allocated 200 GB, 400 GB, and
800 GB of storage space as per the storage requirements of the consumers. After the
allocation of storage resources to the LUNs, the storage pool has 2600 GB of storage space
remaining, which can be allocated to new or existing LUNs according to service demand.

Example: Pooling Storage Across Block-based
Storage Systems

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 17

The figure on the slide illustrates an example of more complex storage pooling, where a
higher-level storage pool is created by aggregating the storage space of four storage pools
configured within four block-based storage systems. Storage space from the higher-level
storage pool is allocated to LUNs that are elements of service A, service B, and service C.
These services are assigned to consumer A, consumer B, and consumer C.

Pooling across multiple storage systems provides a unified platform for provisioning storage
services that can store data at a massive scale. Multiple pools having different performance
and availability levels can be created in a cloud environment to cater to the needs of various
storage service offerings.

Example: Pooling Network Bandwidth of NICs

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 18

Cloud services comprising VMs obtain network bandwidth from network bandwidth pools. VMs
are allocated appropriate network bandwidth to meet the required service level. The figure on
the slide illustrates an example where a network bandwidth pool is created by aggregating the
network bandwidth of three physical network interface cards (NICs). These NICs are installed
on a physical compute system running a hypervisor.

In the figure, the network bandwidth pool has 3000 Mbps of network bandwidth. Service A and
service B are allocated 600 Mbps and 300 Mbps of network bandwidth respectively, as per the
data transfer requirements of the consumers. Service A and service B are assigned to
consumer A and consumer B respectively. After the allocation of bandwidth to the services, the
network bandwidth pool has 2100 Mbps of network bandwidth remaining, which can be
allocated to new or existing services as needed.

Identity Pool
• Specifies a range of network identifiers (IDs) such as virtual network
IDs and MAC addresses
– IDs are allocated from the identity pools to the elements of cloud services

• An identity pool may map to a particular service or to a group of services

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 19

An identity pool, unlike a resource pool, specifies a range of network identifiers (IDs), such as
virtual network IDs and MAC addresses. These IDs are allocated from the identity pools to the
elements of cloud services. For example, in a service, the constituent virtual networks obtain
IDs from a virtual network ID pool. Likewise, VMs in a service get MAC addresses from a MAC
address pool.

An identity pool may map or allocate IDs to a particular service or to a group of services. For
example, service A is mapped to pool A containing IDs 1 to 10, and service B is mapped to
pool B containing IDs 11 to 100. If an identity pool runs out of IDs, administrators may create
an additional pool or add more IDs to the existing pool. A 1-to-1 mapping between an identity
pool and a service simplifies tracking the use of IDs by a particular service. However, it
increases management complexity, because many identity pools must be created and
managed depending on the number of services.
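The range-based allocation behavior can be sketched briefly. The following Python fragment is
a toy model (names are illustrative): each pool hands out identifiers from its configured range,
and an exhausted pool signals that an administrator should add IDs or create another pool.

class IdentityPool:
    """Hands out network IDs (e.g., VLAN IDs) from a fixed range."""

    def __init__(self, first, last):
        self.free = list(range(first, last + 1))

    def allocate(self):
        if not self.free:
            raise RuntimeError("pool exhausted: add IDs or create another pool")
        return self.free.pop(0)

    def release(self, network_id):
        # A released ID can be handed out again
        self.free.append(network_id)

pool_a = IdentityPool(1, 10)      # mapped 1-to-1 to service A
pool_b = IdentityPool(11, 100)    # mapped to service B
print(pool_a.allocate())          # 1
print(pool_b.allocate())          # 11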

Lesson Summary
During this lesson the following topics were covered:
• Resource pool
• Examples of resource pooling
• Identity pool

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 20

This lesson covered resource pool and examples of resource pooling, such as pooling
processing power, memory capacity, storage, and network bandwidth. It also covered identity
pool.

Lesson: Virtual Resources – I
This lesson covers the following topics:
• Virtual machine (VM) and VM hardware
• VM files and file system to manage VM files
• VM console
• VM template
• Virtual appliance
• VM network and its components

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 21

This lesson covers virtual machine (VM), VM hardware, VM files, and file system to manage VM
files. It also covers VM console, VM template, and virtual appliance. Finally, this lesson covers
VM network and its components.

Virtual Machine (VM)
Virtual Machine

A logical compute system that, like a physical compute system, runs an OS and applications.

• Created by a hypervisor installed on a physical compute system
• Comprises virtual hardware, such as virtual processor, memory, storage, and network
resources
– Appears as a physical compute system to the guest OS
– Hypervisor maps the virtual hardware to the physical hardware
• Provider provisions VMs to consumers for deploying applications
– VMs on the same compute system or cluster run in isolation

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 22

A virtual machine (VM) is a logical compute system that, like a physical compute system, runs
an operating system (OS) and applications. A VM is created by a hosted or a bare-metal
hypervisor installed on a physical compute system. A VM has a self-contained operating
environment, comprising an OS, applications, and virtual hardware, such as a virtual
processor, memory, storage, and network resources. An OS, called a 'guest' OS, is installed on
a VM in the same way as it is installed on a physical compute system. From the perspective of
the guest OS, the VM appears as a physical compute system. As discussed in lesson 1, a
virtual machine monitor (VMM) is responsible for the execution of a VM, and each VM has a
dedicated VMM. Each VM has its own configuration for hardware, software, network, security,
and so on. The VM behaves like a physical compute system, but does not have direct access
either to the underlying host OS (when a hosted hypervisor is used) or to the hardware of the
physical compute system on which it is created. The hypervisor translates the VM's resource
requests and maps the virtual hardware of the VM to the hardware of the physical compute
system. For example, a VM's I/O requests to a virtual disk drive are translated by the
hypervisor and mapped to a file on the physical compute system's disk drive.
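That last mapping, a virtual disk that is really a file on the physical compute system, can be
illustrated with a toy sketch. This is not any hypervisor's implementation, just a minimal
Python model in which 'guest' block reads and writes become seeks into a single backing file:

import os

class FileBackedDisk:
    """Toy virtual disk: guest block I/O maps to offsets in one host file."""

    BLOCK = 512  # bytes per block

    def __init__(self, path, blocks):
        self.path = path
        with open(path, "wb") as f:
            f.truncate(blocks * self.BLOCK)   # preallocate the backing file

    def write_block(self, lba, data):
        assert len(data) == self.BLOCK
        with open(self.path, "r+b") as f:
            f.seek(lba * self.BLOCK)          # block address -> file offset
            f.write(data)

    def read_block(self, lba):
        with open(self.path, "rb") as f:
            f.seek(lba * self.BLOCK)
            return f.read(self.BLOCK)

disk = FileBackedDisk("vm_disk.img", blocks=1024)   # 512 KB toy disk
disk.write_block(7, b"\xab" * 512)
assert disk.read_block(7) == b"\xab" * 512
os.remove("vm_disk.img")                            # clean up the demo file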

Compute virtualization software enables creating and managing several VMs—each with a
different OS of its own—on a physical compute system or on a compute cluster. In a cloud
environment, a provider typically provisions VMs to consumers to deploy their applications. The
VM hardware and software are configured to meet the application’s requirements. The VMs of
consumers are isolated from each other so that the applications and the services running on
one VM do not interfere with those running on other VMs. The isolation also provides fault
tolerance so that if one VM crashes, the other VMs remain unaffected.

VM Hardware

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 23

When a VM is created, it is presented with virtual hardware components that appear as
physical hardware components to the guest OS. Within a given vendor's environment, each VM
has standardized hardware components that make it portable across physical compute
systems. Based on the requirements, virtual components can be added to or removed from a
VM. However, not all components are available for addition and configuration. Some hardware
devices are part of the virtual motherboard and cannot be modified or removed. For example,
the video card and the PCI controllers are available by default and cannot be removed. As the
figure on the slide shows, the typical hardware components of a VM include virtual
processor(s), a virtual motherboard, virtual RAM, a virtual disk, a virtual network adapter,
optical drives, serial and parallel ports, peripheral devices, and so on.

A VM can be configured with one or more virtual processors. The number of virtual processors
in a VM can be increased or reduced based on the requirements. When a VM starts, its virtual
processors are scheduled by the hypervisor kernel to run on the physical processors. Each VM
is assigned a virtual motherboard with the standardized devices essential for a compute system
to function. Virtual RAM is the amount of physical memory allocated to a VM and it can be
configured based on the requirements. The virtual disk is a large physical file, or a set of files
that stores the VM’s OS, program files, application data, and other data associated with the
VM. A virtual network adapter functions like a physical network adapter. It provides
connectivity between VMs running on the same or different compute systems, and between a
VM and physical compute systems. Virtual optical drives and floppy drives can be configured to
connect to either physical devices or to image files, such as ISO and floppy images (.flp), on
the storage. SCSI/IDE virtual controllers provide a way for the VMs to connect to the storage
devices. The virtual USB controller is used to connect to a physical USB controller and to
access connected USB devices. Serial and parallel ports provide an interface for connecting
peripherals to the VM.

VM Files
• From a hypervisor’s perspective, a VM is a discrete set of files
such as:
Configuration file • Stores information such as VM name, BIOS information, guest OS type,
and memory size
Virtual disk file • Stores the contents of the VM's disk drive
Memory state file • Stores the memory contents of a VM in a suspended state
Snapshot file • Stores the VM settings and virtual disk of a VM
Log file • Keeps a log of the VM's activity and is used in troubleshooting

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 24

From a hypervisor’s perspective, a VM is a discrete set of files on a storage device. Some of the
key files that make up a VM are the configuration file, the virtual disk file, the memory file, and
the log file. The configuration file stores the VM’s configuration information, including VM
name, location, BIOS information, guest OS type, virtual disk parameters, number of
processors, memory size, number of adapters and associated MAC addresses, SCSI controller
type, and disk drive type. The virtual disk file stores the contents of a VM’s disk drive. A VM
can have multiple virtual disk files, each of which appears as a separate disk drive to the VM.
The memory state file stores the memory contents of a VM and is used to resume a VM that is
in a suspended state. The snapshot file stores the running state of the VM including its settings
and the virtual disk, and may optionally include the memory state of the VM. It is typically
used to revert the VM to a previous state. Log files keep a record of the VM's activity and are
often used for troubleshooting purposes.
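Configuration files of this kind are commonly plain key/value text. The sketch below parses a
hypothetical configuration file in that style; the keys shown are illustrative and do not
reproduce any vendor's schema:

# A hypothetical VM configuration file in the key/value style many
# hypervisors use; the keys are illustrative, not a vendor's schema.
sample = '''\
displayName = "web-server-01"
guestOS = "linux"
memsize = "2048"
numvcpus = "2"
'''

def parse_vm_config(text):
    """Return a dict of settings from key = "value" lines."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

cfg = parse_vm_config(sample)
print(cfg["displayName"], cfg["memsize"] + " MB")   # web-server-01 2048 MB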

File System to Manage VM Files
• Hypervisor’s native file system
– Clustered file system deployed on local or
external storage
– Enables multiple hypervisors to perform
concurrent reads and writes
– Enables high availability to protect against
hypervisor or compute system failure
• Shared file system
– Enables storing VM files on remote file
servers or NAS devices
– Hypervisors have built-in NFS or CIFS clients

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 25

When a VM is created, the associated VM files are created and placed on the storage that is
presented to the hypervisor. A file system is configured on the storage to manage the VM files.
Most hypervisors support two types of file systems: the hypervisor’s native file system, and a
shared file system, such as NFS or CIFS.

A hypervisor’s native file system is usually a clustered file system, and the storage presented
to it is typically optimized to store the VM files. The file system can be deployed on the storage
provisioned either from a local storage, or from external storage devices connected through
Fibre Channel, iSCSI, or FCoE. The file system allows multiple hypervisors, running on different
physical compute systems, to read from and write to the same shared storage resources
concurrently. This enables high availability capabilities, such as the migration of VMs between
clustered hypervisors in the event of failure of one of the hypervisors or compute systems. A
locking mechanism ensures that a VM is not powered on by multiple hypervisors at the same
time. When a hypervisor fails, the locking mechanism for each VM running on the physical
compute system is released. It is then possible for the VMs to be restarted on other
hypervisors.

A shared file system enables VM files to be stored on remote file servers or NAS devices that
are accessed over an IP-based network. The file systems are accessed using file sharing
protocols, such as NFS and CIFS. Hypervisors have built-in NFS or CIFS clients that enable
communication with the file servers and NAS devices.

The capacity of the file system can be dynamically increased without disrupting the VMs
running on a physical compute system. If the volumes on which the file system resides have
additional configurable capacity, then the file system can be extended to increase its capacity.
However, if there is no configurable capacity available on the volumes, then additional capacity
must be assigned before the file system can be extended.

VM Console
• VM console is an interface to view and manage the VMs on a
compute system or a cluster
• VM console may be:
– Installed locally on a compute system
– Web-based
– Accessed over a remote desktop connection
• Used to perform activities such as:
– Installing a guest OS and accessing VM BIOS
– Powering a VM on or off
– Configuring virtual hardware and troubleshooting

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 26

An administrator connects to a VM using its console, which is an interface to view and manage
the VMs on a compute system or a cluster. The console may be installed locally on a compute
system, web-based, or accessed over a remote desktop connection. An administrator uses the
console to perform activities, such as installing a guest OS, accessing the BIOS of the VM,
powering a VM on or off, editing startup and shutdown settings, configuring virtual hardware,
removing VMs, troubleshooting, and so on.

VM Template

A master copy of a VM with a standardized virtual hardware and software configuration that is
used to create new VMs.

• Created in two ways:
– Converting a VM into a template
– Cloning a VM to a template
• Steps involved in updating a VM template are:
1. Convert the template into VM
2. Install new software or OS/software patches
3. Convert the VM back to a template

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 27

A VM template is a master copy of a virtual machine with a standardized virtual hardware and
software configuration that can be used to create and provision new VMs. The VM template
typically includes a guest OS, a set of applications, and the hardware and software
configurations required to deploy a VM. Templates can be created in two ways: either by
converting a VM to a template or by cloning a VM to a template. When a VM is converted to a
template, the original VM is replaced by the template. When a VM is cloned to a template, the
original VM is retained. A VM template provides preinstalled and preconfigured software, which
makes the provisioning of VMs faster and eliminates installation, configuration, and
maintenance overheads. It also ensures consistency and standardization across VMs, which
makes it easier to diagnose and troubleshoot problems.

A VM template can be updated with new software, and with OS and software patches. Updating
the VM template involves converting the template back to a VM and then installing the new
software or patches on the VM. After the update is complete, the VM is converted back into a
template. While the template is being updated, the relevant VM must be isolated to prevent
user access.

Virtual Appliance
Preconfigured virtual machine(s) preinstalled with a guest OS and an application dedicated to
a specific function.

• Used for functions such as providing SaaS, routing packets, or deploying a firewall
• Simplifies the delivery and operation of an application
– Simplifies installation and eliminates configuration issues
– The application is protected from issues in other virtual appliances
• Typically created using Open Virtualization Format (OVF)

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 28

A virtual appliance is a preconfigured virtual machine preinstalled with a guest operating


system and an application dedicated to a specific function. In a cloud environment,
virtual appliances are used for different functions, such as to provide Software as a Service, to
run cloud management software, and to route packets. They can also be used to provide
security features, such as a firewall or network intrusion detection.

Using a virtual appliance simplifies the delivery and operation of an application. Typically, the
process is time-consuming and error-prone, and involves setting up a new VM, installing the
guest OS and then the application. In contrast, a virtual appliance deployment is faster
because the VM is preconfigured and has preinstalled software. This simplifies installation and
eliminates configuration issues, such as software or driver compatibility problems. Also, the
application runs in isolation within the virtual appliance, and it is protected against the crashes
and the security issues of the other virtual appliances. Virtual appliances are typically created
using the Open Virtualization Format (OVF) – an open hypervisor-independent packaging and
distribution format.
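As a concrete aside: an OVF package is typically distributed as an OVA, which is a tar archive
containing an XML descriptor (.ovf), a manifest, and the disk images. The following Python
sketch builds a toy package in memory and lists its members; the file names and contents are
fabricated for illustration:

import io
import tarfile

# Build a toy .ova in memory: an OVA is a tar archive whose members
# include the .ovf descriptor (XML), a manifest, and the virtual disks.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("appliance.ovf", b"<Envelope/>"),
                          ("appliance.mf", b""),
                          ("appliance-disk1.vmdk", b"\x00" * 64)]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        print(member.name, member.size)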

VM Network

A logical network that provides Ethernet connectivity and enables communication between VMs
within a compute system.

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 29

A VM network is a logical network that provides Ethernet connectivity and enables
communication between the VMs running on a hypervisor within a compute system. A VM
network includes logical switches called virtual switches. Virtual switches function similarly to
physical Ethernet switches, but may not have all the functionality of a physical Ethernet
switch.

Consider the example of a web application that runs on a VM and needs to communicate with
a database server. The database server could be running on another VM on the same compute
system. The two VMs can be connected via a VM network to enable them to communicate with
each other. Because the traffic between the VMs does not travel over a network external to
the compute system, the data transfer speed between the VMs is higher.

In some cases, the VMs residing on different compute systems may need to communicate
either with each other, or with other physical compute systems, such as client machines. To
transfer these types of network traffic, the VM network must be connected to the network of
physical compute systems. In this case, the VM traffic travels over both the VM network and
the network of physical compute systems. The figure on the slide shows two physical compute
systems, each with a VM network, and both VM networks connected to a network of physical
compute systems.

VM Network Components
Component Description

Virtual switch • A logical OSI Layer 2 Ethernet switch created in a compute system
• Connects VMs locally and also directs VM traffic to a physical network
• Forwards frames to a virtual switch port based on destination address
• A distributed virtual switch can function across multiple physical
compute systems
Virtual NIC • Connects a VM to a virtual switch and functions like a physical NIC
• Has unique MAC and IP addresses
• Forwards the VM’s network I/O in the form of Ethernet frames to the
virtual switch
Uplink NIC • A physical NIC connected to the uplink port of a virtual switch
• Functions as an ISL between virtual and physical Ethernet switches
• Not addressable from the network

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 30

VM networks comprise virtual switches, virtual NICs, and uplink NICs that are created on a
physical compute system running a hypervisor.

A virtual switch is a logical OSI Layer 2 Ethernet switch created within a compute system. A
virtual switch is either internal or external. An internal virtual switch connects only the VMs on
a compute system. It has no connection to any physical NIC and cannot forward traffic to a
physical network. An external virtual switch connects the VMs on a compute system to each
other and also to one or more physical NICs. It enables the VMs to communicate internally and
also to send traffic to external networks. A physical NIC already connected to a virtual switch
cannot be attached to any other virtual switch. A virtual switch also provides traffic
management for the VMs and maintains a MAC address table for forwarding frames to a virtual
switch port based on the destination address. A single virtual switch, called distributed virtual
switch, can also function across multiple physical compute systems. It is created and
configured from a centralized management server. Once created, instances of the distributed
virtual switch with identical networking configurations appear on each physical compute system
managed by the management server. Configuration changes to the distributed virtual switch
are applied to all its instances.

A virtual NIC connects a VM to a virtual switch and functions similarly to a physical NIC. Virtual
NICs send and receive VM traffic to and from the VM network. A VM can have one or more
virtual NICs. Each virtual NIC has unique MAC and IP addresses and uses the Ethernet protocol
exactly as a physical NIC does. The hypervisor generates the MAC addresses and allocates
them to virtual NICs. The guest OS installed on a VM sends network I/O to the virtual NIC
using a device driver similar to that of a physical NIC. A virtual NIC forwards the I/Os in the
form of Ethernet frames to the virtual switch for transmission to the destination, adding its
MAC and IP addresses as source addresses to the frames it forwards.

An uplink NIC is a physical NIC connected to the uplink port of a virtual switch; it functions as
an ISL between the virtual switch and a physical Ethernet switch. It is called an uplink because
it only provides a physical interface to connect a compute system to the network and is not
addressable from the network. Uplink NICs are neither assigned an IP address nor are their
built-in MAC addresses available to any compute system in the network. An uplink NIC simply
forwards the VM traffic between the VM network and the external physical network without
modification.
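The MAC-address-table behavior described above is the classic Layer 2 learning-switch logic. A
compact toy model in Python (ports and frames are simplified to lists and tuples; this is not a
hypervisor's switch implementation):

class VirtualSwitch:
    """Toy L2 switch: learn source MACs, forward frames by destination MAC."""

    def __init__(self, ports):
        self.ports = ports          # port id -> list acting as a frame queue
        self.mac_table = {}         # learned MAC address -> port id

    def receive(self, in_port, src_mac, dst_mac, payload):
        self.mac_table[src_mac] = in_port        # learn the sender's port
        out = self.mac_table.get(dst_mac)
        if out is not None:
            targets = [out]                      # known destination
        else:
            targets = [p for p in self.ports if p != in_port]  # unknown: flood
        for port in targets:
            self.ports[port].append((src_mac, dst_mac, payload))

switch = VirtualSwitch({0: [], 1: [], 2: []})
switch.receive(0, "aa:aa", "bb:bb", b"hello")   # bb:bb unknown -> flood to 1 and 2
switch.receive(1, "bb:bb", "aa:aa", b"reply")   # aa:aa learned -> port 0 only
print(len(switch.ports[0]), len(switch.ports[2]))   # 1 1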

Lesson Summary
During this lesson the following topics were covered:
• Virtual machine and VM hardware
• VM files and file system to manage VM files
• VM console
• VM template
• Virtual appliance
• VM network and its components

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 31

This lesson covered virtual machines and VM hardware. It also covered the files associated with
a VM and the file system to store and manage the VM files. Next, it covered VM console, VM
template, and virtual appliance. Finally, it covered VM network and its components.

Lesson: Virtual Resources – II
This lesson covers the following topics:
• Logical unit number (LUN)
• Creating LUN from RAID set
• Creating LUN from storage pool

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 32

This lesson covers Logical unit number (LUN) and the different ways to create LUNs.

Logical Unit Number (LUN)

Abstracts the identity and internal functions of storage system(s) and appears as physical
storage to the compute system.
• Mapping of virtual to physical storage is performed by the
virtualization layer.
• Provider provisions LUN to consumers for storing data
– Storage capacity of a LUN can be dynamically expanded or reduced

• LUN can be created from


– RAID set (traditional approach)
– Storage pool

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 33

A LUN (logical unit number) is created by abstracting the identity and internal functions of
storage system(s), and it appears as physical storage to the compute system. The mapping of
virtual to physical storage is performed by the virtualization layer. LUNs are assigned to the
compute system to create a file system for storing and managing files. In a shared
environment, there is a chance that a LUN may be accessed by an unauthorized compute
system. LUN masking is a process that provides data access control by defining which LUNs a
compute system can access. This ensures that volume access by compute systems is
controlled appropriately, preventing unauthorized or accidental access. In a cloud environment,
LUNs are created and assigned to different services based on the requirements. For example,
if a consumer requires 500 GB of storage for archival purposes, the service provider creates a
500 GB LUN and assigns it to the consumer. The storage capacity of a LUN can be dynamically
expanded or reduced based on the requirements. A LUN can be created from a RAID set (the
traditional approach) or from a storage pool. The subsequent slides discuss these in detail.

Creating LUNs from RAID Set
• LUNs are created from a RAID set by partitioning the available
capacity into smaller units
– Spread across all the physical disks that belong to a RAID set
• Suited for applications that require predictable performance


© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 34

In the traditional approach, a LUN is created from a RAID set by partitioning the available
capacity into smaller units. A RAID set includes physical disks that are logically grouped
together and the required RAID level is applied. LUNs are spread across all the physical disks
that belong to a RAID set. The figure on the slide shows a RAID set consisting of four disks
that has been partitioned into two LUNs: LUN 0 and LUN 1. Traditional LUNs are suited for
applications that require predictable performance. Traditional LUNs provide full control for
precise data placement and allow an administrator to create LUNs on different RAID groups, if
there is any workload contention. Organizations that are not highly concerned about storage
space efficiency may still use traditional LUNs.

Creating LUNs from Storage Pool
• Two types of volumes are created from a storage pool:
– Thin LUN
• Does not require physical storage to be completely allocated at the time of creation
• Consumes storage as needed from the underlying storage pool in increments called thin
LUN extents
– Thick LUN
• Physical storage is completely allocated at the time of creation

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 35

LUNs can be created from the storage pool that comprises a set of physical drives that provide
the actual physical storage used by the volumes. A storage pool can contain a few drives or
hundreds of drives. Two types of LUNs can be created from the storage pool: Thin LUNs and
Thick LUNs.

Thin LUNs do not require physical storage to be completely allocated to them at the time they
are created and presented to a compute system. From the operating system’s perspective, a
thin LUN appears as a traditional LUN. Thin LUNs consume storage as needed from the
underlying storage pool in increments called thin LUN extents. The thin LUN extent defines the
minimum amount of physical storage that is consumed from a storage pool at a time by a thin
LUN. When a thin LUN is destroyed, its allocated capacity is reclaimed to the pool.

A thick LUN is one whose space is fully allocated upon creation: when a thick LUN is created,
its entire capacity is reserved and allocated in the pool for use by that LUN.
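The contrast can be sketched in a toy Python model. The extent size and class names below
are illustrative assumptions, not a product's behavior: the thick LUN draws its full size from
the pool at creation, while the thin LUN draws whole extents only as data is written.

class StoragePool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb

    def draw(self, gb):
        if gb > self.free_gb:
            raise RuntimeError("storage pool exhausted")
        self.free_gb -= gb

class ThickLUN:
    def __init__(self, pool, size_gb):
        pool.draw(size_gb)                  # entire capacity reserved at creation

class ThinLUN:
    EXTENT_GB = 1                           # minimum allocation unit (illustrative)

    def __init__(self, pool, size_gb):
        self.pool, self.size_gb, self.consumed_gb = pool, size_gb, 0

    def write(self, gb):
        """Write gb of new data, drawing whole extents from the pool on demand."""
        gb = min(gb, self.size_gb - self.consumed_gb)
        extents = -(-gb // self.EXTENT_GB)  # ceiling division
        self.pool.draw(extents * self.EXTENT_GB)
        self.consumed_gb += extents * self.EXTENT_GB

pool = StoragePool(100)
ThickLUN(pool, 40)          # pool drops by 40 GB immediately
thin = ThinLUN(pool, 50)    # reports 50 GB but consumes nothing yet
print(pool.free_gb)         # 60
thin.write(3)               # first writes pull three 1 GB extents
print(pool.free_gb)         # 57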

Use of Thin LUN
• Thin LUNs are appropriate for applications that can tolerate
performance variations
– In some cases, performance improvement is seen when using a thin LUN, due to striping
across a large number of drives in the pool
• Environments where cost, storage utilization, space, and energy efficiency are paramount
• For applications where storage space consumption is difficult to forecast
• Environments that need optimized self-provisioning

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 36

A thick LUN created from a storage pool provides better performance than a thin LUN
that is created from the same storage pool. However, a thin LUN offers several benefits.
Thin LUNs are appropriate for applications that can tolerate performance variations. In some
cases, performance improvement is seen when using a thin LUN, due to striping across a large
number of drives in the pool. However, when multiple thin LUNs contend for the shared
storage resources in a given pool, and when utilization reaches higher levels, the performance
can degrade. Thin LUNs provide the best storage space efficiency and are particularly suitable
for applications where space consumption is difficult to forecast. Using thin LUNs, cloud service
providers can reduce storage costs and simplify their storage management.

Lesson Summary
During this lesson the following topics were covered:
• LUN
• Creating LUN from RAID set
• Creating LUN from storage pool

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 37

This lesson covered LUN and the different ways to create LUNs.

Lesson: Virtual Resources – III
This lesson covers the following topics:
• Virtual network
• Types of virtual networks: VLAN and VSAN
• Mapping between VLANs and VSANs in an FCoE SAN

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 38

This lesson covers virtual network and the types of virtual networks including VLAN and VSAN.
It also covers the mapping between VLANs and VSANs in an FCoE SAN.

Virtual Network

A software-based logical network that is either a segment of a physical network or spans
across multiple physical networks.

• Appears as a physical network to the connected nodes
• Virtual networks share network components without leaking information between them
• Network traffic is routed only when two nodes in different virtual networks are communicating
• All types of networks can be virtualized, such as compute network, SAN, and VM network

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 39

A virtual network is a software-based logical network that is created from a unified pool of
network resources. A virtual network can be created by segmenting a single physical network
into multiple logical networks. For example, multiple virtual networks may be created on a
common network infrastructure for the use of the different departments in an organization.
Also, multiple physical networks can be consolidated into a single virtual network. A virtual
network utilizes the underlying physical network only for simple packet forwarding. It appears
as a physical network to the nodes connected to it, because existing network services are
reproduced in the logical space. Nodes with a common set of requirements can be functionally
grouped in a virtual network, regardless of the geographic location of the nodes.

Two nodes connected to a virtual network can communicate with each other without the
routing of frames even if they are in different physical locations. Network traffic must be routed
when two nodes in different virtual networks communicate even if they are connected to the
same physical network. Virtual networks are isolated and are independent of each other. Each
virtual network has unique attributes, such as routing, switching, independent polices, quality
of service, bandwidth, security, and so on. The network management traffic and the broadcasts
within a virtual network generally do not propagate to the nodes in another virtual network.

All types of networks can be virtualized, including networks of physical compute systems, SANs,
and VM networks. Virtual networks are programmatically created, provisioned, and managed
from a network management workstation. Network and security services become part of
individual VMs in accordance with networking and security policies defined for each connected
application. When a VM is moved to a hypervisor on another compute system, its networking
and security services also move with it. When new VMs are created to scale an application, the
necessary policies are dynamically applied to those VMs as well.

Virtual networks enable a cloud provider to create logical networks that span physical
boundaries, allowing network extension and optimizing resource utilization across clusters and
cloud data centers. Virtual networks can also be scaled without reconfiguring the underlying
physical hardware. Providers can also integrate network services, such as firewalls and load
balancers. A single console for all the services further simplifies management operations.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 39
Virtual Network Example

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 40

The figure on the slide shows two virtual networks that are created on both virtual and physical
switches. Virtual network 1 connects VM1 and VM3 and enables communication between them
without routing. Similarly, VM2 and VM4 are connected by virtual network 2. Communication
between VM1 and VM2 or between VM3 and VM4 must be routed. The network traffic
movement between virtual networks may be controlled by deploying access control at the
router.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 40
Common Types of Virtual Networks
• Virtual LAN (VLAN)
• Private VLAN (PVLAN)
• Stretched VLAN
• Virtual extensible LAN (VXLAN)
• Virtual SAN (VSAN)

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 41

The slide lists the common types of virtual networks.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 41
Virtual LAN (VLAN)
Virtual LAN (VLAN)

A virtual network created on a LAN enabling communication between a
group of nodes with a common set of functional requirements, independent
of their physical location in the network.

• A VLAN is identified by a unique 12-bit VLAN ID
• Configuring a VLAN:
– Define VLAN on physical and virtual switches and assign VLAN ID
– Configure VLAN membership based on port, MAC address,
protocol, IP subnet address, or application

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 42

A virtual LAN (VLAN) is a virtual network consisting of virtual and/or physical switches,
which divides a LAN into smaller logical segments. A VLAN groups the nodes with a
common set of functional requirements, independent of the physical location of the
nodes. In a multi-tenant cloud environment, the provider typically creates and assigns a
separate VLAN to each consumer. This provides a private network and IP address space to a
consumer, and ensures isolation from the network traffic of other consumers.

In a traditional physical network, a router is used to create a LAN and the LAN is
further segmented by using switches and hubs. In a physical LAN, the nodes, switches, and
routers are physically connected to each other and must be located in the same area. VLANs
enable a network administrator to logically segment a LAN, and the nodes do not have to be
physically located on the same LAN. For example, a cloud provider may place the VMs of a
consumer in the same VLAN, and the VMs may be on the same compute system or different
ones. Also, if a node is moved to another location, depending on the VLAN configuration, it
may still stay on the same VLAN without requiring any reconfiguration. This simplifies network
configuration and administration. A node (VM, physical compute system, or storage system)
may be a member of multiple VLANs, provided the OS, hypervisor, and storage array OS
support such configurations.

To configure VLANs, an administrator first defines the VLANs on the physical and virtual
switches. Each VLAN is identified by a unique 12-bit VLAN ID (as per IEEE specification
802.1Q). The next step is to configure the VLAN membership based on different techniques,
such as port-based, MAC-based, protocol-based, IP subnet address-based, or application-
based. In the port-based technique, membership in a VLAN is defined by assigning a VLAN
ID to a physical or virtual switch port or port group. In the MAC-based technique, the
membership in a VLAN is defined on the basis of the MAC address of the node. In the
protocol-based technique, different VLANs are assigned to different protocols based on
the protocol type field found in the OSI Layer 2 header. In the IP subnet address-
based technique, membership is based on the network IP subnet address of the OSI
Layer 3 header. In the application-based technique, a specific application, for example,
a file transfer protocol (FTP) application can be configured to execute on one VLAN. A
detailed discussion on these VLAN configuration techniques is beyond the scope of this course.
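
To make the port-based technique concrete, the following Python sketch models VLAN
membership as a mapping from switch ports to 12-bit VLAN IDs. The Switch class and the
port names are hypothetical and only illustrate the concept; they are not a real switch API.

class Switch:
    def __init__(self):
        self.port_vlan = {}                      # port -> 12-bit VLAN ID

    def assign_port(self, port, vlan_id):
        if not 1 <= vlan_id <= 4094:             # 12-bit field; 0 and 4095 are reserved
            raise ValueError("VLAN ID must be between 1 and 4094")
        self.port_vlan[port] = vlan_id

    def same_vlan(self, port_a, port_b):
        # Frames are switched directly only within a VLAN; traffic between
        # different VLANs must pass through a router.
        return self.port_vlan.get(port_a) == self.port_vlan.get(port_b)

switch = Switch()
switch.assign_port("eth1", 100)                  # consumer A's VLAN
switch.assign_port("eth2", 100)
switch.assign_port("eth3", 200)                  # consumer B's VLAN

print(switch.same_vlan("eth1", "eth2"))          # True  -> switched, no routing
print(switch.same_vlan("eth1", "eth3"))          # False -> requires routing
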

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 42
Private VLAN (PVLAN)
Private VLAN

A sub-VLAN that segregates the nodes within a standard VLAN, called the
primary VLAN. A PVLAN can be configured as either isolated or community.

• Enables a provider to support a larger number of consumers
• Provides security between nodes on the same VLAN
• Simplifies network management

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 43

A private VLAN (PVLAN) is an extension of the VLAN standard and further segregates the nodes
within a VLAN into sub-VLANs. A PVLAN is made up of a primary VLAN and one or more
secondary (or private) VLANs. The primary VLAN is the original VLAN that is being segregated
into smaller groups. Each secondary PVLAN exists only inside the primary VLAN. It has a
unique VLAN ID and isolates the OSI Layer 2 traffic from the other PVLANs. The primary VLAN is
promiscuous, which means that ports in the secondary PVLANs can communicate with ports
configured in the primary VLAN. Routers are typically attached to promiscuous ports.

There are two types of secondary PVLANs within a primary VLAN: Isolated and Community.

• Isolated: A node attached to a port in an isolated secondary PVLAN can only communicate
with the promiscuous primary VLAN.

• Community: A node attached to a port in a community secondary PVLAN can communicate
with the other ports in the same community PVLAN as well as with the promiscuous primary
VLAN. Nodes in different community PVLANs cannot communicate with each other.

To configure PVLANs, the PVLAN feature must be supported and enabled on a physical switch
or a distributed virtual switch. To create PVLANs, the administrator first creates standard
VLANs on a switch, and then configures the VLANs as primary and secondary. The figure on the
slide illustrates how different types of PVLANs enable and restrict communications between
VMs (nodes) that are connected to a distributed virtual switch.
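
These communication rules can be summarized in a short Python sketch. The port types and
the can_communicate() helper are hypothetical; they only model the PVLAN behavior described
above and are not a real switch API.

PROMISCUOUS, ISOLATED, COMMUNITY = "promiscuous", "isolated", "community"

def can_communicate(port_a, port_b):
    """Each port is a (pvlan_type, secondary_vlan_id) pair."""
    type_a, vlan_a = port_a
    type_b, vlan_b = port_b
    # Any port may talk to a promiscuous (primary VLAN) port, e.g. a router.
    if PROMISCUOUS in (type_a, type_b):
        return True
    # Isolated ports talk only to promiscuous ports.
    if ISOLATED in (type_a, type_b):
        return False
    # Community ports talk to other ports in the same community PVLAN.
    return vlan_a == vlan_b

router = (PROMISCUOUS, None)
vm1 = (COMMUNITY, 101)
vm2 = (COMMUNITY, 101)
vm3 = (COMMUNITY, 102)
vm4 = (ISOLATED, 103)

print(can_communicate(vm1, vm2))     # True  -> same community PVLAN
print(can_communicate(vm1, vm3))     # False -> different community PVLANs
print(can_communicate(vm4, vm1))     # False -> isolated reaches only promiscuous
print(can_communicate(vm4, router))  # True  -> promiscuous port
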

PVLANs enable a cloud provider to support a larger number of consumers and address the
scalability issues encountered with VLANs. If a service provider assigns one VLAN per
consumer, it limits the number of consumers that can be supported. Also, a block of addresses
is assigned to each consumer VLAN, which can result in unused IP addresses. Additionally, if
the number of nodes in the VLAN increases, the number of assigned addresses may not be
large enough to accommodate them. In a PVLAN, all members share a common address space,
which is allocated to the primary VLAN. When nodes are connected to secondary VLANs, they
are assigned IP addresses from the block of addresses allocated to the primary VLAN. When
new nodes are added in different secondary VLANs, they are assigned subsequent IP addresses
from the pool of addresses.

(Cont'd)

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 43
Stretched VLAN
Stretched VLAN

A VLAN that spans multiple sites and enables Layer 2 communication
between a group of nodes over a Layer 3 WAN infrastructure, independent
of their physical location.

• Layer 2 frames are encapsulated in Layer 3 WAN packets
• Enables movement of VMs across locations without changing their
network configuration

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 45

A stretched VLAN is a VLAN that spans across multiple sites over a WAN connection. In
a typical multi-site environment, two sites are connected over an OSI Layer 3 WAN
connection and all network traffic between them is routed. Because of the routing, it is
not possible to transmit OSI Layer 2 traffic between the nodes in the two sites. A
stretched VLAN extends a VLAN across the sites and enables nodes in the two different
sites to communicate over a WAN as if they are connected to the same network.

Stretched VLANs also allow the movement of VMs between sites without having to
change their network configurations. This enables the creation of high-availability
clusters, VM migration, and application and workload mobility across sites. For
example, in the event of a disaster or during the maintenance of one site, a provider
typically moves VMs to an alternate site. Without a stretched VLAN, the IP addresses
of the VMs must be changed to match the addressing scheme at the other site.

Stretched VLANs can be configured using different methods depending upon the
underlying WAN technology. They may be created by connecting two sites using long
distance fiber, Dense Wavelength Division Multiplexing (DWDM), Coarse Wavelength
Division Multiplexing (CWDM), a multi-protocol label switching (MPLS) network, or an
IP network. An elaboration of these methods is beyond the scope of this course.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 45
Virtual Extensible LAN (VXLAN)
Virtual Extensible LAN

A logical Layer 2 overlay network built on a Layer 3 network, which uses
MAC-in-UDP encapsulation to enable communication between a group of
nodes, independent of their physical location.

• VXLAN header is added to a Layer 2 frame, which is placed in a UDP-IP
packet and tunneled over a Layer 3 network
– Enables transparent Layer 2 communication between nodes over physical
networks spanning Layer 3 boundaries
– Encapsulation and decapsulation are performed by Virtual Tunnel
Endpoints (VTEPs)

• 24-bit VXLAN ID provides up to 16 million VXLANs

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 46

A VXLAN is an OSI Layer 2 overlay network built on an OSI Layer 3 network. An overlay network
is a virtual network that is built on top of an existing network. VXLANs, unlike stretched VLANs,
are based on LAN technology. VXLANs use the MAC Address-in-User Datagram Protocol (MAC-
in-UDP) encapsulation technique. In this scheme, a VXLAN header is added to the original
Layer 2 (MAC) frame, which is then placed in a UDP-IP packet and tunneled over a Layer 3
network. Communication is established between two tunnel end points called Virtual
Tunnel Endpoints (VTEPs). At the transmitting node, a VTEP encapsulates the network
traffic into a VXLAN header and at the destination node, a VTEP removes the
encapsulation before presenting the original Layer 2 frame to the node. VXLANs
enable the creation of a logical network of nodes across different networks. In case of
VM communication, the VTEP is built into the hypervisor on the compute system
hosting the VMs. VXLANs enable the separation of nodes, such as VMs, from physical
networks. They allow the VMs to communicate with each other using the transparent overlay
scheme over physical networks that could span Layer 3 boundaries. This provides a means to
extend a Layer 2 network across sites. The VMs are unaware of the physical network
constraints and only see the virtual Layer 2 adjacency.

Nodes are identified uniquely by the combination of their MAC addresses and a VXLAN
ID. VXLANs use a 24-bit VXLAN ID, which makes it theoretically possible to have up to 16
million Layer 2 VXLANs co-existing on a common Layer 3 infrastructure. VXLANs make it easier
for administrators to scale a cloud infrastructure while logically isolating the applications and
resources of multiple consumers from each other. VXLANs also enable VM migration across
sites and over long distances.
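
As an illustration of the encapsulation scheme, the following Python sketch packs the 8-byte
VXLAN header defined in RFC 7348 in front of a placeholder Layer 2 frame. The
vxlan_encapsulate() function is hypothetical and omits the outer UDP, IP, and Ethernet
headers that a real VTEP would add around this payload.

import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    if not 0 <= vni < 2**24:                  # 24-bit VXLAN ID -> ~16 million segments
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                              # "I" bit set: the VNI field is valid
    # 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)
    header = struct.pack("!B3s3sB", flags, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    # The VTEP places header + original Layer 2 frame in a UDP datagram
    # (destination port 4789), which is then carried in an IP packet.
    return header + inner_frame

packet = vxlan_encapsulate(b"<original MAC frame>", vni=5001)
print(packet[:8].hex())                       # VXLAN header carrying VNI 5001
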

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 46
Virtual SAN (VSAN)
Virtual SAN

A logical fabric, created on a physical FC or FCoE SAN, enabling
communication between a group of nodes with a common set of
requirements, independent of their physical location in the fabric.

• A VSAN has its own fabric services, configuration, and set of FC
addresses
• Traffic disruptions in one VSAN do not affect other VSANs
• A VSAN may be extended across sites similar to a stretched
VLAN

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 47

A virtual SAN (VSAN) or virtual fabric is a logical fabric created on a physical FC or FCoE SAN.
A VSAN enables communication between a group of nodes with a common set of requirements,
independent of their physical location in the fabric. A VSAN functions conceptually in the same
way as a VLAN. Each VSAN behaves and is managed as an independent fabric. Each VSAN has
its own fabric services, configuration, and set of FC addresses. Fabric-related configurations in
one VSAN do not affect the traffic in another VSAN. Also, the events causing traffic disruptions
in one VSAN are contained within that VSAN and are not propagated to the other VSANs.
Similar to a stretched VLAN, a VSAN may be extended across sites by using long distance fiber,
DWDM, CWDM, or FCIP links to carry the FC frames.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 47
Virtual SAN (VSAN) (Cont'd)
• Configuring VSAN:
– Define VSANs on fabric switch with
specific VSAN IDs
– Assign VSAN IDs to F_Ports to include
them in the VSANs
• An N_Port connecting to an F_Port in a
VSAN becomes a member of that VSAN

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 48

To configure VSANs on a fabric switch, the VSANs are first defined with specific VSAN
IDs. Then the F_Ports on the switch are assigned the VSAN IDs to include them in the
respective VSANs. If an N_Port connects to an F_Port that belongs to a VSAN, it
becomes a member of that VSAN.
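
A minimal Python sketch of this two-step configuration follows; the FabricSwitch class and
the port names are hypothetical and simply model the define-then-assign ordering described
above.

class FabricSwitch:
    def __init__(self):
        self.vsans = set()
        self.fport_vsan = {}                   # F_Port -> VSAN ID

    def define_vsan(self, vsan_id):
        self.vsans.add(vsan_id)

    def assign_fport(self, fport, vsan_id):
        if vsan_id not in self.vsans:
            raise ValueError("define the VSAN before assigning ports")
        self.fport_vsan[fport] = vsan_id

switch = FabricSwitch()
switch.define_vsan(100)
switch.assign_fport("fc1/1", 100)
# An N_Port that connects through F_Port fc1/1 becomes a member of VSAN 100.
print(switch.fport_vsan["fc1/1"])              # 100
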

Note: VSAN vs. Zone

Both VSANs and zones enable node ports within a fabric to be logically segmented into groups.
But they are not the same and their purposes are different. There is a hierarchical relationship
between them. An administrator first assigns physical ports to VSANs and then configures
independent zones for each VSAN. A VSAN has its own independent fabric services, but the
fabric services are not available on a per-zone basis.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 48
Mapping VLANs and VSANs in an FCoE SAN
• Mapping determines which VLAN carries a VSAN's traffic
• Mapping considerations:
– Configure a dedicated VLAN for each VSAN
– VLANs configured for VSANs should not carry regular LAN traffic

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 49

The FCoE protocol enables transmission of FC SAN traffic through a LAN that supports Data
Center Bridging (DCB) functionalities. The FC frames remain encapsulated in Ethernet frames
during transmission through the LAN. If VLANs and VSANs are created on the LAN and FC SAN
respectively, a mapping is required between the VLANs and VSANs. The mapping determines
which VLAN will carry FC traffic that belongs to a VSAN. The mapping of VSAN to VLAN is
performed at the FCoE switch. Multiple VSANs are not allowed to share a VLAN. Hence, a
dedicated VLAN must be configured at the FCoE switch for each VSAN. Also, it is recommended
that VLANs that carry regular LAN traffic should not be used for VSAN traffic.

The figure on the slide shows an example of a mapping between VLANs and VSANs. In the
example, the FCoE switch is configured with four VLANs: VLAN 100, VLAN 200, VLAN 300, and
VLAN 400. The Ethernet switch is configured with two VLANs: VLAN 100 and VLAN 200. Both
VLAN 100 and VLAN 200 transfer regular Ethernet traffic to enable compute-to-compute
communication. The fabric switch has VSAN 100 and VSAN 200 configured. To allow data
transfer between the compute system and the FC fabric through the FCoE switch, VSAN 100
and VSAN 200 must be mapped to VLANs configured on the FCoE switch. Since VLAN 100 and
VLAN 200 are already being used for LAN traffic, VSAN 100 and VSAN 200 should be mapped
to VLAN 300 and VLAN 400, respectively.
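
The two mapping considerations can be expressed as simple checks, as in the following
hypothetical Python sketch that validates the example mapping above; the dictionary and
set names are illustrative only.

lan_vlans = {100, 200}                 # VLANs already carrying regular LAN traffic
vsan_to_vlan = {100: 300, 200: 400}    # VSAN ID -> dedicated FCoE VLAN ID

def validate_mapping(vsan_to_vlan, lan_vlans):
    vlans = list(vsan_to_vlan.values())
    # Each VSAN needs a dedicated VLAN: no two VSANs may share one.
    assert len(vlans) == len(set(vlans)), "VSANs must not share a VLAN"
    # VLANs used for VSAN traffic should not also carry regular LAN traffic.
    assert not set(vlans) & lan_vlans, "FCoE VLANs must not carry LAN traffic"
    return True

print(validate_mapping(vsan_to_vlan, lan_vlans))   # True: mapping is valid
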

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 49
Lesson Summary
During this lesson the following topics were covered:
• Virtual network
• Types of virtual networks: VLAN, private VLAN, stretched
VLAN, VXLAN, and VSAN
• Mapping between VLANs and VSANs in an FCoE SAN

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 50

This lesson covered virtual network and the common types of virtual networks including VLAN,
private VLAN, stretched VLAN, VXLAN, and VSAN. It also covered the mapping between VLANs
and VSANs in an FCoE SAN.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 50
Concepts in Practice
• VMware ESXi

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 51

The Concepts in Practice section covers VMware ESXi.

Note:

For the latest information on VMware products, visit www.vmware.com.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 51
VMware ESXi

ESXi
• Bare-metal hypervisor
• Abstracts processor, memory, storage, and network resources
into multiple VMs
• Comprises underlying VMkernel OS that supports running
multiple VMs
- VMkernel controls and manages compute resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 52

VMware ESXi is a bare-metal hypervisor with a compact architecture that is designed for
integration directly into virtualization-optimized compute system hardware, enabling rapid
installation, configuration, and deployment. ESXi abstracts processor, memory, storage, and
network resources into multiple VMs that run unmodified operating systems and applications.
The ESXi architecture comprises an underlying operating system, called VMkernel, that provides a
means to run management applications and VMs. VMkernel controls all hardware resources on
the compute system and manages resources for the applications. It provides core OS
functionality, such as process management, file system, resource scheduling, and device
drivers.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 52
Module Summary
Key points covered in this module:
• Virtual layer
• Virtualization software
• Resource pool
• Virtual resources

© Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 53

This module covered the functions of virtual layer, virtualization software, resource pool, and
virtual resources.

Copyright 2014 EMC Corporation. All rights reserved. Module: Virtual Layer 53
