Data Migration Techniques for VMware vSphere
A Detailed Review
EMC Information Infrastructure Solutions
Abstract
This white paper profiles and compares various methods of data migration in a virtualized environment. In-array,
cross-array, and host-based methods are examined. While the primary focus of this paper is on the VMware vSphere
4.1 infrastructure, it also assesses replication options, storage virtualization, and the tools that can assist
administrators in migrating not just storage, but also the applications and services utilizing that storage. Information
about the most appropriate replication strategy for each scenario is provided.
November 2010
Copyright 2010 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part number: H8063
Table of Contents
Executive summary
  Business case
  Solution overview
  Key results
Introduction
  Introduction to this white paper
  Purpose
  Scope
  Audience
  Terminology
Technology overview
  Introduction
  EMC CLARiiON CX4-480
  EMC Unisphere
  EMC VPLEX Metro
  EMC VPLEX Local
  EMC RecoverPoint
  EMC SAN Copy
  CLARiiON LUN Migrator
  VMware vSphere
  VMware vCenter SRM
  VMware vCenter Converter
Configuration
  Overview
  Physical environment
  Hardware resources
  Software resources
Data migration with VMware vCenter Storage vMotion
  Overview
  Administering the virtual infrastructure
  Migration options
  Destination disk types in Storage vMotion
  Using VMware Storage vMotion
  Storage vMotion requirements and limitations
  Storage hardware acceleration
  Comparing Storage vMotion with and without VAAI enabled
  Conclusion
Data migration with VMware vCenter Converter
  VMware vCenter Converter
  Hot cloning with synchronization and switch features
  Configuring a hot cloning operation
  Performance impact of hot cloning
  Cold cloning
Data migration with EMC VPLEX Metro
  Introduction to the EMC VPLEX family
  Data migration with VPLEX
  Configuration of VPLEX Metro
  VPLEX virtual volumes
  Introducing VPLEX into an existing environment
  Migration scenario
  Conclusion
Data migration with CLARiiON SAN Copy
  Introduction to EMC CLARiiON SAN Copy
  Environment topology
  Uses for SAN Copy
  Migrations available with SAN Copy
  Benefits of SAN Copy
  SAN Copy full and incremental sessions
  SAN Copy in a VMware environment
  Requirements and considerations
  Migration duration and performance
  CLARiiON SAN Copy setup overview
  Migrating to third-party storage arrays
  Migration scenario
  Conclusion
Data migration with CLARiiON LUN Migrator
  CLARiiON virtual LUN Migrator
  Migration duration and performance
  CLARiiON LUN migration procedure
  Scenario: Migrating applications with CLARiiON LUN Migrator
  Environment topology
  Array performance during the migration
  Application performance during the migration
  Conclusion
Data migration with VMware vCenter Site Recovery Manager
  Test environment
  Requirements for using VMware vCenter SRM as a migration tool
  Using VMware vCenter SRM as a migration tool
  Installing and configuring VMware vCenter SRM failovers
  Impact on production performance
  Using RecoverPoint SRA for local array or LUN migration
Summary of data migration techniques
  Comparing the data migration techniques
  Scenario 1: Migration of individual virtual disks
  Scenario 2: Migration of datastore contents to another LUN on same array
  Scenario 3: Migration of entire contents of a datastore to a LUN on another array
  Scenario 4: Migration of the entire virtual infrastructure to another geographical location
  Scenario 5: Migration from physical to virtual or inter-hypervisor
Conclusion
  Summary
  Findings
  Next steps
References
  White papers
  Other documentation
Executive summary
Business case
To meet the business challenges presented by today's on-demand 24x7 world, data must be highly available: in the right place, at the right time, and at the right cost to the enterprise. IT organizations are increasingly tasked with improving flexibility and agility within the enterprise, and at the center of those capabilities is data migration. VMware vSphere increases flexibility at the server level; for that flexibility to be realized across the enterprise, the data and storage must mirror it. Data migration must occur seamlessly and without impacting applications or end users.
Solution overview
A variety of techniques and tools are available to customers when migrating virtual
data centers, each with its own advantages and disadvantages. This white paper
investigates several of the methods commonly used for virtual data center
migrations. It mainly examines the VMware vSphere 4.1 infrastructure but also
assesses replication options, storage virtualization, and the tools that can assist
administrators in migrating not just storage but also the applications and services
using that storage.
The main focus areas are:
Migrating virtual environments using native VMware and EMC tools and functionality, such as VMware vCenter Converter, VMware Storage vMotion, EMC CLARiiON SAN Copy, and CLARiiON LUN Migrator
Disaster recovery (DR) with VMware vCenter Site Recovery Manager (SRM) as a migration tool for coordinating, testing, and executing a data center migration
EMC VPLEX for nondisruptive data migration across storage systems
Terminology
Virtual machine: A software implementation of a machine that executes programs like a physical machine.
VMDK: Virtual Machine Disk format. A VMDK file stores the contents of a virtual machine's hard disk drive. The file can be accessed in the same way as a physical hard disk.
VMware vCenter SRM: VMware vCenter Site Recovery Manager.
Technology overview
Introduction
This section briefly describes the key technologies deployed in the test environment.
EMC CLARiiON CX4-480
The EMC CLARiiON CX4 series delivers industry-leading innovation in midrange storage with the fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable hardware design and advanced software capabilities enables the CLARiiON CX4 series systems to meet the growing and diverse needs of today's midsize and large enterprises. Through innovative technologies like Flash drives and UltraFlex technology, customers can:
Decrease costs and energy use
Optimize availability and virtualization
The CLARiiON CX4-480 is a versatile and cost-effective solution for organizations seeking an alternative to server-based storage. It delivers performance, scalability, and advanced data management features in one easy-to-use storage solution.
EMC Unisphere
EMC Unisphere provides a flexible, integrated experience for managing existing
CLARiiON storage systems and next-generation EMC unified storage offerings in a
single screen. This new approach to midtier storage management fosters simplicity,
flexibility, and automation. Unisphere's unprecedented ease of use is reflected in intuitive task-based controls, customizable dashboards, and single-click access to real-time support tools and online customer communities.
EMC VPLEX Metro
EMC VPLEX Metro enables disparate storage arrays at two separate locations to
appear as a single, shared array to application hosts, enabling customers to easily
migrate and plan the relocation of application servers and data, whether physical or
virtual, within and/or between data centers across distances of up to 100 km.
VPLEX Metro enables companies to ensure effective information distribution by
sharing and pooling storage resources across multiple hosts over synchronous
distances.
VPLEX Metro empowers companies with new ways to manage their virtual
environment over synchronous distances so they can:
Transparently share and balance resources across physical data centers
Ensure instant, real-time data access for remote users
Increase protection to reduce unplanned application outages
EMC VPLEX Local
EMC VPLEX Local provides seamless data mobility and lets organizations manage
multiple heterogeneous arrays from a single interface within a data center. VPLEX
Local provides a next-generation architecture that enables customers to increase
availability and improve utilization across multiple arrays.
EMC RecoverPoint
EMC RecoverPoint supports cost-effective, continuous data protection and
continuous remote replication for on-demand protection and recovery to any point in
time. RecoverPoint's advanced capabilities include policy-based management,
application integration, and bandwidth reduction.
RecoverPoint provides a single, unified solution to protect and/or replicate data
across heterogeneous storage. With RecoverPoint, organizations can simplify
management and reduce costs, recover data at a local or remote site to any point in
time, and ensure continuous replication to a remote site without impacting
performance.
EMC SAN Copy
EMC CLARiiON SAN Copy is a storage-system-based application that is available
as an optional package. SAN Copy is designed as a multipurpose replication product
for data mobility, migrations, content distribution, and disaster recovery. SAN Copy
enables the storage system to copy data at a block level directly across the SAN,
from one storage system to another, or within a single CLARiiON system. While the
software runs on the CLARiiON storage system, it can copy data from, and send
data to, other supported storage systems on the SAN.
CLARiiON LUN Migrator
CLARiiON LUN Migrator is a feature that moves data, without disruption to host applications, from a source LUN to a destination LUN of the same or larger size, and with the requisite characteristics, within a single storage system. LUN migration leverages EMC FLARE technology within the array.
Software resources
EMC PowerPath/VE 5.4.1
EMC RecoverPoint 3.3
EMC SAN Copy FLARE 30
EMC Unisphere 1.0
VMware vCenter 4.1
VMware vCenter Converter Standalone 4.3
VMware vCenter Site Recovery Manager 4.1
VMware vSphere 4.1
Microsoft Exchange 2010 RTM (Build 14.00.0639.021)
Microsoft SQL Server 2005 Enterprise Edition
Microsoft Windows 2008 Enterprise Edition
Data migration with VMware vCenter Storage vMotion
Overview
With VMware vCenter Storage vMotion, a virtual machine and its disk files can be
migrated from one datastore to another while the virtual machine is running. These
datastores can be on the same storage array, or they can be on separate storage
arrays. Storage vMotion is supported for use with Fibre Channel (FC), network file system (NFS), and Internet small computer system interface (iSCSI) storage protocols.
Administering the virtual infrastructure
Storage vMotion can be used in several ways to administer the virtual infrastructure:
Storage maintenance and reconfiguration: Storage vMotion can be used to move virtual machines off a storage device to enable maintenance or reconfiguration of the storage device without virtual machine downtime.
Redistributing storage load: Storage vMotion can be used to manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.
Upgrading VMware ESX/ESXi without virtual machine downtime: During an upgrade from ESX Server 2.x to ESX/ESXi 3.5 or later, running virtual machines can be migrated from a VMFS2 datastore to a VMFS3 datastore. The VMFS2 datastore can be upgraded without any impact on the virtual machines. Storage vMotion can then be used to migrate the virtual machines back to the original datastore without any virtual machine downtime.
From an infrastructure perspective, the main requirement is that both the source and target datastores are accessible to the ESX/ESXi host on which the virtual machine is hosted, as shown in Figure 2. The virtual machine does not change execution host during a migration with Storage vMotion.
Figure 2: Migration with Storage vMotion
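As a quick illustration of this prerequisite, the following minimal sketch (Python with the pyVmomi vSphere SDK; the vCenter address, credentials, VM name, and datastore names are placeholder assumptions, not values from this paper) checks that the VM's current host can see both the source and target datastores before a migration is attempted.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder credentials for a lab vCenter; adjust for the environment.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory for the first managed object with a matching name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "Exchange-VM")        # assumed VM name
host = vm.runtime.host                                      # host executing the VM
visible = {ds.name for ds in host.datastore}                # datastores this host can see
for ds_name in ("SOURCE_DS", "TARGET_SVMOTION_EXCHANGE"):   # assumed datastore names
    print(ds_name, "accessible" if ds_name in visible else "NOT accessible")

Disconnect(si)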
Migration options
Depending on the running state of the virtual machine, there is a slight difference in the options available to the user. A powered-off virtual machine provides the full range of migration options, which can occur simultaneously, whereas a powered-on virtual machine is restricted to migrating either the resources or the data in the same job. These options are detailed in Table 3.
Table 3: Migration options
Powered-off or suspended virtual machine
Change host: Move the virtual machine to another ESX/ESXi host
Change datastore: Move the virtual machine's configuration file and virtual disks
Change both host and datastore: Move the virtual machine to another ESX/ESXi host and move its configuration file and virtual disks
Powered-on virtual machine
Change host: Move the virtual machine to another ESX/ESXi host
Change datastore: Move the virtual machine's configuration file and virtual disks
To execute a VMware Storage vMotion operation, select the Migrate option from the context menu of the virtual machine to be migrated. The same menu option is selected when executing a VMware vMotion operation. Table 3 details the migration options presented.
Note The Change host option is a vMotion operation only, not a Storage vMotion operation. You cannot perform vMotion and Storage vMotion simultaneously on a running virtual machine. This testing ran on powered-on virtual machines, so the selected option was Change datastore.
Destination disk types in Storage vMotion
During a migration with Storage vMotion, virtual disks can be transformed from thick-provisioned to thin-provisioned or from thin-provisioned to thick-provisioned. The following format options are available:
Same as Source: Use the format of the original virtual disk. If this option is selected for a raw device mapping (RDM) disk in either the physical or virtual compatibility mode, only the mapping file is migrated.
Thin-provisioned: Use the thin format to save storage space. The thin virtual disk uses only as much storage space as it needs for its initial operations. When the virtual disk requires more space, it can grow in size up to its maximum allocated capacity. This option is not available for RDMs in physical compatibility mode. If this option is selected for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.
Thick-provisioned: Allocate a fixed amount of hard disk space to the virtual disk. The virtual disk in the thick format does not change its size and, from the beginning, occupies the entire datastore space provisioned to it. This option is not available for RDMs in physical compatibility mode. If this option is selected for a virtual compatibility mode RDM, the RDM is converted to a virtual disk. RDMs converted to virtual disks cannot be converted back to RDMs.
Disks are converted from thin to thick format or thick to thin format only when they are copied from one datastore to another. If a disk is left in its original location, the disk format is not converted, regardless of the selection made.
There is also a difference in the way Storage vMotion operates with virtual disks and RDMs:
For virtual disks in persistent mode, the entire virtual disk is migrated.
For RDMs in virtual mode, it is possible to migrate the mapping file or convert to thick-provisioned or thin-provisioned disks during migration, as long as the destination is not an NFS datastore.
For RDMs in physical mode, only the mapping file is migrated; the RDM does not move.
Refer to Storage vMotion requirements and limitations for more information about requirements and limitations.
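A hedged sketch of the thin/thick conversion described above, expressed against the vSphere API's RelocateVM_Task (continuing from the pyVmomi connection in the earlier sketch; vm and target_ds are assumed to have been looked up already). The "sparse" transformation corresponds to the wizard's thin-provisioned option, "flat" to thick.

from pyVmomi import vim

spec = vim.vm.RelocateSpec()
spec.datastore = target_ds   # destination datastore (a vim.Datastore object)
# Convert the copied disks to the thin format; use "flat" for thick.
spec.transform = "sparse"

# On a powered-on VM this runs as a Storage vMotion; disks left in their
# original location keep their original format, as noted above.
task = vm.RelocateVM_Task(spec)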
Using VMware Storage vMotion
The VMware vCenter Migration wizard can be used to migrate a powered-on virtual machine from one host to another, using vMotion technology. To relocate the disks of a powered-on virtual machine, the virtual machine is migrated using Storage vMotion. Both of these migration methods can be executed from the same Migrate option of a virtual machine as follows:
1. In the vSphere client, right-click the virtual machine for all available options, as
shown in Figure 3.
Figure 3: Selecting the Migrate option
The Select Migration Type screen is displayed, with the option to either move the virtual machine to another host or to move the virtual machine's storage to another datastore, as shown in Figure 4.
When the virtual machine is powered on, Change datastore is the default datastore migration type. The basic settings provide the ability to migrate all of the storage to a single datastore only.
Figure 4: Selecting the Migration Type
2. To display a list of available target datastores that enables the mapping of individual virtual machine disk (VMDK) files to a specific datastore, select the Advanced option as shown in Figure 5.
Figure 5: Selecting the Advanced option
When selecting the destination datastore, as shown in Figure 6, it is important to
consider the placement of individual virtual disks when dealing with I/O-intensive
applications. Certain components of the application, such as database, logs, and
indexes, may perform better or be better protected from component failure, if
placed on separate storage devices.
Figure 6: Selecting the datastore
When all of the relevant source virtual disks have been mapped to their
respective target datastores, the Ready to Complete screen provides a final
summary of the selected Storage vMotion job as shown in Figure 7.
Figure 7: Reviewing the summary screen
As can be seen from Figure 7, in this scenario, the OS device was set to remain on
its NFS storage and the remaining eight virtual disks were set to be moved to the
TARGET_SVMOTION_EXCHANGE datastores.
For more information about using the Migration wizard, refer to the vSphere
Datacenter Administration Guide.
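The wizard steps above can also be expressed in script form against the vSphere API. This pyVmomi sketch mirrors the Advanced option by mapping individual VMDKs to specific target datastores with per-disk locators; the disk labels and datastore objects are assumptions for illustration, looked up as in the earlier sketches.

from pyVmomi import vim

# Map selected virtual disks to their own target datastores (compare Figure 6).
targets = {"Hard disk 2": db_datastore,    # assumed labels and datastore
           "Hard disk 3": log_datastore}   # objects from the inventory

spec = vim.vm.RelocateSpec()
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label in targets:
        locator = vim.vm.RelocateSpec.DiskLocator()
        locator.diskId = dev.key                       # identifies this VMDK
        locator.datastore = targets[dev.deviceInfo.label]
        spec.disk.append(locator)

# Disks without a locator (for example, an OS disk on NFS) stay where they are,
# matching the scenario summarized in Figure 7.
task = vm.RelocateVM_Task(spec)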
Storage vMotion requirements and limitations
A virtual machine and its host must meet resource and configuration requirements
for the virtual machine disks to be migrated with Storage vMotion.
Storage vMotion is subject to the following requirements and limitations:
Virtual machines with snapshots cannot be migrated using Storage vMotion. To migrate these machines, the snapshots must be deleted or reverted (a pre-flight check for this is sketched after this list).
Virtual machine disks must be in persistent mode or must be RDMs. For virtual
compatibility mode RDMs, it is possible to migrate the mapping file or convert to
thick-provisioned or thin-provisioned disks during migration, as long as the
destination is not an NFS datastore. For physical compatibility mode RDMs, it is
only possible to migrate the mapping file.
The migration of virtual machines during VMware Tools installation is not
supported.
The host on which the virtual machine is running must be licensed for either the Enterprise or Enterprise Plus edition to execute a Storage vMotion operation.
ESX/ESXi 3.5 hosts must be licensed and configured for vMotion. ESX/ESXi
4.0 and later hosts do not require vMotion configuration to perform migration
with Storage vMotion.
The host on which the virtual machine is running must have access to both the
source and target datastores.
A particular host can be involved in up to two migrations with vMotion or
Storage vMotion at one time.
VMware vSphere supports a maximum of eight simultaneous vMotion, cloning,
deployment, or Storage vMotion accesses to a single VMFS3 datastore, and a
maximum of four simultaneous vMotion, cloning, deployment, or Storage
vMotion accesses to a single NFS or VMFS2 datastore. A migration with
vMotion involves one access to the datastore. A migration with Storage vMotion
involves one access to the source datastore and one access to the destination
datastore.
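Several of these limitations can be verified before scheduling a migration. A minimal sketch, reusing the pyVmomi connection from the earlier examples, that flags virtual machines whose snapshots would block Storage vMotion:

from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.snapshot is not None:   # snapshot info present => snapshots exist
        print(vm.name, "- has snapshots; delete or revert before migrating")
view.DestroyView()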
Storage hardware acceleration
Through the use of the VMware vStorage APIs for Array Integration (VAAI) Full Copy feature,
it is possible to accelerate Storage vMotion using compliant storage hardware,
enabling the host to offload specific virtual machine and storage management
operations to the hardware layer. With storage hardware assistance, the host
performs these operations faster and consumes less CPU, memory, and storage
fabric bandwidth.
Note VAAI licensing requires the Enterprise edition or higher.
The Full Copy feature offloads the cloning operations to the storage array. The host
issues the EXTENDED COPY SCSI command to the array and directs the array to
copy the data from the source LUN to a destination LUN, or to the same source
LUN, if required, depending on how the VMFS datastores are configured on the
relevant LUNs. The array uses its efficient internal mechanism to copy the data and
confirms Done to the host. Figure 8 shows how the storage hardware acceleration
process is managed.
Figure 8: Storage hardware acceleration with VAAI Full Copy
Full Copy (or VAAI) enables arrays to make copies of certain virtualization objects
within the array, without the need to have the ESX server read and write those
objects.
To benefit from the hardware acceleration functionality, you must have:
ESX version 4.1 or later
A storage array that supports hardware acceleration (for example, CLARiiON
FLARE 30)
On the VMware vSphere server, hardware acceleration is enabled by default.
To change this setting, go to the ESX server's Configuration tab > Advanced Settings (under Software) > DataMover > DataMover.HardwareAcceleratedMove, as shown in Figure 9, where:
0 = disabled
1 = enabled
Figure 9: Hardware acceleration settings
To enable hardware acceleration, run the following command on the ESX version 4.1 console:
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
When using a CLARiiON array that supports the Full Copy/Array Accelerated Copy feature, a required configuration step is to configure the ESX host initiator records with failovermode 4, that is, asymmetric logical unit access (ALUA) mode, on the CLARiiON.
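The same advanced option can also be read and toggled through the API rather than the service console. A hedged pyVmomi sketch, assuming host is a vim.HostSystem obtained as in the earlier examples:

from pyVmomi import vim

opt_mgr = host.configManager.advancedOption   # vim.option.OptionManager
key = "DataMover.HardwareAcceleratedMove"

current = opt_mgr.QueryOptions(key)[0].value
print(key, "=", current, "(1 = enabled, 0 = disabled)")

# Enable VAAI Full Copy offload. Depending on the pyVmomi version, the value
# may need an explicit integer type matching the option's declared type.
opt_mgr.UpdateOptions(changedValue=[vim.option.OptionValue(key=key, value=1)])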
The Full Copy feature is only supported when the source and destination LUNs
belong to the same storage array. Currently, it is not supported for cross-array
migrations.
For more information on VAAI, visit the VMware Knowledge Base:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976
Comparing Storage vMotion with and without VAAI enabled
In the test environment, a single Windows virtual machine running Microsoft Exchange 2010 was used. The virtual machine's boot device was on NFS storage, so the goal was to migrate the application data only. Apart from the boot device, the virtual machine had eight virtual disks, spread evenly across two 100 GB datastores that were configured on two separate FC LUNs, as shown in Table 4.
Table 4: Datastore configuration
Datastore 1 (100 GB) Datastore 2 (100 GB)
DB-01-Database (25 GB) DB-01-Logs (10 GB)
DB-02-Logs (10 GB) DB-02-Database (25 GB)
DB-03-Database (25 GB) DB-03-Logs (10 GB)
DB-04-Logs (10 GB) DB-04-Database (25 GB)
The Storage vMotion operation, although executed against a single virtual machine,
required the simultaneous migration of eight separate virtual disks from two source
datastores to two separate target datastores.
Figure 10 displays the disk activity, as seen through the ESX storage adapter
counters (vmhba0 and vmhba1), within the vSphere Client performance views for
both VAAI and non-VAAI Storage vMotion operations.
Figure 10: Disk activity
These Storage vMotion operations were conducted with the virtual machine powered
up but not under any load. As can be seen, when VAAI is enabled the average commands per second are vastly reduced, from a combined total of approximately 3,600 commands per second down to approximately 800 commands per second.
This drop in host bus adapter (HBA) activity is directly attributable to the fact that,
with VAAI enabled, ESX no longer needs to conduct as many inter-datastore
operations because the array manages this data transfer on the back end.
While the VAAI-enabled migration was slightly quicker, it is noticeable that the profile
of the read activity was completely different when VAAI was enabled. There was a
short spike in the read commands per second at the beginning of the VAAI-enabled
Storage vMotion, which then ceased for the remainder of the migration.
This is the nature of hardware offloading, as the ESX server reads the full contents
of the source device once and sends it to the array. The array migrates the data to
the target device and, on completion, sends the Done command to the ESX server.
Impact on the CLARiiON storage processors
It is also interesting to note the impact this offloading has on the CLARiiON
throughput, as shown in Figure 11. Both datastores were configured on separate
storage processors (SPA and SPB) so that the load is spread evenly across both.
Figure 11: CLARiiON throughput comparison
The array statistics correlate perfectly with what the ESX server observes during the
online Storage vMotion operations. With the traditional, non-VAAI Storage vMotion
operation, the array detects far more read and write activity from the ESX server.
With VAAI enabled, this activity greatly decreases because, once the data is copied
to the array, the hardware offloading takes care of the subsequent copy operations
to the source device at the back end.
It is interesting to note that the CLARiiON storage processors are actually busier
when VAAI is not enabled, as shown in Figure 12. So instead of the hardware
offloading putting an increased load on the CLARiiON storage array, it did the
opposite in this case; the array was able to use its own internal efficiencies for the
back-end copy rather than servicing front-end I/O with non-VAAI Storage vMotion
operations.
Figure 12: CLARiiON SP utilization comparison
Impact on Exchange 2010
Neither the virtual machine nor the ESX server was short of CPU or memory
resources during the Storage vMotion operations, but the associated increase in I/O
activity could have had an adverse effect on the response times of the application.
To validate Exchange 2010 performance, Microsoft LoadGen was run for two 2-hour
periods, once with VAAI disabled and once with VAAI enabled, with the Storage
vMotion operation executed 30 minutes into each test.
While the final results in terms of the overall IOPS achieved and the average
response times for databases and logs were the same for each run, there was a
noticeable increase in the database seconds per read (DB sec/Read) response
times for the duration of the Storage vMotion operations as shown in Figure 13.
Figure 13: Exchange 2010 latencies comparison
The baseline figures reflect the normal latencies observed on either side of the
Storage vMotion operations. In terms of the impact to Exchange, there was no
distinguishable difference in read and write latencies during VAAI and non-VAAI
migrations. Both types of migration added an extra three microseconds to the DB
sec/Read, and one extra microsecond to the database seconds per write (DB
sec/Write). The log writes were unaffected by the migrations. This is not to say that
Storage vMotion has no effect on the log devices, simply that the impact was not
measurable in this case, since the sequential nature of Exchange log devices lends
itself to good performance when the underlying storage is correctly configured.
Conclusion
VMware Storage vMotion is possibly the easiest and most convenient method of
migrating storage in a virtualized environment. It is one of the few methods that
provides the VMware administrator with online, nondisruptive mobility across the
underlying storage infrastructure. Once the vSphere Server detects both the source
and target datastores, it is simply a matter of scheduling the migration process itself.
Storage vMotion is now further enhanced by the VAAI hardware offloading and Full Copy functions, which use storage array technology and efficiencies to dramatically reduce the server workload required to complete the migration.
Data migration with VMware vCenter Converter
VMware vCenter Converter
Two versions of the VMware vCenter Converter are available: standalone and
integrated. The VMware vCenter Converter Standalone used in this test scenario
was version 4.0.1 (build 161434). The VMware vCenter Converter module tested
was the version integrated into VMware vCenter 4.1.
Both tools allow for hot cloning (converting the powered-on machine) or cold cloning
(booting from the VMware vCenter Converter boot CD), regardless of whether the
source is a physical or virtual machine.
While in most scenarios VMware vCenter Converter is used to convert a physical
host or a virtual machine from a different hypervisor to a VMware virtual machine, it
is often considered as a potential candidate tool for the migration of a VMware virtual
machine from one place to another.
One distinction should be considered in terms of using VMware vCenter Converter
for migrations. Strictly speaking, VMware vCenter Converter copies and clones the
virtual machine to another location, rather than migrating or moving it. A second,
separate machine is created that retains all of the operating system and data
characteristics.
When converting from physical machines or non-VMware hypervisor machines, it is assumed that the resulting VMware virtual machine will have a different set of hardware. This is also partially the case if VMware vCenter Converter is used to migrate an existing VMware virtual machine to another VMware virtual machine; for example, the resulting machine has different MAC addresses, which could potentially have consequences for any MAC address-based licensing or network security that is already in place.
The following scenarios were tested for the standalone and integrated versions of
VMware vCenter Converter:
Hot cloning with synchronization and switch features
Cold cloning
Hot cloning with synchronization and switch features
Hot cloning is supported by both the VMware vCenter Converter Standalone and
the VMware vCenter Converter module (vCenter plug-in). This enables a copy of
the running machine (physical or virtual) to be created on a VMware virtual machine
running on a vSphere host, without interruption of service. Note that there is a
distinction between creating a copy without interruption, and transferring production
without interruption. VMware vCenter Converter Standalone supports the former,
but not the latter. The usefulness of VMware vCenter Converter as a migration tool
is therefore limited if its synchronization and switch features are disabled.
With the synchronization feature on, VMware vCenter Converter performs an
initial copy of all requested volumes, and then does an incremental
resynchronization of the data that was changed during the initial copy window.
In an active system, data is continually changing. The resynchronization
feature executes only once, so if data is still being changed during
resynchronization, then this data is not captured.
To provide for this, VMware vCenter Converter enables the user to select services
(in the case of Windows) that should be shut down prior to performing the
resynchronization, thereby preventing data loss.
With the switch feature on, VMware vCenter Converter powers down the
source machine, and powers up the newly created virtual machine, in addition
to performing any requested customization.
Configuring a hot cloning operation
The VMware vCenter Converter Standalone interface is similar to the VMware
vCenter Converter integrated interface, which is used to illustrate the following hot
cloning configuration process.
Before starting the configuration:
Install the VMware vCenter Converter module
Install the plug-in on the VI Client
Use the following steps to configure the hot cloning operation, as done in this
scenario:
1. Select Import Machine from the context menu of the import destination (in this
case, a vSphere host) as shown in Figure 14.
Figure 14: Selecting Import Machine
The Source System screen is displayed as shown in Figure 15.
Figure 15: Source System screen
2. Select Powered-on Machine as the source type and provide the relevant
credentials for the powered-on machine: IP address or name, User name,
Password, and OS Family.
3. Click Next to continue.
A message is displayed as shown in Figure 16.
Figure 16: Uninstall options
To complete the import, a VMware vCenter Converter agent is temporarily
installed on the source machine. These files can be automatically or manually
removed after the import, depending on the method selected.
4. Select the automatic uninstall method and click Yes to continue.
The Destination Location screen is displayed as shown in Figure 17.
Figure 17: Destination Location screen
5. Select a datacenter from the Inventory to hold the destination virtual machine
and provide the relevant names for the Virtual machine name, Datastore, and
Virtual machine version.
Note If the source is an existing virtual machine in the same vCenter instance,
then specify a new name for the virtual machine.
6. Click Next.
The Options screen is displayed as shown in Figure 18 where parameters can
be configured for the conversion.
Figure 18: Options screen
7. In the Data to Copy section, from the Advanced section, select target
datastores individually for each volume in the source machine.
Note It is also possible to convert to thin format at this point, if required.
8. Ensure that the correct virtual network on which to run the machine is selected
as shown in Figure 19.
VMware vCenter Converter defaults to the first alphabetical network name (and
not necessarily the previous network used if the source is a VMware virtual
machine).
Figure 19: Options screen - selecting the correct virtual network
9. If resynchronizing the data after the initial copy, select the services you want to
stop on the source machine before starting to resynchronize. This is to ensure
that no data is lost before switching over to the new target machine.
In this scenario, the SQL Server is running actively on the source and the SQL services are stopped before performing the final resynchronization, as shown in Figure 20 (a scripted equivalent is sketched after Figure 21).
Figure 20: Options screen - stopping SQL services before resynchronization
Note This scenario also uses advanced options, including those to resynchronize and power off the source, as well as to power on the destination machine. Since an existing virtual machine is being migrated in this scenario, there is no need to install VMware Tools, as these were already present.
10. Click Next.
The Summary screen is displayed showing all of the selected options prior to
cloning as shown in Figure 21.
Figure 21: Summary screen
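The service shutdown selected in step 9 can also be scripted outside the Converter interface, for example when rehearsing a cutover. A minimal sketch invoking the standard Windows sc utility from Python; the source host name and service names are assumptions for this scenario.

import subprocess

SOURCE = r"\\sql-source.example.com"   # assumed source machine name
# Stop the agent before the engine so dependent services close cleanly.
for service in ("SQLSERVERAGENT", "MSSQLSERVER"):
    subprocess.run(["sc", SOURCE, "stop", service], check=True)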
Performance impact of hot cloning
During a hot cloning operation, the source disks are copied to another location. This
results in a substantial increase in read I/O on the source LUNs. This may or may
not have an impact on the source system performance, depending on the underlying
configuration of the source LUNs.
In this case, there was no measurable impact to the performance of the SQL Server
response times or the transactions per minute executed by the SQL DVD Store
application. However, the additional read I/O was very apparent when looking at the
utilization of the source LUNs, as demonstrated in the graph in Figure 22.
Figure 22: LUN utilization
Cold cloning
Cold cloning creates a copy of the virtual machine on the target VMware vSphere
system while the source machine (physical or virtual) is shut down. More accurately,
this means a virtual machine that is powered off, or a physical machine that is not
running its installed operating system but has instead been booted from a VMware
vCenter Converter Boot CD.
The time taken to complete the copy (and therefore the level of disruption to service) depends entirely on the quantity of data to be transferred, as well as the specification of the source and target hardware. The advantage of a
cold cloning process is that there is no chance of data being updated on the source
system, so there is no need for incremental resynchronization as in the hot cloning
process.
There is no performance impact to measure since the source system is inactive
during the cloning process.
As with the hot cloning process, the target system is a copy of the source system,
but with a different set of similar hardware, so there are different MAC addresses
and so on, on the target virtual machine.
The process of cold cloning is very similar to hot cloning, except for the step where
the type of source machine is selected. Previously, a powered-on virtual machine
was selected but, in this scenario, one of the other options was selected as
appropriate, depending on whether the source is an existing vSphere virtual
machine, a virtual machine from an alternate hypervisor, or a physical machine.
Data migration with EMC VPLEX Metro
Introduction to the EMC VPLEX family
The EMC VPLEX family, with the EMC GeoSynchrony operating system, provides seamless data mobility within and across data centers.
Data migration with CLARiiON SAN Copy
Environment topology
Figure 32 illustrates the environment topology for the SAN Copy data migration
scenario.
Figure 32: SAN Copy data migration
Uses for SAN Copy
The common uses for SAN Copy are:
Data mobility: eliminate impact on production activities during data mobility tasks
Data migration: easily migrate data from qualified storage systems to the CLARiiON system
Content distribution: regularly push updated production data to remote locations
Disaster recovery: protect and manage applications through integration with Replication Manager
For the purpose of this white paper, the focus is on using SAN Copy for performing
data migrations in a VMware environment.
Migrations available with SAN Copy
With SAN Copy, the following data migrations are possible:
Migration between CLARiiON arrays
Migration within the same CLARiiON array
Migration between CLARiiON, Symmetrix, and third-party arrays
Benefits of SAN Copy
The main benefits of SAN Copy are:
Optimal performance, as data is copied directly through the SAN
No host resources are required, as SAN Copy executes on the storage array
Interoperability with many heterogeneous storage systems
SAN Copy full and incremental sessions
SAN Copy supports two types of sessions:
Full SAN Copy
A full session copies the entire contents of the source LUN to the destination
LUN(s) every time the session is executed. Full sessions can be a push or a
pull with any qualified storage system.
Incremental SAN Copy
An incremental session requires a full copy only once, which is referred to as an
initial synchronization. Each session after that copies only the changed data
from the source LUN to the destination LUN. For incremental sessions, the
source LUN must reside on a CLARiiON system but the destination LUN can
reside on any qualified storage system.
SAN Copy in a VMware environment
Since it is array-based, the focus of SAN Copy is entirely on the source LUN and its
contents. From an operational perspective, the VMware layer is irrelevant to SAN
Copy. Regardless of whether a LUN is formatted as a VMFS datastore containing
multiple virtual disks, or a LUN is passed through to a virtual machine as an RDM,
SAN Copy copies every block of data within that source LUN to the destination LUN.
If using Full SAN Copy to migrate a SAN LUN containing a VMFS datastore, the
VMFS datastore must be quiesced before the migration can take place to ensure a
consistent point-in-time image of the data. As this migration would be a disruptive, one-off operation, the simplest method to quiesce the VMFS datastore is to shut
down the virtual machines currently residing on and accessing the datastore. Of
course, it is also possible to use Replication Manager to quiesce the VMFS
datastore. Replication Manager is normally used in the event of regular (daily and
weekly), incremental SAN Copy sessions where the production systems need to
remain online while SAN Copy sends updated data to the destination LUN.
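A hedged sketch of that quiesce step, reusing the pyVmomi helpers from the Storage vMotion examples (the datastore name is an assumption): gracefully shut down every virtual machine residing on the datastore before starting the full SAN Copy session.

from pyVmomi import vim

ds = find_by_name(vim.Datastore, "SOURCE_SANCOPY_DS")   # assumed datastore name
for vm in ds.vm:                      # all VMs with files on this datastore
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        vm.ShutdownGuest()            # graceful shutdown; requires VMware Tools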
Requirements and considerations
SAN Copy is bound by the following requirements and considerations.
Data consistency: there is a difference in the requirements for quiescing the
source LUN when using full and incremental SAN Copy sessions. The source
LUN must be quiesced for the duration of a full SAN Copy session. For an
incremental SAN Copy session, the source LUN only needs to be quiesced just
before it begins. Quiescing the source LUN can be achieved by using the
Navisphere admhost utility on Windows machines to flush the file systems, or
in the case of a VMFS datastore, shutting down the virtual machines residing on
the datastore achieves the same goal. If quiescing the source LUN is not
possible, then the destination image is crash-consistent on completion of the
SAN Copy session.
Application consistency: Replication Manager can be used with SAN Copy to ensure application consistency. Replication Manager supports Microsoft's Volume Shadow Copy Service (VSS) architecture, which allows for the creation of
hot, point-in-time Exchange images, without disrupting the production server.
SQL, SharePoint, Oracle, and DB2 are also supported (including their relevant
hot and online backup modes) as well as VMware VMFS datastores and Hyper-V guests that use iSCSI and CLARiiON or Celerra storage.
Source LUNs and incremental SAN Copy support: CLARiiON storage systems leverage EMC SnapView technology to track changes to the source LUN between incremental SAN Copy sessions.