VMware ESX
Developer(s) | VMware, Inc. |
---|---|
Stable release | 5.1.0a (build 838463) / 25 October 2012[1] |
Platform | i386 (discontinued from 4.0 onwards),[2] x86-64 |
Type | Hypervisor |
License | Proprietary |
Website | VMware ESX |
VMware ESX is an enterprise-level computer virtualization product offered by VMware, Inc. ESX is a component of VMware's larger offering, VMware Infrastructure, and adds management and reliability services to the core server product. VMware is replacing the original ESX with ESXi.[3]
VMware ESX and VMware ESXi are bare-metal hypervisors: VMware's enterprise software for running guest virtual servers directly on host server hardware without requiring an additional underlying operating system.[4]
The basic server requires some form of persistent storage (typically an array of hard disk drives) that stores the hypervisor and support files. A smaller-footprint variant, ESXi, relaxes this requirement by permitting placement of the hypervisor on a dedicated compact storage device. Both variants support the services offered by VMware Infrastructure.[5]
Naming
The name ESX apparently derives from "Elastic Sky X",[6] but with rare exceptions[7] the expansion does not appear in official VMware material.
Technical description
VMware, Inc. refers to the hypervisor used by VMware ESX as "vmkernel".
Architecture
VMware states that the ESX product runs on bare metal.[8] In contrast to other VMware products, it does not run atop a third-party operating system,[9] but instead includes its own kernel. Up through the current ESX version 5.1, a Linux kernel is started first,[10] and is used to load a variety of specialized virtualization components, including VMware's vmkernel component. This previously booted Linux kernel then becomes the first running virtual machine and is called the service console. Thus, at normal run-time, the vmkernel is running on the bare computer and the Linux-based service console runs as the first virtual machine.
The vmkernel itself, which VMware says is a microkernel,[11] has three interfaces to the outside world:
- hardware
- guest systems
- service console (Console OS)
Interface to hardware
The vmkernel handles CPU and memory directly, using scan-before-execution (SBE) to handle special or privileged CPU instructions[12][13] and the SRAT (system resource allocation table) to track allocated memory.[14]
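As a rough illustration of the scan-before-execution idea (not the vmkernel's actual implementation), the sketch below scans a block of guest instructions and emulates privileged ones instead of running them directly on the CPU; the opcode set and handlers are hypothetical.

```python
# Conceptual sketch of scan-before-execution (SBE); not vmkernel code.
# Privileged opcodes are (hypothetically) trapped and emulated in software;
# everything else in the scanned block runs directly on the hardware.

PRIVILEGED = {"cli", "sti", "lgdt", "ltr", "mov_cr"}   # example x86 opcodes

def run_block(instructions, vm_state):
    """Scan a basic block before execution, emulating privileged ops."""
    for op, args in instructions:
        if op in PRIVILEGED:
            emulate(op, args, vm_state)    # hypothetical software emulation
        else:
            execute_natively(op, args)     # hypothetical direct execution

def emulate(op, args, vm_state):
    # e.g. a guest 'cli' clears only the *virtual* interrupt flag,
    # never the host's real one
    if op == "cli":
        vm_state["IF"] = 0

def execute_natively(op, args):
    pass  # placeholder: a real monitor dispatches to the physical CPU
```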
Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux implements the Linux module interface. According to the README file, "This module contains the Linux emulation layer used by the vmkernel."[15]
The vmkernel uses the following device drivers:[15]
- net/e100
- net/e1000
- net/e1000e
- net/bnx2
- net/tg3
- net/forcedeth
- net/pcnet32
- block/cciss
- scsi/adp94xx
- scsi/aic7xxx
- scsi/aic79xx
- scsi/ips
- scsi/lpfcdd-v732
- scsi/megaraid2
- scsi/mptscsi_2xx
- scsi/qla2200-v7.07
- scsi/megaraid_sas
- scsi/qla4010
- scsi/qla4022
- scsi/vmkiscsi
- scsi/aacraid_esx30
- scsi/lpfcdd-v7xx
- scsi/qla2200-v7xx
These drivers mostly equate to those described in VMware's hardware compatibility list.[16] All these modules fall under the GPL. Programmers have adapted them to run with the vmkernel: VMware Inc. has changed the module-loading mechanism and made other minor modifications.[15]
Service console
The Service Console is a vestigial general-purpose operating system most significantly used as a bootstrap for the VMware kernel (vmkernel) and secondarily used as a management interface. Both of these Console Operating System functions are being deprecated as VMware migrates exclusively to the 'embedded' ESX model, the current version of which is ESXi.[17] For all intents and purposes, the Service Console is the operating system used to interact with VMware ESX and with the virtual machines that run on the server.
Linux dependencies
ESX uses a Linux kernel to load additional code, often referred to by VMware, Inc. as the "vmkernel". The dependencies between the "vmkernel" and the Linux part of the ESX server have changed drastically over different major versions of the software. The VMware FAQ[18] states: "ESX Server also incorporates a service console based on a Linux 2.4 kernel that is used to boot the ESX Server virtualization layer". The Linux kernel runs before any other software on an ESX host.[10] On ESX versions 1 and 2, no VMkernel processes run on the system during the boot process.[19] After the Linux kernel has loaded, the S90vmware script loads the vmkernel.[19] VMware Inc. states that vmkernel does not derive from Linux, but acknowledges that it has adapted certain device drivers from Linux device drivers. The Linux kernel continues running under the control of the vmkernel, providing functions including the proc file system used by ESX and an environment to run support applications.[19] ESX version 3 loads the VMkernel from the Linux initrd, thus much earlier in the boot sequence than in previous ESX versions.
In traditional systems, a given operating system runs a single kernel. The VMware FAQ mentions that ESX has both a Linux 2.4 kernel and vmkernel – hence confusion over whether ESX has a Linux base. An ESX system starts a Linux kernel first, but it then loads the vmkernel (also described by VMware as a kernel), which according to VMware Inc. 'wraps around' the Linux kernel and does not derive from Linux.
The ESX userspace environment, known as the "Service Console" (or as "COS" or "vmnix"), derives from a modified version of Red Hat Linux (Red Hat 7.2 for ESX 2.x and Red Hat Enterprise Linux 3 for ESX 3.x). In general, this Service Console provides management interfaces (CLI, webpage MUI, Remote Console).
A further detail differentiates ESX from other VMware virtualization products: ESX supports VMFS, VMware's proprietary cluster file system. VMFS enables multiple hosts to access the same SAN LUNs simultaneously, while file-level locking provides simple protection for file-system integrity.
Purple diagnostic screen
In the event of a hardware error, the vmkernel can 'catch' a machine check exception.[20] This results in an error message displayed on a purple diagnostic screen, colloquially known as a purple screen of death (PSOD, cf. the Blue Screen of Death (BSOD)).
Upon displaying a purple diagnostic screen, the vmkernel writes debug information to the core dump partition. This information, together with the error codes displayed on the purple diagnostic screen, can be used by VMware support to determine the cause of the problem.
vMotion: live migration
Live migration (vMotion) in ESX allows a virtual machine to move between two different hosts. Live storage migration (Storage vMotion) enables live migration of virtual disks on the fly.[21] During vMotion live migration of a running virtual machine (VM), the contents of the VM's RAM are sent from the running VM to the new VM (the instance on another host that will become the running VM after the migration). Since the contents of memory change continuously, ESX copies them iteratively: it first sends the full contents to the other VM, then checks which data has changed and sends that, with each pass transferring a smaller block of changes. At the last moment it very briefly 'freezes' the existing VM, transfers the final changes in RAM contents, and then starts the new VM. Because only the last set of changes is transferred during this freeze, the pause for the final transfer and hand-over of functionality can be so short that end users will hardly notice it.[22][23]
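This iterative pre-copy scheme can be sketched as a short simulation; the page count, dirty rate, and cut-off below are invented for illustration and are not VMware's actual parameters.

```python
import random

# Toy simulation of vMotion-style iterative pre-copy (illustrative only;
# the real thresholds and page tracking are internal to ESX).

PAGES = 4096                 # assume a guest with 4096 memory pages
passes = []                  # pages shipped per pass (stands in for the network)

def dirtied_during(n_sent):
    # Hypothetical model: while n_sent pages were being copied, the running
    # guest re-dirtied roughly a tenth as many pages.
    return set(random.sample(range(PAGES), max(1, n_sent // 10)))

dirty = set(range(PAGES))    # pass 1: every page must be copied
while len(dirty) > 32:       # cut-off: small enough to send while frozen
    passes.append(len(dirty))
    dirty = dirtied_during(len(dirty))

# Brief 'freeze': stop the source VM, ship the last delta plus CPU/device
# state, then resume execution on the destination host.
passes.append(len(dirty))
print("pages sent per pass:", passes)
```

Each pass is smaller than the last, which is why the final frozen transfer can complete in a barely noticeable pause.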
Versions
VMware ESX is available in two main types, ESX and ESXi, although since version 5 only ESXi has continued.
VMware ESX
Version release history:
- VMware ESX Server 1.0 Build 1062 (23 March 2001) – the first release
- VMware ESX Server 1.1 (7 January 2002)
VMware ESX 1.5
- VMware ESX Server 1.5 (13 May 2002)
VMware ESX 2.0 (21 July 2003)
- VMware ESX Server 2.1 Build 22983 (13 April 2006)
- VMware ESX Server 2.0.2 Build 23922 (4 May 2006)
VMware ESX 2.5 (14 December 2004)
- VMware ESX Server 2.5.0 Build 11343 (29 November 2004)
- VMware ESX Server 2.5.1 Build 13057 (20 May 2005)
- VMware ESX Server 2.5.1 Build 14182 (20 June 2005)
- VMware ESX Server 2.5.2 Build 16390 (15 September 2005)
- VMware ESX Server 2.5.3 Build 22981 (13 April 2006)
- VMware ESX Server 2.5.4 Build 32233 (5 October 2006)
- VMware ESX Server 2.5.5 Build 57619 (8 October 2007)
VMware Infrastructure 3.0 (VI3) (5 June 2006)
- VMware ESX Server 3.0 Build 27701 (13 June 2006)
- VMware ESX Server 3.0.1 Build 32039 (25 September 2006)
- VMware ESX Server 3.0.2 Build 52542 (31 July 2007)
- VMware ESX Server 3.0.3 Build 104629 (8 August 2008)
- VMware ESX Server 3.0.3 Update 1 Build 231127 (8 March 2010)
- VMware ESX Server 3.5 (10 December 2007)
- VMware ESX Server 3.5 Build 64607 (20 February 2008)
- VMware ESX Server 3.5 Update 1 Build 82663 (10 April 2008)
- VMware ESX Server 3.5 Update 2 Build 110268 (13 August 2008)
- VMware ESX Server 3.5 Update 3 Build 123630 (6 November 2008)
- VMware ESX Server 3.5 Update 4 Build 153875 (30 March 2009)
- VMware ESX Server 3.5 Update 5 Build 207095 (3 December 2009) – the last version to support 32-bit systems[24]
VMware vSphere 4.0 (20 May 2009)
- VMware ESX 4.0 Build 164009 (21 May 2009)
- VMware ESX 4.0 Update 1 Build 208167 (19 November 2009)
- VMware ESX 4.0 Update 2 Build 261974 (10 June 2010)
- VMware ESX 4.0 Update 3 Build 398348 (5 May 2011)
- VMware ESX 4.0 Update 4 Build 504850 (17 November 2011)
- VMware ESX 4.1 Build 260247 (13 July 2010)
- VMware ESX 4.1 Update 1 Build 348481 (10 February 2011)
- VMware ESX 4.1 Update 2 Build 502767 (27 October 2011)
- VMware ESX 4.1 Update 3 Build 800380 (30 August 2012)
ESX and ESXi before version 5.0 do not support Windows 8 or Windows Server 2012; these Microsoft operating systems can only run on ESXi 5.x or later.[25]
vSphere 4.1 (released 18 July 2010) and its subsequent update and patch releases are the last releases to include both the ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere will include only the VMware ESXi architecture; VMware therefore recommends that deployments of vSphere 4.x use the ESXi hypervisor architecture.
VMware ESXi
Developer(s) | VMware, Inc. |
---|---|
Stable release | 5.1.0 (build 799733) / 10 September 2012[26] |
Platform | x86-64 |
Type | Virtual machine monitor |
License | Proprietary |
Website | VMware ESXi |
VMware ESXi is a smaller-footprint version of ESX that does not include ESX's Service Console. It is available as a free download from VMware, though certain features are disabled[27] without the purchase of a vCenter license.
VMware ESXi was originally a compact version of VMware ESX that allowed for a smaller, 32 MB disk footprint on the host. With a simple configuration console (mostly for network configuration) and the remote VMware Infrastructure Client interface, it allows more resources to be dedicated to guest environments.
There are two variations of ESXi: VMware ESXi Installable and VMware ESXi Embedded Edition. Both can be upgraded to VMware Infrastructure 3[28] or VMware vSphere 4.0 ESXi.
Originally named VMware ESX Server ESXi edition, the product through several revisions finally became VMware ESXi 3. New editions then followed: ESXi 3.5, ESXi 4 and now ESXi 5.
To virtualize Windows 8 or Windows Server 2012 as guest operating systems, the ESXi version must be 5.x or greater.[25]
Version release history:
- VMware ESX 3 Server ESXi edition
- – unknown –
- VMware ESXi 3.5 First Public Release (Build 67921) (31 December 2007)
- VMware ESXi 3.5 Initial Release (Build 70348)
- VMware ESXi 3.5 Update 1 (Build 82664)
- VMware ESXi 3.5 Update 2 (Build 110271)
- VMware ESXi 3.5 Update 3 (Build 123629)
- VMware ESXi 3.5 Update 4 (Build 153875)
- VMware ESXi 3.5 Update 5 (Build 207095)
- VMware ESXi 4.0 (Build 164009) (21 May 2009)
- VMware ESXi 4.0 Update 1 (Build 208167) (9 December 2009)
- VMware ESXi 4.0 Update 2 (Build 261974) (10 June 2010)
- VMware ESXi 4.0 Update 3 (Build 398348) (5 May 2011)
- VMware ESXi 4.0 Update 4 (Build 504850) (17 November 2011)
- VMware ESXi 4.1 (Build 260247) (13 July 2010)
- VMware ESXi 4.1 Update 1 (Build 348481) (10 February 2011)
- VMware ESXi 4.1 Update 2 (Build 502767) (27 October 2011)
- VMware ESXi 4.1 Patch 5 (Build 582267) (30 January 2012)
- VMware ESXi 4.1 Patch 6 (Build 659051) (26 April 2012)
- VMware ESXi 4.1 Patch 7 (Build 702113) (3 May 2012)
- VMware ESXi 4.1 Patch 8 (Build 721871) (14 June 2012)
- VMware ESXi 4.1 Update 3 (Build 800380) (30 August 2012)
- VMware ESXi 5.0 (Build 469512) (24 August 2011)
- VMware ESXi 5.0 Update 1 (Build 623860) (15 March 2012)
- VMware ESXi 5.0 (Build 721882) (14 June 2012)
- VMware ESXi 5.0 (Build 768111) (12 July 2012)
- VMware ESXi 5.0 (Build 821926) (27 September 2012)
- VMware ESXi 5.0 Update 2 (Build 914586) (20 December 2012)
- VMware ESXi 5.1 (Build 799733) (10 September 2012)
- VMware ESXi 5.1 Update 1 (Build 1065491) (25 April 2013)
- VMware ESXi 5.1 (Build 1117900) (22 May 2013)
Related or additional products
The following products operate in conjunction with ESX:
- vCenter Server enables monitoring and management of multiple ESX, ESXi and GSX servers. In addition, users must install it to run infrastructure services such as:
- VMotion (transferring virtual machines between servers on the fly, with zero downtime)[22][23]
- SVMotion (transferring virtual machines between Shared Storage LUNs on the fly, with zero downtime)[29]
- DRS (automated VMotion based on host/VM load requirements/demands)
- HA (restarting of Virtual Machine Guests in the event of a physical ESX Host failure)
- Fault Tolerance (almost instant stateful failover of a VM in the event of a physical host failure)[30]
- Converter enables users to create VMware ESX Server- or Workstation-compatible virtual machines from either physical machines or from virtual machines made by other virtualization products. Converter replaces the VMware "P2V Assistant" and "Importer" products: P2V Assistant allowed users to convert physical machines into virtual machines, and Importer allowed the import of virtual machines from other products into VMware Workstation.
- vSphere Client (formerly VMware Infrastructure Client) enables monitoring and management of a single instance of an ESX or ESXi server. After ESX 4.1, vSphere Client was no longer available from the ESX/ESXi server, but must instead be downloaded from the VMware web site.
Cisco Nexus 1000v
Network connectivity between ESX hosts and the VMs running on them relies on virtual NICs (inside the VM) and virtual switches. The latter exist in two versions: the 'standard' vSwitch, which allows several VMs on a single ESX host to share a physical NIC, and the 'distributed vSwitch', where the vSwitches on different ESX hosts together form one logical switch. Cisco offers in its Cisco Nexus product line the Nexus 1000v, an advanced version of the standard distributed vSwitch. A Nexus 1000v consists of two parts: a supervisor module (VSM) and, on each ESX host, a virtual Ethernet module (VEM). The VSM runs as a virtual appliance within the ESX cluster or on dedicated hardware (Nexus 1010 series), and the VEM runs as a module on each host and replaces a standard vDS (virtual distributed switch) from VMware.
Configuration of the switch is done on the VSM using the standard NX-OS CLI. It offers the capability to create standard port-profiles, which can then be assigned to virtual machines with vCenter. One of the most notable differences between the standard vDS and the N1000v is that the latter also supports LACP link aggregation, whereas the standard VMware virtual switches only support static LAGs.[31] The Nexus 1000v was developed in cooperation between Cisco and VMware and uses the API of the vDS.[32]
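As an illustration, a minimal port-profile created on the VSM might look like the sketch below; the profile name and VLAN number are hypothetical, and exact syntax varies by Nexus 1000v release.

```
n1000v# configure terminal
n1000v(config)# port-profile type vethernet WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# state enabled
```

Once the profile is enabled and published as a vmware port-group, administrators can assign it to VM vNICs from vCenter like any other port group.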
Third party management tools
Because VMware ESX is the market leader in server virtualisation,[33] software and hardware vendors offer a range of tools to integrate their products or services with ESX. Examples are the products from Veeam, with backup and management applications[34] and a plugin to monitor and manage ESX using HP OpenView,[35] and Quest Software, with a range of management and backup applications; most major backup-solution providers have plugins or modules for ESX. Using Microsoft System Center Operations Manager (SCOM) 2007/2012 with a Bridgeways ESX management pack gives a real-time view of ESX datacenter health. Backup and disaster-recovery solutions such as Vembu StoreGrid have a built-in plugin to perform VMware ESX/ESXi backup and bare-metal recovery (BMR).
Hardware vendors such as HP and Dell also include tools to support the use of ESX(i) on their hardware platforms. An example is the ESX module for Dell's OpenManage management platform.[36]
Even though ESX itself runs on a Linux base, there is no vSphere client available for the Linux desktop. VMware has added a web client[37] since v5, but it works with vCenter only and does not contain all features.[38] vEMan[39] is a Linux application that tries to fill that gap. These are just a few examples: there are numerous third-party products to manage, monitor or back up ESX infrastructures and the VMs running on them.[40]
Known limitations
Known limitations of VMware ESX, as of May 2009, include the following:
Infrastructure limitations
Some limitations in ESX Server 4 may constrain the design of data centers:[41][42]
- Guest system maximum RAM: 255 GB
- Host system maximum RAM: 1 TB[41]
- Number of hosts in a high availability cluster: 32
- Number of Primary Nodes in ESX Cluster high availability: 5
- Number of hosts in a Distributed Resource Scheduler cluster: 32
- Maximum number of processors per virtual machine: 8
- Maximum number of processors per host: 160
- Maximum number of cores per processor: 12
- Maximum number of virtual machines per host: 320
- VMFS-3 limits files to 262,144 (2^18) blocks, which translates to 256 GB for 1 MB block sizes (the default) or up to 2 TB for 8 MB block sizes.[43] However, on a VMFS boot drive it is usually very difficult to use anything other than the 1 MB block size.[44] (A worked example of this arithmetic follows the list of ESXi 5 limits below.)
- ESX and ESXi prior to version 5 do not support Windows 8 or Windows Server 2012[25]
With ESXi 5 there have been some changes to these limits:[45]
- Guest system maximum RAM: 1 TB
- Host system maximum RAM: 2 TB
- Number of hosts in a high availability cluster: 32
- Maximum number of processors per virtual machine: 32
- Maximum number of processors per host: 160
- Maximum number of cores per processor: 25
- Maximum number of virtual machines per host: 512
- VMFS-3 is supported and has the same limits as before
- VMFS-5, however, has a maximum volume size of 64 TB and a maximum file size of 2 TB − 512 B
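As a quick check of the figures above, the VMFS-3 file-size ceiling is simply block count times block size; a minimal sketch:

```python
# VMFS file-size arithmetic from the limits cited above (VMFS-3 allows
# 2^18 blocks per file; 1 MB and 8 MB block sizes are the ones quoted).
BLOCKS = 2 ** 18                        # 262,144 blocks per file

for block_size_mb in (1, 8):
    max_file_gb = BLOCKS * block_size_mb // 1024
    print(f"{block_size_mb} MB blocks -> {max_file_gb} GB max file size")
    # prints 256 GB (the default) and 2048 GB (2 TB) respectively

# VMFS-5 instead raises the per-file limit to 2 TB minus 512 bytes:
print(2 * 1024**4 - 512, "bytes (VMFS-5 maximum file size)")
```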
Performance limitations
In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization "overhead".[citation needed]
Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected operating systems currently support it. A comparison between full virtualization and paravirtualization for the ESX Server[46] shows that in some cases paravirtualization is much faster.
Network limitations
When using the advanced and extended network capabilities of the Cisco Nexus 1000v distributed virtual switch, the following network-related limitations apply:[32]
- 64 ESX/ESXi hosts per VSM (Virtual Supervisor Module)
- 2048 virtual Ethernet interfaces per VMware vDS (virtual distributed switch)
- and a maximum of 216 virtual interfaces per ESX/ESXi host
- 2048 active VLANs (one to be used for communication between VEMs and the VSM)
- 2048 port-profiles
- 32 physical NICs per ESX/ESXi (physical) host
- 256 port-channels per VMware vDS (virtual distributed switch)
- and a maximum of 8 port-channels per ESX/ESXi host
See also
- Comparison of platform virtual machines
- KVM – Kernel-based Virtual Machine, native Linux virtualisation
- Hyper-V – a competitor of VMware ESX from Microsoft
- Xen – an open source hypervisor platform
- Virtual appliance
- Virtual machine
- Virtual disk image
- VMware VMFS
- x86 virtualization
References
- ^ "VMware ESX 5.1". VMware, Inc.
- ^ "VMware ESX 4.0 only installs and runs on servers with 64bit x86 CPUs. 32bit systems are no longer supported". VMware, Inc.
- ^ VMware website on ESXi 5.0: Upgrade to ESXi, visited 2 February 2012
- ^ "ESX Server Architecture". Vmware.com. Archived from the original on 7 November 2009. Retrieved 22 October 2009.
- ^ "Meet the Next Generation of Virtual Infrastructure Technology". VMware. Retrieved 21 September 2007.
- ^ "What does ESX stand for?"
- ^ "Glossary" (PDF). Developer's Guide to Building vApps and Virtual Appliances: VMware Studio 2.5. Palo Alto: VMware Inc. 2011. p. 153. Retrieved 9 November 2011. "ESXi[:] Elastic Sky X (ESX) pared down to a bare-metal server [...]"
- ^ "ESX Server Datasheet"
- ^ "ESX Server Architecture". Vmware.com. Archived from the original on 29 September 2007. Retrieved 1 July 2009.
- ^ a b "ESX machine boots". Video.google.com.au. 12 June 2006. Retrieved 1 July 2009.
- ^ "Support for 64-bit Computing". Vmware.com. 19 April 2004. Retrieved 1 July 2009.
- ^ Gerstel, Markus: "Virtualisierungsansätze mit Schwerpunkt Xen"[dead link]
- ^ VMware ESX: http://www.vmware.com/resources/techresources/1009
- ^ "VMware ESX Server 2: NUMA Support" (PDF). Palo Alto, California: VMware Inc. 2005. p. 7. Retrieved 29 March 2011. "SRAT (system resource allocation table) – table that keeps track of memory allocated to a virtual machine."
- ^ a b c "ESX Server Open Source". Vmware.com. Retrieved 1 July 2009.
- ^ "ESX Hardware Compatibility List". Vmware.com. 10 December 2008. Retrieved 1 July 2009.
- ^ "ESXi vs. ESX: A comparison of features". Vmware, Inc. Retrieved 1 June 2009.
- ^ VMware FAQ[dead link]
- ^ a b c ESX Server Advanced Technical Design Guide[dead link]
- ^ "KB: Decoding Machine Check Exception (MCE) output after a purple diagnostic screen |publisher=VMware, Inc."
- ^ The Design and Evolution of Live Storage Migration in VMware ESX
- ^ a b VMware Blog by Kyle Gleed: vMotion: what's going on under the covers, 25 February 2011, visited: 2 February 2012
- ^ a b VMware website vMotion brochure. Retrieved 3 February 2012
- ^ http://kb.vmware.com/kb/1003661
- ^ a b c VMware KB article: Windows 8/Windows 2012 doesn't boot on ESX, visited 12 September 2012
- ^ "VMware vSphere ® 5.1 Release Notes". VMware, Inc.
- ^ "VMware ESX and ESXi 4.1 Comparison". Vmware.com. Retrieved 9 June 2011.
- ^ "Free VMware ESXi: Bare Metal Hypervisor with Live Migration". Vmware.com. Retrieved 1 July 2009.
- ^ http://www.vmware.com/files/pdf/VMware-Storage-VMotion-DS-EN.pdf
- ^ http://www.vmware.com/files/pdf/VMware-Fault-Tolerance-FT-DS-EN.pdf
- ^ Cisco brochure Cisco1000v Virtual Switch, PDF. Retrieved 9 July 2012
- ^ a b Overview of the Nexus 1000v virtual switch, visited 9 July 2012
- ^ VMWare continues virtualization market romp, 18 April 2012. Visited: 9 July 2012
- ^ About Veeam, visited 9 July 2012
- ^ Veeam Openview plugin for VMWare, visited 9 July 2012
- ^ OpenManage (omsa) support for ESXi 5.0, visited 9 July 2012
- ^ VMware info about Webclient – VMware ESXi/ESX 4.1 and ESXi 5.0 Comparison
- ^ Availability of vSphere Client for Linux systems – What the webclient can do and what not
- ^ vEMan website vEMan – Linux vSphere client
- ^ Petri website 3rd party ESX tools, 23 December 2008. Visited: 9 July 2012
- ^ a b "Configuration Maximums" (PDF). VMware, Inc. 13 July 2010. Retrieved 13 July 2010.
- ^ "What's new in VMware vSphere 4: Performance Enhancements" (PDF). VMware, Inc.
- ^ "Configuration Maximums for VMware Infrastructure 3" (PDF). VMware. 23 July 2007. Retrieved 26 September 2007.
- ^ "Increasing the block size of local VMFS storage in ESX 4.x during installation". VMware Knowledge Base, Article:1012683. VMware, Inc. Retrieved 17 January 2012.
- ^ "Configuration Maximums VMware Vsphere 5" (PDF). VMware. 2010–2011. Retrieved 7 March 2012.
{{cite web}}
: CS1 maint: date format (link) - ^ "Performance of VMware VMI" (PDF). VMware, Inc. 13 February 2008. Retrieved 22 January 2009.