
Thursday, October 11, 2018

VMware virtual disk (VMDK) in Multi Write Mode


VMFS is a clustered file system that, by default, prevents multiple virtual machines from
opening and writing to the same virtual disk (VMDK file). This keeps more than one
virtual machine from inadvertently accessing the same VMDK file and is the safety
mechanism that avoids data corruption in cases where the applications in the virtual
machines do not maintain consistency in the writes performed to the shared
disk. However, you might run a third-party cluster-aware application, and the
multi-writer option allows VMFS-backed disks to be shared by multiple virtual machines,
leveraging a third-party OS/application cluster solution to share a single VMDK disk on a
VMFS filesystem. In these cluster-aware applications, the application itself
ensures that writes originating from multiple different virtual machines do not cause data
loss. Examples of such third-party cluster-aware applications are Oracle RAC, Veritas
Cluster File System, etc.

The VMware KB article “Enabling or disabling simultaneous write protection provided by
VMFS using the multi-writer flag (1034165)”, available
at https://kb.vmware.com/kb/1034165, describes how to enable or disable the
simultaneous write protection provided by VMFS using the multi-writer flag. It is the
official resource on how to use the multi-writer flag, but the operational procedure is a little bit
obsolete, as vSphere 6.x supports this configuration from the vSphere Web Client (Flash) or vSphere Client
(HTML5) GUI, as highlighted in the screenshot below.
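
Under the hood, the GUI setting ends up as an entry in the virtual machine's configuration (.vmx) file. Below is a minimal sketch of how to verify it from the ESXi shell; the VM path and the scsi1:0 controller slot are hypothetical placeholders.

   # show the sharing mode configured for the VM's disks
   grep -i sharing /vmfs/volumes/datastore1/App1-VM1/App1-VM1.vmx
   # expected output for a disk attached at scsi1:0 with the flag enabled:
   # scsi1:0.sharing = "multi-writer"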

However, KB 1034165 contains several important limitations which should be
considered and addressed in the solution design. The limitations of multi-writer mode are:
 The virtual disk must be eager zeroed thick; it cannot be lazily zeroed thick or thin
provisioned (see the command sketch below this list).
 Sharing is limited to 8 ESXi/ESX hosts with VMFS-3 (vSphere 4.x), VMFS-5
(vSphere 5.x), and VMFS-6 in multi-writer mode.
 Hot adding a virtual disk removes the multi-writer flag.
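
Because of the first limitation, the shared disk is typically pre-created in the eager zeroed thick format. Here is a minimal sketch from the ESXi shell, assuming a hypothetical datastore path and a 10 GB disk size:

   # create an eager zeroed thick disk suitable for multi-writer mode
   vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/shared/app1-shared.vmdk

The -d eagerzeroedthick option zeroes the entire disk at creation time, which satisfies the multi-writer requirement.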

Let’s focus on the 8 ESXi host limit. The statement above about scalability is a little bit
unclear; that’s the reason why one of my customers asked me what it really
means. I did some research on internal VMware resources and, fortunately enough, I
found an internal VMware discussion about this topic, so I think sharing the info
will help the broader VMware community.

Here is the 8 host limit explained in other words …

“The 8 host limit defines how many ESXi hosts can simultaneously open the same virtual disk
(aka VMDK file). If the cluster-aware application is not going to have more than 8
nodes, it works and it is supported. This limitation applies to the group of VMs sharing the
same VMDK file for a particular instance of the cluster-aware application. In case you
need to consolidate multiple application clusters into a single vSphere cluster, you can
safely do it, and app nodes from one app cluster instance can run on different ESXi hosts
than app nodes from another app cluster instance. It means that if you have more than
one app cluster instance, all app cluster instances together can leverage resources from more
than 8 ESXi hosts in the vSphere cluster.”
   
The best way to fully understand a specific behavior is to test it. That’s why I have a pretty
decent home lab. However, I do not have 10 physical ESXi hosts, so I created
a nested vSphere environment with a vSphere cluster of 9 ESXi hosts. You can see the
vSphere cluster with two app cluster instances (App1, App2) in the screenshot below.

Application cluster instance App1 is composed of 9 nodes (9 VMs) and instance App2 of
just 2 nodes. Each instance shares its own VMDK disk. The whole test
infrastructure is conceptually depicted in the figures below.

Test Step 1: I started 8 of the 9 VMs of the App1 cluster instance on 8 ESXi hosts
(ESXi01-ESXi08). Such a setup works perfectly fine, as there is a 1:1 mapping between
VMs and ESXi hosts, within the limit of 8 ESXi hosts having the shared VMDK1 open.

Test Step 2: The next step is to test the power-on operation of App1-VM9 on ESXi09. Such an
operation fails. This is the expected result, because a 9th ESXi host cannot open the VMDK1
file on the VMFS datastore.
The error message is visible in the screenshot below.

Test Step 3: The next step is to power on App1-VM9 on ESXi01. This operation is
successful, as two app cluster nodes (virtual machines App1-VM1 and App1-VM9) are
running on a single ESXi host (ESXi01); therefore, only 8 ESXi hosts have the VMDK1 file
open and we are within the supported limits.

Test Step 4: Let’s test vMotion of App1-VM9 from ESXi01 to ESXi09. Such an operation
fails. This is the expected result, for the same reason as with the power-on operation: the App1
cluster instance would be stretched across 9 ESXi hosts, but the 9th ESXi host cannot open the
VMDK1 file on the VMFS datastore.

The error message is a little bit different, but the root cause is the same.
Test Step 5: Let’s test vMotion of App2-VM2 from ESXi08 to ESXi09. Such an operation
works, because the App2 cluster instance is still stretched across only two ESXi hosts, so it is
within the supported limit of 8 ESXi hosts.

Test Step 6: The last test is vMotion of App2-VM2 from the vSphere cluster (ESXi08)
to a standalone ESXi host outside of the vSphere cluster. Such an operation works,
because the App2 cluster instance is still stretched across only two ESXi hosts, so it is within
the supported limit of 8 ESXi hosts. The vSphere cluster is not the boundary for multi-writer
VMDK mode.

FAQ

Q: What exactly does the limitation of 8 ESXi hosts mean?
A: The 8 ESXi host limit defines how many ESXi hosts can simultaneously open the same
virtual disk (aka VMDK file). If the cluster-aware application is not going to have more
than 8 nodes, it works and it is supported. Details and various scenarios are described in
this article.

Q: Where is the information about the locks from ESXi hosts stored?
A: The normal VMFS file locking mechanism is in use; therefore, there are VMFS file
locks, which can be displayed with the ESXi command vmkfstools -D.
The only difference is that multi-writer VMDKs can have multiple locks, as shown in
the screenshot below.
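
Here is a minimal sketch of such a lock query from the ESXi shell; the datastore path is a hypothetical placeholder:

   # display VMFS lock information for the shared disk's flat file
   vmkfstools -D /vmfs/volumes/datastore1/shared/app1-shared-flat.vmdk

In the output, an exclusive lock is reported as mode 1, whereas a disk opened with the multi-writer flag is reported as mode 3 and can list multiple lock holders.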

Q: Is it supported to use DRS rules for multi-writer VMDKs in case there are more than 8
ESXi hosts in the cluster where the VMs with configured multi-writer VMDKs are
running?
A: Yes, it is supported. DRS rules can be beneficial to keep all nodes of a particular
app cluster instance on specific ESXi hosts. This is neither necessary nor required from the
technical point of view, but it can be beneficial from a licensing point of view.

Q: How can the ESXi life cycle be handled with the limit of 8 ESXi hosts?
A: Let’s discuss specific VM operations and the supportability of the multi-writer VMDK
configuration. The source for the answers is VMware
KB https://kb.vmware.com/kb/1034165
 Power on, power off, restart virtual machine – supported
 Suspend VM – unsupported
 Hot add virtual disks – only to existing adapters
 Hot remove devices – supported
 Hot extend virtual disk – unsupported
 Connect and disconnect devices – supported
 Snapshots – unsupported
 Snapshots of VMs with independent-persistent disks – supported
 Cloning – unsupported
 Storage vMotion – unsupported
 Changed Block Tracking (CBT) – unsupported
 vSphere Flash Read Cache (vFRC) – unsupported
 vMotion – supported by VMware for Oracle RAC only and limited to 8 ESX/ESXi hosts
Note: other cluster-aware applications are not supported by VMware but can be
supported by partners. For example, Veritas products have their supportability documented
at https://sort.veritas.com/public/documents/sfha/6.2/vmwareesx/productguides/html/sfhas_virtualization/ch01s05s01.htm
Please verify the current supportability directly with the specific partners.

Q: Is it possible to migrate VMs with multi-writer VMDKs to a different cluster when they
are offline?
A: Yes. A VM can be shut down or powered off and then powered on on any ESXi host outside of
the vSphere cluster. The only requirement is to have the same VMFS datastore available
on the source and target ESXi hosts. Please keep in mind that the maximum supported
number of ESXi hosts connected to a single VMFS datastore is 64.

UPDATE 2019-07-10: From vSphere 6.7 Update 1 onwards, virtual disk sharing
support in multi-writer mode has been extended to more than 8 hosts. In order to enable this
feature, you need to enable the /VMFS3/GBLAllowMW advanced config option. For more info,
see https://kb.vmware.com/s/article/1034165
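
Here is a minimal sketch of enabling the option from the ESXi shell, assuming the standard esxcli advanced-settings syntax (verify the exact procedure against the KB):

   # allow multi-writer sharing beyond 8 hosts (per-host setting)
   esxcli system settings advanced set -o /VMFS3/GBLAllowMW -i 1

As it is a per-host setting, it would have to be applied on every ESXi host that opens the shared VMDK.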
Posted by David Pasek at 2:12 AM 

2 comments:

Ricardo said...
Hello David,

I have a question: can I extend the virtual disk (multi-writer) from the vSphere Client?

Regards
2:31 AM


David Pasek said...
Hello Ricardo,

you can extend a virtual disk with the multi-writer flag, but only when the VMs are powered off.
"Hot extend virtual disk" is not supported with the multi-writer flag.

There are other limitations for disks with the multi-writer flag.

For further information, look at https://kb.vmware.com/s/article/1034165
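
Here is a minimal sketch of such an offline extend from the ESXi shell; the path and the new size are hypothetical placeholders:

   # extend the shared disk to 20 GB, keeping the eager zeroed thick format
   vmkfstools -X 20g -d eagerzeroedthick /vmfs/volumes/datastore1/shared/app1-shared.vmdk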


5:56 PM


