Failover-Clustering Windows Server

This document provides an overview of new features in Failover Clustering in Windows Server 2016, including: 1. Cluster Operating System Rolling Upgrade which allows upgrading the OS of cluster nodes without stopping workloads like Hyper-V or file servers. 2. Storage Replica which enables storage-agnostic replication between servers or clusters for disaster recovery and stretching clusters between sites. 3. Cloud Witness which uses Microsoft Azure as an arbitration point for Failover Cluster quorum instead of requiring a third datacenter.


Table of Contents

Failover Clustering
What's New in Failover Clustering
Health Service
Fault domain awareness
VM Load Balancing
VM Load Balancing Deep-Dive
Deploy a Cloud Witness for a Failover Cluster
Cluster Operating System Rolling Upgrade
Simplified SMB Multichannel and Multi-NIC Cluster Networks
Cluster-Aware Updating
Requirements and best practices
Advanced options
FAQ
Plug-ins
Change history for Failover Clustering topics
Failover Clustering in Windows Server 2016
4/24/2017 • 2 min to read

Applies To: Windows Server 2016

Failover clustering - a Windows Server feature that enables you to group multiple servers
together into a fault-tolerant cluster - provides new and improved features for software-
defined datacenter customers and many other workloads running clusters on physical
hardware or in virtual machines.
A failover cluster is a group of independent computers that work together to increase the
availability and scalability of clustered roles (formerly called clustered applications and services). The clustered
servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fail,
other nodes begin to provide service (a process known as failover). In addition, the clustered roles are proactively
monitored to verify that they are working properly. If they are not working, they are restarted or moved to another
node.
Failover clusters also provide Cluster Shared Volume (CSV) functionality that provides a consistent, distributed
namespace that clustered roles can use to access shared storage from all nodes. With the Failover Clustering
feature, users experience a minimum of disruptions in service.
Failover Clustering has many practical applications, including:
Highly available or continuously available file share storage for applications such as Microsoft SQL Server and
Hyper-V virtual machines
Highly available clustered roles that run on physical servers or on virtual machines that are installed on servers
running Hyper-V

What's new
Here are some of the new features in Windows Server 2016 - for more details, see What's new in Failover
Clustering:
Cluster operating system rolling upgrades
Enables an administrator to upgrade the operating system of the cluster nodes without stopping the Hyper-V
or the Scale-Out File Server workloads.
Cloud Witness for a Failover Cluster
A new type of quorum witness that leverages Microsoft Azure to help determine which cluster node should be
considered authoritative if a node goes offline.
Health Service
Improves the day-to-day monitoring, operations, and maintenance experience of Storage Spaces Direct clusters.
Fault Domains
Enables you to define what fault domain to use with a Storage Spaces Direct cluster. A fault domain is a set of
hardware that share a single point of failure, such as a server node, server chassis, or rack.
VM load balancing
Helps distribute load evenly across the nodes in a Failover Cluster by identifying busy nodes and live-migrating
VMs on those nodes to less busy nodes.
Simplified SMB Multichannel and multi-NIC cluster networks
Enables easier configuration of multiple network adapters in a cluster.

Planning
Failover Clustering Hardware Requirements and Storage Options
Validate Hardware for Failover Clustering
Network Recommendations for a Hyper-V Cluster

Deployment
Installing the Failover Clustering Feature and Tools
Validate Hardware for a Failover Cluster
Prestage Cluster Computer Objects in Active Directory Domain Services
Creating a Failover Cluster
Deploy Hyper-V over SMB
Deploy a Scale-Out File Server
iSCSI Target Block Storage, How To
Deploy an Active Directory Detached Cluster
Using Guest Clustering for High Availability
Deploy a Guest Cluster using a Shared Virtual Hard Disk
Building Your Cloud Infrastructure: Scenario Overview

Operations
Configure and Manage the Quorum in a Failover Cluster
Use Cluster Shared Volumes in a Failover Cluster
Cluster-Aware Updating Overview

Tools and settings


Failover Clustering PowerShell Cmdlets
Cluster Aware Updating PowerShell Cmdlets

Community resources
High Availability (Clustering) Forum
Failover Clustering and Network Load Balancing Team Blog
What's new in Failover Clustering in Windows Server
2016
4/24/2017 • 8 min to read

Applies To: Windows Server 2016

This topic explains the new and changed functionality in Failover Clustering in Windows Server 2016.

Cluster Operating System Rolling Upgrade


A new feature in Failover Clustering, Cluster Operating System Rolling Upgrade, enables an administrator to
upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server 2016
without stopping the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties
against Service Level Agreements (SLA) can be avoided.
What value does this change add?
Upgrading a Hyper-V or Scale-Out File Server cluster from Windows Server 2012 R2 to Windows Server 2016 no
longer requires downtime. The cluster will continue to function at a Windows Server 2012 R2 level until all of the
nodes in the cluster are running Windows Server 2016. The cluster functional level is upgraded to Windows Server
2016 by using the Windows PowerShell cmdlet Update-ClusterFunctionalLevel .

WARNING
After you update the cluster functional level, you cannot go back to a Windows Server 2012 R2 cluster functional level.
Until the Update-ClusterFunctionalLevel cmdlet is run, the process is reversible, and Windows Server 2012 R2 nodes
can be added and Windows Server 2016 nodes can be removed.

What works differently?


A Hyper-V or Scale-Out File Server failover cluster can now easily be upgraded without any downtime or need to
build a new cluster with nodes that are running the Windows Server 2016 operating system. Migrating clusters to
Windows Server 2012 R2 involved taking the existing cluster offline and reinstalling the new operating system for
each node, and then bringing the cluster back online. The old process was cumbersome and required downtime.
However, in Windows Server 2016, the cluster does not need to go offline at any point.
The cluster operating system upgrade proceeds in the following phases for each node in a cluster:
The node is paused and drained of all virtual machines that are running on it.
The virtual machines (or other cluster workloads) are migrated to another node in the cluster.
The existing operating system is removed and a clean installation of the Windows Server 2016 operating
system on the node is performed.
The node running the Windows Server 2016 operating system is added back to the cluster.
At this point, the cluster is said to be running in mixed mode, because the cluster nodes are running either
Windows Server 2012 R2 or Windows Server 2016.
The cluster functional level stays at Windows Server 2012 R2. At this functional level, new features in Windows
Server 2016 that affect compatibility with previous versions of the operating system will be unavailable.
Eventually, all nodes are upgraded to Windows Server 2016.
Cluster functional level is then changed to Windows Server 2016 using the Windows PowerShell cmdlet
Update-ClusterFunctionalLevel . At this point, you can take advantage of the Windows Server 2016 features.
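
For example, you can check the functional level before and after committing the upgrade. This is a minimal sketch; the level values shown (8 for Windows Server 2012 R2, 9 for Windows Server 2016) are the commonly documented ones - verify them on your build.

# Returns 8 while the cluster runs at the Windows Server 2012 R2 level, 9 afterward
(Get-Cluster).ClusterFunctionalLevel

# Commit the upgrade once every node runs Windows Server 2016 (irreversible)
Update-ClusterFunctionalLevel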

For more information, see Cluster Operating System Rolling Upgrade.

Storage Replica
Storage Replica is a new feature that enables storage-agnostic, block-level, synchronous replication between
servers or clusters for disaster recovery, as well as stretching of a failover cluster between sites. Synchronous
replication enables mirroring of data in physical sites with crash-consistent volumes to ensure zero data loss at the
file-system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of
data loss.
What value does this change add?
Storage Replica enables you to do the following:
Provide a single vendor disaster recovery solution for planned and unplanned outages of mission critical
workloads.
Use SMB3 transport with proven reliability, scalability, and performance.
Stretch Windows failover clusters to metropolitan distances.
Use Microsoft software end to end for storage and clustering, such as Hyper-V, Storage Replica, Storage
Spaces, Cluster, Scale-Out File Server, SMB3, Data Deduplication, and ReFS/NTFS.
Help reduce cost and complexity as follows:
Is hardware agnostic, with no requirement for a specific storage configuration like DAS or SAN.
Allows commodity storage and networking technologies.
Features ease of graphical management for individual nodes and clusters through Failover Cluster
Manager.
Includes comprehensive, large-scale scripting options through Windows PowerShell.
Help reduce downtime, and increase reliability and productivity intrinsic to Windows.
Provide supportability, performance metrics, and diagnostic capabilities.
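
As an illustrative sketch (the server names, replication group names, and drive letters below are placeholders, not from this document), a server-to-server partnership can be created with the New-SRPartnership cmdlet:

# Replicate the D: volume from sr-srv01 to sr-srv02, using E: on each side for the replication log
New-SRPartnership -SourceComputerName "sr-srv01" -SourceRGName "rg01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "sr-srv02" -DestinationRGName "rg02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"

By default the partnership replicates synchronously; asynchronous replication is selected with the -ReplicationMode parameter.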
For more information, see Storage Replica in Windows Server 2016.

Cloud Witness
Cloud Witness is a new type of Failover Cluster quorum witness in Windows Server 2016 that leverages Microsoft
Azure as the arbitration point. The Cloud Witness, like any other quorum witness, gets a vote and can participate in
the quorum calculations. You can configure cloud witness as a quorum witness using the Configure a Cluster
Quorum Wizard.
What value does this change add?
Using Cloud Witness as a Failover Cluster quorum witness provides the following advantages:
Leverages Microsoft Azure and eliminates the need for a third separate datacenter.
Uses the standard publicly available Microsoft Azure Blob Storage which eliminates the extra maintenance
overhead of VMs hosted in a public cloud.
Same Microsoft Azure Storage Account can be used for multiple clusters (one blob file per cluster; cluster
unique id used as blob file name).
Provides a very low on-going cost to the Storage Account (very small data written per blob file, blob file
updated only once when cluster nodes' state changes).
For more information, see Deploy a Cloud Witness For a Failover Cluster.
What works differently?
This capability is new in Windows Server 2016.

Virtual Machine Resiliency


Compute Resiliency. Windows Server 2016 includes increased virtual machine compute resiliency to help
reduce intra-cluster communication issues in your compute cluster as follows:
Resiliency options available for virtual machines: You can now configure virtual machine resiliency
options that define behavior of the virtual machines during transient failures:
Resiliency Level: Helps you define how the transient failures are handled.
Resiliency Period: Helps you define how long all the virtual machines are allowed to run isolated.
Quarantine of unhealthy nodes: Unhealthy nodes are quarantined and are no longer allowed to join the
cluster. This prevents flapping nodes from negatively affecting other nodes and the overall cluster.
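
A hedged sketch of how these knobs are surfaced (the property names follow the Virtual Machine Compute Resiliency documentation; the defaults shown may differ on your build):

(Get-Cluster).ResiliencyLevel = 2            # 1 = isolate only on special heartbeat, 2 = always isolate
(Get-Cluster).ResiliencyDefaultPeriod = 240  # seconds that VMs on an isolated node may keep running
(Get-Cluster).QuarantineThreshold = 3        # node failures within an hour before quarantine
(Get-Cluster).QuarantineDuration = 7200      # seconds a quarantined node is kept out of the cluster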
For more information about the virtual machine compute resiliency workflow and the node quarantine settings that control how
your node is placed in isolation or quarantine, see Virtual Machine Compute Resiliency in Windows Server 2016.
Storage Resiliency. In Windows Server 2016, virtual machines are more resilient to transient storage failures. The
improved virtual machine resiliency helps preserve tenant virtual machine session states in the event of a storage
disruption. This is achieved by intelligent and quick virtual machine response to storage infrastructure issues.
When a virtual machine disconnects from its underlying storage, it pauses and waits for storage to recover. While
paused, the virtual machine retains the context of applications that are running in it. When the virtual machine's
connection to its storage is restored, the virtual machine returns to its running state. As a result, the tenant
machine's session state is retained on recovery.
In Windows Server 2016, virtual machine storage resiliency is also aware of, and optimized for, guest clusters.
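
As a hedged sketch, the per-VM response to a critical storage disruption is exposed through Hyper-V's Set-VM cmdlet (the VM name and timeout below are illustrative):

# Pause the VM on a critical storage error; power it off if storage has not recovered within 30 minutes
Set-VM -Name "TenantVM01" -AutomaticCriticalErrorAction Pause -AutomaticCriticalErrorActionTimeout 30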

Diagnostic Improvements in Failover Clustering


To help diagnose issues with failover clusters, Windows Server 2016 includes the following:
Several enhancements to cluster log files (such as Time Zone Information and the DiagnosticVerbose log) that
make it easier to troubleshoot failover clustering issues. For more information, see Windows Server 2016
Failover Cluster Troubleshooting Enhancements - Cluster Log.
A new dump type, Active memory dump, which filters out most memory pages allocated to virtual
machines, and therefore makes the memory.dmp much smaller and easier to save or copy. For more
information, see Windows Server 2016 Failover Cluster Troubleshooting Enhancements - Active Dump.

Site-aware Failover Clusters


Windows Server 2016 includes site-aware failover clusters, which enable you to group nodes in stretched clusters based on
their physical location (site). Cluster site-awareness enhances key operations during the cluster lifecycle such as
failover behavior, placement policies, heartbeat between the nodes, and quorum behavior. For more information,
see Site-aware Failover Clusters in Windows Server 2016.
Workgroup and Multi-domain clusters
In Windows Server 2012 R2 and previous versions, a cluster can only be created between member nodes joined to
the same domain. Windows Server 2016 breaks down these barriers and introduces the ability to create a Failover
Cluster without Active Directory dependencies. You can now create failover clusters in the following configurations:
Single-domain Clusters. Clusters with all nodes joined to the same domain.
Multi-domain Clusters. Clusters with nodes which are members of different domains.
Workgroup Clusters. Clusters with nodes which are member servers / workgroup (not domain joined).
For more information, see Workgroup and Multi-domain clusters in Windows Server 2016.
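
For example, a workgroup or multi-domain cluster must be created with a DNS administrative access point rather than an Active Directory one. A minimal sketch (the node names and address are placeholders):

New-Cluster -Name "WGCluster" -Node "node1", "node2" -AdministrativeAccessPoint DNS -StaticAddress 192.168.1.50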

Virtual Machine Load Balancing


Virtual machine Load Balancing is a new feature in Failover Clustering that facilitates the seamless load balancing
of virtual machines across the nodes in a cluster. Over-committed nodes are identified based on virtual machine
Memory and CPU utilization on the node. Virtual machines are then moved (live migrated) from an over-
committed node to nodes with available bandwidth (if applicable). The aggressiveness of the balancing can be
tuned to ensure optimal cluster performance and utilization. Load Balancing is enabled by default in Windows
Server 2016. However, Load Balancing is disabled when SCVMM Dynamic Optimization is enabled.

Virtual Machine Start Order


Virtual machine Start Order is a new feature in Failover Clustering that introduces start order orchestration for
Virtual machines (and all groups) in a cluster. Virtual machines can now be grouped into tiers, and start order
dependencies can be created between different tiers. This ensures that the most important virtual machines (such
as Domain Controllers or Utility virtual machines) are started first. Virtual machines are not started until the virtual
machines that they have a dependency on are also started.
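
Tiers and dependencies are expressed as cluster group sets. A hedged sketch using the group-set cmdlets introduced for this feature (the set and VM names are placeholders):

# Put domain controller VMs and application VMs into separate tiers
New-ClusterGroupSet -Name "Infra"
New-ClusterGroupSet -Name "Apps"
Add-ClusterGroupToSet -Name "Infra" -Group "DC-VM"
Add-ClusterGroupToSet -Name "Apps" -Group "App-VM"

# Application VMs wait for the infrastructure tier to start first
Add-ClusterGroupSetDependency -Name "Apps" -Provider "Infra"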

Simplified SMB Multichannel and Multi-NIC Cluster Networks


Failover Cluster networks are no longer limited to a single NIC per subnet / network. With Simplified SMB
Multichannel and Multi-NIC Cluster Networks, network configuration is automatic and every NIC on the subnet can
be used for cluster and workload traffic. This enhancement allows customers to maximize network throughput for
Hyper-V, SQL Server Failover Cluster Instance, and other SMB workloads.
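
To confirm that all adapters are carrying traffic, you might inspect the cluster networks and the active SMB connections. A hedged sketch (the -SmbInstance value shown is the one documented for CSV traffic in Windows Server 2016):

# List cluster networks and their roles (cluster, client, or both)
Get-ClusterNetwork | Format-Table Name, Role, Address

# Show SMB Multichannel connections used for CSV traffic
Get-SmbMultichannelConnection -SmbInstance CSV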
For more information, see Simplified SMB Multichannel and Multi-NIC Cluster Networks.

See Also
Storage
What's New in Storage in Windows Server 2016
Health Service in Windows Server 2016
4/24/2017 • 10 min to read

Applies to Windows Server 2016

The Health Service is a new feature in Windows Server 2016 that improves the day-to-day monitoring and
operational experience for clusters running Storage Spaces Direct.

Prerequisites
The Health Service is enabled by default with Storage Spaces Direct. No additional action is required to set it up or
start it. To learn more about Storage Spaces Direct, see Storage Spaces Direct in Windows Server 2016.

Metrics
The Health Service reduces the work required to get live performance and capacity information from your Storage
Spaces Direct cluster. One new cmdlet provides a curated list of essential metrics, which are collected efficiently and
aggregated dynamically across nodes, with built-in logic to detect cluster membership. All values are real-time and
point-in-time only.
Coverage
In Windows Server 2016, the Health Service provides the following metrics:
IOPS (Read, Write, Total)
IO Throughput (Read, Write, Total)
IO Latency (Read, Write)
Physical Capacity (Total, Remaining)
Pool Capacity (Total, Remaining)
Volume Capacity (Total, Remaining)
CPU Utilization %, All Machines Average
Memory, All Machines (Total, Available)
Usage
Use the following PowerShell cmdlet to get metrics for the entire Storage Spaces Direct cluster:

Get-StorageSubSystem Cluster* | Get-StorageHealthReport

The optional Count parameter indicates how many sets of values to return, at one-second intervals.

Get-StorageSubSystem Cluster* | Get-StorageHealthReport -Count <Count>

You can also get metrics for one specific volume or node using the following cmdlets:

Get-Volume -FileSystemLabel <Label> | Get-StorageHealthReport -Count <Count>

Get-StorageNode -Name <Name> | Get-StorageHealthReport -Count <Count>


NOTE
The metrics returned in each case will be the subset applicable to that scope.

Capacity: Putting it all together


The notion of available capacity in Storage Spaces is nuanced. To help you plan effectively, the Health Service
provides six distinct metrics for capacity. Here is what each represents:
Physical Capacity Total: The sum of the raw capacity of all physical storage devices managed by the cluster.
Physical Capacity Available: The physical capacity which is not in any non-primordial storage pool.
Pool Capacity Total: The amount of raw capacity in storage pools.
Pool Capacity Available: The pool capacity which is not allocated to the footprint of volumes.
Volume Capacity Total: The total usable ("inside") capacity of existing volumes.
Volume Capacity Available: The amount of additional data which can be stored in existing volumes.
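For example (an illustrative case, not a measurement from this document), a 1 TB three-way mirror volume adds 1 TB to Volume Capacity Total but consumes roughly 3 TB of Pool Capacity Available, because its footprint stores every byte three times.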
The following diagram illustrates the relationship between these quantities.

Faults
The Health Service constantly monitors your Storage Spaces Direct cluster to detect problems and generate
"Faults". One new cmdlet displays any current Faults, allowing you to easily verify the health of your deployment
without looking at every entity or feature in turn. Faults are designed to be precise, easy to understand, and
actionable.
Each Fault contains five important fields:
Severity
Description of the problem
Recommended next step(s) to address the problem
Identifying information for the faulting entity
Its physical location (if applicable)
For example, here is a typical fault:

Severity: MINOR
Reason: Connectivity has been lost to the physical disk.
Recommendation: Check that the physical disk is working and properly connected.
Part: Manufacturer Contoso, Model XYZ9000, Serial 123456789
Location: Seattle DC, Rack B07, Node 4, Slot 11
NOTE
The physical location is derived from your fault domain configuration. For more information about fault domains, see Fault
Domains in Windows Server 2016. If you do not provide this information, the location field will be less helpful - for example, it
may only show the slot number.

Coverage
In Windows Server 2016, the Health Service provides the following Fault coverage:
Essential cluster hardware:
Node down, quarantined, or isolated
Node network adapter failure, disabled, or disconnected
Node missing one or more cluster networks
Node temperature sensor
Essential storage hardware:
Physical disk media failure, lost connectivity, or unresponsive
Storage enclosure lost connectivity
Storage enclosure fan failure or power supply failure
Storage enclosure current, voltage, or temperature sensors triggered
The Storage Spaces software stack:
Storage pool unrecognized metadata
Data not fully resilient, or detached
Volume low capacity1
Storage Quality of Service (Storage QoS)
Storage QoS malformed policy
Storage QoS policy breach2
Storage Replica
Replication failed to sync, write, start, or stop
Target or source replication group failure or lost communication
Unable to meet configured recovery point objective
Log or metadata corruption
Health Service
Any issues with automation, described in later sections
Quarantined physical disk device
1 Indicates the volume has reached 80% full (minor severity) or 90% full (major severity).
2 Indicates some .vhd(s) on the volume have not met their Minimum IOPS for over 10% (minor), 30% (major), or
50% (critical) of a rolling 24-hour window.

NOTE
The health of storage enclosure components such as fans, power supplies, and sensors is derived from SCSI Enclosure
Services (SES). If your vendor does not provide this information, the Health Service cannot display it.

Usage
To see any current Faults, run the following cmdlet in PowerShell:
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

This returns any Faults which affect the overall Storage Spaces Direct cluster. Most often, these Faults relate to
hardware or configuration. If there are no Faults, this cmdlet will return nothing.

NOTE
In a non-production environment, and at your own risk, you can experiment with this feature by triggering Faults yourself -
for example, by removing one physical disk or shutting down one node. Once the Fault has appeared, re-insert the physical
disk or restart the node and the Fault will disappear again.

You can also view Faults that are affecting only specific volumes or file shares with the following cmdlets:

Get-Volume -FileSystemLabel <Label> | Debug-Volume

Get-FileShare -Name <Name> | Debug-FileShare

This returns any faults that affect only the specific volume or file share. Most often, these Faults relate to data
resiliency or features like Storage QoS or Storage Replica.

NOTE
In Windows Server 2016, it may take up to 30 minutes for certain Faults to appear. Improvements are forthcoming in
subsequent releases.

Root Cause Analysis


The Health Service can assess the potential causality among faulting entities to identify and combine faults which
are consequences of the same underlying problem. By recognizing chains of effect, this makes for less chatty
reporting. For now, this functionality is limited to nodes, enclosures, and physical disks in the event of lost
connectivity.
For example, if an enclosure has lost connectivity, it follows that those physical disk devices within the enclosure
will also be without connectivity. Therefore, only one Fault will be raised for the root cause - in this case, the
enclosure.

Actions
The next section describes workflows which are automated by the Health Service. To verify that an action is indeed
being taken autonomously, or to track its progress or outcome, the Health Service generates "Actions". Unlike logs,
Actions disappear shortly after they have completed, and are intended primarily to provide insight into ongoing
activity which may impact performance or capacity (e.g. restoring resiliency or rebalancing data).
Usage
One new PowerShell cmdlet displays all Actions:

Get-StorageHealthAction

Coverage
In Windows Server 2016, the Get-StorageHealthAction cmdlet can return any of the following information:
Retiring failed, lost connectivity, or unresponsive physical disk
Switching storage pool to use replacement physical disk
Restoring full resiliency to data
Rebalancing storage pool

Automation
This section describes workflows which are automated by the Health Service in the disk lifecycle.
Disk Lifecycle
The Health Service automates most stages of the physical disk lifecycle. Let's say that the initial state of your
deployment is in perfect health - which is to say, all physical disks are working properly.
Retirement
Physical disks are automatically retired when they can no longer be used, and a corresponding Fault is raised. There
are several cases:
Media Failure: the physical disk is definitively failed or broken, and must be replaced.
Lost Communication: the physical disk has lost connectivity for over 15 consecutive minutes.
Unresponsive: the physical disk has exhibited latency of over 5.0 seconds three or more times within an
hour.

NOTE
If connectivity is lost to many physical disks at once, or to an entire node or storage enclosure, the Health Service will not
retire these disks since they are unlikely to be the root problem.

If the retired disk was serving as the cache for many other physical disks, these will automatically be reassigned to
another cache disk if one is available. No special user action is required.
Restoring resiliency
Once a physical disk has been retired, the Health Service immediately begins copying its data onto the remaining
physical disks, to restore full resiliency. Once this has completed, the data is completely safe and fault tolerant
anew.

NOTE
This immediate restoration requires sufficient available capacity among the remaining physical disks.
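
You can follow the progress of this resynchronization with the storage job cmdlet; a minimal sketch:

# Lists running repair/resync jobs with their percent complete
Get-StorageJob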

Blinking the indicator light


If possible, the Health Service will begin blinking the indicator light on the retired physical disk or its slot. This will
continue indefinitely, until the retired disk is replaced.

NOTE
In some cases, the disk may have failed in a way that precludes even its indicator light from functioning - for example, a total
loss of power.

Physical replacement
You should replace the retired physical disk when possible. Most often, this consists of a hot-swap - i.e. powering
off the node or storage enclosure is not required. See the Fault for helpful location and part information.
Verification
When the replacement disk is inserted, it will be verified against the Supported Components Document (see the
next section).
Pooling
If allowed, the replacement disk is automatically substituted into its predecessor's pool to enter use. At this point,
the system is returned to its initial state of perfect health, and then the Fault disappears.

Supported Components Document


The Health Service provides an enforcement mechanism to restrict the components used by Storage Spaces Direct
to those on a Supported Components Document provided by the administrator or solution vendor. This can be
used to prevent mistaken use of unsupported hardware by you or others, which may help with warranty or support
contract compliance. This functionality is currently limited to physical disk devices, including SSDs, HDDs, and
NVMe drives. The Supported Components Document can restrict on model, manufacturer (optional), and firmware
version (optional).
Usage
The Supported Components Document uses an XML-inspired syntax. We recommend using your favorite text
editor, such as Visual Studio Code (available for free) or Notepad, to create an XML document which you can
save and reuse.
Sections
The document has two independent sections: Disks and Cache.
If the Disks section is provided, only the drives listed are allowed to join pools. Any unlisted drives are prevented
from joining pools, which effectively precludes their use in production. If this section is left empty, any drive will be
allowed to join pools.
If the Cache section is provided, only the drives listed will be used for caching. If this section is left empty, Storage
Spaces Direct will attempt to guess based on media type and bus type. For example, if your deployment uses solid-
state drives (SSD) and hard disk drives (HDD), the former is automatically chosen for caching; however, if your
deployment uses all-flash, you may need to specify the higher endurance devices you'd like to use for caching here.

IMPORTANT
The Supported Components Document does not apply retroactively to drives already pooled and in use.

Example
<Components>

<Disks>
<Disk>
<Manufacturer>Contoso</Manufacturer>
<Model>XYZ9000</Model>
<AllowedFirmware>
<Version>2.0</Version>
<Version>2.1</Version>
<Version>2.2</Version>
</AllowedFirmware>
<TargetFirmware>
<Version>2.1</Version>
<BinaryPath>\\path\to\image.bin</BinaryPath>
</TargetFirmware>
</Disk>
</Disks>

<Cache>
<Disk>
<Manufacturer>Fabrikam</Manufacturer>
<Model>QRSTUV</Model>
</Disk>
</Cache>

</Components>

To list multiple drives, simply add additional <Disk> tags within either section.
To inject this XML when deploying Storage Spaces Direct, use the -XML flag:

Enable-ClusterS2D -XML <MyXML>

To set or modify the Supported Components Document once Storage Spaces Direct has been deployed (i.e. once
the Health Service is already running), use the following PowerShell cmdlet:

$MyXML = Get-Content <\\path\to\file.xml> | Out-String
Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name "System.Storage.SupportedComponents.Document" -Value $MyXML

NOTE
The model, manufacturer, and the firmware version properties should exactly match the values that you get using the
Get-PhysicalDisk cmdlet. This may differ from your "common sense" expectation, depending on your vendor's implementation.
For example, rather than "Contoso", the manufacturer may be "CONTOSO-LTD", or it may be blank while the model is
"Contoso-XZY9000".

You can verify using the following PowerShell cmdlet:

Get-PhysicalDisk | Select Model, Manufacturer, FirmwareVersion

Settings
Many of the parameters which govern the behavior of the Health Service are exposed as settings. You can modify
these to tune the aggressiveness of faults or actions, turn certain behaviors on/off, and more.
Use the following PowerShell cmdlet to set or modify settings.
Usage

Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name <SettingName> -Value <Value>

Example

Get-StorageSubSystem Cluster* | Set-StorageHealthSetting -Name "System.Storage.Volume.CapacityThreshold.Warning" -Value 70

Common settings
Some commonly modified settings are listed below, along with their default values.
Volume Capacity Threshold

"System.Storage.Volume.CapacityThreshold.Enabled" = True
"System.Storage.Volume.CapacityThreshold.Warning" = 80
"System.Storage.Volume.CapacityThreshold.Critical" = 90

Pool Reserve Capacity Threshold

"System.Storage.StoragePool.CheckPoolReserveCapacity.Enabled" = True

Physical Disk Lifecycle

"System.Storage.PhysicalDisk.AutoPool.Enabled" = True
"System.Storage.PhysicalDisk.AutoRetire.OnLostCommunication.Enabled" = True
"System.Storage.PhysicalDisk.AutoRetire.OnUnresponsive.Enabled" = True
"System.Storage.PhysicalDisk.AutoRetire.DelayMs" = 900000 (i.e. 15 minutes)
"System.Storage.PhysicalDisk.Unresponsive.Reset.CountResetIntervalSeconds" = 360 (i.e. 60 minutes)
"System.Storage.PhysicalDisk.Unresponsive.Reset.CountAllowed" = 3

Supported Components Document


See the previous section.
Firmware Rollout

"System.Storage.PhysicalDisk.AutoFirmwareUpdate.SingleDrive.Enabled" = True
"System.Storage.PhysicalDisk.AutoFirmwareUpdate.RollOut.Enabled" = True
"System.Storage.PhysicalDisk.AutoFirmwareUpdate.RollOut.LongDelaySeconds" = 604800 (i.e. 7 days)
"System.Storage.PhysicalDisk.AutoFirmwareUpdate.RollOut.ShortDelaySeconds" = 86400 (i.e. 1 day)
"System.Storage.PhysicalDisk.AutoFirmwareUpdate.RollOut.LongDelayCount" = 1
"System.Storage.PhysicalDisk.AutoFirmwareUpdate.RollOut.FailureTolerance" = 3

Platform / Quiescence

"Platform.Quiescence.MinDelaySeconds" = 120 (i.e. 2 minutes)


"Platform.Quiescence.MaxDelaySeconds" = 420 (i.e. 7 minutes)

Metrics

"System.Reports.ReportingPeriodSeconds" = 1

Debugging

"System.LogLevel" = 4
See also
Storage Spaces Direct in Windows Server 2016
Developer documentation, sample code, and API reference on MSDN
Fault domain awareness in Windows Server 2016
4/24/2017 • 7 min to read

Applies To: Windows Server 2016

Failover Clustering enables multiple servers to work together to provide high availability – or put another way, to
provide node fault tolerance. But today's businesses demand ever-greater availability from their infrastructure. To
achieve cloud-like uptime, even highly unlikely occurrences such as chassis failures, rack outages, or natural
disasters must be protected against. That's why Failover Clustering in Windows Server 2016 introduces chassis,
rack, and site fault tolerance as well.
Fault domains and fault tolerance are closely related concepts. A fault domain is a set of hardware components that
share a single point of failure. To be fault tolerant to a certain level, you need multiple fault domains at that level.
For example, to be rack fault tolerant, your servers and your data must be distributed across multiple racks.
This short video presents an overview of fault domains in Windows Server 2016:

Benefits
Storage Spaces, including Storage Spaces Direct, uses fault domains to maximize data safety.
Resiliency in Storage Spaces is conceptually like distributed, software-defined RAID. Multiple copies of all
data are kept in sync, and if hardware fails and one copy is lost, others are recopied to restore resiliency. To
achieve the best possible resiliency, copies should be kept in separate fault domains.
The Health Service uses fault domains to provide more helpful alerts.
Each fault domain can be associated with location metadata, which will automatically be included in any
subsequent alerts. These descriptors can assist operations or maintenance personnel and reduce errors by
disambiguating hardware.
Stretch clustering uses fault domains for storage affinity. Stretch clustering allows faraway servers to
join a common cluster. For the best performance, applications or virtual machines should be run on servers
that are nearby to those providing their storage. Fault domain awareness enables this storage affinity.
Levels of fault domains
There are four canonical levels of fault domains - site, rack, chassis, and node. Nodes are discovered automatically;
each additional level is optional. For example, if your deployment does not use blade servers, the chassis level may
not make sense for you.

Usage
You can use PowerShell or XML markup to specify fault domains. Both approaches are equivalent and provide full
functionality.

IMPORTANT
Specify fault domains before enabling Storage Spaces Direct, if possible. This enables the automatic configuration to prepare
the pool, tiers, and settings like resiliency and column count, for chassis or rack fault tolerance. Once the pool and volumes
have been created, data will not retroactively move in response to changes to the fault domain topology. To move nodes
between chassis or racks after enabling Storage Spaces Direct, you should first evict the node and its drives from the pool
using Remove-ClusterNode -CleanUpDisks .

Defining fault domains with PowerShell


Windows Server 2016 introduces the following cmdlets to work with fault domains:
Get-ClusterFaultDomain
Set-ClusterFaultDomain
New-ClusterFaultDomain
Remove-ClusterFaultDomain

This short video demonstrates the usage of these cmdlets.

Use Get-ClusterFaultDomain to see the current fault domain topology. This will list all nodes in the cluster, plus any
chassis, racks, or sites you have created. You can filter using parameters like -Type or -Name, but these are not
required.

Get-ClusterFaultDomain
Get-ClusterFaultDomain -Type Rack
Get-ClusterFaultDomain -Name "server01.contoso.com"

Use New-ClusterFaultDomain to create new chassis, racks, or sites. The -Type and -Name parameters are required.
The possible values for -Type are Chassis , Rack , and Site . The -Name can be any string. (For Node type fault
domains, the name must be the actual node name, as set automatically).

New-ClusterFaultDomain -Type Chassis -Name "Chassis 007"


New-ClusterFaultDomain -Type Rack -Name "Rack A"
New-ClusterFaultDomain -Type Site -Name "Shanghai"

IMPORTANT
Windows Server cannot and does not verify that any fault domains you create correspond to anything in the real, physical
world. (This may sound obvious, but it's important to understand.) If, in the physical world, your nodes are all in one rack,
then creating two -Type Rack fault domains in software does not magically provide rack fault tolerance. You are
responsible for ensuring the topology you create using these cmdlets matches the actual arrangement of your hardware.

Use Set-ClusterFaultDomain to move one fault domain into another. The terms "parent" and "child" are commonly
used to describe this nesting relationship. The -Name and -Parent parameters are required. In -Name , provide the
name of the fault domain that is moving; in -Parent , provide the name of the destination. To move multiple fault
domains at once, list their names.
Set-ClusterFaultDomain -Name "server01.contoso.com" -Parent "Rack A"
Set-ClusterFaultDomain -Name "Rack A", "Rack B", "Rack C", "Rack D" -Parent "Shanghai"

IMPORTANT
When fault domains move, their children move with them. In the above example, if Rack A is the parent of
server01.contoso.com, the latter does not separately need to be moved to the Shanghai site – it is already there by virtue of
its parent being there, just like in the physical world.

You can see parent-child relationships in the output of Get-ClusterFaultDomain , in the ParentName and
ChildrenNames columns.

You can also use Set-ClusterFaultDomain to modify certain other properties of fault domains. For example, you can
provide optional -Location or -Description metadata for any fault domain. If provided, this information will be
included in hardware alerting from the Health Service. You can also rename fault domains using the -NewName
parameter. Do not rename Node type fault domains.

Set-ClusterFaultDomain -Name "Rack A" -Location "Building 34, Room 4010"


Set-ClusterFaultDomain -Type Node -Description "Contoso XYZ Server"
Set-ClusterFaultDomain -Name "Shanghai" -NewName "China Region"

Use Remove-ClusterFaultDomain to remove chassis, racks, or sites you have created. The -Name parameter is
required. You cannot remove a fault domain that contains children – first, either remove the children, or move
them outside using Set-ClusterFaultDomain . To move a fault domain outside of all other fault domains, set its
-Parent to the empty string (""). You cannot remove Node type fault domains. To remove multiple fault domains
at once, list their names.

Set-ClusterFaultDomain -Name "server01.contoso.com" -Parent ""


Remove-ClusterFaultDomain -Name "Rack A"

Defining fault domains with XML markup


Fault domains can be specified using an XML-inspired syntax. We recommend using your favorite text editor, such
as Visual Studio Code (available for free) or Notepad, to create an XML document which you can save and
reuse.
This short video demonstrates the usage of XML Markup to specify fault domains.
In PowerShell, run the following cmdlet: Get-ClusterFaultDomainXML . This returns the current fault domain
specification for the cluster, as XML. This reflects every discovered <Node> , wrapped in opening and closing
<Topology> tags.

Run the following to save this output to a file.

Get-ClusterFaultDomainXML | Out-File <Path>

Open the file, and add <Site> , <Rack> , and <Chassis> tags to specify how these nodes are distributed across
sites, racks, and chassis. Every tag must be identified by a unique Name. For nodes, you must keep the node's
name as populated by default.

IMPORTANT
While all additional tags are optional, they must adhere to the transitive Site > Rack > Chassis > Node hierarchy, and must
be properly closed.
In addition to name, freeform Location="..." and Description="..." descriptors can be added to any tag.

Example: Two sites, one rack each

<Topology>
<Site Name="SEA" Location="Contoso HQ, 123 Example St, Room 4010, Seattle">
<Rack Name="A01" Location="Aisle A, Rack 01">
<Node Name="Server01" Location="Rack Unit 33" />
<Node Name="Server02" Location="Rack Unit 35" />
<Node Name="Server03" Location="Rack Unit 37" />
</Rack>
</Site>
<Site Name="NYC" Location="Regional Datacenter, 456 Example Ave, New York City">
<Rack Name="B07" Location="Aisle B, Rack 07">
<Node Name="Server04" Location="Rack Unit 20" />
<Node Name="Server05" Location="Rack Unit 22" />
<Node Name="Server06" Location="Rack Unit 24" />
</Rack>
</Site>
</Topology>
Example: Two chassis, blade servers

<Topology>
<Rack Name="A01" Location="Contoso HQ, Room 4010, Aisle A, Rack 01">
<Chassis Name="Chassis01" Location="Rack Unit 2 (Upper)" >
<Node Name="Server01" Location="Left" />
<Node Name="Server02" Location="Right" />
</Chassis>
<Chassis Name="Chassis02" Location="Rack Unit 6 (Lower)" >
<Node Name="Server03" Location="Left" />
<Node Name="Server04" Location="Right" />
</Chassis>
</Rack>
</Topology>

To set the new fault domain specification, save your XML and run the following in PowerShell.

$xml = Get-Content <Path> | Out-String
Set-ClusterFaultDomainXML -XML $xml

This guide presents just two examples, but the <Site> , <Rack> , <Chassis> , and <Node> tags can be mixed and
matched in many additional ways to reflect the physical topology of your deployment, whatever that may be. We
hope these examples illustrate the flexibility of these tags and the value of freeform location descriptors to
disambiguate them.
Optional: Location and description metadata
You can provide optional Location or Description metadata for any fault domain. If provided, this information
will be included in hardware alerting from the Health Service. This short video demonstrates the value of adding
such descriptors.

See Also
Windows Server 2016
Storage Spaces Direct in Windows Server 2016
Virtual Machine Load Balancing overview
4/24/2017 • 1 min to read

Applies to Windows Server 2016

A key consideration for private cloud deployments is the capital expenditure (CapEx) required to go into
production. It is very common to add redundancy to private cloud deployments to avoid under-capacity during
peak traffic in production, but this increases CapEx. The need for redundancy is driven by unbalanced private
clouds where some nodes are hosting more Virtual Machines (VMs) and others are underutilized (such as a freshly
rebooted server).

What is Virtual Machine Load Balancing?


VM Load Balancing is a new in-box feature in Windows Server 2016 that allows you to optimize the utilization of
nodes in a Failover Cluster. It identifies over-committed nodes and re-distributes VMs from those nodes to under-
committed nodes. Some of the salient aspects of this feature are as follows:
It is a zero-downtime solution: VMs are live-migrated to idle nodes.
Seamless integration with your existing cluster environment: Failure policies such as anti-affinity, fault domains
and possible owners are honored.
Heuristics for balancing: VM memory pressure and CPU utilization of the node.
Granular control: Enabled by default. Can be activated on-demand or at a periodic interval.
Aggressiveness thresholds: Three thresholds available based on the characteristics of your deployment.

The feature in action


A new node is added to your Failover Cluster

When you add new capacity to your Failover Cluster, the VM Load Balancing feature automatically balances
capacity from the existing nodes to the newly added node in the following order:
1. The pressure is evaluated on the existing nodes in the Failover Cluster.
2. All nodes exceeding the threshold are identified.
3. The nodes with the highest pressure are identified to determine priority of balancing.
4. VMs are live migrated (with no downtime) from a node exceeding the threshold to the newly added node in the
Failover Cluster.
Recurring load balancing

When configured for periodic balancing, the pressure on the cluster nodes is evaluated for balancing every 30
minutes. Alternately, the pressure can be evaluated on-demand. Here is the flow of the steps:
1. The pressure is evaluated on all nodes in the private cloud.
2. All nodes exceeding the threshold and those below it are identified.
3. The nodes with the highest pressure are identified to determine priority of balancing.
4. VMs are live migrated (with no downtime) from a node exceeding the threshold to a node under the minimum
threshold.

See Also
Virtual Machine Load Balancing Deep-Dive
Failover Clustering
Hyper-V Overview
Virtual Machine Load Balancing deep-dive
4/24/2017 • 1 min to read

Applies to Windows Server 2016

Windows Server 2016 introduces the Virtual Machine Load Balancing feature to optimize the utilization of nodes in
a Failover Cluster. This document describes how to configure and control VM Load Balancing.

Heuristics for balancing


Virtual Machine Load Balancing evaluates a node's load based on the following heuristics:
1. Current memory pressure: Memory is the most common resource constraint on a Hyper-V host
2. CPU utilization of the node, averaged over a 5-minute window: mitigates a node in the cluster becoming over-
committed

Controlling the aggressiveness of balancing


The aggressiveness of balancing based on the Memory and CPU heuristics can be configured using the
cluster common property 'AutoBalancerLevel'. To control the aggressiveness, run the following in PowerShell:

(Get-Cluster).AutoBalancerLevel = <value>

AUTOBALANCERLEVEL    AGGRESSIVENESS    BEHAVIOR
1 (default)          Low               Move when host is more than 80% loaded
2                    Medium            Move when host is more than 70% loaded
3                    High              Move when host is more than 60% loaded

Controlling VM Load Balancing


VM Load Balancing is enabled by default. When load balancing occurs is configured by the cluster
common property 'AutoBalancerMode'. To control when Node Fairness balances the cluster:
Using Failover Cluster Manager:
1. Right-click on your cluster name and select the "Properties" option
2. Select the "Balancer" pane

Using PowerShell:
Run the following:

(Get-Cluster).AutoBalancerMode = <value>

AUTOBALANCERMODE    BEHAVIOR
0                   Disabled
1                   Load balance on node join
2 (default)         Load balance on node join and every 30 minutes

VM Load Balancing vs. System Center Virtual Machine Manager Dynamic Optimization


The node fairness feature provides in-box functionality targeted at deployments without System
Center Virtual Machine Manager (SCVMM). SCVMM Dynamic Optimization is the recommended mechanism for
balancing virtual machine load in your cluster for SCVMM deployments. SCVMM automatically disables the
Windows Server VM Load Balancing when Dynamic Optimization is enabled.

See Also
Virtual Machine Load Balancing Overview
Failover Clustering
Hyper-V Overview
Deploy a Cloud Witness for a Failover Cluster
4/24/2017 • 8 min to read

Applies to Windows Server 2016

Cloud Witness is a new type of Failover Cluster quorum witness being introduced in Windows Server 2016. This
topic provides an overview of the Cloud Witness feature, the scenarios that it supports, and instructions about how
to configure a cloud witness for a Failover Cluster that is running Windows Server 2016.

Cloud Witness overview


Figure 1 illustrates a multi-site stretched Failover Cluster quorum configuration with Windows Server 2016. In this
example configuration (figure 1), there are 2 nodes in each of 2 datacenters (referred to as Sites). Note that it is possible for a
cluster to span more than 2 datacenters. Also, each datacenter can have more than 2 nodes. A typical cluster
quorum configuration in this setup (automatic failover SLA) gives each node a vote. One extra vote is given to the
quorum witness to allow the cluster to keep running even if one of the datacenters experiences a power outage.
The math is simple - there are 5 total votes and you need 3 votes for the cluster to keep running.

Figure 1: Using a File Share Witness as a quorum witness


In case of a power outage in one datacenter, to give the cluster in the other datacenter an equal opportunity to keep
running, it is recommended to host the quorum witness in a location other than the two datacenters. This typically
means requiring a third separate datacenter (site) to host a File Server that is backing the File Share which is used
as the quorum witness (File Share Witness).
Most organizations do not have a third separate datacenter that can host a File Server backing the File Share
Witness. This means organizations primarily host the File Server in one of the two datacenters, which by extension,
makes that datacenter the primary datacenter. In a scenario where there is power outage in the primary datacenter,
the cluster would go down as the other datacenter would only have 2 votes which is below the quorum majority of
3 votes needed. For customers that do have a third separate datacenter to host the File Server, it is an overhead to
maintain the highly available File Server backing the File Share Witness. Hosting virtual machines in the public
cloud that have the File Server for File Share Witness running in Guest OS is a significant overhead in terms of
both setup & maintenance.
Cloud Witness is a new type of Failover Cluster quorum witness that leverages Microsoft Azure as the arbitration
point (figure 2). It uses Azure Blob Storage to read/write a blob file which is then used as an arbitration point in
case of split-brain resolution.
This approach has significant benefits:
1. Leverages Microsoft Azure (no need for third separate datacenter).
2. Uses standard available Azure Blob Storage (no extra maintenance overhead of virtual machines hosted in
public cloud).
3. Same Azure Storage Account can be used for multiple clusters (one blob file per cluster; cluster unique id used
as blob file name).
4. Very low ongoing cost to the Storage Account (very small data written per blob file, blob file updated only
once when cluster nodes' state changes).
5. Built-in Cloud Witness resource type.

Figure 2: Multi-site stretched clusters with Cloud Witness as a quorum witness


As shown in figure 2, there is no third separate site that is required. Cloud Witness, like any other quorum witness,
gets a vote and can participate in quorum calculations.

Cloud Witness: Supported scenarios for single witness type


If you have a Failover Cluster deployment where all nodes can reach the internet (and, by extension, Azure), it is
recommended that you configure a Cloud Witness as your quorum witness resource.
Some of the scenarios that support the use of Cloud Witness as a quorum witness are as follows:
Disaster recovery stretched multi-site clusters (see figure 2).
Failover Clusters without shared storage (SQL Always On, Exchange DAGs, etc.).
Failover Clusters running inside Guest OS hosted in Microsoft Azure Virtual Machine Role (or any other public
cloud).
Failover Clusters running inside Guest OS of Virtual Machines hosted in private clouds.
Storage clusters with or without shared storage, such as Scale-out File Server clusters.
Small branch-office clusters (even 2-node clusters)
Starting with Windows Server 2012 R2, it is recommended to always configure a witness as the cluster
automatically manages the witness vote and the nodes vote with Dynamic Quorum.
Set up a Cloud Witness for a cluster
To set up a Cloud Witness as a quorum witness for your cluster, complete the following steps:
1. Create an Azure Storage Account to use as a Cloud Witness
2. Configure the Cloud Witness as a quorum witness for your cluster.

Create an Azure Storage Account to use as a Cloud Witness


This section describes how to create a storage account and view and copy endpoint URLs and access keys for that
account.
To configure Cloud Witness, you must have a valid Azure Storage Account which can be used to store the blob file
(used for arbitration). Cloud Witness creates a well-known Container msft-cloud-witness under the Microsoft
Storage Account. Cloud Witness writes a single blob file with corresponding cluster's unique ID used as the file
name of the blob file under this msft-cloud-witness container. This means that you can use the same Microsoft
Azure Storage Account to configure a Cloud Witness for multiple different clusters.
When you use the same Azure Storage Account for configuring Cloud Witness for multiple different clusters, a
single msft-cloud-witness container gets created automatically. This container will contain one-blob file per
cluster.
To create an Azure storage account
1. Sign in to the Azure Portal.
2. On the Hub menu, select New -> Data + Storage -> Storage account.
3. In the Create a storage account page, do the following:
a. Enter a name for your storage account.
Storage account names must be between 3 and 24 characters in length and may contain numbers
and lowercase letters only. The storage account name must also be unique within Azure.
b. For Account kind, select General purpose.
You can't use a Blob storage account for a Cloud Witness.
c. For Performance, select Standard.
You can't use Azure Premium Storage for a Cloud Witness.
d. For Replication, select Locally-redundant storage (LRS) .
Failover Clustering uses the blob file as the arbitration point, which requires some consistency
guarantees when reading the data. Therefore, you must select Locally-redundant storage for
Replication type.
View and copy storage access keys for your Azure Storage Account
When you create a Microsoft Azure Storage Account, it is associated with two Access Keys that are automatically
generated - Primary Access key and Secondary Access key. For a first-time creation of Cloud Witness, use the
Primary Access Key. There is no restriction regarding which key to use for Cloud Witness.
To view and copy storage access keys
In the Azure Portal, navigate to your storage account, click All settings and then click Access Keys to view, copy,
and regenerate your account access keys. The Access Keys blade also includes pre-configured connection strings
using your primary and secondary keys that you can copy to use in your applications (see figure 4).
Figure 4: Storage Access Keys
View and copy endpoint URL Links
When you create a Storage Account, the following URLs are generated using the format:
https://<Storage Account Name>.<Storage Type>.<Endpoint>

Cloud Witness always uses Blob as the storage type. Azure uses .core.windows.net as the Endpoint. When
configuring Cloud Witness, it is possible that you configure it with a different endpoint as per your scenario (for
example the Microsoft Azure datacenter in China has a different endpoint).
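For example (an illustrative account name), a storage account named contosowitness would expose the blob endpoint https://contosowitness.blob.core.windows.net.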

NOTE
The endpoint URL is generated automatically by Cloud Witness resource and there is no extra step of configuration
necessary for the URL.

To view and copy endpoint URL links


In the Azure Portal, navigate to your storage account, click All settings and then click Properties to view and copy
your endpoint URLs (see figure 5).

Figure 5: Cloud Witness endpoint URL links


For more information about creating and managing Azure Storage Accounts, see About Azure Storage Accounts

Configure Cloud Witness as a quorum witness for your cluster


Cloud Witness configuration is well-integrated within the existing Quorum Configuration Wizard built into the
Failover Cluster Manager.
To configure Cloud Witness as a Quorum Witness
1. Launch Failover Cluster Manager.
2. Right-click the cluster -> More Actions -> Configure Cluster Quorum Settings (see figure 6). This
launches the Configure Cluster Quorum wizard.
Figure 6. Cluster Quorum Settings
3. On the Select Quorum Configurations page, select Select the quorum witness (see figure 7).

Figure 7. Select the Quorum Configuration


4. On the Select Quorum Witness page, select Configure a cloud witness (see figure 8).
Figure 8. Select the Quorum Witness
5. On the Configure Cloud Witness page, enter the following information:
a. (Required parameter) Azure Storage Account Name.
b. (Required parameter) Access Key corresponding to the Storage Account.
a. When creating for the first time, use Primary Access Key (see figure 5)
b. When rotating the Primary Access Key, use Secondary Access Key (see figure 5)
c. (Optional parameter) If you intend to use a different Azure service endpoint (for example the
Microsoft Azure service in China), then update the endpoint server name.

Figure 9: Configure your Cloud Witness


6. Upon successful configuration of Cloud Witness, you can view the newly created witness resource in the
Failover Cluster Manager snap-in (see figure 10).
Figure 10: Successful configuration of Cloud Witness
Configuring Cloud Witness using PowerShell
The existing Set-ClusterQuorum PowerShell command has new additional parameters corresponding to Cloud
Witness.
You can configure Cloud Witness using the following Set-ClusterQuorum PowerShell command:

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey>

If you need to use a different endpoint (rare):

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey> -Endpoint <servername>
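For illustration, here is a minimal sketch with placeholder values (the storage account name and access key below are hypothetical, not real):

# Hypothetical values - substitute your own storage account name and primary access key
Set-ClusterQuorum -CloudWitness -AccountName "cwstorage01" -AccessKey "<YourPrimaryAccessKey>"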

Azure Storage Account considerations with Cloud Witness


When configuring a Cloud Witness as a quorum witness for your Failover Cluster, consider the following:
Instead of storing the Access Key, your Failover Cluster will generate and securely store a Shared Access
Signature (SAS) token.
The generated SAS token is valid as long as the Access Key remains valid. When rotating the Primary Access
Key, it is important to first update the Cloud Witness (on all your clusters that are using that Storage Account)
with the Secondary Access Key before regenerating the Primary Access Key.
Cloud Witness uses the HTTPS REST interface of the Azure Storage Account service. This means that the
HTTPS port must be open on all cluster nodes.
Proxy considerations with Cloud Witness
Cloud Witness uses HTTPS (default port 443) to establish communication with the Azure blob service. Ensure that
the HTTPS port is accessible via the network proxy.

See Also
What's New in Failover Clustering in Windows Server
Cluster operating system rolling upgrade
4/24/2017 • 15 min to read • Edit Online

Applies to Windows Server 2016

Cluster OS Rolling Upgrade is a new feature in Windows Server 2016 that enables an administrator to upgrade the
operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server 2016 without stopping
the Hyper-V or the Scale-Out File Server workloads. Using this feature, the downtime penalties against Service
Level Agreements (SLA) can be avoided.
Cluster OS Rolling Upgrade provides the following benefits:
Failover clusters running Hyper-V virtual machine and Scale-out File Server (SOFS) workloads can be upgraded
from Windows Server 2012 R2 (running on all nodes in the cluster) to Windows Server 2016 (running on all
cluster nodes of the cluster) without downtime. Other cluster workloads, such as SQL Server, will be unavailable
during the time (typically less than five minutes) it takes to fail over to Windows Server 2016.
It does not require any additional hardware. Although, you can add additional cluster nodes temporarily to
small clusters to improve availability of the cluster during the Cluster OS Rolling Upgrade process.
The cluster does not need to be stopped or restarted.
A new cluster is not required. The existing cluster is upgraded. In addition, existing cluster objects stored in
Active Directory are used.
The upgrade process is reversible until the customer chooses the "point-of-no-return", when all cluster nodes are
running Windows Server 2016, and when the Update-ClusterFunctionalLevel PowerShell cmdlet is run.
The cluster can support patching and maintenance operations while running in the mixed-OS mode.
It supports automation via PowerShell and WMI.
The public cluster property ClusterFunctionalLevel indicates the state of the cluster on Windows
Server 2016 cluster nodes. This property can be queried using the following PowerShell cmdlet from a Windows
Server 2016 cluster node that belongs to a failover cluster:

Get-Cluster | Select ClusterFunctionalLevel

A value of 8 indicates that the cluster is running at the Windows Server 2012 R2 functional level. A value of
9 indicates that the cluster is running at the Windows Server 2016 functional level.
This guide describes the various stages of the Cluster OS Rolling Upgrade process, installation steps, feature
limitations, and frequently asked questions (FAQs), and is applicable to the following Cluster OS Rolling Upgrade
scenarios in Windows Server 2016:
Hyper-V clusters
Scale-Out File Server clusters
The following scenario is not supported in Windows Server 2016:
Cluster OS Rolling Upgrade of guest clusters using virtual hard disk (.vhdx file) as shared storage
Cluster OS Rolling Upgrade is fully supported by System Center Virtual Machine Manager (SCVMM) 2016. If you
are using SCVMM 2016, see Upgrading Windows Server 2012 R2 clusters to Windows Server 2016 in VMM for
guidance on upgrading the clusters and automating the steps that are described in this document.
Requirements
Complete the following requirements before you begin the Cluster OS Rolling Upgrade process:
Start with a Windows Server 2012 R2 Failover Cluster
If the cluster workload is Hyper-V VMs or Scale-Out File Server, you can expect a zero-downtime upgrade.
Verify that the Hyper-V nodes have CPUs that support Second Level Address Translation (SLAT) using one
of the following methods:
Review the Are you SLAT Compatible? WP8 SDK Tip 01 article that describes two methods to
check if a CPU supports SLAT
Download the Coreinfo v3.31 tool to determine if a CPU supports SLAT.

Cluster transition states during Cluster OS Rolling Upgrade


This section describes the various transition states of the Windows Server 2012 R2 cluster that is being upgraded
to Windows Server 2016 using Cluster OS Rolling Upgrade.
To keep the cluster workloads running during the Cluster OS Rolling Upgrade process, moving a cluster
workload from a Windows Server 2012 R2 node to a Windows Server 2016 node works as if both nodes were
running the Windows Server 2012 R2 operating system. When Windows Server 2016 nodes are added to the
cluster, they operate in a Windows Server 2012 R2 compatibility mode. A new conceptual cluster mode, called
"mixed-OS mode", allows nodes of different versions to exist in the same cluster (see Figure 1).

Figure 1: Cluster operating system state transitions


A Windows Server 2012 R2 cluster enters mixed-OS mode when a Windows Server 2016 node is added to the
cluster. The process is fully reversible - Windows Server 2016 nodes can be removed from the cluster and
Windows Server 2012 R2 nodes can be added to the cluster in this mode. The "point of no return" occurs when the
Update-ClusterFunctionalLevel PowerShell cmdlet is run on the cluster. In order for this cmdlet to succeed, all
nodes must be Windows Server 2016, and all nodes must be online.

Transition states of a four-node cluster while performing Rolling OS Upgrade
This section illustrates and describes the four different stages of a cluster with shared storage whose nodes are
upgraded from Windows Server 2012 R2 to Windows Server 2016.
"Stage 1" is the initial state - we start with a Windows Server 2012 R2 cluster.
Figure 2: Initial State: Windows Server 2012 R2 Failover Cluster (Stage 1)
In "Stage 2", two nodes have been paused, drained, evicted, reformatted, and installed with Windows Server 2016.

Figure 3: Intermediate State: Mixed-OS mode: Windows Server 2012 R2 and Windows Server 2016
Failover cluster (Stage 2)
At "Stage 3", all of the nodes in the cluster have been upgraded to Windows Server 2016, and the cluster is ready
to be upgraded with Update-ClusterFunctionalLevel PowerShell cmdlet.

NOTE
At this stage, the process can be fully reversed, and Windows Server 2012 R2 nodes can be added to this cluster.

Figure 4: Intermediate State: All nodes upgraded to Windows Server 2016, ready for Update-
ClusterFunctionalLevel (Stage 3)
After the Update-ClusterFunctionalLevel cmdlet is run, the cluster enters "Stage 4", where new Windows Server
2016 cluster features can be used.

Figure 5: Final State: Windows Server 2016 Failover Cluster (Stage 4)

Cluster OS Rolling Upgrade Process


This section describes the workflow for performing Cluster OS Rolling Upgrade.
Figure 6: Cluster OS Rolling Upgrade Process Workflow
Cluster OS Rolling upgrade includes the following steps:
1. Prepare the cluster for the operating system upgrade as follows:
a. Cluster OS Rolling Upgrade requires removing one node at a time from the cluster. Check that the cluster
has sufficient capacity to maintain HA SLAs while one of the cluster nodes is removed for an operating
system upgrade. In other words, can the cluster fail over workloads to another node, and still run the
required workloads, while one node is removed during the Cluster OS Rolling Upgrade process?
b. For Hyper-V workloads, check that all Windows Server 2016 Hyper-V hosts have CPUs that support Second
Level Address Translation (SLAT). Only SLAT-capable machines can use the Hyper-V role in Windows Server
2016.
c. Check that any workload backups have completed, and consider backing-up the cluster. Stop backup
operations while adding nodes to the cluster.
d. Check that all cluster nodes are online/running/up using the Get-ClusterNode cmdlet (see Figure 7).

Figure 7: Determining node status using Get-ClusterNode cmdlet
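As a quick check, here is a minimal sketch that lists any node that is not currently up (empty output means all nodes are up):

# List cluster nodes whose state is not 'Up'
Get-ClusterNode | Where-Object { $_.State -ne 'Up' }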


e. If you are running Cluster Aware Updates (CAU), verify if CAU is currently running by using the
Cluster-Aware Updating UI, or the Get-CauRun cmdlet (see Figure 8). Stop CAU using the
Disable-CauClusterRole cmdlet (see Figure 9) to prevent any nodes from being paused and drained
by CAU during the Cluster OS Rolling Upgrade process.

Figure 8: Using the Get-CauRun cmdlet to determine if Cluster Aware Updates is running on
the cluster

Figure 9: Disabling the Cluster Aware Updates role using the Disable-CauClusterRole cmdlet
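A minimal sketch of both checks, assuming a cluster named "CONTOSO-FC1" (the name is a placeholder):

# Check whether a CAU Updating Run is in progress
Get-CauRun -ClusterName "CONTOSO-FC1"
# Stop the CAU clustered role so it cannot pause and drain nodes during the upgrade
Disable-CauClusterRole -ClusterName "CONTOSO-FC1" -Force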
2. For each node in the cluster, complete the following:
a. Using Cluster Manager UI, select a node and use the Pause | Drain menu option to drain the node
(see Figure 10) or use the Suspend-ClusterNode cmdlet (see Figure 11).

Figure 10: Draining roles from a node using Failover Cluster Manager
Figure 11: Draining roles from a node using the Suspend-ClusterNode cmdlet
b. Using Cluster Manager UI, Evict the paused node from cluster, or use the Remove-ClusterNode
cmdlet.

Figure 12: Remove a node from the cluster using Remove-ClusterNode cmdlet
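For reference, a minimal sketch of draining and evicting a node with PowerShell (the node name "Node1" is a placeholder):

# Pause the node and drain its clustered roles to the other nodes
Suspend-ClusterNode -Name "Node1" -Drain
# Evict the drained node from the cluster
Remove-ClusterNode -Name "Node1" -Force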
c. Reformat the system drive and perform a "clean operating system install" of Windows Server 2016
on the node using the Custom: Install Windows only (advanced) installation (See Figure 13)
option in setup.exe. Avoid selecting the Upgrade: Install Windows and keep files, settings, and
applications option since Cluster OS Rolling Upgrade does not encourage in-place upgrade.

Figure 13: Available installation options for Windows Server 2016


d. Add the node to the appropriate Active Directory domain.
e. Add the appropriate users to the Administrators group.
f. Using the Server Manager UI or Install-WindowsFeature PowerShell cmdlet, install any server roles
that you need, such as Hyper-V.

Install-WindowsFeature -Name Hyper-V

g. Using the Server Manager UI or Install-WindowsFeature PowerShell cmdlet, install the Failover
Clustering feature.
Install-WindowsFeature -Name Failover-Clustering

h. Install any additional features needed by your cluster workloads.


i. Check network and storage connectivity settings using the Failover Cluster Manager UI.
j. If Windows Firewall is used, check that the Firewall settings are correct for the cluster. For example,
Cluster Aware Updating (CAU) enabled clusters may require Firewall configuration.
k. For Hyper-V workloads, use the Hyper-V Manager UI to launch the Virtual Switch Manager dialog (see
Figure 14).
Check that the names of the virtual switches used are identical for all Hyper-V host nodes in the
cluster.

Figure 14: Virtual Switch Manager


l. On a Windows Server 2016 node (do not use a Windows Server 2012 R2 node), use the Failover
Cluster Manager (see Figure 15) to connect to the cluster.
Figure 15: Adding a node to the cluster using Failover Cluster Manager
m. Use either the Failover Cluster Manager UI or the Add-ClusterNode cmdlet (see Figure 16) to add the
node to the cluster.

Figure 16: Adding a node to the cluster using Add-ClusterNode cmdlet
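A minimal sketch, assuming the cluster is named "CONTOSO-FC1" and the freshly installed node is "Node1" (both names are placeholders):

# Add the newly installed Windows Server 2016 node to the existing cluster
Add-ClusterNode -Cluster "CONTOSO-FC1" -Name "Node1"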

NOTE
When the first Windows Server 2016 node joins the cluster, the cluster enters "Mixed-OS" mode, and the
cluster core resources are moved to the Windows Server 2016 node. A "Mixed-OS" mode cluster is a fully
functional cluster where the new nodes run in a compatibility mode with the old nodes. "Mixed-OS" mode is a
transitory mode for the cluster. It is not intended to be permanent and customers are expected to update all
nodes of their cluster within four weeks.

n. After the Windows Server 2016 node is successfully added to the cluster, you can (optionally) move
some of the cluster workload to the newly added node in order to rebalance the workload across the
cluster as follows:

Figure 17: Moving a cluster workload (cluster VM role) using the Move-ClusterVirtualMachineRole cmdlet
a. Use Live Migration from the Failover Cluster Manager for virtual machines or the
Move-ClusterVirtualMachineRole cmdlet (see Figure 17) to perform a live migration of the
virtual machines.

Move-ClusterVirtualMachineRole -Name VM1 -Node robhind-host3

b. Use Move from the Failover Cluster Manager or the Move-ClusterGroup cmdlet for other
cluster workloads.
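A minimal sketch for moving a non-VM workload (the group and node names are placeholders):

# Move a clustered role (group) to the newly added node
Move-ClusterGroup -Name "FileServerRole1" -Node "Node1"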
3. When every node has been upgraded to Windows Server 2016 and added back to the cluster, or when any
remaining Windows Server 2012 R2 nodes have been evicted, do the following:

IMPORTANT
After you update the cluster functional level, you cannot go back to Windows Server 2012 R2 functional level and
Windows Server 2012 R2 nodes cannot be added to the cluster.
Until the Update-ClusterFunctionalLevel cmdlet is run, the process is fully reversible and Windows Server
2012 R2 nodes can be added to this cluster and Windows Server 2016 nodes can be removed.
After the Update-ClusterFunctionalLevel cmdlet is run, new features will be available.

a. Using the Failover Cluster Manager UI or the Get-ClusterGroup cmdlet, check that all cluster roles are
running on the cluster as expected. In the following example, Available Storage is not used (CSV is
used instead), so Available Storage displays an Offline status (see Figure 18).

Figure 18: Verifying that all cluster groups (cluster roles) are running using the
Get-ClusterGroup cmdlet

b. Check that all cluster nodes are online and running using the Get-ClusterNode cmdlet.
c. Run the Update-ClusterFunctionalLevel cmdlet - no errors should be returned (see Figure 19).

Figure 19: Updating the functional level of a cluster using PowerShell
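A minimal sketch of these final checks and the functional-level update, run from a Windows Server 2016 cluster node:

# Confirm all cluster roles and nodes are healthy, then raise the functional level
Get-ClusterGroup
Get-ClusterNode
Update-ClusterFunctionalLevel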


d. After the Update-ClusterFunctionalLevel cmdlet is run, new features are available.
4. Windows Server 2016 - resume normal cluster updates and backups:
a. If you were previously running CAU, restart it using the CAU UI or use the Enable-CauClusterRole
cmdlet (see Figure 20).

Figure 20: Enable Cluster Aware Updates role using the Enable-CauClusterRole cmdlet
b. Resume backup operations.
5. Enable and use the Windows Server 2016 features on Hyper-V Virtual Machines.
a. After the cluster has been upgraded to the Windows Server 2016 functional level, many workloads like
Hyper-V VMs will have new capabilities. For a list of new Hyper-V capabilities, see Migrate and
upgrade virtual machines.
b. On each Hyper-V host node in the cluster, use the Get-VMHostSupportedVersion cmdlet to view the
Hyper-V VM configuration versions that are supported by the host.

Figure 21: Viewing the Hyper-V VM configuration versions supported by the host
a. On each Hyper-V host node in the cluster, Hyper-V VM configuration versions can be upgraded by
scheduling a brief maintenance window with users, backing up, turning off virtual machines, and
running the Update-VMVersion cmdlet (see Figure 22). This will update the virtual machine version,
and enable new Hyper-V features, eliminating the need for future Hyper-V Integration Component
(IC) updates. This cmdlet can be run from the Hyper-V node that is hosting the VM, or the
-ComputerName parameter can be used to update the VM version remotely. In this example, we
upgrade the configuration version of VM1 from 5.0 to 7.0 to take advantage of many new Hyper-V
features associated with this VM configuration version such as Production Checkpoints (Application
Consistent backups), and binary VM configuration file.
Figure 22: Upgrading a VM version using the Update-VMVersion PowerShell cmdlet
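A minimal sketch of that maintenance-window flow (the VM name "VM1" is a placeholder; back up the VM first):

# Shut down the VM, upgrade its configuration version, then start it again
Stop-VM -Name VM1
Update-VMVersion -Name VM1 -Force
Start-VM -Name VM1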
6. Storage pools can be upgraded using the Update-StoragePool PowerShell cmdlet - this is an online
operation.
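A minimal sketch, assuming a storage pool named "Pool1" (the name is a placeholder):

# Upgrade the storage pool metadata - an online operation
Update-StoragePool -FriendlyName "Pool1"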
Although we are targeting Private Cloud scenarios, specifically Hyper-V and Scale-out File Server clusters, which
can be upgraded without downtime, the Cluster OS Rolling Upgrade process can be used for any cluster role.

Restrictions / Limitations
This feature works only for upgrades from Windows Server 2012 R2 to Windows Server 2016. It cannot
upgrade earlier versions of Windows Server, such as Windows Server 2008, Windows Server 2008 R2,
or Windows Server 2012, to Windows Server 2016.
Each Windows Server 2016 node should be a clean installation on a reformatted drive. The "in-place" or
"upgrade" installation type is discouraged.
A Windows Server 2016 node must be used to add Windows Server 2016 nodes to the cluster.
When managing a mixed-OS mode cluster, always perform the management tasks from an uplevel node that is
running Windows Server 2016. Downlevel Windows Server 2012 R2 nodes cannot use UI or management tools
against Windows Server 2016.
We encourage customers to move through the cluster upgrade process quickly because some cluster features
are not optimized for mixed-OS mode.
Avoid creating or resizing storage on Windows Server 2016 nodes while the cluster is running in mixed-OS
mode because of possible incompatibilities on failover from a Windows Server 2016 node to down-level
Windows Server 2012 R2 nodes.

Frequently asked questions


How long can the failover cluster run in mixed-OS mode?
We encourage customers to complete the upgrade within four weeks. There are many optimizations in Windows
Server 2016. We have successfully upgraded Hyper-V and Scale-out File Server clusters with zero downtime in less
than four hours total.
Will you port this feature back to Windows Server 2012, Windows Server 2008 R2, or Windows Server
2008?
We do not have any plans to port this feature back to previous versions. Cluster OS Rolling Upgrade is our vision
for upgrading Windows Server 2012 R2 clusters to Windows Server 2016 and beyond.
Does the Windows Server 2012 R2 cluster need to have all the software updates installed before starting
the Cluster OS Rolling Upgrade process?
Yes, before starting the Cluster OS Rolling Upgrade process, verify that all cluster nodes are updated with the latest
software updates.
Can I run the Update-ClusterFunctionalLevel cmdlet while nodes are Off or Paused?
No. All cluster nodes must be on and in active membership for the Update-ClusterFunctionalLevel cmdlet to work.
Does Cluster OS Rolling Upgrade work for any cluster workload? Does it work for SQL Server?
Yes, Cluster OS Rolling Upgrade works for any cluster workload. However, it is only zero-downtime for Hyper-V
and Scale-out File Server clusters. Most other workloads incur some downtime (typically a couple of minutes)
when they fail over, and failover is required at least once during the Cluster OS Rolling Upgrade process.
Can I automate this process using PowerShell?
Yes, we have designed Cluster OS Rolling Upgrade to be automated using PowerShell.
For a large cluster that has extra workload and failover capacity, can I upgrade multiple nodes
simultaneously?
Yes. When one node is removed from the cluster to upgrade the OS, the cluster will have one less node for failover,
hence will have a reduced failover capacity. For large clusters with enough workload and failover capacity, multiple
nodes can be upgraded simultaneously. You can temporarily add cluster nodes to the cluster to provide improved
workload and failover capacity during the Cluster OS Rolling Upgrade process.
What if I discover an issue in my cluster after Update-ClusterFunctionalLevel has been run successfully?
If you have backed-up the cluster database with a System State backup before running
Update-ClusterFunctionalLevel , you should be able to perform an Authoritative restore on a Windows Server 2012
R2 cluster node and restore the original cluster database and configuration.
Can I use in-place upgrade for each node instead of using clean-OS install by reformatting the system
drive?
We do not encourage the use of in-place upgrade of Windows Server, but we are aware that it works in some cases
where default drivers are used. Please carefully read all warning messages displayed during in-place upgrade of a
cluster node.
If I am using Hyper-V replication for a Hyper-V VM on my Hyper-V cluster, will replication remain intact
during and after the Cluster OS Rolling Upgrade process?
Yes, Hyper-V replica remains intact during and after the Cluster OS Rolling Upgrade process.
Can I use System Center 2016 Virtual Machine Manager (SCVMM) to automate the Cluster OS Rolling
Upgrade process?
Yes, you can automate the Cluster OS Rolling Upgrade process using VMM in System Center 2016.

See also
Release Notes: Important Issues in Windows Server 2016
What's New in Windows Server 2016
What's New in Failover Clustering in Windows Server
Simplified SMB Multichannel and Multi-NIC Cluster
Networks
4/24/2017 • 3 min to read • Edit Online

Applies To: Windows Server 2016

Simplified SMB Multichannel and Multi-NIC Cluster Networks is a new feature in Windows Server 2016 that
enables the use of multiple NICs on the same cluster network subnet, and automatically enables SMB Multichannel.
Simplified SMB Multichannel and Multi-NIC Cluster Networks provides the following benefits:
Failover Clustering automatically recognizes all NICs on nodes that are using the same switch / same subnet -
no additional configuration needed.
SMB Multichannel is enabled automatically.
Networks that have only IPv6 Link Local (fe80) IP Address resources are recognized on cluster-only (private)
networks.
A single IP Address resource is configured on each Cluster Access Point (CAP) Network Name (NN) by default.
Cluster validation no longer issues warning messages when multiple NICs are found on the same subnet.

Requirements
Multiple NICs per server, using the same switch / subnet.

How to take advantage of multi-NIC cluster networks and simplified SMB multichannel
This section describes how to take advantage of the new multi-NIC cluster networks and simplified SMB
multichannel features in Windows Server 2016.
Use at least two networks for Failover Clustering
Although it is rare, network switches can fail, so it is still best practice to use at least two networks for Failover
Clustering. All networks that are found are used for cluster heartbeats.
Failover Cluster in order to avoid a single point of failure. Ideally, there should be multiple physical communication
paths between the nodes in the cluster, and no single point of failure.

Figure 1: Use at least two networks for Failover Clustering


Use Multiple NICs across clusters
Maximum benefit of the simplified SMB multichannel is achieved when multiple NICs are used across clusters - in
both storage and storage workload clusters. This allows the workload clusters (Hyper-V, SQL Server Failover
Cluster Instance, Storage Replica, etc.) to use SMB multichannel and results in more efficient use of the network. In
a converged (also known as disaggregated) cluster configuration where a Scale-out File Server cluster is used for
storing workload data for a Hyper-V or SQL Server Failover Cluster Instance cluster, this network is often called
"the North-South subnet" / network. Many customers maximize throughput of this network by investing in RDMA
capable NIC cards and switches.

Figure 2: To achieve maximum network throughput, use multiple NICs on both the Scale-out File Server
cluster and the Hyper-V or SQL Server Failover Cluster Instance cluster - which share the North-South
subnet
Figure 3: Two clusters (Scale-out File Server for storage, SQL Server FCI for workload) both use multiple
NICs in the same subnet to leverage SMB Multichannel and achieve better network throughput.

Automatic recognition of IPv6 Link Local private networks


When private (cluster only) networks with multiple NICs are detected, the cluster will automatically recognize IPv6
Link Local (fe80) IP addresses for each NIC on each subnet. This saves administrators time since they no longer
have to manually configure IPv6 Link Local (fe80) IP Address resources.
When using more than one private (cluster only) network, check the IPv6 routing configuration to ensure that
routing is not configured to cross subnets, since this will reduce network performance.
Figure 4: Automatic IPv6 Link Local (fe80) Address resource configuration

Throughput and Fault Tolerance


Windows Server 2016 automatically detects NIC capabilities and will attempt to use each NIC in the fastest
possible configuration. NICs that are teamed, NICs using RSS, and NICs with RDMA capability can all be used. The
table below summarizes the trade-offs when using these technologies. Maximum throughput is achieved when
using multiple RDMA-capable NICs. For more information, see The basics of SMB Multichannel.

Figure 5: Throughput and fault tolerance for various NIC configurations

Frequently asked questions


Are all NICs in a multi-NIC network used for cluster heartbeating?
Yes.
Can a multi-NIC network be used for cluster communication only? Or can it only be used for client and
cluster communication?
Either configuration will work - all cluster network roles will work on a multi-NIC network.
Is SMB Multichannel also used for CSV and cluster traffic?
Yes, by default all cluster and CSV traffic will use available multi-NIC networks. Administrators can use the Failover
Clustering PowerShell cmdlets or Failover Cluster Manager UI to change the network role.
How can I see the SMB Multichannel settings?
Use the Get-SmbServerConfiguration cmdlet and look for the value of the EnableMultiChannel property.
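A minimal sketch:

# Check whether SMB Multichannel is enabled on the server
Get-SmbServerConfiguration | Select-Object EnableMultiChannel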
Is the cluster common property PlumbAllCrossSubnetRoutes respected on a multi-NIC network?
Yes.

See also
What's New in Failover Clustering in Windows Server
Cluster-Aware Updating overview
5/1/2017 • 7 min to read • Edit Online

Applies to: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

This topic provides an overview of Cluster-Aware Updating (CAU), a feature that automates the software updating
process on clustered servers while maintaining availability.

NOTE
When updating Storage Spaces Direct clusters, we recommend using Cluster-Aware Updating.

Feature description
Cluster-Aware Updating is an automated feature that enables you to update servers in a failover cluster with little
or no loss in availability during the update process. During an Updating Run, Cluster-Aware Updating
transparently performs the following tasks:
1. Puts each node of the cluster into node maintenance mode.
2. Moves the clustered roles off the node.
3. Installs the updates and any dependent updates.
4. Performs a restart if necessary.
5. Brings the node out of maintenance mode.
6. Restores the clustered roles on the node.
7. Moves to update the next node.
For many clustered roles in the cluster, the automatic update process triggers a planned failover. This can cause a
transient service interruption for connected clients. However, in the case of continuously available workloads, such
as Hyper-V with live migration or file server with SMB Transparent Failover, Cluster-Aware Updating can
coordinate cluster updates with no impact to the service availability.

Practical applications
CAU reduces service outages in clustered services, reduces the need for manual updating workarounds,
and makes the end-to-end cluster updating process more reliable for the administrator. When the CAU
feature is used in conjunction with continuously available cluster workloads, such as continuously available
file servers (file server workload with SMB Transparent Failover) or Hyper-V, the cluster updates can be
performed with zero impact to service availability for clients.
CAU facilitates the adoption of consistent IT processes across the enterprise. Updating Run Profiles can be
created for different classes of failover clusters and then managed centrally on a file share to ensure that
CAU deployments throughout the IT organization apply updates consistently, even if the clusters are
managed by different lines-of-business or administrators.
CAU can schedule Updating Runs on regular daily, weekly, or monthly intervals to help coordinate cluster
updates with other IT management processes.
CAU provides an extensible architecture to update the cluster software inventory in a cluster-aware fashion.
This can be used by publishers to coordinate the installation of software updates that are not published to
Windows Update or Microsoft Update or that are not available from Microsoft, for example, updates for
non-Microsoft device drivers.
CAU self-updating mode enables a "cluster in a box" appliance (a set of clustered physical machines,
typically packaged in one chassis) to update itself. Typically, such appliances are deployed in branch offices
with minimal local IT support to manage the clusters. Self-updating mode offers great value in these
deployment scenarios.

Important functionality
The following is a description of important Cluster-Aware Updating functionality:
A user interface (UI) - the Cluster Aware Updating window - and a set of cmdlets that you can use to
preview, apply, monitor, and report on the updates
An end-to-end automation of the cluster-updating operation (an Updating Run), orchestrated by one or
more Update Coordinator computers
A default plug-in that integrates with the existing Windows Update Agent (WUA) and Windows Server
Update Services (WSUS) infrastructure in Windows Server to apply important Microsoft updates
A second plug-in that can be used to apply Microsoft hotfixes, and that can be customized to apply non-
Microsoft updates
Updating Run Profiles that you configure with settings for Updating Run options, such as the maximum
number of times that the update will be retried per node. Updating Run Profiles enable you to rapidly reuse
the same settings across Updating Runs and easily share the update settings with other failover clusters.
An extensible architecture that supports new plug-in development to coordinate other node-updating tools
across the cluster, such as custom software installers, BIOS updating tools, and network adapter or host bus
adapter (HBA) updating tools.
Cluster-Aware Updating can coordinate the complete cluster updating operation in two modes:
Self-updating mode For this mode, the CAU clustered role is configured as a workload on the failover
cluster that is to be updated, and an associated update schedule is defined. The cluster updates itself at
scheduled times by using a default or custom Updating Run profile. During the Updating Run, the CAU
Update Coordinator process starts on the node that currently owns the CAU clustered role, and the process
sequentially performs updates on each cluster node. To update the current cluster node, the CAU clustered
role fails over to another cluster node, and a new Update Coordinator process on that node assumes
control of the Updating Run. In self-updating mode, CAU can update the failover cluster by using a fully
automated, end-to-end updating process. An administrator can also trigger updates on-demand in this
mode, or simply use the remote-updating approach if desired. In self-updating mode, an administrator can
get summary information about an Updating Run in progress by connecting to the cluster and running the
Get-CauRun Windows PowerShell cmdlet.
Remote-updating mode For this mode, a remote computer, which is called an Update Coordinator, is
configured with the CAU tools. The Update Coordinator is not a member of the cluster that is updated
during the Updating Run. From the remote computer, the administrator triggers an on-demand Updating
Run by using a default or custom Updating Run profile. Remote-updating mode is useful for monitoring
real-time progress during the Updating Run, and for clusters that are running on Server Core installations.

Hardware and software requirements


CAU can be used on all editions of Windows Server, including Server Core installations. For detailed requirements
information, see Cluster-Aware Updating requirements and best practices.
Installing Cluster-Aware Updating
To use CAU, install the Failover Clustering feature in Windows Server and create a failover cluster. The
components that support CAU functionality are automatically installed on each cluster node.
To install the Failover Clustering feature, you can use the following tools:
Add Roles and Features Wizard in Server Manager
Install-WindowsFeature Windows PowerShell cmdlet
Deployment Image Servicing and Management (DISM) command-line tool
For more information, see Install or Uninstall Roles, Role Services, or Features.
You must also install the Failover Clustering Tools, which are part of the Remote Server Administration Tools and
are installed by default when you install the Failover Clustering feature in Server Manager. The Failover Clustering
tools include the Cluster-Aware Updating user interface and PowerShell cmdlets.
You must install the Failover Clustering Tools as follows to support the different CAU updating modes:
To use CAU in self-updating mode, install the Failover Clustering Tools on each cluster node.
To enable remote-updating mode, install the Failover Clustering Tools on a computer that has network
connectivity to the failover cluster.

NOTE
You can't use the Failover Clustering Tools on Windows Server 2012 to manage Cluster-Aware Updating on a newer
version of Windows Server.
To use CAU only in remote-updating mode, installation of the Failover Clustering Tools on the cluster nodes is not
required. However, certain CAU features will not be available. For more information, see Requirements and Best Practices
for Cluster-Aware Updating.
Unless you are using CAU only in self-updating mode, the computer on which the CAU tools are installed and that
coordinates the updates cannot be a member of the failover cluster.

Enabling self-updating mode


To enable the self-updating mode, you must add the Cluster-Aware Updating clustered role to the failover cluster.
To do so, use one of the following methods:
In Server Manager, select Tools > Cluster-Aware Updating, then in the Cluster-Aware Updating window,
select Configure cluster self-updating options.
In a PowerShell session, run the Add-CauClusterRole cmdlet.
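A minimal sketch with hypothetical values (the cluster name and schedule below are placeholders):

# Add the CAU clustered role and schedule self-updating for the third Tuesday of each month
Add-CauClusterRole -ClusterName "CONTOSO-FC1" -DaysOfWeek Tuesday -WeeksOfMonth 3 -EnableFirewallRules -Force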
To uninstall CAU, uninstall the Failover Clustering feature or Failover Clustering Tools by using Server Manager,
the Uninstall-WindowsFeature cmdlet, or the DISM command-line tools.
Additional requirements and best practices
To ensure that CAU can update the cluster nodes successfully, and for additional guidance for configuring your
failover cluster environment to use CAU, you can run the CAU Best Practices Analyzer.
For detailed requirements and best practices for using CAU, and information about running the CAU Best
Practices Analyzer, see Requirements and Best Practices for Cluster-Aware Updating.
Starting Cluster-Aware Updating
To start Cluster-Aware Updating from Server Manager

1. Start Server Manager.


2. Do one of the following:
On the Tools menu, click Cluster-Aware Updating.
If one or more cluster nodes, or the cluster, is added to Server Manager, on the All Servers page,
right-click the name of a node (or the name of the cluster), and then click Update Cluster.

See also
The following links provide more information about using Cluster-Aware Updating.
Requirements and Best Practices for Cluster-Aware Updating
Cluster-Aware Updating: Frequently Asked Questions
Advanced Options and Updating Run Profiles for CAU
How CAU Plug-ins Work
Cluster-Aware Updating Cmdlets in Windows PowerShell
Cluster-Aware Updating Plug-in Reference
Cluster-Aware Updating requirements and best
practices
4/28/2017 • 20 min to read • Edit Online

Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

This section describes the requirements and dependencies that are needed to use Cluster-Aware Updating (CAU)
to apply updates to a failover cluster running Windows Server.

NOTE
You may need to independently validate that your cluster environment is ready to apply updates if you use a plug-in other
than Microsoft.WindowsUpdatePlugin. If you are using a non-Microsoft plug-in, contact the publisher for more
information. For more information about plug-ins, see How Plug-ins Work.

Install the Failover Clustering feature and the Failover Clustering Tools
CAU requires an installation of the Failover Clustering feature and the Failover Clustering Tools. The Failover
Clustering Tools include the CAU tools (clusterawareupdating.dll), the Failover Clustering cmdlets, and other
components needed for CAU operations. For steps to install the Failover Clustering feature, see Installing the
Failover Clustering Feature and Tools.
The exact installation requirements for the Failover Clustering Tools depend on whether CAU coordinates updates
as a clustered role on the failover cluster (by using self-updating mode) or from a remote computer. The self-
updating mode of CAU additionally requires the installation of the CAU clustered role on the failover cluster by
using the CAU tools.
The following table summarizes the CAU feature installation requirements for the two CAU updating modes.

INSTALLED COMPONENT | SELF-UPDATING MODE | REMOTE-UPDATING MODE
Failover Clustering feature | Required on all cluster nodes | Required on all cluster nodes
Failover Clustering Tools | Required on all cluster nodes | Required on the remote-updating computer; also required on all cluster nodes to run the Save-CauDebugTrace cmdlet
CAU clustered role | Required | Not required

Obtain an administrator account


The following administrator requirements are necessary to use CAU features.
To preview or apply update actions by using the CAU user interface (UI) or the Cluster-Aware Updating
cmdlets, you must use a domain account that has local administrator rights and permissions on all the
cluster nodes. If the account doesn't have sufficient privileges on every node, you are prompted in the
Cluster-Aware Updating window to supply the necessary credentials when you perform these actions. To
use the Cluster-Aware Updating cmdlets, you can supply the necessary credentials as a cmdlet parameter.
If you use CAU in remote-updating mode when you are signed in with an account that doesn't have local
administrator rights and permissions on the cluster nodes, you must run the CAU tools as an administrator
by using a local administrator account on the Update Coordinator computer, or by using an account that
has the Impersonate a client after authentication user right.
To run the CAU Best Practices Analyzer, you must use an account that has administrative privileges on the
cluster nodes and local administrative privileges on the computer that is used to run the Test-CauSetup
cmdlet or to analyze cluster updating readiness using the Cluster-Aware Updating window. For more
information, see Test cluster updating readiness.

Verify the cluster configuration


The following are general requirements for a failover cluster to support updates by using CAU. Additional
configuration requirements for remote management on the nodes are listed in Configure the nodes for remote
management later in this topic.
Sufficient cluster nodes must be online so that the cluster has quorum.
All cluster nodes must be in the same Active Directory domain.
The cluster name must be resolvable on the network using DNS.
If CAU is used in remote-updating mode, the Update Coordinator computer must have network
connectivity to the failover cluster nodes, and it must be in the same Active Directory domain as the
failover cluster.
The Cluster service should be running on all cluster nodes. By default this service is installed on all cluster
nodes and is configured to start automatically.
To use PowerShell pre-update or post-update scripts during a CAU Updating Run, ensure that the scripts
are installed on all cluster nodes or that they are accessible to all nodes, for example, on a highly available
network file share. If scripts are saved to a network file share, configure the folder for Read permission for
the Everyone group.

Configure the nodes for remote management


To use Cluster-Aware Updating, all nodes of the cluster must be configured for remote management. By default,
the only task you must perform to configure the nodes for remote management is to Enable a firewall rule to
allow automatic restarts.
The following table lists the complete remote management requirements, in case your environment diverges from
the defaults.
These requirements are in addition to the installation requirements for the Install the Failover Clustering feature
and the Failover Clustering Tools and the general clustering requirements that are described in previous sections
in this topic.

REQUIREMENT | DEFAULT STATE | SELF-UPDATING MODE | REMOTE-UPDATING MODE
Enable a firewall rule to allow automatic restarts | Disabled | Required on all cluster nodes if a firewall is in use | Required on all cluster nodes if a firewall is in use
Enable Windows Management Instrumentation | Enabled | Required on all cluster nodes | Required on all cluster nodes
Enable Windows PowerShell 3.0 or 4.0 and Windows PowerShell remoting | Enabled | Required on all cluster nodes | Required on all cluster nodes to run the following: the Save-CauDebugTrace cmdlet; PowerShell pre-update and post-update scripts during an Updating Run; tests of cluster updating readiness using the Cluster-Aware Updating window or the Test-CauSetup Windows PowerShell cmdlet
Install .NET Framework 4.6 or 4.5 | Enabled | Required on all cluster nodes | Required on all cluster nodes to run the following: the Save-CauDebugTrace cmdlet; PowerShell pre-update and post-update scripts during an Updating Run; tests of cluster updating readiness using the Cluster-Aware Updating window or the Test-CauSetup Windows PowerShell cmdlet

Enable a firewall rule to allow automatic restarts


To allow automatic restarts after updates are applied (if the installation of an update requires a restart), if
Windows Firewall or a non-Microsoft firewall is in use on the cluster nodes, a firewall rule must be enabled on
each node that allows the following traffic:
Protocol: TCP
Direction: inbound
Program: wininit.exe
Ports: RPC Dynamic Ports
Profile: Domain
If Windows Firewall is used on the cluster nodes, you can do this by enabling the Remote Shutdown Windows
Firewall rule group on each cluster node. When you use the Cluster-Aware Updating window to apply updates
and to configure self-updating options, the Remote Shutdown Windows Firewall rule group is automatically
enabled on each cluster node.

NOTE
The Remote Shutdown Windows Firewall rule group cannot be enabled when it will conflict with Group Policy settings that
are configured for Windows Firewall.

The Remote Shutdown firewall rule group is also enabled by specifying the -EnableFirewallRules parameter
when running the following CAU cmdlets: Add-CauClusterRole, Invoke-CauRun, and Set-CauClusterRole.
The following PowerShell example shows an additional method to enable automatic restarts on a cluster node.
Set-NetFirewallRule -Group "@firewallapi.dll,-36751" -Profile Domain -Enabled true

Enable Windows Management Instrumentation (WMI)


All cluster nodes must be configured for remote management using Windows Management Instrumentation
(WMI). This is enabled by default.
To manually enable remote management, do the following:
1. In the Services console, start the Windows Remote Management service and set the startup type to
Automatic.
2. Run the Set-WSManQuickConfig cmdlet, or run the following command from an elevated command
prompt:

winrm quickconfig -q

To support WMI remoting, if Windows Firewall is in use on the cluster nodes, the inbound firewall rule for
Windows Remote Management (HTTP-In) must be enabled on each node. By default, this rule is enabled.
Enable Windows PowerShell and Windows PowerShell remoting
To enable self-updating mode and certain CAU features in remote-updating mode, PowerShell must be installed
and enabled to run remote commands on all cluster nodes. By default, PowerShell is installed and enabled for
remoting.
To enable PowerShell remoting, use one of the following methods:
Run the Enable-PSRemoting cmdlet.
Configure a domain-level Group Policy setting for Windows Remote Management (WinRM).
For more information about enabling PowerShell remoting, see about_Remote_Requirements.
Install .NET Framework 4.6 or 4.5
To enable self-updating mode and certain CAU features in remote-updating mode, .NET Framework 4.6 or .NET
Framework 4.5 (on Windows Server 2012 R2) must be installed on all cluster nodes. By default, .NET Framework is
installed.
To install .NET Framework 4.6 (or 4.5) using PowerShell if it's not already installed, use the following command:

Install-WindowsFeature -Name NET-Framework-45-Core

Best practices recommendations for using Cluster-Aware Updating


Recommendations for applying Microsoft updates
We recommend that when you begin to use CAU to apply updates with the default
Microsoft.WindowsUpdatePlugin plug-in on a cluster, you stop using other methods to install software
updates from Microsoft on the cluster nodes.
Caution

Combining CAU with methods that update individual nodes automatically (on a fixed time schedule) can cause
unpredictable results, including interruptions in service and unplanned downtime.
We recommend that you follow these guidelines:
For optimal results, we recommend that you disable settings on the cluster nodes for automatic updating,
for example, through the Automatic Updates settings in Control Panel, or in settings that are configured
using Group Policy.
Caution

Automatic installation of updates on the cluster nodes can interfere with installation of updates by CAU and
can cause CAU failures.
If they are needed, the following Automatic Updates settings are compatible with CAU, because the
administrator can control the timing of update installation:
Settings to notify before downloading updates and to notify before installation
Settings to automatically download updates and to notify before installation
However, if Automatic Updates is downloading updates at the same time as a CAU Updating Run, the
Updating Run might take longer to complete.
Do not configure an update system such as Windows Server Update Services (WSUS) to apply updates
automatically (on a fixed time schedule) to cluster nodes.
All cluster nodes should be uniformly configured to use the same update source, for example, a WSUS
server, Windows Update, or Microsoft Update.
If you use a configuration management system to apply software updates to computers on the network,
exclude cluster nodes from all required or automatic updates. Examples of configuration management
systems include Microsoft System Center Configuration Manager 2007 and Microsoft System Center
Virtual Machine Manager 2008.
If internal software distribution servers (for example, WSUS servers) are used to contain and deploy the
updates, ensure that those servers correctly identify the approved updates for the cluster nodes.
Apply Microsoft updates in branch office scenarios
To download Microsoft updates from Microsoft Update or Windows Update to cluster nodes in certain branch
office scenarios, you may need to configure proxy settings for the Local System account on each node. For
example, you might need to do this if your branch office clusters access Microsoft Update or Windows Update to
download updates by using a local proxy server.
If necessary, configure WinHTTP proxy settings on each node to specify a local proxy server and configure local
address exceptions (that is, a bypass list for local addresses). To do this, you can run the following command on
each cluster node from an elevated command prompt:

netsh winhttp set proxy <ProxyServerFQDN>:<port> "<local>"

where <ProxyServerFQDN> is the fully qualified domain name for the proxy server and <port> is the port over
which to communicate (usually port 443).
For example, to configure WinHTTP proxy settings for the Local System account specifying the proxy server
MyProxy.CONTOSO.com, with port 443 and local address exceptions, type the following command:

netsh winhttp set proxy MyProxy.CONTOSO.com:443 "<local>"

Recommendations for using the Microsoft.HotfixPlugin


We recommend that you configure permissions in the hotfix root folder and hotfix configuration file to
restrict Write access to only local administrators on the computers that are used to store these files. This
helps prevent tampering with these files by unauthorized users that could compromise the functionality of
the failover cluster when hotfixes are applied.
To help ensure data integrity for the server message block (SMB) connections that are used to access the
hotfix root folder, you should configure SMB Encryption in the SMB shared folder, if it is possible to
configure it. The Microsoft.HotfixPlugin requires that SMB signing or SMB Encryption is configured to
help ensure data integrity for the SMB connections.
For more information, see Restrict access to the hotfix root folder and hotfix configuration file.
Additional recommendations
To avoid interfering with a CAU Updating Run that may be scheduled at the same time, do not schedule
password changes for cluster name objects and virtual computer objects during scheduled maintenance
windows.
You should set appropriate permissions on pre-update and post-update scripts that are saved on network
shared folders to prevent potential tampering with these files by unauthorized users.
To configure CAU in self-updating mode, a virtual computer object (VCO) for the CAU clustered role must
be created in Active Directory. CAU can create this object automatically at the time that the CAU clustered
role is added, if the failover cluster has sufficient permissions. However, because of the security policies in
certain organizations, it may be necessary to prestage the object in Active Directory. For a procedure to do
this, see Steps for prestaging an account for a clustered role.
To save and reuse Updating Run settings across failover clusters with similar updating needs in the IT
organization, you can create Updating Run Profiles. Additionally, depending on the updating mode, you can
save and manage the Updating Run Profiles on a file share that is accessible to all remote Update
Coordinator computers or failover clusters. For more information, see Advanced Options and Updating
Run Profiles for CAU.

Test cluster updating readiness


You can run the CAU Best Practices Analyzer (BPA) model to test whether a failover cluster and the network
environment meet many of the requirements to have software updates applied by CAU. Many of the tests check
the environment for readiness to apply Microsoft updates by using the default plug-in,
Microsoft.WindowsUpdatePlugin.

NOTE
You might need to independently validate that your cluster environment is ready to apply software updates by using a
plug-in other than Microsoft.WindowsUpdatePlugin. If you are using a non-Microsoft plug-in, such as one provided by
your hardware manufacturer, contact the publisher for more information.

You can run the BPA in the following two ways:


1. Select Analyze cluster updating readiness in the CAU console. After the BPA completes the readiness
tests, a test report appears. If issues are detected on cluster nodes, the specific issues and the nodes where
the issues appear are identified so that you can take corrective action. The tests can take several minutes to
complete.
2. Run the Test-CauSetup cmdlet. You can run the cmdlet on a local or remote computer on which the
Failover Clustering Module for Windows PowerShell (part of the Failover Clustering Tools) is installed. You
can also run the cmdlet on a node of the failover cluster.
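A minimal sketch of running the readiness tests remotely (the cluster name is a placeholder):

# Run the CAU Best Practices Analyzer readiness tests against a cluster
Test-CauSetup -ClusterName "CONTOSO-FC1"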
NOTE
You must use an account that has administrative privileges on the cluster nodes and local administrative privileges on
the computer that is used to run the Test-CauSetup cmdlet or to analyze cluster updating readiness using the Cluster-
Aware Updating window. To run the tests using the Cluster-Aware Updating window, you must be logged on to the
computer with the necessary credentials.
The tests assume that the CAU tools that are used to preview and apply software updates run from the same computer
and with the same user credentials as are used to test cluster updating readiness.

IMPORTANT
We highly recommend that you test the cluster for updating readiness in the following situations:
Before you use CAU for the first time to apply software updates.
After you add a node to the cluster or perform other hardware changes in the cluster that require running the Validate a
Cluster Wizard.
After you change an update source, or change update settings or configurations (other than CAU) that can affect the
application of updates on the nodes.

Tests for cluster updating readiness


The following list describes the cluster updating readiness tests, some common issues, and resolution steps.

Test: The failover cluster must be available
Possible issues and impacts: Cannot resolve the failover cluster name, or one or more cluster nodes cannot be accessed. The BPA cannot run the cluster readiness tests.
Resolution steps:
- Check the spelling of the name of the cluster specified during the BPA run.
- Ensure that all nodes of the cluster are online and running.
- Check that the Validate a Configuration Wizard can successfully run on the failover cluster.

Test: The failover cluster nodes must be enabled for remote management via WMI
Possible issues and impacts: One or more failover cluster nodes are not enabled for remote management by using Windows Management Instrumentation (WMI). CAU cannot update the cluster nodes if the nodes are not configured for remote management.
Resolution steps: Ensure that all failover cluster nodes are enabled for remote management through WMI. For more information, see Configure the nodes for remote management in this topic.

Test: PowerShell remoting should be enabled on each failover cluster node
Possible issues and impacts: PowerShell isn't installed or isn't enabled for remoting on one or more failover cluster nodes. CAU cannot be configured for self-updating mode or use certain features in remote-updating mode.
Resolution steps: Ensure that PowerShell is installed on all cluster nodes and is enabled for remoting. For more information, see Configure the nodes for remote management in this topic.

Test: Failover cluster version
Possible issues and impacts: One or more nodes in the failover cluster don't run Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012. CAU cannot update the failover cluster.
Resolution steps: Verify that the failover cluster that is specified during the BPA run is running Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012. For more information, see Verify the cluster configuration in this topic.

Test: The required versions of .NET Framework and Windows PowerShell must be installed on all failover cluster nodes
Possible issues and impacts: .NET Framework 4.6, 4.5, or Windows PowerShell isn't installed on one or more cluster nodes. Some CAU features might not work.
Resolution steps: Ensure that .NET Framework 4.6 or 4.5 and Windows PowerShell are installed on all cluster nodes, if they are required. For more information, see Configure the nodes for remote management in this topic.

Test: The Cluster service should be running on all cluster nodes
Possible issues and impacts: The Cluster service is not running on one or more nodes. CAU cannot update the failover cluster.
Resolution steps:
- Ensure that the Cluster service (clussvc) is started on all nodes in the cluster, and that it is configured to start automatically.
- Check that the Validate a Configuration Wizard can successfully run on the failover cluster.
For more information, see Verify the cluster configuration in this topic.

Test: Automatic Updates must not be configured to automatically install updates on any failover cluster node
Possible issues and impacts: On at least one failover cluster node, Automatic Updates is configured to automatically install Microsoft updates on that node. Combining CAU with other update methods can result in unplanned downtime or unpredictable results.
Resolution steps: If Windows Update functionality is configured for Automatic Updates on one or more cluster nodes, ensure that Automatic Updates is not configured to automatically install updates. For more information, see Recommendations for applying Microsoft updates.

Test: The failover cluster nodes should use the same update source
Possible issues and impacts: One or more failover cluster nodes are configured to use an update source for Microsoft updates that is different from the rest of the nodes. Updates might not be applied uniformly on the cluster nodes by CAU.
Resolution steps: Ensure that every cluster node is configured to use the same update source, for example, a WSUS server, Windows Update, or Microsoft Update. For more information, see Recommendations for applying Microsoft updates.

Test: A firewall rule that allows remote shutdown should be enabled on each node in the failover cluster
Possible issues and impacts: One or more failover cluster nodes do not have a firewall rule enabled that allows remote shutdown, or a Group Policy setting prevents this rule from being enabled. An Updating Run that applies updates that require restarting the nodes automatically might not complete properly.
Resolution steps: If Windows Firewall or a non-Microsoft firewall is in use on the cluster nodes, configure a firewall rule that allows remote shutdown. For more information, see Enable a firewall rule to allow automatic restarts in this topic.

Test: The proxy server setting on each failover cluster node should be set to a local proxy server
Possible issues and impacts: One or more failover cluster nodes have an incorrect proxy server configuration. If a local proxy server is in use, the proxy server setting on each node must be configured properly for the cluster to access Microsoft Update or Windows Update.
Resolution steps: Ensure that the WinHTTP proxy settings on each cluster node are set to a local proxy server if it is needed. If a proxy server is not in use in your environment, this warning can be ignored. For more information, see Apply updates in branch office scenarios in this topic.

Test: The CAU clustered role should be installed on the failover cluster to enable self-updating mode
Possible issues and impacts: The CAU clustered role is not installed on this failover cluster. This role is required for cluster self-updating.
Resolution steps: To use CAU in self-updating mode, add the CAU clustered role on the failover cluster in one of the following ways:
- Run the Add-CauClusterRole PowerShell cmdlet.
- Select the Configure cluster self-updating options action in the Cluster-Aware Updating window.

Test: The CAU clustered role should be enabled on the failover cluster to enable self-updating mode
Possible issues and impacts: The CAU clustered role is disabled. For example, the CAU clustered role is not installed, or it has been disabled by using the Disable-CauClusterRole PowerShell cmdlet. This role is required for cluster self-updating.
Resolution steps: To use CAU in self-updating mode, enable the CAU clustered role on this failover cluster in one of the following ways:
- Run the Enable-CauClusterRole PowerShell cmdlet.
- Select the Configure cluster self-updating options action in the Cluster-Aware Updating window.

Test: The configured CAU plug-in for self-updating mode must be registered on all failover cluster nodes
Possible issues and impacts: The CAU clustered role on one or more nodes of this failover cluster cannot access the CAU plug-in module that is configured in the self-updating options. A self-updating run might fail.
Resolution steps:
- Ensure that the configured CAU plug-in is installed on all cluster nodes by following the installation procedure for the product that supplies the CAU plug-in.
- Run the Register-CauPlugin PowerShell cmdlet to register the plug-in on the required cluster nodes.

Test: All failover cluster nodes should have the same set of registered CAU plug-ins
Possible issues and impacts: A self-updating run might fail if the plug-in that is configured to be used in an Updating Run is changed to one that is not available on all cluster nodes.
Resolution steps:
- Ensure that the configured CAU plug-in is installed on all cluster nodes by following the installation procedure for the product that supplies the CAU plug-in.
- Run the Register-CauPlugin PowerShell cmdlet to register the plug-in on the required cluster nodes.

Test: The configured Updating Run options must be valid
Possible issues and impacts: The self-updating schedule and Updating Run options that are configured for this failover cluster are incomplete or are not valid. A self-updating run might fail.
Resolution steps: Configure a valid self-updating schedule and set of Updating Run options. For example, you can use the Set-CauClusterRole PowerShell cmdlet to configure the CAU clustered role.

Test: At least two failover cluster nodes must be owners of the CAU clustered role
Possible issues and impacts: An Updating Run launched in self-updating mode will fail because the CAU clustered role does not have a possible owner node to move to.
Resolution steps: Use the Failover Clustering Tools to ensure that all cluster nodes are configured as possible owners of the CAU clustered role. This is the default configuration.

Test: All failover cluster nodes must be able to access Windows PowerShell scripts
Possible issues and impacts: Not all possible owner nodes of the CAU clustered role can access the configured Windows PowerShell pre-update and post-update scripts. A self-updating run will fail.
Resolution steps: Ensure that all possible owner nodes of the CAU clustered role have permissions to access the configured PowerShell pre-update and post-update scripts.

Test: All failover cluster nodes should use identical Windows PowerShell scripts
Possible issues and impacts: Not all possible owner nodes of the CAU clustered role use the same copy of the specified Windows PowerShell pre-update and post-update scripts. A self-updating run might fail or show unexpected behavior.
Resolution steps: Ensure that all possible owner nodes of the CAU clustered role use the same PowerShell pre-update and post-update scripts.

Test: The WarnAfter setting specified for the Updating Run should be less than the StopAfter setting
Possible issues and impacts: The specified CAU Updating Run timeout values make the warning timeout ineffective. An Updating Run might be canceled before a warning event log can be generated.
Resolution steps: In the Updating Run options, configure a WarnAfter option value that is less than the StopAfter option value.
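Several of the resolutions above can be applied with standard PowerShell commands. The following is a hedged sketch, assuming you run it on each cluster node (or in a remote session to it); ClusSvc is the service name of the Cluster service:

# Enable PowerShell remoting on the node.
Enable-PSRemoting -Force

# Verify that the Cluster service is running, and set it to start automatically.
Get-Service -Name ClusSvc
Set-Service -Name ClusSvc -StartupType Automatic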

See also
Cluster-Aware Updating overview
Cluster-Aware Updating advanced options and
updating run profiles
6/8/2017 • 8 min to read

Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

This topic describes Updating Run options that can be configured for a Cluster-Aware Updating (CAU) Updating
Run. These advanced options can be configured when you use either the CAU UI or the CAU Windows PowerShell
cmdlets to apply updates or to configure self-updating options.
Most configuration settings can be saved as an XML file called an Updating Run Profile and reused for later
Updating Runs. The default values for the Updating Run options that are provided by CAU can also be used in
many cluster environments.
For information about additional options that you can specify for each Updating Run and about Updating Run
Profiles, see the following sections later in this topic:
Options that you specify when you request an Updating Run
Use Updating Run Profiles

Options that can be set in an Updating Run Profile
The following table lists options that you can set in a CAU Updating Run Profile.

NOTE
To set the PreUpdateScript or PostUpdateScript option, ensure that Windows PowerShell and .NET Framework 4.6 or 4.5 are
installed and that PowerShell remoting is enabled on each node in the cluster. For more information, see Configure the
nodes for remote management in Requirements and Best Practices for Cluster-Aware Updating.

Option: StopAfter
Default value: Unlimited time
Details: Time in minutes after which the Updating Run will be stopped if it has not completed. Note: If you specify a pre-update or a post-update PowerShell script, the entire process of running scripts and performing updates must be complete within the StopAfter time limit.

Option: WarnAfter
Default value: By default, no warning appears.
Details: Time in minutes after which a warning will appear if the Updating Run (including a pre-update script and a post-update script, if they are configured) has not completed.

Option: MaxRetriesPerNode
Default value: 3
Details: Maximum number of times that the update process (including a pre-update script and a post-update script, if they are configured) will be retried per node. The maximum is 64.

Option: MaxFailedNodes
Default value: For most clusters, an integer that is approximately one-third of the number of cluster nodes.
Details: Maximum number of nodes on which updating can fail, either because the nodes fail or the Cluster service stops running. If one more node fails, the Updating Run is stopped. The valid range of values is 0 to 1 less than the number of cluster nodes.

Option: RequireAllNodesOnline
Default value: None
Details: Specifies that all nodes must be online and reachable before updating begins.

Option: RebootTimeoutMinutes
Default value: 15
Details: Time in minutes that CAU will allow for restarting a node (if a restart is necessary) and starting all auto-start services. If the restart process doesn't complete within this time, the Updating Run on that node is marked as failed.

Option: PreUpdateScript
Default value: None
Details: The path and file name for a PowerShell script to run on each node before updating begins, and before the node is put into maintenance mode. The file name extension must be .ps1, and the total length of the path plus file name must not exceed 260 characters. As a best practice, the script should be located on a disk in cluster storage, or at a highly available network file share, to ensure that it is always accessible to all of the cluster nodes. If the script is located on a network file share, ensure that you configure the file share for Read permission for the Everyone group, and restrict write access to prevent tampering with the files by unauthorized users. If you specify a pre-update script, be sure that settings such as the time limits (for example, StopAfter) are configured to allow the script to run successfully. These limits span the entire process of running scripts and installing updates, not just the process of installing updates.

Option: PostUpdateScript
Default value: None
Details: The path and file name for a PowerShell script to run after updating completes (after the node leaves maintenance mode). The file name extension must be .ps1, and the total length of the path plus file name must not exceed 260 characters. As a best practice, the script should be located on a disk in cluster storage, or at a highly available network file share, to ensure that it is always accessible to all of the cluster nodes. If the script is located on a network file share, ensure that you configure the file share for Read permission for the Everyone group, and restrict write access to prevent tampering with the files by unauthorized users. If you specify a post-update script, be sure that settings such as the time limits (for example, StopAfter) are configured to allow the script to run successfully. These limits span the entire process of running scripts and installing updates, not just the process of installing updates.

Option: ConfigurationName
Default value: This setting only has an effect if you run scripts. If you specify a pre-update script or a post-update script, but you do not specify a ConfigurationName, the default session configuration for PowerShell (Microsoft.PowerShell) is used.
Details: Specifies the PowerShell session configuration that defines the session in which scripts (specified by PreUpdateScript and PostUpdateScript) are run, and can limit the commands that can be run.

Option: CauPluginName
Default value: Microsoft.WindowsUpdatePlugin
Details: Plug-in that you configure Cluster-Aware Updating to use to preview updates or perform an Updating Run. For more information, see How Cluster-Aware Updating plug-ins work.

Option: CauPluginArguments
Default value: None
Details: A set of name=value pairs (arguments) for the updating plug-in to use, for example: Domain=Domain.local. These name=value pairs must be meaningful to the plug-in that you specify in CauPluginName. To specify an argument using the CAU UI, type the name, press the Tab key, and then type the corresponding value. Press the Tab key again to provide the next argument. Each name and value are automatically separated with an equal (=) sign. Multiple pairs are automatically separated with semicolons. For the default Microsoft.WindowsUpdatePlugin plug-in, no arguments are needed. However, you can specify an optional argument, for example to specify a standard Windows Update Agent query string to filter the set of updates that are applied by the plug-in. For a name, use QueryString, and for a value, enclose the full query in quotation marks. For more information, see How Cluster-Aware Updating plug-ins work.
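As an illustration, these profile options can also be passed directly to an Updating Run from PowerShell. A minimal sketch, assuming a placeholder cluster named CONTOSO-FC1 and example values rather than recommendations:

# Request an Updating Run with explicit run options.
Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -MaxRetriesPerNode 3 `
    -MaxFailedNodes 1 `
    -RequireAllNodesOnline `
    -RebootTimeoutMinutes 15 `
    -Force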

Options that you specify when you request an Updating Run


The following table lists options (other than those in an Updating Run Profile) that you can specify when you
request an Updating Run. For information about options that you can set in an Updating Run Profile, see the
preceding table.

Option: ClusterName
Default value: None. Note: This option must be set only when the CAU UI is not run on a failover cluster node, or when you want to reference a failover cluster different from where the CAU UI is run.
Details: NetBIOS name of the cluster on which to perform the Updating Run.

Option: Credential
Default value: Current account credentials
Details: Administrative credentials for the target cluster on which the Updating Run will be performed. You may already have the necessary credentials if you start the CAU UI (or open a PowerShell session, if you're using the CAU PowerShell cmdlets) from an account that has administrator rights and permissions on the cluster.

Option: NodeOrder
Default value: By default, CAU starts with the node that owns the smallest number of clustered roles, then progresses to the node that has the second smallest number, and so on.
Details: Names of the cluster nodes in the order that they should be updated (if possible).
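A minimal sketch of supplying these request-time options from PowerShell; the cluster and node names are placeholders:

# Run an Updating Run remotely with explicit credentials and a preferred node order.
$cred = Get-Credential
Invoke-CauRun -ClusterName CONTOSO-FC1 -Credential $cred `
    -NodeOrder ContosoNode2, ContosoNode1 -Force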

Use Updating Run Profiles


Each Updating Run can be associated with a specific Updating Run Profile. The default Updating Run Profile is
stored in the %windir%\cluster folder. If you're using the CAU UI in remote-updating mode, you can specify an
Updating Run Profile at the time that you apply updates, or you can use the default Updating Run profile. If you're
using CAU in self-updating mode, you can import the settings from a specified Updating Run Profile when you
configure the self-updating options. In both cases, you can override the displayed values for the Updating Run
options according to your needs. If you want, you can save the Updating Run options as an Updating Run Profile
with the same file name or a different file name. The next time that you apply updates or configure self-updating
options, CAU automatically selects the Updating Run Profile that was previously selected.
You can modify an existing Updating Run Profile or create a new one by selecting Create or modify Updating
Run Profile in the CAU UI.
Here are some important notes about using Updating Run Profiles:
An Updating Run Profile doesn't store cluster-specific information such as administrative credentials. If you're
using CAU in self-updating mode, the Updating Run Profile also doesn't store the self-updating schedule
information. This makes it possible to share an Updating Run Profile across all failover clusters in a specified
class.
If you configure self-updating options using an Updating Run Profile and later modify the profile with different
values for the Updating Run options, the self-updating configuration doesn't change automatically. To apply
the new Updating Run settings, you must configure the self-updating options again.
The Run Profile Editor doesn't support file paths that include spaces, such as C:\Program Files. As a workaround, store your pre-update and post-update scripts in a path that doesn't include spaces, or manage Run Profiles exclusively through PowerShell, enclosing the path in quotation marks when running Invoke-CauRun.
Windows PowerShell equivalent commands
You can import the settings from an Updating Run Profile when you run the Invoke-CauRun, Add-
CauClusterRole, or Set-CauClusterRole cmdlet.
The following example performs a scan and a full Updating Run on the cluster named CONTOSO-FC1, using the
Updating Run options that are specified in C:\Windows\Cluster\DefaultParameters.xml. Default values are used for
the remaining cmdlet parameters.

$MyRunProfile = Import-Clixml C:\Windows\Cluster\DefaultParameters.xml
Invoke-CauRun -ClusterName CONTOSO-FC1 @MyRunProfile
By using an Updating Run Profile, you can update a failover cluster in a repeatable fashion with consistent settings for exception management, time bounds, and other operational parameters. Because these settings are typically specific to a class of failover clusters, such as "All Microsoft SQL Server clusters" or "My business-critical clusters", you might want to name each Updating Run Profile according to the class of failover clusters it will be used with. In addition, you might want to manage the Updating Run Profile on a file share that is accessible to all of the failover clusters of a specific class in your IT organization.

See also
Cluster-Aware Updating
Cluster-Aware Updating Cmdlets in Windows PowerShell
Cluster-Aware Updating: Frequently Asked Questions
5/1/2017 • 10 min to read

Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

Cluster-Aware Updating (CAU) is a feature that coordinates software updates on all servers in a failover cluster in a
way that doesn't impact the service availability any more than a planned failover of a cluster node. For some
applications with continuous availability features (such as Hyper-V with live migration, or an SMB 3.x file server
with SMB Transparent Failover), CAU can coordinate automated cluster updating with no impact on service
availability.

Does CAU support updating Storage Spaces Direct clusters?


Yes. CAU supports updating Storage Spaces Direct clusters regardless of the deployment type: hyper-converged or
converged. Specifically, CAU orchestration ensures that suspending each cluster node waits for the underlying
clustered storage space to be healthy.

Does CAU work with Windows Server 2008 R2 or Windows 7?


No. CAU coordinates the cluster updating operation only from computers running Windows Server 2016,
Windows Server 2012 R2, Windows Server 2012, Windows 10, Windows 8.1, or Windows 8. The failover cluster
being updated must run Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012.

Is CAU limited to specific clustered applications?


No. CAU is agnostic to the type of the clustered application. CAU is an external cluster-updating solution that is
layered on top of clustering APIs and PowerShell cmdlets. As such, CAU can coordinate updating for any clustered
application that is configured in a Windows Server failover cluster.

NOTE
Currently, the following clustered workloads are tested and certified for CAU: SMB, Hyper-V, DFS Replication, DFS
Namespaces, iSCSI, and NFS.

Does CAU support updates from Microsoft Update and Windows Update?
Yes. By default, CAU is configured with a plug-in that uses the Windows Update Agent (WUA) utility APIs on the
cluster nodes. The WUA infrastructure can be configured to point to Microsoft Update and Windows Update or to
Windows Server Update Services (WSUS) as its source of updates.

Does CAU support WSUS updates?


Yes. By default, CAU is configured with a plug-in that uses the Windows Update Agent (WUA) utility APIs on the
cluster nodes. The WUA infrastructure can be configured to point to Microsoft Update and Windows Update or to a
local Windows Server Update Services (WSUS) server as its source of updates.

Can CAU apply limited distribution release updates?


Yes. Limited distribution release (LDR) updates, also called hotfixes, are not published through Microsoft Update or
Windows Update, so they cannot be downloaded by the Windows Update Agent (WUA) plug-in that CAU uses by
default.
However, CAU includes a second plug-in that you can select to apply hotfix updates. This hotfix plug-in can also be
customized to apply non-Microsoft driver, firmware, and BIOS updates.

Can I use CAU to apply cumulative updates?


Yes. If the cumulative updates are general distribution release updates or LDR updates, CAU can apply them.

Can I schedule updates?


Yes. CAU supports the following updating modes, both of which allow updates to be scheduled:
Self-updating Enables the cluster to update itself according to a defined profile and a regular schedule, such as
during a monthly maintenance window. You can also start a Self-Updating Run on demand at any time. To enable
self-updating mode, you must add the CAU clustered role to the cluster. The CAU self-updating feature performs
like any other clustered workload, and it can work seamlessly with the planned and unplanned failovers of an
update coordinator computer.
Remote-updating Enables you to start an Updating Run at any time from a computer running Windows or
Windows Server. You can start an Updating Run through the Cluster-Aware Updating window or by using the
Invoke-CauRun PowerShell cmdlet. Remote-updating is the default updating mode for CAU. You can use Task
Scheduler to run the Invoke-CauRun cmdlet on a desired schedule from a remote computer that is not one of the
cluster nodes.
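For example, a hedged sketch of enabling self-updating on a monthly schedule; the cluster name and the schedule values are placeholders:

# Add the CAU clustered role and self-update on the third Tuesday of each month.
Add-CauClusterRole -ClusterName CONTOSO-FC1 -DaysOfWeek Tuesday -WeeksOfMonth 3 `
    -EnableFirewallRules -Force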

Can I schedule updates to apply during a backup?


Yes. CAU doesn't impose any constraints in this regard. However, performing software updates on a server (with
the associated potential restarts) while a server backup is in progress is not an IT best practice. Be aware that CAU
relies only on clustering APIs to determine resource failovers and failbacks; thus, CAU is unaware of the server
backup status.

Can CAU work with System Center Configuration Manager?


CAU is a tool that coordinates software updates on a cluster node, and Configuration Manager also performs
server software updates. It's important to configure these tools so that they don't have overlapping coverage of the
same servers in any datacenter deployment, including using different Windows Server Update Services servers.
This ensures that the objective behind using CAU is not inadvertently defeated, because Configuration Manager-
driven updating doesn't incorporate cluster awareness.

Do I need administrative credentials to run CAU?


Yes. For running the CAU tools, CAU needs administrative credentials on the local server, or it needs the
Impersonate a Client after Authentication user right on the local server or the client computer on which it is
running. However, to coordinate software updates on the cluster nodes, CAU requires cluster administrative
credentials on every node. Although the CAU UI can start without the credentials, it prompts for the cluster
administrative credentials when it connects to a cluster instance to preview or apply updates.

Can I script CAU?


Yes. CAU comes with PowerShell cmdlets that offer a rich set of scripting options. These are the same cmdlets that
the CAU UI calls to perform CAU actions.
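For example, you can list the available cmdlets; a small sketch using the module that ships with the Failover Clustering Tools:

# List the CAU cmdlets available for scripting.
Get-Command -Module ClusterAwareUpdating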
What happens to active clustered roles?
Clustered roles (formerly called applications and services) that are active on a node, fail over to other nodes before
software updating can commence. CAU orchestrates these failovers by using the maintenance mode, which pauses
and drains the node of all active clustered roles. When the software updates are complete, CAU resumes the node
and the clustered roles fail back to the updated node. This ensures that the distribution of clustered roles relative to
nodes stays the same across the CAU Updating Runs of a cluster.

How does CAU select target nodes for clustered roles?


CAU relies on clustering APIs to coordinate the failovers. The clustering API implementation selects the target
nodes by relying on internal metrics and intelligent placement heuristics (such as workload levels) across the target
nodes.

Does CAU load balance the clustered roles?


CAU doesn't load balance the clustered nodes, but it attempts to preserve the distribution of clustered roles. When
CAU finishes updating a cluster node, it attempts to fail back previously hosted clustered roles to that node. CAU
relies on clustering APIs to fail back the resources that each node hosted at the beginning of the pause process. Thus, in the absence of unplanned failovers and preferred owner settings, the distribution of clustered roles should remain unchanged.

How does CAU select the order of nodes to update?


By default, CAU selects the order of nodes to update based on the level of activity. The nodes that are hosting the
fewest clustered roles are updated first. However, an administrator can specify a particular order for updating the
nodes by specifying a parameter for the Updating Run in the CAU UI or by using the PowerShell cmdlets.

What happens if a cluster node is offline?


The administrator who initiates an Updating Run can specify the acceptable threshold for the number of nodes that
can be offline. Therefore, an Updating Run can proceed on a cluster even if all the cluster nodes are not online.

Can I use CAU to update only a single node?


No. CAU is a cluster-scoped updating tool, so it only allows you to select clusters to update. If you want to update a
single node, you can use existing server updating tools independently of CAU.

Can CAU report updates that are initiated from outside CAU?
No. CAU can only report Updating Runs that are initiated from within CAU. However, when a subsequent CAU
Updating Run is launched, updates that were installed through non-CAU methods are appropriately considered to
determine the additional updates that might be applicable to each cluster node.

Can CAU support my unique IT process needs?


Yes. CAU offers the following dimensions of flexibility to suit enterprise customers' unique IT process needs:
Scripts An Updating Run can specify a pre-update PowerShell script and a post-update PowerShell script. The pre-
update script runs on each cluster node before the node is paused. The post-update script runs on each cluster
node after the node updates are installed.
NOTE
.NET Framework 4.6 or 4.5 and PowerShell must be installed on each cluster node on which you want to run the pre-update
and post-update scripts. You must also enable PowerShell remoting on the cluster nodes. For detailed system requirements,
see Requirements and Best Practices for Cluster-Aware Updating.

Advanced Updating Run options The administrator can additionally specify from a large set of advanced
Updating Run options such as the maximum number of times that the update process is retried on each node.
These options can be specified using either the CAU UI or the CAU PowerShell cmdlets. These custom settings can
be saved in an Updating Run Profile and reused for later Updating Runs.
Public plug-in architecture CAU includes features to Register, Unregister, and Select plug-ins. CAU ships with
two default plug-ins: one coordinates the Windows Update Agent (WUA) APIs on each cluster node; the second
applies hotfixes that are manually copied to a file share that is accessible to the cluster nodes. If an enterprise has
unique needs that cannot be met with these two plug-ins, the enterprise can build a new CAU plug-in according to
the public API specification. For more information, see Cluster-Aware Updating Plug-in Reference.
For information about configuring and customizing CAU plug-ins to support different updating scenarios, see How
Plug-ins Work.

How can I export the CAU preview and update results?


CAU offers export options through the command-line interface and through the UI.
Command-line interface options:
Preview results by using the PowerShell cmdlet Invoke-CauScan | ConvertTo-Xml. Output: XML
Report results by using the PowerShell cmdlet Invoke-CauRun | ConvertTo-Xml. Output: XML
Report results by using the PowerShell cmdlet Get-CauReport | Export-CauReport. Output: HTML, CSV
UI options:
Copy the report results from the Preview updates screen. Output: CSV
Copy the report results from the Generate report screen. Output: CSV
Export the report results from the Generate report screen. Output: HTML
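The following is a hedged sketch of the command-line exports listed above; the cluster name and output paths are placeholders, and you should confirm the exact Export-CauReport parameters with Get-Help in your environment:

# Preview applicable updates as XML.
Invoke-CauScan -ClusterName CONTOSO-FC1 | ConvertTo-Xml -As String | Out-File C:\Reports\CauPreview.xml

# Export the most recent detailed Updating Run report as HTML.
Get-CauReport -ClusterName CONTOSO-FC1 -Last -Detailed | Export-CauReport -Format Html -Path C:\Reports\CauReport.html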

How do I install CAU?


A CAU installation is seamlessly integrated into the Failover Clustering feature. CAU is installed as follows:
When Failover Clustering is installed on a cluster node, the CAU Windows Management Instrumentation
(WMI) provider is automatically installed.
When the Failover Clustering Tools feature is installed on a server or client computer, the Cluster-Aware
Updating UI and PowerShell cmdlets are automatically installed.

Does CAU need components running on the cluster nodes that are
being updated?
CAU doesn't need a service running on the cluster nodes. However, CAU needs a software component (the WMI
provider) installed on the cluster nodes. This component is installed with the Failover Clustering feature.
To enable self-updating mode, the CAU clustered role must also be added to the cluster.
What is the difference between using CAU and VMM?
System Center Virtual Machine Manager (VMM) is focused on updating only Hyper-V clusters, whereas CAU
can update any type of supported failover cluster, including Hyper-V clusters.
VMM requires additional licensing, whereas CAU is licensed for all Windows Server. The CAU features, tools,
and UI are installed with Failover Clustering components.
If you already own a System Center license, you can continue to use VMM to update Hyper-V clusters
because it offers an integrated management and software updating experience.
CAU is supported only on clusters that are running Windows Server 2016, Windows Server 2012 R2, and
Windows Server 2012. VMM also supports Hyper-V clusters on computers running Windows Server 2008
R2 and Windows Server 2008.

Can I use remote-updating on a cluster that is configured for self-updating?
Yes. A failover cluster in a self-updating configuration can be updated through remote-updating on-demand, just
as you can force a Windows Update scan at any time on your computer, even if Windows Update is configured to
install updates automatically. However, you need to make sure that an Updating Run is not already in progress.

Can I reuse my cluster update settings across clusters?


Yes. CAU supports a number of Updating Run options that determine how the Updating Run behaves when it
updates the cluster. These options can be saved as an Updating Run Profile, and they can be reused across any
cluster. We recommend that you save and reuse your settings across failover clusters that have similar updating
needs. For example, you might create a "Business-Critical SQL Server Cluster Updating Run Profile" for all
Microsoft SQL Server clusters that support business-critical services.

Where is the CAU plug-in specification?


Cluster-Aware Updating Plug-in Reference
Cluster Aware Updating plug-in sample

See also
Cluster-Aware Updating Overview
How Cluster-Aware Updating plug-ins work
5/1/2017 • 21 min to read

Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

Cluster-Aware Updating (CAU) uses plug-ins to coordinate the installation of updates across nodes in a failover
cluster. This topic provides information about using the built-in CAU plug-ins or other plug-ins that you install for
CAU.

Install a plug-in
A plug-in other than the default plug-ins that are installed with CAU (Microsoft.WindowsUpdatePlugin and
Microsoft.HotfixPlugin) must be installed separately. If CAU is used in self-updating mode, the plug-in must be
installed on all cluster nodes. If CAU is used in remote-updating mode, the plug-in must be installed on the remote
Update Coordinator computer. A plug-in that you install may have additional installation requirements on each
node.
To install a plug-in, follow the instructions from the plug-in publisher. To manually register a plug-in with CAU,
run the Register-CauPlugin cmdlet on each computer where the plug-in is installed.

Specify a plug-in and plug-in arguments


Specify a CAU plug-in
In the CAU UI, you select a plug-in from a drop-down list of available plug-ins when you use CAU to perform the
following actions:
Apply updates to the cluster
Preview updates for the cluster
Configure cluster self-updating options
By default, CAU selects the plug-in Microsoft.WindowsUpdatePlugin. However, you can specify any plug-in
that is installed and registered with CAU.

TIP
In the CAU UI, you can only specify a single plug-in for CAU to use to preview or to apply updates during an Updating Run.
By using the CAU PowerShell cmdlets, you can specify one or more plug-ins. If you need to install multiple types of updates
on the cluster, it is usually more efficient to specify multiple plug-ins in one Updating Run, rather than using a separate
Updating Run for each plug-in. For example, fewer node restarts will typically occur.

By using the CAU PowerShell cmdlets that are listed in the following table, you can specify one or more plug-ins
for an Updating Run or scan by passing the –CauPluginName parameter. You can specify multiple plug-in
names by separating them with commas. If you specify multiple plug-ins, you can also control how the plug-ins
influence each other during an Updating Run by specifying the -RunPluginsSerially, -StopOnPluginFailure,
and –SeparateReboots parameters. For more information about using multiple plug-ins, use the links provided
to the cmdlet documentation in the following table.
Add-CauClusterRole: Adds the CAU clustered role that provides the self-updating functionality to the specified cluster.

Invoke-CauRun: Performs a scan of cluster nodes for applicable updates and installs those updates through an Updating Run on the specified cluster.

Invoke-CauScan: Performs a scan of cluster nodes for applicable updates and returns a list of the initial set of updates that would be applied to each node in the specified cluster.

Set-CauClusterRole: Sets configuration properties for the CAU clustered role on the specified cluster.

If you do not specify a CAU plug-in parameter by using these cmdlets, the default is the plug-in
Microsoft.WindowsUpdatePlugin.
Specify CAU plug-in arguments
When you configure the Updating Run options, you can specify one or more name=value pairs (arguments) for
the selected plug-in to use. For example, in the CAU UI, you can specify multiple arguments as follows:
Name1=Value1;Name2=Value2;Name3=Value3
These name=value pairs must be meaningful to the plug-in that you specify. For some plug-ins the arguments are
optional.
The syntax of the CAU plug-in arguments follows these general rules:
Multiple name=value pairs are separated by semicolons.
A value that contains spaces is surrounded by quotation marks, for example: Name1="Value with
Spaces".
The exact syntax of value depends on the plug-in.
To specify plug-in arguments by using the CAU PowerShell cmdlets that support the –CauPluginParameters
parameter, pass a parameter of the form:
-CauPluginArguments @{Name1=Value1;Name2=Value2;Name3=Value3}
You can also use a predefined PowerShell hash table. To specify plug-in arguments for more than one plug-in,
pass multiple hash tables of arguments, separated with commas. Pass the plug-in arguments in the plug-in order
that is specified in CauPluginName.
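As a hedged sketch (the cluster name, share path, and argument values are placeholders), two plug-ins and their argument hash tables might be passed together like this:

# Scan with two plug-ins; each hash table is matched to the plug-in in the same position.
Invoke-CauScan -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.WindowsUpdatePlugin, Microsoft.HotfixPlugin `
    -CauPluginArguments @{ IncludeRecommendedUpdates = 'True' }, @{ HotfixRootFolderPath = '\\MyFileServer\Hotfixes\Root' }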
Specify optional plug-in arguments
The plug-ins that CAU installs (Microsoft.WindowsUpdatePlugin and Microsoft.HotfixPlugin) provide
additional options that you can select. In the CAU UI, these appear on an Additional Options page after you
configure Updating Run options for the plug-in. If you are using the CAU PowerShell cmdlets, these options are
configured as optional plug-in arguments. For more information, see Use the Microsoft.WindowsUpdatePlugin
and Use the Microsoft.HotfixPlugin later in this topic.

Manage plug-ins using Windows PowerShell cmdlets


Get-CauPlugin: Retrieves information about one or more software updating plug-ins that are registered on the local computer.

Register-CauPlugin: Registers a CAU software updating plug-in on the local computer.

Unregister-CauPlugin: Removes a software updating plug-in from the list of plug-ins that can be used by CAU. Note: The plug-ins that are installed with CAU (Microsoft.WindowsUpdatePlugin and Microsoft.HotfixPlugin) cannot be unregistered.
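A small sketch of these cmdlets in use; the plug-in path is hypothetical:

# List the plug-ins registered on the local computer.
Get-CauPlugin

# Register a plug-in from a hypothetical installation path.
Register-CauPlugin -Path 'C:\CauPlugins\Contoso.FirmwarePlugin.dll'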

Use the Microsoft.WindowsUpdatePlugin


The default plug-in for CAU, Microsoft.WindowsUpdatePlugin, performs the following actions:
Communicates with the Windows Update Agent on each failover cluster node to apply updates that are needed
for the Microsoft products that are running on each node.
Installs cluster updates directly from Windows Update or Microsoft Update, or from an on-premises Windows
Server Update Services (WSUS) server.
Installs only selected, general distribution release (GDR) updates. By default, the plug-in applies only important
software updates. No configuration is required. The default configuration downloads and installs important
GDR updates on each node.

NOTE
To apply updates other than the important software updates that are selected by default (for example, driver updates), you
can configure an optional plug-in parameter. For more information, see Configure the Windows Update Agent query string.

Requirements
The failover cluster and remote Update Coordinator computer (if used) must meet the requirements for CAU
and the configuration that is required for remote management listed in Requirements and Best Practices for
CAU.
Review Recommendations for applying Microsoft updates, and then make any necessary changes to your
Microsoft Update configuration for the failover cluster nodes.
For best results, we recommend that you run the CAU Best Practices Analyzer (BPA) to ensure that the cluster
and update environment are configured properly to apply updates by using CAU. For more information, see
Test CAU updating readiness.

NOTE
Updates that require the acceptance of Microsoft license terms or require user interaction are excluded, and they must be
installed manually.

Additional options
Optionally, you can specify the following plug-in arguments to augment or restrict the set of updates that are
applied by the plug-in:
To configure the plug-in to apply recommended updates in addition to important updates on each node, in the
CAU UI, on the Additional Options page, select the Give me recommended updates the same way that I
receive important updates check box.
Alternatively, configure the 'IncludeRecommendedUpdates'='True' plug-in argument.
To configure the plug-in to filter the types of GDR updates that are applied to each cluster node, specify a
Windows Update Agent query string using a QueryString plug-in argument. For more information, see
Configure the Windows Update Agent query string.
Configure the Windows Update Agent query string
You can configure a plug-in argument for the default plug-in, Microsoft.WindowsUpdatePlugin, that consists
of a Windows Update Agent (WUA) query string. This instruction uses the WUA API to identify one or more
groups of Microsoft updates to apply to each node, based on specific selection criteria. You can combine multiple
criteria by using a logical AND or a logical OR. The WUA query string is specified in a plug-in argument as follows:
QueryString="Criterion1=Value1 and\/or Criterion2=Value2 and\/or…"
For example, Microsoft.WindowsUpdatePlugin automatically selects important updates by using a default
QueryString argument that is constructed using the IsInstalled, Type, IsHidden, and IsAssigned criteria:
QueryString="IsInstalled=0 and Type='Software' and IsHidden=0 and IsAssigned=1"
If you specify a QueryString argument, it is used in place of the default QueryString that is configured for the
plug-in.
Example 1
To configure a QueryString argument that installs a specific update as identified by ID f6ce46c1-971c-43f9-
a2aa-783df125f003:
QueryString="UpdateID='f6ce46c1-971c-43f9-a2aa-783df125f003' and IsInstalled=0"

NOTE
The preceding example is valid for applying updates by using the Cluster-Aware Updating Wizard. If you want to install a
specific update by configuring self-updating options with the CAU UI or by using the Add-CauClusterRole or Set-CauClusterRole PowerShell cmdlet, you must format the UpdateID value with two single-quote characters:
QueryString="UpdateID=''f6ce46c1-971c-43f9-a2aa-783df125f003'' and IsInstalled=0"

Example 2
To configure a QueryString argument that installs only drivers:
QueryString="IsInstalled=0 and Type='Driver' and IsHidden=0"
For more information about query strings for the default plug-in, Microsoft.WindowsUpdatePlugin, the search
criteria (such as IsInstalled), and the syntax that you can include in the query strings, see the section about search
criteria in the Windows Update Agent (WUA) API Reference.
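As a hedged sketch, the query string is passed through the plug-in arguments; the cluster name is a placeholder, and the query installs only drivers, as in Example 2:

# Apply only driver updates by overriding the default query string.
Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.WindowsUpdatePlugin `
    -CauPluginArguments @{ QueryString = "IsInstalled=0 and Type='Driver' and IsHidden=0" } `
    -Force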

Use the Microsoft.HotfixPlugin


The plug-in Microsoft.HotfixPlugin can be used to apply Microsoft limited distribution release (LDR) updates
(also called hotfixes, and formerly called QFEs) that you download independently to address specific Microsoft
software issues. The plug-in installs updates from a root folder on an SMB file share and can also be customized to
apply non-Microsoft driver, firmware, and BIOS updates.

NOTE
Hotfixes are sometimes available for download from Microsoft in Knowledge Base articles, but they are also provided to
customers on an as-needed basis.

Requirements
The failover cluster and remote Update Coordinator computer (if used) must meet the requirements for CAU
and the configuration that is required for remote management listed in Requirements and Best Practices for
CAU.
Review Recommendations for using the Microsoft.HotfixPlugin.
For best results, we recommend that you run the CAU Best Practices Analyzer (BPA) model to ensure that the
cluster and update environment are configured properly to apply updates by using CAU. For more information,
see Test CAU updating readiness.
Obtain the updates from the publisher, and copy them or extract them to a Server Message Block (SMB) file
share (hotfix root folder) that supports at least SMB 2.0 and that is accessible by all of the cluster nodes and
the remote Update Coordinator computer (if CAU is used in remote-updating mode). For more information,
see Configure a hotfix root folder structure later in this topic.

NOTE
By default, this plug-in only installs hotfixes with the following file name extensions: .msu, .msi, and .msp.

Copy the DefaultHotfixConfig.xml file (which is provided in the


%systemroot%\System32\WindowsPowerShell\v1.0\Modules\ClusterAwareUpdating folder on a
computer where the CAU tools are installed) to the hotfix root folder that you created and under which you
extracted the hotfixes. For example, copy the configuration file to \\MyFileServer\Hotfixes\Root\.

NOTE
To install most hotfixes provided by Microsoft and other updates, the default hotfix configuration file can be used
without modification. If your scenario requires it, you can customize the configuration file as an advanced task. The
configuration file can include custom rules, for example, to handle hotfix files that have specific extensions, or to
define behaviors for specific exit conditions. For more information, see Customize the hotfix configuration file later in
this topic.

Configuration
Configure the following settings. For more information, see the links to sections later in this topic.
The path to the shared hotfix root folder that contains the updates to apply and that contains the hotfix
configuration file. You can type this path in the CAU UI or configure the HotfixRootFolderPath=<Path>
PowerShell plug-in argument.

NOTE
You can specify the hotfix root folder as a local folder path or as a UNC path of the form
\\ServerName\Share\RootFolderName. A domain-based or standalone DFS Namespace path can be used. However,
the plug-in features that check access permissions in the hotfix configuration file are incompatible with a DFS
Namespace path, so if you configure one, you must disable the check for access permissions by using the CAU UI or
by configuring the DisableAclChecks='True' plug-in argument.

Settings on the server that hosts the hotfix root folder to check for appropriate permissions to access the folder
and ensure the integrity of the data accessed from the SMB shared folder (SMB signing or SMB Encryption).
For more information, see Restrict access to the hotfix root folder.
Additional options
Optionally, configure the plug-in so that SMB Encryption is enforced when accessing data from the hotfix file share. In the CAU UI, on the Additional Options page, select the Require SMB Encryption in accessing the hotfix root folder option, or configure the RequireSMBEncryption='True' PowerShell plug-in argument.

IMPORTANT
You must perform additional configuration steps on the SMB server to enable SMB data integrity with SMB signing or SMB Encryption. For more information, see Step 4 in Restrict access to the hotfix root folder. If you select the option to enforce the use of SMB Encryption, and the hotfix root folder is not configured for access by using SMB Encryption, the Updating Run will fail.
Optionally, disable the default checks for sufficient permissions for the hotfix root folder and the hotfix
configuration file. In the CAU UI, select Disable check for administrator access to the hotfix root folder
and configuration file, or configure the DisableAclChecks='True' plug-in argument.
Optionally, configure the HotfixInstallerTimeoutMinutes= argument to specify how long the hotfix plug-in
waits for the hotfix installer process to return. (The default is 30 minutes.) For example, to specify a timeout
period of two hours, set HotfixInstallerTimeoutMinutes=120.
Optionally, configure the HotfixConfigFileName=<name> plug-in argument to specify a name for the hotfix configuration file that is located in the hotfix root folder. If not specified, the default name DefaultHotfixConfig.xml is used.
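Putting these options together, a hedged sketch of an Updating Run that uses the hotfix plug-in; the share path and cluster name are placeholders:

# Apply hotfixes from an SMB share, enforcing SMB Encryption.
Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.HotfixPlugin `
    -CauPluginArguments @{ HotfixRootFolderPath = '\\MyFileServer\Hotfixes\Root'; RequireSMBEncryption = 'True' } `
    -Force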
Configure a hotfix root folder structure
For the hotfix plug-in to work, hotfixes must be stored in a well-defined structure in an SMB file share (hotfix root
folder), and you must configure the hotfix plug-in with the path to the hotfix root folder by using the CAU UI or the
CAU PowerShell cmdlets. This path is passed to the plug-in as the HotfixRootFolderPath argument. You can
choose one of several structures for the hotfix root folder, according to your updating needs, as shown in the
following examples. Files or folders that do not adhere to the structure are ignored.
Example 1 - Folder structure used to apply hotfixes to all cluster nodes
To specify that hotfixes apply to all cluster nodes, copy them to a folder named CAUHotfix_All under the hotfix
root folder. In this example, the HotfixRootFolderPath plug-in argument is set to \\MyFileServer\Hotfixes\Root\.
The CAUHotfix_All folder contains three updates with the extensions .msu, .msi, and .msp that will be applied to
all cluster nodes. The update file names are only for illustration purposes.

NOTE
In this and the following examples, the hotfix configuration file with its default name DefaultHotfixConfig.xml is shown in its
required location in the hotfix root folder.

\\MyFileServer\Hotfixes\Root\
DefaultHotfixConfig.xml
CAUHotfix_All\
Update1.msu
Update2.msi
Update3.msp
...

Example 2 - Folder structure used to apply certain updates only to a specific node
To specify hotfixes that apply only to a specific node, use a subfolder under the hotfix root folder with the name of
the node. Use the NetBIOS name of the cluster node, for example, ContosoNode1. Then, move the updates that
apply only to this node to this subfolder. In the following example, the HotfixRootFolderPath plug-in argument
is set to \\MyFileServer\Hotfixes\Root\. Updates in the CAUHotfix_All folder will be applied to all cluster nodes,
and Node1_Specific_Update.msu will be applied only to ContosoNode1.
\\MyFileServer\Hotfixes\Root\
DefaultHotfixConfig.xml
CAUHotfix_All\
Update1.msu
Update2.msi
Update3.msp
...
ContosoNode1\
Node1_Specific_Update.msu
...

Example 3 - Folder structure used to apply updates other than .msu, .msi, and .msp files
By default, Microsoft.HotfixPlugin only applies updates with the .msu, .msi, or .msp extension. However, certain
updates might have different extensions and require different installation commands. For example, you might
need to apply a firmware update with the extension .exe to a node in a cluster. You can configure the hotfix root
folder with a subfolder that indicates a specific, non-default update type should be installed. You must also
configure a corresponding folder installation rule that specifies the installation command in the <FolderRules>
element in the hotfix configuration XML file.
In the following example, the HotfixRootFolderPath plug-in argument is set to \\MyFileServer\Hotfixes\Root\.
Several updates will be applied to all cluster nodes, and a firmware update SpecialHotfix1.exe will be applied to
ContosoNode1 by using FolderRule1. For information about configuring FolderRule1 in the hotfix configuration
file, see Customize the hotfix configuration file later in this topic.

\\MyFileServer\Hotfixes\Root\
DefaultHotfixConfig.xml
CAUHotfix_All\
Update1.msu
Update2.msi
Update3.msp
...

ContosoNode1\
FolderRule1\
SpecialHotfix1.exe
...

Customize the hotfix configuration file


The hotfix configuration file controls how Microsoft.HotfixPlugin installs specific hotfix file types in a failover
cluster. The XML schema for the configuration file is defined in HotfixConfigSchema.xsd, which is located in the
following folder on a computer where the CAU tools are installed:
%systemroot%\System32\WindowsPowerShell\v1.0\Modules\ClusterAwareUpdating folder
To customize the hotfix configuration file, copy the sample configuration file DefaultHotfixConfig.xml from this
location to the hotfix root folder and make appropriate modifications for your scenario.

IMPORTANT
To apply most hotfixes provided by Microsoft and other updates, the default hotfix configuration file can be used without
modification. Customization of the hotfix configuration file is a task only in advanced usage scenarios.

By default, the hotfix configuration XML file defines installation rules and exit conditions for the following two
categories of hotfixes:
Hotfix files with extensions that the plug-in can install by default (.msu, .msi, and .msp files).
These are defined as <ExtensionRules> elements in the <DefaultRules> element. There is one <Extension>
element for each of the default supported file types. The general XML structure is as follows:

<DefaultRules>
<ExtensionRules>
<Extension name="MSI">
<!-- Template and ExitConditions elements for installation of .msi files follow -->
...
</Extension>
<Extension name="MSU">
<!-- Template and ExitConditions elements for installation of .msu files follow -->
...
</Extension>
<Extension name="MSP">
<!-- Template and ExitConditions elements for installation of .msp files follow -->
...
</Extension>
...
</ExtensionRules>
</DefaultRules>

If you need to apply certain update types to all cluster nodes in your environment, you can define additional
<Extension> elements.

Hotfix or other update files that are not .msi, .msu, or .msp files, for example, non-Microsoft drivers,
firmware, and BIOS updates.
Each non-default file type is configured as a <Folder> element in the <FolderRules> element. The name
attribute of the <Folder> element must be identical to the name of a folder in the hotfix root folder that will
contain updates of the corresponding type. The folder can be in the CAUHotfix_All folder or in a node-
specific folder. For example, if FolderRule1 is configured in the hotfix root folder, configure the following
element in the XML file to define an installation template and exit conditions for the updates in that folder:

<FolderRules>
<Folder name="FolderRule1">
<!-- Template and ExitConditions elements for installation of updates in FolderRule1 follow -->
...
</Folder>
...
</FolderRules>

The following tables describe the <Template> attributes and the possible <ExitConditions> subelements.

<Template> attribute: path
Description: The full path to the installation program for the file type that is defined in the <Extension name> attribute. To specify the path to an update file in the hotfix root folder structure, use $update$.

<Template> attribute: parameters
Description: A string of required and optional parameters for the program that is specified in path. To specify a parameter that is the path to an update file in the hotfix root folder structure, use $update$.

<ExitConditions> subelement: <Success>
Description: Defines one or more exit codes that indicate the specified update succeeded. This is a required subelement.

<ExitConditions> subelement: <Success_RebootRequired>
Description: Optionally defines one or more exit codes that indicate the specified update succeeded and the node must restart. Note: Optionally, the <Folder> element can contain the alwaysReboot attribute. If this attribute is set, it indicates that if a hotfix installed by this rule returns one of the exit codes that is defined in <Success>, it is interpreted as a <Success_RebootRequired> exit condition.

<ExitConditions> subelement: <Fail_RebootRequired>
Description: Optionally defines one or more exit codes that indicate the specified update failed and the node must restart.

<ExitConditions> subelement: <AlreadyInstalled>
Description: Optionally defines one or more exit codes that indicate the specified update was not applied because it is already installed.

<ExitConditions> subelement: <NotApplicable>
Description: Optionally defines one or more exit codes that indicate the specified update was not applied because it does not apply to the cluster node.

IMPORTANT
Any exit code that is not explicitly defined in <ExitConditions> is interpreted as the update failed, and the node does not
restart.
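To make the structure concrete, here is a hedged sketch of a <FolderRules> entry for the FolderRule1 example shown earlier; the installer parameters and exit codes are illustrative assumptions, and the exact representation of exit codes must follow HotfixConfigSchema.xsd:

<FolderRules>
  <Folder name="FolderRule1">
    <!-- $update$ resolves to the update file; /quiet is a hypothetical installer switch. -->
    <Template path="$update$" parameters="/quiet" />
    <ExitConditions>
      <!-- Illustrative exit codes: 0 = success, 3010 = success but restart required. -->
      <Success>0</Success>
      <Success_RebootRequired>3010</Success_RebootRequired>
    </ExitConditions>
  </Folder>
</FolderRules>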

Restrict access to the hotfix root folder


You must perform several steps to configure the SMB file server and file share to help secure the hotfix root folder
files and hotfix configuration file for access only in the context of Microsoft.HotfixPlugin. These steps enable
several features that help prevent possible tampering with the hotfix files in a way that might compromise the
failover cluster.
The general steps are as follows:
1. Identify the user account that is used for Updating Runs by using the plug-in
2. Configure this user account in the necessary groups on an SMB file server
3. Configure permissions to access the hotfix root folder
4. Configure settings for SMB data integrity
5. Enable a Windows Firewall rule on the SMB server
Step 1. Identify the user account that is used for Updating Runs by using the hotfix plug-in
The account that is used in CAU to check security settings while performing an Updating Run using
Microsoft.HotfixPlugin depends on whether CAU is used in remote-updating mode or self-updating mode, as
follows:
Remote-updating mode The account that has administrative privileges on the cluster to preview and
apply updates.
Self-updating mode The name of the virtual computer object that is configured in Active Directory for the
CAU clustered role. This is either the name of a prestaged virtual computer object in Active Directory for the
CAU clustered role or the name that is generated by CAU for the clustered role. To obtain the name if it is
generated by CAU, run the Get-CauClusterRole CAU PowerShell cmdlet. In the output,
ResourceGroupName is the name of the generated virtual computer object account.
Step 2. Configure this user account in the necessary groups on an SMB file server

IMPORTANT
You must add the account that is used for Updating Runs as a local administrator account on the SMB server. If this is not
permitted because of the security policies in your organization, configure this account with the necessary permissions on the
SMB server by using the following procedure.

To configure a user account on the SMB server

1. Add the account that is used for Updating Runs to the Distributed COM Users group and to one of the
following groups: Power Users, Server Operators, or Print Operators. (A PowerShell sketch of this step
follows the procedure.)
2. To enable the necessary WMI permissions for the account, start the WMI Management Console on the SMB
server. Start PowerShell and then type the following command:

wmimgmt.msc

3. In the console tree, right-click WMI Control (Local), and then click Properties.
4. Click Security, and then expand Root.
5. Click CIMV2, and then click Security.
6. Add the account that is used for Updating Runs to the Group or user names list.
7. Grant the Execute Methods and Remote Enable permissions to the account that is used for Updating
Runs.
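
As an illustrative sketch, the group memberships in step 1 can be added from PowerShell on the SMB server. The
account name CONTOSO\CauRunAs is hypothetical, and the WMI permissions in steps 2 through 7 still need to be
granted in the console:

# Hypothetical account name; substitute the account identified in Step 1.
$account = "CONTOSO\CauRunAs"

# Add the account to the Distributed COM Users group and to one of the
# qualifying groups (Power Users is used here).
Add-LocalGroupMember -Group "Distributed COM Users" -Member $account
Add-LocalGroupMember -Group "Power Users" -Member $account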
Step 3. Configure permissions to access the hotfix root folder
By default, when you attempt to apply updates, the hotfix plug-in checks the configuration of the NTFS file system
permissions for access to the hotfix root folder. If the folder access permissions are not configured properly, an
Updating Run using the hotfix plug-in might fail.
If you use the default configuration of the hotfix plug-in, ensure that the folder access permissions meet the
following requirements.
The Users group has Read permission.
If the plug-in will apply updates with the .exe extension, the Users group has Execute permission.
Only certain security principals are permitted (but are not required) to have Write or Modify permission.
The allowed principals are the local Administrators group, SYSTEM, CREATOR OWNER, and TrustedInstaller.
Other accounts or groups are not permitted to have Write or Modify permission on the hotfix root folder.
Optionally, you can disable the preceding checks that the plug-in performs by default. You can do this in one of
two ways:
If you are using the CAU PowerShell cmdlets, configure the DisableAclChecks='True' argument in the
CauPluginArguments parameter for the hotfix plug-in.
If you are using the CAU UI, select the Disable check for administrator access to the hotfix root folder
and configuration file option on the Additional Update Options page of the wizard that is used to
configure Updating Run options.
However, as a best practice in many environments, we recommend that you use the default configuration to
enforce these checks.
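
If you do choose to disable the checks by using the cmdlets, the following is a minimal sketch for remote-updating
mode; the cluster name and hotfix root folder path are hypothetical:

# DisableAclChecks and HotfixRootFolderPath are passed to the hotfix
# plug-in through the CauPluginArguments parameter.
Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.HotfixPlugin `
    -CauPluginArguments @{ HotfixRootFolderPath = '\\CauHotfixSrv\shareName'; DisableAclChecks = 'True' }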
Step 4. Configure settings for SMB data integrity
To check for data integrity in the connections between the cluster nodes and the SMB file share, the hotfix plug-in
requires that you enable settings on the SMB file share for SMB signing or SMB Encryption. SMB Encryption,
which provides enhanced security and better performance in many environments, is supported starting in
Windows Server 2012. You can enable either or both of these settings, as follows:
To enable SMB signing, see the procedure in the article 887429 in the Microsoft Knowledge Base.
To enable SMB Encryption for the SMB shared folder, run the following PowerShell cmdlet on the SMB
server:

Set-SmbShare -Name <ShareName> -EncryptData $true

Where <ShareName> is the name of the SMB shared folder.
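
To confirm the setting afterward, you can run, for example:

# Confirm that encryption is now required on the share.
Get-SmbShare -Name <ShareName> | Select-Object Name, EncryptData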


Optionally, to enforce the use of SMB Encryption in the connections to the SMB server, select the Require SMB
Encryption in accessing the hotfix root folder option in the CAU UI, or configure the
RequireSMBEncryption='True' plug-in argument by using the CAU PowerShell cmdlets.
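
A minimal sketch of the cmdlet form, with the same hypothetical cluster name and share path as earlier:

Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.HotfixPlugin `
    -CauPluginArguments @{ HotfixRootFolderPath = '\\CauHotfixSrv\shareName'; RequireSMBEncryption = 'True' }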

IMPORTANT
If you select the option to enforce the use of SMB Encryption, and the hotfix root folder is not configured for connections
that use SMB Encryption, the Updating Run will fail.

Step 5. Enable a Windows Firewall rule on the SMB server


You must enable the File Server Remote Management (SMB-In) rule in Windows Firewall on the SMB file
server. This rule is enabled by default in Windows Server 2016, Windows Server 2012 R2, and Windows Server 2012.
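
If the rule has been disabled, a sketch for re-enabling it from PowerShell follows; confirm the rule's display name
on your system first (for example, with Get-NetFirewallRule):

# Run on the SMB file server.
Enable-NetFirewallRule -DisplayName "File Server Remote Management (SMB-In)"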

See also
Cluster-Aware Updating Overview
Cluster-Aware Updating Windows PowerShell Cmdlets
Cluster-Aware Updating Plug-in Reference
Change history for Failover Clustering topics in
Windows Server 2016
6/8/2017 • 1 min to read • Edit Online

Applies To: Windows Server 2016

This topic lists new and updated topics in the Failover Clustering documentation for Windows Server 2016.

If you're looking for update history for Windows Server 2016, see Windows 10 and Windows Server 2016
update history.

June 2017
NEW OR CHANGED TOPIC DESCRIPTION

Cluster-Aware Updating advanced options Added info about using run profile paths that include spaces.

April 2017
NEW OR CHANGED TOPIC DESCRIPTION

Cluster-Aware Updating overview New topic.

Cluster-Aware Updating requirements and best practices New topic.

Cluster-Aware Updating advanced options New topic.

Cluster-Aware Updating FAQ New topic.

Cluster-Aware Updating plug-ins New topic.

Deploy a cloud witness for a Failover Cluster Clarified the type of storage account that's required (you can't
use Azure Premium Storage or Blob storage accounts).

March 2017
NEW OR CHANGED TOPIC DESCRIPTION

Deploy a cloud witness for a Failover Cluster Updated screenshots to match changes to Microsoft Azure.

February 2017
NEW OR CHANGED TOPIC DESCRIPTION

Cluster operating system rolling upgrade Removed an unnecessary Caution note and added a link.
