
NAME: NICKELL CONSTANCE

COURSE: ITNS
CAMPUS: STE.MADELINE
INSTRUCTOR: MS.SHARDHA MANICK
MODULE: ADMINISTERING AND
CONFIGURING WINDOWS SERVER
2012 R2
ASSIGNMENT: INDIVIDUAL
ASSIGNMENT
TABLE OF CONTENTS
OBJECTIVE
INTRODUCTION
UPGRADING THE NLB CLUSTER
CONFIGURING REDUNDANT SERVER
DISCUSSION
REFERENCES
OBJECTIVE
The objective of this assignment is to analyze and propose solutions for upgrading an NLB cluster from
Windows Server 2008 R2 to Windows Server 2012 R2 without downtime, and to recommend strategies
for configuring a redundant server to ensure high availability for file server clients.

Assignment Components:

1. Introduction:
 Brief overview of the assignment objective and its relevance in understanding strategies for
upgrading NLB clusters and ensuring high availability for file servers.

2. Upgrading NLB Cluster:


 Explanation of the best practices and steps required to upgrade an NLB cluster from Windows
Server 2008 R2 to Windows Server 2012 R2 without downtime, including:
a. Assessing current NLB cluster configuration.
b. Planning the upgrade process to minimize downtime.
c. Implementing rolling upgrades by adding new Windows Server 2012 R2 nodes to the cluster and
removing Windows Server 2008 R2 nodes gradually.
d. Verifying cluster functionality after each node upgrade.

3. Configuring Redundant Server:


 Recommendation for configuring a redundant server for high availability as a file server for
clients, including:
a. Implementing server clustering or network load balancing (NLB) to distribute client requests
across multiple servers.
b. Utilizing redundant hardware components such as redundant power supplies, RAID
configurations for data redundancy, and network interface cards for failover.
c. Implementing regular backups and disaster recovery procedures to minimize data loss in case of
server failure.

4. Discussion:
 Explanation of the importance of minimizing downtime during server upgrades and ensuring high
availability for critical services such as file servers.
 Discussion on the challenges and considerations involved in upgrading NLB clusters and
configuring redundant servers.

5. Submission:
 Guidelines on how to submit the completed assignment, including any additional documentation
or diagrams, through the specified platform or as instructed by the instructor.

6. Assessment Criteria:
 Depth of analysis and understanding demonstrated in the proposed solutions for upgrading NLB
cluster and configuring redundant server (8%)
 Clarity and coherence of the discussion on the importance and challenges of server upgrades and
high availability configurations (5%)
 Alignment with best practices and recommendations in server administration and high availability
strategies (5%)
 Proper adherence to the APA format in the reference list (2%)
 Report Submission or Late Submission (5%)

7. Late Submission: Please be advised that a late submission of individual report assignments will
incur a penalty of minus 5% from the course work grade score. It is important to adhere to the
specified deadlines to avoid any deductions in your grades.

INTRODUCTION
You can upgrade an existing Network Load Balancing cluster to Windows Server 2012 R2 either by
taking the entire cluster offline and upgrading all the hosts, or by leaving the cluster online
and performing a rolling upgrade. A rolling upgrade entails taking individual cluster hosts offline
one at a time, upgrading each host, and returning the host to the cluster. You continue upgrading
individual cluster hosts until the entire cluster is upgraded. A rolling upgrade allows the cluster
to continue running during the upgrade. The decision to use a rolling upgrade is based on the
applications and services running on your existing cluster. If the applications and services
support rolling upgrades, then perform a rolling upgrade on the existing cluster.
In this assignment, I explain how to upgrade the NLB cluster and how to ensure high availability
for the files served to clients.

UPGRADING THE NLB CLUSTER


Network Load Balancing (NLB) distributes TCP/IP traffic across multiple servers by combining
their resources into a virtual cluster configuration where each server is viewed as a host. Each
Windows Server participating in network load balancing runs an identical copy of the server
applications.
The Network Load Balancing service then distributes incoming client requests to the various
nodes of the cluster. This configuration has several benefits:
 Admins can configure custom load values to be handled by each host
 New hosts can be dynamically added to scale the capacity of the NLB cluster
 You can also configure a default host if you want to have a single host handle all traffic
by default
 The NLB cluster is referenced by clients using a single IP address that can be configured
with a proper DNS name
 NLB knows the backend IP addresses of each host and forwards the traffic accordingly

Important considerations with Network Load Balancing.


The following items are important to consider with Network Load Balancing to ensure it
functions correctly:
 Configuring the correct time on all NLB hosts is important
 Don't configure any other protocols on the cluster network adapter
 You can configure the cluster to operate in unicast or multicast mode but not both
 You can't mix Windows Server Failover Cluster and Network Load Balancing
Which applications work well with Network Load Balancing (NLB)?
Network Load Balancing as found in Windows Server works well with web-based applications.
Microsoft specifically mentions running highly available Internet Information Services (IIS)
websites with minimal downtime. As load increases, admins can add servers to the NLB cluster to
increase capacity.
Businesses can also use network load balancing in front of SQL Servers to increase
availability and redundancy. For example, you may configure a report server on a Network Load
Balancing cluster.

Installing the Network Load Balancing Windows Server Feature
The first part of the process is adding the Network Load Balancing feature in Windows Server.
You can easily do this using Server Manager. On the Features screen, place a checkmark by the
Network Load Balancing feature. On the next screen, you will see a prompt to add features
required for network load balancing. Click the Add Features button.

Confirm the installation of the Network Load Balancing feature and click Install.
After a moment, the Network Load Balancing feature will finish installing successfully, and no
reboot is needed. In addition, you will have to install the Network Load Balancing feature on all
Windows Servers participating in the NLB cluster.
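
As an alternative to Server Manager, the same feature can be installed with PowerShell. The
following is a minimal sketch, run on each NLB host (the host names in the remote example are
illustrative):

Install-WindowsFeature NLB -IncludeManagementTools
# Or install the feature on every prospective NLB host in one pass (example host names)
Invoke-Command -ComputerName NLB1, NLB2 -ScriptBlock { Install-WindowsFeature NLB -IncludeManagementTools }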

Creating a new NLB cluster


Launch the Network Load Balancing Manager from the Administrative Tools on your Windows
Server. Right-click the Network Load Balancing Clusters node and select New Cluster.

Enter the IP address of the first host you want to join to the NLB cluster. Click the Connect
button. It populates the IP addresses of the Windows Server. Click Next.
On the Host Parameters screen, you can configure the priority of the specific host, add IP
addresses if needed, and set the default host state. Click Next.
On the next screen, click the Add button to add a new cluster IP address. It will be a virtual IP
address that all NLB hosts will assume as part of the NLB cluster.
When the new cluster IP address has been added, click Next. Now, configure the cluster parameters.
Here, you can set the Full Internet name and the Cluster operation mode, then click Next.
On the Port Rules screen, configure the ports you want the NLB cluster to direct to the hosts.
Here, I am defining Ports 80 and 443 for web traffic. Click Finish.

We have successfully added the first NLB host to the cluster. We need to add additional hosts to
the NLB cluster. Right-click the cluster node and select Add Host to Cluster.
We begin the process of adding the second NLB host to the cluster. Enter the IP address of the
new NLB host and click Connect. Once the adapters are populated, select the adapter, and click
Next.
Follow the remaining steps to add the second NLB host to the cluster.
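
The same cluster can also be built with the NLB PowerShell cmdlets. The sketch below uses example
host, interface, and IP values and mirrors the steps above: create the cluster, restrict the port
rules to 80 and 443, and add the second host.

Import-Module NetworkLoadBalancingClusters
# Create the cluster on the first host's adapter and assign the virtual cluster IP (example values)
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "nlbcluster.cloud.local" -ClusterPrimaryIP 10.0.0.50 -SubnetMask 255.255.255.0 -OperationMode Multicast
# Replace the default all-ports rule with rules for web traffic only
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -StartPort 80 -EndPort 80 -Protocol Tcp
Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol Tcp
# Join the second host to the cluster
Add-NlbClusterNode -NewNodeName "NLB2" -NewNodeInterface "Ethernet"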
I have loaded Internet Information Services (IIS) on both NLB servers participating in the
cluster. A DNS record has been configured for nlbcluster.cloud.local. Browsing out to the cluster
name returns the test site correctly. There isn't any special configuration you need to perform to
make IIS work with NLB.

To test the high availability of the NLB cluster, we will drainstop the first node.
The first host is now drained, and services are stopped.
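
The drainstop and the subsequent return to service can also be scripted, which is how a rolling
upgrade from Windows Server 2008 R2 would typically be driven one node at a time (the host name
is an example):

# Drain existing connections on the first host, waiting up to the timeout, then stop new traffic
Stop-NlbClusterNode -HostName "NLB1" -Drain -Timeout 10
# After the host has been upgraded and rejoined, resume its NLB service
Start-NlbClusterNode -HostName "NLB1"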
CONFIGURING REDUNDANT SERVER

The primary high-availability technology available in Windows Server is failover clustering.


Failover clustering is a stateful, high-availability solution that allows an application or service to
remain available to clients if a host server fails. You can use failover clustering to provide high
availability to applications such as SQL Server and to scale out file servers and virtual machines
(VMs). With clustering, you can ensure that your workloads remain available if server hardware
or even a site fails.
Failover clustering is supported in both the Standard and Datacenter editions of Windows Server.
In some earlier versions of the Windows Server operating system, you gained access to failover
clustering only if you used the Enterprise edition. Windows Server supports up to 64 nodes in a
failover cluster.
Generally, all servers in a cluster should run either a similar hardware configuration or should be
similarly provisioned virtual machines. You should also use the same edition and installation
option. For example, you should aim to have cluster nodes that run either the full GUI or the
Server Core version of Windows Server, but you should avoid having cluster nodes that have a
mix of computers running Server Core and the full GUI version. Avoiding this mix ensures that
you use a similar update routine. A similar update routine is more difficult to maintain when you
use different versions of Windows Server.
You should use the Datacenter edition of Windows Server when building clusters that host
Hyper-V virtual machines because the virtual machine licensing scheme available with this
edition provides the most VM licenses.
To be fully supported by Microsoft, cluster hardware should meet the Certified for Windows
Server logo requirement. An easy way of accomplishing this is to purchase and deploy Azure
Stack HCI, a prebuilt hyper-converged Windows Server installation available from select
vendors. Even though it is called Azure Stack HCI and sounds as though it is far more of a
cloud-based solution, it’s primarily just an optimized Windows Server deployment on a certified
configuration with all the relevant clustering and “Software-Defined Datacenter” features lit up.
Create a Windows failover cluster.

Windows Server failover clusters have the following prerequisites:


 All cluster nodes should be running the same version and edition of Windows Server.
 You can add clustered storage during or after cluster creation.
 All servers in the cluster that are located in the same site should be members of the same
Active Directory (AD) domain. If configuring a stretch cluster, nodes must be members
of the same forest.
 The account used to create the cluster must be a domain user who has local administrator
rights on all servers that will function as cluster nodes.
 The account used to create the cluster requires the Create Computer Objects permission
in the organizational unit (OU) or container that will host the cluster-related Active
Directory objects.
Recommended practice is to place the computer accounts for cluster nodes in the same OU and
to use separate OUs for each cluster. Some organizations create child OUs for each separate
cluster in a specially created parent Cluster OU.
You install failover clustering by installing the Failover Clustering feature, performing initial
cluster configuration, running the cluster validation process, and then performing cluster
creation. You can use Windows Admin Center, PowerShell, or the Server Manager console to
perform these tasks. Once the cluster is deployed, you can manage your clusters using the
Failover Clustering Remote Server Administration Tools (RSAT), PowerShell, or Windows
Admin Center. You can install the Failover Clustering feature and its associated PowerShell
cmdlets on a node using the following PowerShell command:
Install-WindowsFeature -Name Failover-Clustering –IncludeManagementTools
Validating cluster configuration.

Cluster validation performs a check of a cluster’s current or proposed configuration and allows
you to determine whether you have the necessary pieces in place to create a cluster prior to
attempting to perform this task. Although you can skip validation, recommended practice is to go
through the process: even though you may have created numerous clusters in the past, that does not
mean you won't accidentally overlook some small but critical detail the next time you go to create
one.
The period prior to cluster deployment is not the only time that you can perform cluster
validation. You should rerun cluster validation whenever you change or update a significant
component of the cluster. This includes adding nodes, modifying storage hardware, updating
network adapters, updating firmware or drivers for network adapters, and updating multipathing
software. Cluster validation performs tests in six categories:
 Inventory A set of tests to determine whether the hardware, software, networking, and storage
configuration support the deployment of a cluster.
 Network A detailed set of tests to validate cluster network settings.
 Storage A detailed set of tests to analyze shared cluster storage.
 Storage Spaces Direct (S2D) A detailed set of tests to analyze S2D configuration.
 System Configuration A set of tests on the current system configuration.
 Cluster Configuration This category of test only executes on deployed clusters to verify
that best practices are being followed (for example, using multiple network adapters
connected to different networks).

You can perform cluster validation from the Failover Cluster Management Tools that are part of
the Remote Server Administration Tools, using Windows Admin Center to connect to an existing
cluster, or by running the Test-Cluster PowerShell cmdlet.
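
A hedged example of driving validation and then cluster creation from PowerShell (node names, the
static address, and the included categories are illustrative):

# Validate the proposed nodes; the HTML report is written to the temp folder
Test-Cluster -Node "NODE1", "NODE2" -Include "Inventory", "Network", "Storage", "System Configuration"
# If validation passes, create the cluster with a static administrative access point
New-Cluster -Name "FILECLUSTER" -Node "NODE1", "NODE2" -StaticAddress 10.0.0.60 -NoStorage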

Prestage cluster computer objects.

During the cluster creation process, a computer object is created in Active Directory Domain
Services (AD DS) that matches the cluster name. This AD DS object is called the cluster name
object. As mentioned earlier in the chapter, the domain user account used to create the cluster
must have the Create Computer Objects permission in order to create this object. It’s possible to
have an appropriately permissioned account pre-create a cluster name object. When this is done,
the account used to then create the cluster using the constituent nodes does not require the Create
Computer Objects permission.
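For example, an administrator with sufficient rights could prestage the cluster name object as a
disabled computer account in the intended OU (the names below are examples); the account that
later creates the cluster is then granted Full Control on that object:

New-ADComputer -Name "FILECLUSTER" -Path "OU=Clusters,DC=cloud,DC=local" -Enabled $false -Description "Prestaged cluster name object"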
Workgroup clusters.

Workgroup clusters are a special type of cluster where cluster nodes are not members of an
Active Directory domain. Workgroup clusters are also known as Active Directory detached
clusters. The following workloads are supported for workgroup clusters:
 SQL Server When deploying SQL Server on a workgroup cluster, you should use SQL
Server Authentication for databases and SQL Server Always On Availability Groups.
 File Server A supported but not recommended configuration as Kerberos will not be
available as an authentication protocol for SMB traffic.
 Hyper-V A supported but not recommended configuration. Hyper-V live migration is not
supported, though it is possible to perform quick migration.
When creating a workgroup cluster, you first need to create a special account on all nodes that
will participate in the cluster that has the following properties:
 The special account must have the same username and password on all cluster nodes.
 The special account must be added to the local Administrators group on each cluster
node.
 The primary DNS suffix on each cluster node must be configured with the same value.
 When creating the cluster, ensure that the AdministrativeAccessPoint parameter when
using the New-Cluster cmdlet is set to DNS. Ensure that the cluster name is present in the
appropriate DNS zone, which depends on the primary DNS suffix, when running this
command.
 You will need to run the following PowerShell command on each node to configure the
LocalAccountTokenFilterPolicy registry setting to 1:
New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name LocalAccountTokenFilterPolicy -Value 1
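A minimal sketch of creating the required local account on each prospective node follows (the
username is illustrative; the same username and password must be used on every node):

$securePwd = Read-Host -AsSecureString -Prompt "Cluster administrator password"
New-LocalUser -Name "clustadmin" -Password $securePwd -PasswordNeverExpires
Add-LocalGroupMember -Group "Administrators" -Member "clustadmin"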
To create a workgroup cluster, use the New-Cluster cmdlet with the name parameter listing the
cluster name, the node parameters listing the nodes that you wish to join to the cluster where the
nodes have been configured according to the prerequisites, and the AdministrativeAccessPoint
parameter configured for DNS. For example, to create a new workgroup cluster named
workgrpclst with member nodes node1 and node2, run the following command on one of the
nodes:
New-Cluster -name workgrpclst -node node1, node2 -AdministrativeAccessPoint DNS
Stretch cluster across datacenter or Azure regions.

Failover clusters can span multiple sites. From the perspective of a hybrid environment, site
spanning can include spanning two separate on-premises locations, an on-premises location and
an Azure datacenter, or having cluster nodes hosted in different Azure regions. When
configuring a cluster that spans two sites, you should consider the following:
 Ensure that there are an equal number of nodes in each site.
 Allow each node to have a vote.
 Enable dynamic quorum. Dynamic quorum allows quorum to be recalculated when
individual nodes leave the cluster one at a time. Dynamic quorum is enabled by default
on Windows Server failover clusters.
 Use a file share witness. You should host the file share witness on a third site that has
separate connectivity to the two sites that host the cluster nodes. When configured in this
manner, the cluster retains quorum if one of the sites is lost. An alternative to a file share
witness is an Azure Cloud Witness.
 If you only have two sites and are unable to place a file share witness in an independent
third site, you can manually edit the cluster configuration to reassign votes so that the
cluster recalculates quorum.
Manually reassigning votes is also useful to avoid split-brain scenarios. Split-brain scenarios
occur when a failure occurs in a multisite cluster and when both sides of the cluster believe they
have quorum. Split-brain scenarios cause challenges when connectivity is restored and make it
necessary to restart servers on one side of the multisite cluster to resolve the issue. You can
manually reassign votes so that one side always retains quorum if intersite connectivity is lost.
For example, by setting the Melbourne site with two votes and the Sydney site with one vote, the
Melbourne site always retains quorum if intersite connectivity is lost.
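Vote reassignment is done by changing a node's NodeWeight property. For the Melbourne and Sydney
example, removing the vote from one Sydney node (names are illustrative) leaves Melbourne with the
majority:

# Remove the quorum vote from one Sydney node so Melbourne retains quorum on a split
(Get-ClusterNode -Name "SYD-NODE2").NodeWeight = 0
# Review the resulting vote assignment
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight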
You can use Storage Replica to enable stretch clusters with shared storage. Stretch clusters that
use shared storage have the following requirements:
 Cluster nodes are all members of the same AD DS forest.
 Firewall rules allow ICMP, SMB (ports 445 and 5445 for SMB direct), and WS-MAN
(port 5985) bidirectional traffic between all nodes that participate in the cluster.
 They must have two sets of shared storage that support persistent reservation. Each
storage set must be able to support the creation of two virtual disks. One will be used for
replicated data and the other for logs. These disks need to have the following properties:
 They must be initialized as GUID Partition Table (GPT) and not Master Boot Record
(MBR).
 Data volumes must be of identical size and use the same sector size.
 Log volumes must be of identical size and use the same sector size.
 Replicated storage cannot be located on the drive containing the Windows Server
operating system.
 Premium SSD must be used for cluster nodes hosted as infrastructure-as-a-service (IaaS)
VMs in Azure.
 Ensure that there is less than 5-millisecond round-trip latency if synchronous replication
is being used. If asynchronous replication is being used, this requirement does not need to
be met.
 Storage Replica–configured stretch clusters can use Storage Replica technology to
replicate shared cluster storage between locations.
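Once those requirements are met, the replication partnership between the two sets of shared
storage is typically created with the Storage Replica cmdlets. The following is a sketch with
assumed computer, replication group, and volume names:

# Check the proposed topology first; it runs for the stated duration and produces a report
Test-SRTopology -SourceComputerName "MEL-NODE1" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "SYD-NODE1" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" -DurationInMinutes 30 -ResultPath "C:\Temp"
# Create the partnership that replicates the data volume and its log between sites
New-SRPartnership -SourceComputerName "MEL-NODE1" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "SYD-NODE1" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"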
Configure storage for failover clustering.

Storage for Windows Server failover clusters needs to be accessible to each node in the cluster.
You can use serial-attached SCSI (SAS), iSCSI, Fibre Channel, or Fibre Channel over Ethernet
(FCoE) to host shared storage for a Windows Server failover cluster.
You should configure disks used for failover clustering as follows:
 Volumes should be formatted using NTFS or ReFS.
 Use Master Boot Record (MBR) or GUID Partition Table (GPT).
 Avoid allowing different clusters access to the same storage device. This can be
accomplished through LUN masking or zoning.
 Any multipath solution must be based on Microsoft Multipath I/O (MPIO).
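
For example, on each node MPIO can be installed and pointed at the shared storage bus in use
(iSCSI is assumed here; a restart may be needed before the MPIO cmdlets are available):

Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI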
Cluster Shared Volumes (CSV) is a technology that allows multiple cluster nodes to have
concurrent access to a single physical or virtual storage device, also termed a logical unit number
(LUN). CSV allows you to have virtual machines on the same shared storage run on different
cluster nodes. CSV also has the following benefits:
 Support for scale-out file servers
 Support for BitLocker volume encryption
 SMB 3.0 and higher support

 Integration with Storage Spaces


 Online volume scan and repair
You can enable CSV only after you create a failover cluster and have made the shared storage
available to each node that will use the CSV.
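Adding an available cluster disk to CSV is a single step; the volume then appears under
C:\ClusterStorage on every node (the disk name below is an example):

Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume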

Modify quorum options.

A cluster quorum mode determines how many nodes and witnesses must fail before the cluster is
in a failed state. Nodes are servers that participate in the cluster. Witnesses can be stored on
shared storage, on file shares, in Windows Server, and even on a USB drive attached to a
network switch; shared storage is the preferred method.
For unknown reasons, some people use Distributed File System (DFS) shares as file share
witnesses when setting up their failover clusters. To stop this type of shenanigan from occurring
in the future, Microsoft has configured Windows Server failover clustering so that it explicitly
blocks the use of DFS namespaces when configuring a file share witness.
Microsoft recommends that you configure a cluster so that an odd number of total votes be
spread across member nodes and the witness. This limits the chance of a tie during a quorum
vote.
There are four cluster quorum modes:
 Node Majority This cluster quorum mode is recommended for clusters that have an odd
number of nodes. When this quorum type is set, the cluster retains quorum when the
number of available nodes exceeds the number of failed nodes. For example, if a cluster
has five nodes and three are available, quorum is retained.
 Node and Disk Majority This cluster quorum mode is recommended when the cluster has
an even number of nodes. A disk witness hosted on a shared storage disk, such as iSCSI
or Fibre Channel, that is accessible to cluster nodes has a vote when determining quorum,
as do the quorum nodes. The cluster retains quorum as long as the majority of voting
entities remain online. For example, if you have a four-node cluster and a witness disk, a
combination of three of those entities needs to remain online for the cluster to retain
quorum. The cluster retains quorum if three nodes are online or if two nodes and the
witness disk are online.
 Node and File Share Majority This configuration is similar to the Node and Disk
Majority configuration, but the quorum is stored on a network share rather than on a
shared storage disk. It is suitable for similar configurations to Node and Disk Majority.
This method is not as reliable as Node and Disk Majority because file shares generally do
not have the redundancy features of shared storage.
 No Majority: Disk Only This model can be used with clusters that have an odd number of
nodes. It is only recommended for testing environments because the disk hosting the
witness functions as a single point of failure. When you choose this model, as long as the
disk hosting the witness and one node remain available, the cluster retains quorum. If the
disk hosting the witness fails, quorum is lost, even if all the other nodes are available.
When you create a cluster, the cluster quorum is automatically configured for you. You might
want to alter the quorum mode, however, if you change the number of nodes in your cluster. For
example, you might want to alter the quorum mode if you change from a four-node to a five-
node cluster. When you change the cluster quorum configuration, the Failover Cluster Manager
provides you with a recommended configuration, but you can choose to override that
configuration if you want.
You can also perform advanced quorum configuration to specify what nodes can participate in
the quorum vote, which you can set on the Select Voting Configuration page of the Configure
Cluster Quorum Wizard. When you do this, only the selected nodes’ votes are used to calculate
quorum. Also, it’s possible that fewer nodes would need to fail to cause a cluster to fail than
would otherwise be the case if all nodes participated in the quorum vote. This can be useful when
configuring how multisite clusters calculate quorum when the connection between sites fails.
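
A hedged example of changing the witness configuration from PowerShell (the share path and the
storage account details are placeholders):

# Use a file share witness hosted on an independent third site
Set-ClusterQuorum -NodeAndFileShareMajority "\\WITNESS01\ClusterWitness"
# Or use an Azure Cloud Witness instead of an on-premises share
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"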
DISCUSSION.
1. By implementing high availability and redundancy, businesses can achieve continuous
availability by minimizing or eliminating planned and unplanned downtime. Planned maintenance or
system upgrades can be performed without interrupting services, as the workload is automatically
shifted to redundant servers. Unplanned outages, such as hardware failures or network issues, are
also mitigated by failover mechanisms that ensure services remain accessible.
2. Network load balancing distributes traffic across multiple backend servers and provides
redundancy for critical applications. Windows Server includes a Network Load Balancing (NLB)
feature that allows admins to use the resources of multiple application servers evenly.
REFERENCES.
Configuring Network Load Balancing (NLB) for a Windows Server cluster. (2022, October 7). 4sysops.
https://4sysops.com/archives/configuring-network-load-balancing-nlb-for-a-windows-server-cluster/
Implement and manage Windows Server High Availability. (n.d.). Microsoft Press Store.
https://www.microsoftpressstore.com/articles/article.aspx?p=3167979
hakia. (2023, June 1). High Availability and Redundancy in Server Systems: Ensuring Continuity and
Fault Tolerance - Servers. Hakia.
https://www.hakia.com/high-availability-and-redundancy-in-server-systems-ensuring-continuity-and-fault-tolerance/
