
Quick Start Guide for Server Clusters

Project Writer: WSUA Writer


Project Editor: WSUA Editor

This guide provides system requirements, installation instructions, and other step-by-step instructions that you
can use to deploy server clusters on the Microsoft® Windows Server™ 2003, Enterprise Edition, and
Windows Server 2003, Datacenter Edition, operating systems.

The server cluster technology in Windows Server 2003, Enterprise Edition, and Windows Server 2003,
Datacenter Edition, helps ensure that you have access to important server-based resources. You can use server
cluster technology to create several cluster nodes that appear to users as one server. If one of the nodes in the
cluster fails, another node begins to provide service. This is a process known as "failover." In this way, server
clusters can increase the availability of critical applications and resources.

Copyright
This document is provided for informational purposes only and Microsoft makes no warranties, either express
or implied, in this document. Information in this document, including URL and other Internet Web site
references, is subject to change without notice. The entire risk of the use or the results from the use of this
document remains with the user. Unless otherwise noted, the example companies, organizations, products,
domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no
association with any real company, organization, product, domain name, e-mail address, logo, person, place, or
event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or
introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft
Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights
covering subject matter in this document. Except as expressly provided in any written license agreement from
Microsoft, the furnishing of this document does not give you any license to these patents, trademarks,
copyrights, or other intellectual property.

Copyright © 2005 Microsoft Corporation. All rights reserved.

Microsoft, Windows, Windows NT, SQL Server, and Windows Server are either registered trademarks or
trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective
owners.

Requirements and Guidelines for Configuring Server Clusters


This section lists requirements and guidelines that will help you set up a server cluster effectively.

Software requirements and guidelines

•You must have either Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition,
installed on all computers in the cluster. We strongly recommend that you also install the latest service pack for
Windows Server 2003. If you install a service pack, the same service pack must be installed on all computers in
the cluster.
•All nodes in the cluster must be of the same architecture. You cannot mix x86-based, Itanium-based, and x64-
based computers within the same cluster.
•Your system must be using a name-resolution service, such as Domain Name System (DNS), the DNS dynamic
update protocol, Windows Internet Name Service (WINS), or the Hosts file. The Hosts file is supported as a local,
static-file method of mapping DNS domain names for host computers to their Internet Protocol (IP) addresses; it is
located in the systemroot\System32\Drivers\Etc folder. (A sample entry appears after this list.)
•All nodes in the cluster must be in the same domain. As a best practice, all nodes should have the same domain
role (either member server or domain controller), and the recommended role is member server. Exceptions that
can be made to these domain role guidelines are described later in this document.
•When you first create a cluster or add nodes to it, you must be logged on to the domain with an account that
has administrator rights and permissions on all nodes in that cluster. The account does not need to be a Domain
Admin level account, but can be a Domain User account with Local Admin rights on each node.
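
The following is a minimal sketch of the kind of entries a Hosts file can contain for two cluster nodes; the node names, domain, and addresses are placeholders, not values from this guide:

    # systemroot\System32\Drivers\Etc\Hosts
    # Maps each node's fully qualified and short names to its public IP address.
    10.1.1.11    node1.contoso.com    node1
    10.1.1.12    node2.contoso.com    node2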

Hardware requirements and guidelines

•For Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, Microsoft
supports only complete server cluster systems chosen from the Windows Catalog. To determine whether your
system and hardware components are compatible, including your cluster disks, see the Microsoft Windows
Catalog at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=4287] . For a geographically
dispersed cluster, both the hardware and software configuration must be certified and listed in the Windows
Catalog. For more information, see article 309395, "The Microsoft support policy for server clusters, the
Hardware Compatibility List, and the Windows Server Catalog," in the Microsoft Knowledge
Base [http://go.microsoft.com/fwlink/?LinkId=46608] .
•If you are installing a server cluster on a storage area network (SAN), and you plan to have multiple devices
and clusters sharing the SAN with a cluster, your hardware components must be compatible. For more
information, see article 304415, "Support for Multiple Clusters Attached to the Same SAN Device," in the
Microsoft Knowledge Base [http://go.microsoft.com/fwlink/?linkid=47293] .
•You must have two mass-storage device controllers in each node in the cluster: one for the local disk, one for
the cluster storage. You can use SCSI, iSCSI, or Fibre Channel for cluster storage on server
clusters that are running Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter
Edition. You must have two controllers because one controller has the local system disk for the operating
system installed, and the other controller has the shared storage installed.
•You must have two Peripheral Component Interconnect (PCI) network adapters in each node in the cluster.
•You must have storage cables to attach the cluster storage device to all computers. Refer to the manufacturer's
instructions for configuring storage devices.
•Ensure that all hardware is identical in all cluster nodes. This means that each hardware component must be the
same make, model, and firmware version. This makes configuration easier and eliminates compatibility
problems.

Network requirements and guidelines

•Your network must have a unique NetBIOS name.


•A WINS server must be available on your network.
•You must use static IP addresses for each network adapter on each node.

Important:

Server clusters do not support the use of IP addresses assigned from Dynamic Host Configuration Protocol
(DHCP) servers.
•The nodes in the cluster must be able to access a domain controller. The Cluster service requires that the nodes
be able to contact the domain controller to function correctly. The domain controller must be highly available.
In addition, it should be on the same local area network (LAN) as the nodes in the cluster. To avoid a single
point of failure, the domain must have at least two domain controllers.
•Each node must have at least two network adapters. One adapter will be used exclusively for internal node-to-
node communication (the private network). The other adapter connects the node to the public client network; it
should also be configured to carry node-to-node communication as a backup in case the private network fails. (A
network that carries both public and private communication is called a mixed network.)
•If you are using fault-tolerant network cards or teaming network adapters, you must ensure that you are using
the most recent firmware and drivers. Check with your network adapter manufacturer to verify compatibility
with the cluster technology in Windows Server 2003, Enterprise Edition, and Windows Server 2003,
Datacenter Edition.

Note:

Using teaming network adapters on all cluster networks concurrently is not supported. At least one of the
cluster private networks must not be teamed. However, you can use teaming network adapters on other cluster
networks, such as public networks.

Storage requirements and guidelines

•An external disk storage unit must be connected to all nodes in the cluster. This will be used as the cluster
storage. You should also use some type of hardware redundant array of independent disks (RAID).
•All cluster storage disks, including the quorum disk, must be physically attached to a shared bus.

Note:

This requirement does not apply to Majority Node Set (MNS) clusters when they are used with some type of
software replication method.
•Cluster disks must not be on the same controller as the one that is used by the system drive, except when you
are using boot from SAN technology. For more information about using boot from SAN technology, see "Boot
from SAN in Windows Server 2003 and Windows 2000 Server" at the Microsoft Web
site [http://go.microsoft.com/fwlink/?LinkId=46609].
•You should create multiple logical unit numbers (LUNs) at the hardware level in the RAID configuration
instead of using a single logical disk that is then divided into multiple partitions at the operating system level.
We recommend a minimum of two logical clustered drives. This enables you to have multiple disk resources
and also allows you to perform manual load balancing across the nodes in the cluster.
•You should set aside a dedicated LUN on your cluster storage for holding important cluster configuration
information. This information makes up the cluster quorum resource. The recommended minimum size for the
volume is 500 MB. You should not store user data on any volume on the quorum LUN.
•If you are using SCSI, ensure that each device on the shared bus (both SCSI controllers and hard disks) has a
unique SCSI identifier. If the SCSI controllers all have the same default identifier (the default is typically SCSI
ID 7), change one controller to a different SCSI ID, such as SCSI ID 6. If more than one disk will be on the
shared SCSI bus, each disk must also have a unique SCSI identifier.
•Software fault tolerance is not natively supported for disks in the cluster storage. For cluster disks, you must
use the NTFS file system and configure the disks as basic disks with all partitions formatted as NTFS. They
can be either compressed or uncompressed. Cluster disks cannot be configured as dynamic disks. In addition,
features of dynamic disks, such as spanned volumes (volume sets), cannot be used without additional non-
Microsoft software.
•All disks on the cluster storage device must be partitioned as master boot record (MBR) disks, not as GUID
partition table (GPT) disks.

Deploying SANs with server clusters

This section lists the requirements for deploying SANs with server clusters.

•Nodes from different clusters must not be able to access the same storage devices. Each cluster used with a
SAN must be deployed in a way that isolates it from all other devices. This is because the mechanism the
cluster uses to protect access to the disks can have adverse effects if other clusters are in the same zone. Using
zoning to separate the cluster traffic from other cluster or non-cluster traffic prevents this type of interference.
For more information, see "Zoning vs. LUN masking" later in this guide.
•All host bus adapters in a single cluster must be the same type and have the same firmware version. Host bus
adapters are the interface cards that connect a cluster node to a SAN. This is similar to the way that a network
adapter connects a server to a typical Ethernet network. Many storage vendors require that all host bus adapters
on the same zone—and, in some cases, the same fabric—share these characteristics.
•In a cluster, all device drivers for storage and host bus adapters must have the same software version. We
strongly recommend that you use a Storport mini-port driver with clustering. Storport (Storport.sys) is a
storage port driver that is provided in Windows Server 2003. It is especially suitable for use with high-
performance buses, such as Fibre Channel buses, and RAID adapters.
•Tape devices should never be used in the same zone as cluster disk storage devices. A tape device could
misinterpret a bus reset and rewind at inappropriate times, such as when backing up a large amount of data.
•In a highly available storage fabric, you should deploy server clusters with multiple host bus adapters using
multipath I/O software. This provides the highest level of redundancy and availability.

Creating a Cluster
It is important to plan the details of your hardware and network before you create a cluster.

If you are using a shared storage device, ensure that when you turn on the computer and start the operating
system, only one node has access to the cluster storage. Otherwise, the cluster disks can become corrupted.

In Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, logical disks that
are not on the same shared bus as the boot partition are not automatically mounted and assigned a drive letter.
This helps prevent a server in a complex SAN environment from mounting drives that might belong to another
server. (This is different from how new disks are mounted in Microsoft Windows® 2000 Server operating
systems.) Although the drives are not mounted by default, we still recommend that you follow the procedures
provided in the table later in this section to ensure that the cluster disks will not become corrupted.

The table in this section can help you determine which nodes and storage devices should be turned on during
each installation step. The steps in the table pertain to a two-node cluster. However, if you are installing a
cluster with more than two nodes, the Node 2 column lists the required state of all other nodes.

Step                        Node 1  Node 2  Storage  Notes
Set up networks             On      On      Off      Verify that all storage devices on the shared bus are turned off. Turn on all nodes.
Set up cluster disks        On      Off     On       Shut down all nodes. Turn on the cluster storage, and then turn on the first node.
Verify disk configuration   Off     On      On       Turn off the first node, and turn on the second node. Repeat for nodes three and four if necessary.
Configure the first node    On      Off     On       Turn off all nodes, and then turn on the first node.
Configure the second node   On      On      On       After the first node is successfully configured, turn on the second node. Repeat for nodes three and four as necessary.
Post-installation           On      On      On       All nodes should be turned on.

Preparing to create a cluster

Complete the following three steps on each cluster node before you install a cluster on the first node.

•Install Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, on each node
of the cluster. We strongly recommend that you also install the latest service pack for Windows Server 2003. If
you install a service pack, the same service pack must be installed on all computers in the cluster.
•Set up networks.
•Set up cluster disks.

All nodes must be members of the same domain. When you create a cluster or join nodes to a cluster, you
specify the domain user account under which the Cluster service runs. This account is called the Cluster service
account (CSA).
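
As a sketch only, the account can be created from a command prompt; the account name, domain, and password shown here are placeholders, and your organization may have its own account-provisioning process:

    rem Create the domain user account for the Cluster service (run once in the domain).
    net user ClusterSvc P@ssw0rd1 /add /domain /comment:"Cluster service account"

    rem On each cluster node, give the account local administrator rights.
    net localgroup Administrators CONTOSO\ClusterSvc /add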

Installing the Windows Server 2003 operating system

Install Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, on each node of
the cluster. For information about how to perform this installation, see the documentation you received with the
operating system.

Before configuring the Cluster service, you must be logged on locally with a domain account that is a member
of the local administrators group.

Important:

If you attempt to join a node to a cluster that has a blank password for the local administrator account, the
installation will fail. For security reasons, Windows Server 2003 operating systems prohibit blank administrator
passwords.
Setting up networks

Each cluster node requires at least two network adapters and must be connected by two or more independent
networks. At least two LAN networks (or virtual LANs) are required to prevent a single point of failure. A
server cluster whose nodes are connected by only one network is not a supported configuration. The adapters,
cables, hubs, and switches for each network must fail independently. This usually means that the components of
any two networks must be physically independent.

Two networks must be configured to handle either All communications (mixed network) or Internal cluster
communications only (private network). The recommended configuration for two adapters is to use one
adapter for the private (node-to-node only) communication and the other adapter for mixed communication
(node-to-node plus client-to-cluster communication).

You must have two PCI network adapters in each node. They must be certified in the Microsoft Windows
Catalog [http://go.microsoft.com/fwlink/?LinkId=4287] and supported by Microsoft Product Support Services.
Assign one network adapter on each node a static IP address, and assign the other network adapter a static IP
address on a separate network on a different subnet for private network communication.
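
For example, assuming the two connections have been renamed "Public" and "Private" and that the addresses shown are placeholders for your own addressing plan, the static addresses can be assigned from a command prompt with netsh (the same settings can be made through the Network Connections folder):

    rem Public (mixed) adapter: address, subnet mask, default gateway, gateway metric.
    netsh interface ip set address "Public" static 10.1.1.11 255.255.255.0 10.1.1.1 1

    rem Private (heartbeat) adapter: a different subnet, with no default gateway.
    netsh interface ip set address "Private" static 192.168.0.11 255.255.255.0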

Because communication between cluster nodes is essential for smooth cluster operations, the networks that you
use for cluster communication must be configured optimally and follow all hardware compatibility-list
requirements. For additional information about recommended configuration settings, see article 258750,
"Recommended private heartbeat configuration on a cluster server," in the Microsoft Knowledge
Base [http://go.microsoft.com/fwlink/?LinkId=46549].

You should keep all private networks physically separate from other networks. Specifically, do not use a router,
switch, or bridge to join a private cluster network to any other network. Do not include other network
infrastructure or application servers on the private network subnet. To separate a private network from other
networks, use a cross-over cable in a two-node cluster configuration or a dedicated hub in a cluster
configuration of more than two nodes.

Additional network considerations

•All cluster nodes must be on the same logical subnet.


•If you are using a virtual LAN (VLAN), the one-way communication latency between any pair of cluster nodes
on the VLAN must be less than 500 milliseconds.
•In Windows Server 2003 operating systems, cluster nodes exchange multicast heartbeats rather than unicast
heartbeats. A heartbeat is a message that is sent regularly between cluster network drivers on each node.
Heartbeat messages are used to detect communication failure between cluster nodes. Using multicast
technology enables better node communication because it allows several unicast messages to be replaced with
a single multicast message. Clusters that consist of fewer than three nodes will not send multicast heartbeats.
For additional information about using multicast technology, see article 307962, "Multicast Support Enabled
for the Cluster Heartbeat," in the Microsoft Knowledge Base [http://go.microsoft.com/fwlink/?LinkId=46558].

Determine an appropriate name for each network connection. For example, you might want to name the private
network "Private" and the public network "Public." This will help you uniquely identify a network and correctly
assign its role.

The following figure shows the elements of a four-node cluster that uses a private network.
Setting the order of the network adapter binding

One of the recommended steps for setting up networks is to ensure the network adapter binding
is set in the correct order. To do this, use the following procedure.

To set the order of the network adapter binding


[...] In the Connections list, the binding order should list the public network first, followed by the internal private network (Heartbeat), with [Remote Access Connections] last.
Configuring the private network adapter

As stated earlier, the recommended configuration for two adapters is to use one adapter for
private communication, and the other adapter for mixed communication. To configure the private
network adapter, use the following procedure.

To configure the private network adapter


[...] defined on the page, and then click Disable NetBIOS over TCP/IP. Advanced TCP/IP Settings opens and looks similar to the following figure:
Configuring the public network adapter

If DHCP is used to obtain IP addresses, it might not be possible to access cluster nodes if the DHCP server is
inaccessible. For increased availability, static, valid IP addresses are required for all interfaces on a server
cluster. If you plan to put multiple network adapters in each logical subnet, keep in mind that the Cluster service
will recognize only one network interface per subnet.

•Verifying connectivity and name resolution. To verify that the private and public networks are
communicating properly, ping all IP addresses from each node; pinging an address sends an echo request that
confirms the address is reachable. You should be able to ping all IP addresses, both locally and on the remote
nodes. To verify name resolution, ping each node from a client by using the node's computer name instead of its
IP address; the reply should return only the IP address for the public network. You might also want to use the
ping -a command to perform a reverse name resolution on the IP addresses. (An example command sequence
follows this list.)
•Verifying domain membership. All nodes in the cluster must be members of the same domain, and they must
be able to access a domain controller and a DNS server. They can be configured as member servers or domain
controllers. You should have at least one domain controller on the same network segment as the cluster. To
avoid having a single point of failure, another domain controller should also be available. In this guide, all
nodes are configured as member servers, which is the recommended role.
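
A minimal verification sequence from a command prompt might look like the following; the addresses and the node name are placeholders for your own values:

    rem From node 1, ping the private and public addresses of node 2.
    ping 192.168.0.12
    ping 10.1.1.12

    rem Verify name resolution; the reply should show only the public address.
    ping node2

    rem Optional: reverse name resolution of an address back to a name.
    ping -a 10.1.1.12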

In a two-node server cluster, if one node is a domain controller, the other node must also be a domain
controller. In a four-node cluster, it is not necessary to configure all four nodes as domain controllers. However,
when following a "best practices" model of having at least one backup domain controller, at least one of the
remaining three nodes should also be configured as a domain controller. A cluster node must be promoted to a
domain controller before the Cluster service is configured.

The dependence in Windows Server 2003 on DNS requires that every node that is a domain controller must
also be a DNS server if another DNS server that supports dynamic updates is not available.

You should consider the following issues if you are planning to deploy cluster nodes as domain controllers:
[...] in a scenario where the nodes are also domain controllers [...]
•Setting up a Cluster service user account. The Cluster service requires a domain user account that is a
member of the Local Administrators group on each node. This is the account under which the Cluster service
can run. Because Setup requires a user name and password, you must create this user account before you
configure the Cluster service. This user account should be dedicated to running only the Cluster service and
should not belong to an individual.
Note:

It is not necessary for the Cluster service account to be a member of the Domain Admins group; a Domain User
account with local administrator rights on each node is sufficient.
You can use the following procedure to set up a Cluster service user account.

To set up a Cluster service user account


[...] snap-in, right-click Cluster, and then click Properties.

9.

Click Add Members to a Group.

10.

Click Administrators, [...]
Setting up disks

This section includes information and step-by-step procedures you can use to set up disks.

Important:

To avoid possible corruption of cluster disks, ensure that both the Windows Server 2003 operating system and
the Cluster service are installed, configured, and running on at least one node before you start the operating
system on another node in the cluster.

Quorum resource

The quorum resource maintains the configuration data necessary for recovery of the cluster. The quorum
resource is generally accessible to other cluster resources so that any cluster node has access to the most recent
database changes. There can only be one quorum disk resource per cluster.

The requirements and guidelines for the quorum disk are as follows:

•The quorum disk should be at least 500 MB in size.


•You should use a separate LUN as the dedicated quorum resource.
•A disk failure could cause the entire cluster to fail. Because of this, we strongly recommend that you
implement a hardware RAID solution for your quorum disk to help guard against disk failure. Do not use the
quorum disk for anything other than cluster management.

When you configure a cluster disk, it is best to manually assign drive letters to the disks on the shared bus. The
drive letters should not start with the next available letter. Instead, leave several free drive letters between the
local disks and the shared disks. For example, start with drive Q as the quorum disk and then use drives R and S
for the shared disks. Another method is to start with drive Z as the quorum disk and then work backward
through the alphabet with drives X and Y as data disks. You might also want to consider labeling the drives in
case the drive letters are lost. Using labels makes it easier to determine what the drive letter was. For example, a
drive label of "DriveR" makes it easy to determine that this drive was drive letter R. We recommend that you
follow these best practices when assigning drive letters because of the following issues:

•Adding disks to the local nodes can cause the drive letters of the cluster disks to shift up by one letter.
•Adding disks to the local nodes can leave gaps in the drive-letter sequence and cause confusion.
•Mapping a network drive can conflict with the drive letters on the cluster disks.

The letter Q is commonly used as a standard for the quorum disk. Q is used in the next procedure.
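
If you prefer the command line to Disk Management, the same drive-letter assignment can be sketched with diskpart; the disk and partition numbers are placeholders and must match your own configuration:

    rem At the DISKPART> prompt (start it by running diskpart):
    select disk 1
    select partition 1
    assign letter=Q

    rem Repeat for the data disks, for example with letters R and S.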

The first step in setting up disks for a cluster is to configure the cluster disks you plan to use. To
do this, use the following procedure.

To configure cluster disks


1.

Make sure that only one node in the cluster is turned on.

2.

Open Computer Management (Local).

3.

In the console tree, click Computer Management (Local), click Storage, and then click Disk Management.

4.

When you first start Disk Management after installing a new disk, a wizard appears that provides a list of the
new disks detected by the operating system. If a new disk is detected, the Write Signature and Upgrade Wizard
starts. Follow the instructions in the wizard.

5.

Because the wizard automatically configures the disk as dynamic storage, you must reconfigure the disk to
basic storage. To do this, right-click the disk, and then click Convert To Basic Disk.

6.

Right-click an unallocated region of a basic disk, and then click New Partition.

7.

In the New Partition Wizard, click Next, click Primary partition, and then click Next.

8.

By default, the maximum size for the partition is selected. Using multiple logical drives is better than using
multiple partitions on one disk because cluster disks are managed at the LUN level, and a logical drive (LUN),
together with all of its partitions, is the smallest unit that can fail over between nodes.
After you have configured the cluster disks, you should verify that the disks are accessible. To do
this, use the following procedure.

To verify that the cluster disks are accessible


[...] node, and then turn on the second node.

6.

Repeat steps 1 through 3 to verify that the second node can access the cluster disks.
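
A simple way to confirm that a node can read and write a cluster disk is to create and read back a small file from a command prompt; drive Q is used here as in the earlier drive-letter example:

    rem Run on the node that currently has access to the disk.
    echo cluster disk test > Q:\test.txt
    type Q:\test.txt
    del Q:\test.txt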
Creating a new server cluster

In the first phase of creating a new server cluster, you must provide all initial cluster configuration information.
To do this, use the New Server Cluster Wizard.

Important:

Before configuring the first node of the cluster, make sure that all other nodes are turned off. Also make sure
that all cluster storage devices are turned on.

The following procedure explains how to use the New Server Cluster Wizard to configure the first
cluster node.

To configure the first node


[...] You can review the cluster configuration log at %systemroot%\System32\LogFiles\Cluster\ClCfgSrv.Log.
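
The New Server Cluster Wizard is the path this guide describes. Windows Server 2003 also includes the Cluster.exe command-line tool, which can create a cluster unattended; the following is a sketch only, with placeholder cluster, node, account, and address values, and you should confirm the exact options with cluster /create /? on your system:

    rem Create a new cluster on the first node (all values are placeholders).
    cluster /cluster:MyCluster /create /node:Node1 /user:CONTOSO\ClusterSvc /pass:P@ssw0rd1 /ipaddr:10.1.1.50,255.255.255.0,Public /verbose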
Validating the cluster installation

You should validate the cluster configuration of the first node before configuring the second
node. To do this, use the following procedure.

To validate the cluster configuration


1.

Open Cluster Administrator. To do this, click Start, click Control Panel, double-click Administrative Tools, and
then double-click Cluster Administrator. [...]
Configuring subsequent nodes

After you install the Cluster service on the first node, it takes less time to install it on subsequent nodes. This is
because the Setup program uses the network configuration settings configured on the first node as a basis for
configuring the network settings on subsequent nodes. You can also install the Cluster service on multiple nodes
at the same time and choose to install it from a remote location.

Note:

The first node and all cluster disks must be turned on. You can then turn on all other nodes. At this stage, the
Cluster service controls access to the cluster disks, which helps prevent disk corruption. You should also verify
that all cluster disks have had resources automatically created for them. If they have not, manually create them
before adding any more nodes to the cluster.

After you have configured the first node, you can use the following procedure to configure
subsequent nodes.

To configure the second node


[...] page, in Password, type the password for the Cluster service account. Ensure that the correct domain for this
account appears in the Domain list, and [...]
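
Nodes can also be added from the command line with Cluster.exe. As before, this is a sketch with placeholder names, and you should confirm the options with cluster /add /? before relying on them:

    rem Add the second node to the existing cluster (placeholder values).
    cluster /cluster:MyCluster /add /node:Node2 /pass:P@ssw0rd1 /verbose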
Configuring the server cluster after installation

Heartbeat configuration

After the network and the Cluster service have been configured on each node, you should determine the
network's function within the cluster. Using Cluster Administrator, select the Enable this network for cluster
use check box and select from among the following options.

•Client access only (public network). Select this option if you want the Cluster service to use this network
adapter only for external communication with other clients. No node-to-node communication will take place on
this network adapter.
•Internal cluster communications only (private network). Select this option if you want the Cluster service to
use this network only for node-to-node communication.
•All communications (mixed network). Select this option if you want the Cluster service to use the network
adapter for node-to-node communication and for communication with external clients. This option is selected by
default for all networks.

This guide assumes that only two networks are in use. It explains how to configure these networks as one mixed
network and one private network. This is the most common configuration.

Use the following procedure to configure the heartbeat.

To configure the heartbeat


7.

Select the Enable this network for cluster use check box. [...]
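
The same network roles can be inspected, and optionally set, with Cluster.exe. In this sketch the network names are the "Private" and "Public" examples used earlier, and the Role values shown (1 for internal cluster communications only, 2 for client access only, 3 for all communications) should be confirmed with cluster network /? on your own system:

    rem List the cluster networks and their state.
    cluster network

    rem Show the properties of the private network, then set it to internal communications only.
    cluster network "Private" /prop
    cluster network "Private" /prop Role=1

    rem Set the public network to all communications (mixed).
    cluster network "Public" /prop Role=3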
Prioritize the order of the heartbeat adapter

After you have decided the roles in which the Cluster service will use the network adapters, you
must prioritize the order in which the adapters will be used for internal cluster communication. To
do this, use the following procedure.

To configure network priority


[...] It is usually best for private networks to have higher priority [...]
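
Cluster.exe also exposes the priority order used for internal communication. This sketch assumes the "Private" and "Public" network names used elsewhere in this guide; check cluster /? for the exact spelling of these options on your build:

    rem Show the current priority order for internal (node-to-node) communication.
    cluster /listnetpriority

    rem Place the private network first in the priority list.
    cluster /setnetpriority:"Private","Public"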
Quorum disk configuration

The New Server Cluster Wizard and the Add Nodes Wizard automatically select the drive used for
the quorum device. The wizard automatically uses the smallest partition it finds that is larger
than 50 MB. If you want to, you can change the automatically selected drive to a dedicated one
that you have designated for use as the quorum. The following procedure explains what to do if
you want to use a different disk for the quorum resource.

To use a different disk for the quorum resource


[...] path to the folder on the partition; for example: \MSCS
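
The quorum resource can also be displayed, and moved to a different disk, with Cluster.exe. This is a sketch: "Disk Q:" is a placeholder for the name of your physical disk resource, and the option syntax should be confirmed with cluster /? before use:

    rem Display the current quorum resource, path, and log size.
    cluster /quorum

    rem Move the quorum to the disk resource named "Disk Q:" and use the \MSCS folder on it.
    cluster /quorumresource:"Disk Q:" /path:Q:\MSCS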

Testing the Server Cluster


After Setup, there are several methods you can use to verify a cluster installation.
•Use Cluster Administrator. After Setup is run on the first node, open Cluster Administrator, and then try to
connect to the cluster. If Setup was run on a second node, start Cluster Administrator on either the first or
second node, attempt to connect to the cluster, and then verify that the second node is listed.
•Services snap-in. Use the Services snap-in to verify that the Cluster service is listed and started.
•Event log. Use Event Viewer to check for ClusSvc entries in the system log. You should see entries that
confirm the Cluster service successfully formed or joined a cluster.
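
From a command prompt, the same checks can be sketched as follows (ClusSvc is the service name of the Cluster service; run the cluster commands on a cluster node):

    rem Confirm that the Cluster service is installed and running.
    sc query clussvc

    rem List the nodes and their status as seen by the cluster.
    cluster node /status

    rem List the groups and resources and the nodes that currently own them.
    cluster group /status
    cluster resource /status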

Testing whether group resources can fail over

You might want to ensure that a new group is functioning correctly. To do this, use the following
procedure.

To test whether group resources can fail over


[...] ensure that the Owner column in the details pane reflects a change of owner for all of the group's resources.
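
The same failover test can be driven from the command line. In this sketch, "Cluster Group" is the default group that contains the quorum disk and the cluster IP address and name; substitute the name of the group you want to test:

    rem Show which node currently owns the group.
    cluster group "Cluster Group" /status

    rem Move the group to another node; its resources should come online there.
    cluster group "Cluster Group" /move

    rem Confirm that the owner has changed.
    cluster group "Cluster Group" /status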
SCSI Drive Installations
This section of the guide provides a generic set of instructions for parallel SCSI drive installations.

Important:

If the SCSI hard disk vendor’s instructions differ from the instructions provided here, follow the instructions
supplied by the vendor.

The SCSI bus listed in the hardware requirements must be configured before you install the Cluster service.
This configuration applies to the following:

•The SCSI devices.


•The SCSI controllers and the hard disks. This is to ensure that they work properly on a shared SCSI bus.
•The termination of the shared bus. If a shared bus must be terminated, it must be done properly. The shared
SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses
between the nodes of a cluster.

In addition to the following information, refer to documentation from the manufacturer of your SCSI device.

Configuring SCSI devices

Each device on the shared SCSI bus must have a unique SCSI identification number. Because most SCSI
controllers default to SCSI ID 7, configuring the shared SCSI bus includes changing the SCSI ID number on
one controller to a different number, such as SCSI ID 6. If there is more than one disk that will be on the shared
SCSI bus, each disk must have a unique SCSI ID number.

Storage Area Network Considerations


Fibre Channel systems are required for all server clusters running 64-bit versions of Windows Server 2003,
Enterprise Edition, or Windows Server 2003, Datacenter Edition. It is also best to use Fibre Channel systems for
clusters of three or more nodes. Two methods of Fibre Channel-based storage are supported in a cluster that is
running Windows Server 2003: arbitrated loops and switched fabric.

Note:

To determine which type of Fibre Channel hardware to use, read the Fibre Channel vendor's documentation.

Fibre Channel arbitrated loops (FC-AL)

A Fibre Channel arbitrated loop (FC-AL) is a set of nodes and devices connected into a single loop. FC-AL
provides a cost-effective way to connect up to 126 devices into a single network.

Fibre Channel arbitrated loops provide a solution for a small number of devices in a relatively fixed
configuration. All devices on the loop share the media, and any packet traveling from one device to another
must pass through all intermediate devices. FC-AL is a good choice if a low number of cluster nodes is
sufficient to meet your high-availability requirements.
FC-AL offers the following advantages:

•The cost is relatively low.


•Loops can be expanded to add storage; however, nodes cannot be added.
•Loops are easy for Fibre Channel vendors to develop.

The disadvantage of FC-ALs is that they can be difficult to deploy successfully. This is because every device on
the loop shares the media, which causes the overall bandwidth of the cluster to be lower. Some organizations
might also not want to be restricted by the 126-device limit. Having more than one cluster on the same
arbitrated loop is not supported.

Fibre Channel switched fabric (FC-SW)

With Fibre Channel switched fabric, switching hardware can link multiple nodes together into a matrix of Fibre
Channel nodes. A switched fabric is responsible for device interconnection and switching. When a node is
connected to a Fibre Channel switching fabric, it is responsible for managing only the single point-to-point
connection between itself and the fabric. The fabric handles physical interconnections to other nodes,
transporting messages, flow control, and error detection and correction. Switched fabrics also offer very fast
switching latency.

The switching fabric can be configured to allow multiple paths between the same two ports. It provides efficient
sharing (at the cost of higher contention) of the available bandwidth. It also makes effective use of the burst
nature of communications with high-speed peripheral devices.

Other advantages to using switched fabric include the following:

•It is easy to deploy.


•It can support millions of devices.
•The switches provide fault isolation and rerouting.
•There is no shared media, which allows faster communication in the cluster.

Zoning vs. LUN masking

Zoning and LUN masking are important to SAN deployments, especially if you are deploying a SAN with a
server cluster that is running Windows Server 2003.

Zoning
Many devices and nodes can be attached to a SAN. With data stored in a single storage entity (known as a
"cloud") it is important to control which hosts have access to specific devices. Zoning allows administrators to
partition devices in logical volumes, thereby reserving the devices in a volume for a server cluster. This means
that all interactions between cluster nodes and devices in the logical storage volumes are isolated within the
boundaries of the zone; other non-cluster members of the SAN are not affected by cluster activity. The elements
used in zoning are shown in the following figure:

You must implement zoning at the hardware level with the controller or switch, not through software. This is
because zoning is a security mechanism for a SAN-based cluster. Unauthorized servers cannot access devices
inside the zone. Access control is implemented by the switches in the fabric, so a host adapter cannot gain
access to a device for which it has not been configured. With software zoning, the cluster would not be secure if
the software component failed.

In addition to providing cluster security, zoning also limits the traffic flow within a given SAN environment.
Traffic between ports is routed only to segments of the fabric that are in the same zone.

LUN masking

A logical unit number (LUN) is a logical disk defined within a SAN. Server clusters see LUNs and act as
though the LUNs are physical disks. With LUN masking, which is performed at the controller level, you can
define relationships between LUNs and cluster nodes. Storage controllers usually provide the means for
creating LUN-level access controls that allow one or more hosts to access a given LUN. With access control at
the storage controller, the controller itself can enforce access policies to the devices.

LUN masking provides security at a more detailed level than zoning. This is because LUNs allow for zoning at
the port level. For example, many SAN switches allow overlapping zones, which enable a storage controller to
reside in multiple zones. Multiple clusters in multiple zones can share the data on those controllers. The
elements used in LUN masking are shown in the following figure:
Other Resources
For a comprehensive list of hardware and software supported by Windows operating systems, see one of the
following:

•Windows Catalog at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48250] .


•Hardware and software compatibility information in the Windows Server Catalog at the Microsoft Web
site [http://go.microsoft.com/fwlink/?LinkId=4287]

For the latest information about Windows Server 2003, see the Windows Server 2003 Web site at the Microsoft
Web site [http://go.microsoft.com/fwlink/?LinkId=48237] .

For interactive help in solving a problem with your computer, or to research your problem, see Product Support
Services at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=281] .

For additional information about cluster deployment, see "Designing and Deploying Clusters" at the Microsoft
Web site [http://go.microsoft.com/fwlink/?LinkId=48238] .

For information about troubleshooting, see "Troubleshooting cluster node installations" at the Microsoft Web
site [http://go.microsoft.com/fwlink/?LinkId=48239] .

For information about quorum configuration, see "Quorum Drive Configuration Information" at the Microsoft
Web site [http://go.microsoft.com/fwlink/?LinkId=48240] .

For information about private heartbeat configuration, see "Recommended private 'Heartbeat' configuration on a
cluster server" at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48241] .

For information about network failure, see "Network Failure Detection and Recovery in a Server Cluster" at the
Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48243] .

For information about quorum disk designation, see "How to Change Quorum Disk Designation" at the
Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48244] .

For additional information about Storage Area Networks, see "Microsoft Windows Clustering: Storage Area
Networks" at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48246] .
For information about geographically dispersed clusters, see "Geographically Dispersed Clusters in Windows
Server 2003" at the Microsoft Web site [http://go.microsoft.com/fwlink/?LinkId=48249] .
