Veritas Cluster Server 6.0 PR1 Installation Guide
Solaris
March 2012
The software described in this book is furnished under a license agreement and may be used
only in accordance with the terms of the agreement.
Legal Notice
Copyright © 2012 Symantec Corporation. All rights reserved.
The product described in this document is distributed under licenses restricting its use,
copying, distribution, and decompilation/reverse engineering. No part of this document
may be reproduced in any form by any means without prior written authorization of
Symantec Corporation and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS,
REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO
BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL
OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING,
PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED
IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations. Any use, modification, reproduction, release,
performance, display or disclosure of the Licensed Software and Documentation by the U.S.
Government shall be solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical
Support’s primary role is to respond to specific queries about product features
and functionality. The Technical Support group also creates content for our online
Knowledge Base. The Technical Support group works collaboratively with the
other functional areas within Symantec to answer your questions in a timely
fashion. For example, the Technical Support group works with Product Engineering
and Symantec Security Response to provide alerting services and virus definition
updates.
Symantec’s support offerings include the following:
■ A range of support options that give you the flexibility to select the right
amount of service for any size organization
■ Telephone and/or Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers software upgrades
■ Global support purchased on a regional business hours or 24 hours a day, 7
days a week basis
■ Premium service offerings that include Account Management Services
For information about Symantec’s support offerings, you can visit our Web site
at the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Documentation
Product guides are available on the media in PDF format. Make sure that you are
using the current version of the documentation. The document version appears
on page 2 of each guide. The latest product documentation is available on the
Symantec Web site.
https://sort.symantec.com/documents
Your feedback on product documentation is important to us. Send suggestions
for improvements and reports on errors or omissions. Include the title and
document version (located on the second page), and chapter and section titles of
the text on which you are reporting. Send feedback to:
[email protected]
Figure 1-1 illustrates a typical VCS configuration of four nodes that are connected
to shared storage.
Client workstations receive service over the public network from applications
running on VCS nodes. VCS monitors the nodes and their services. VCS nodes in
the cluster communicate over a private network.
I/O fencing technology uses coordination points for arbitration in the event of a
network partition.
I/O fencing coordination points can be coordinator disks or coordination point
servers (CP servers) or both. You can configure disk-based or server-based I/O
fencing:
■ Disk-based I/O fencing: I/O fencing that uses coordinator disks is referred to as disk-based I/O fencing.
■ Server-based I/O fencing: I/O fencing that uses at least one CP server system is referred to as server-based I/O fencing. Server-based fencing can include only CP servers, or a mix of CP servers and coordinator disks.
Note: Symantec recommends that you use I/O fencing to protect your cluster
against split-brain situations.
Veritas Operations Manager See “About Veritas Operations Manager” on page 25.
Cluster Manager (Java console) See “About Cluster Manager (Java Console)” on page 25.
You can administer VCS Simulator from the Java Console or from the command
line.
To download VCS Simulator, go to http://go.symantec.com/vcsm_download.
to PROM level with a break and subsequently resumes operations, the other
nodes may declare the system dead. They can declare it dead even if the system
later returns and begins write operations.
I/O fencing is a feature that prevents data corruption in the event of a
communication breakdown in a cluster. VCS uses I/O fencing to remove the risk
that is associated with split-brain. I/O fencing allows write access for members
of the active cluster. It blocks access to storage from non-members so that even
a node that is alive is unable to cause damage.
After you install and configure VCS, you must configure I/O fencing in VCS to
ensure data integrity.
See “About planning to configure I/O fencing” on page 83.
About I/O fencing for VCS in virtual machines that do not support
SCSI-3 PR
In a traditional I/O fencing implementation, where the coordination points are
coordination point servers (CP servers) or coordinator disks, Veritas Clustered
Volume Manager and Veritas I/O fencing modules provide SCSI-3 persistent
reservation (SCSI-3 PR) based protection on the data disks. This SCSI-3 PR
protection ensures that the I/O operations from the losing node cannot reach a
disk that the surviving sub-cluster has already taken over.
See the Veritas Cluster Server Administrator's Guide for more information on how
I/O fencing works.
In virtualized environments that do not support SCSI-3 PR, VCS attempts to
provide reasonable safety for the data disks. VCS requires you to configure
non-SCSI-3 server-based I/O fencing in such environments. Non-SCSI-3 fencing
uses CP servers as coordination points with some additional configuration changes
to support I/O fencing in such environments.
See “Setting up non-SCSI-3 server-based I/O fencing in virtual environments
using installvcs program” on page 135.
Note: Typically, a fencing configuration for a cluster must have three coordination
points. Symantec also supports server-based fencing with a single CP server as
its only coordination point with a caveat that this CP server becomes a single
point of failure.
Note: With the CP server, the fencing arbitration logic still remains on the VCS
cluster.
Every system where you want to install VCS must meet the hardware and the
software requirements.
DVD drive: One drive in a system that can communicate with all the nodes in the cluster.
Disks: Typical VCS configurations require that shared disks support the applications that migrate between systems in the cluster.
The VCS I/O fencing feature requires that all data and coordinator disks support SCSI-3 Persistent Reservations (PR).
See “About planning to configure I/O fencing” on page 83.
Disk space: Note that VCS may require more temporary disk space during installation than the specified disk space.
Symantec recommends that you turn off the spanning tree algorithm on the switches used to connect private network interfaces.
Fibre Channel or SCSI host bus adapters: Typical VCS configurations require at least one SCSI or Fibre Channel Host Bus Adapter per system for shared data disks.
# ./installer -precheck
# ./installvcs -precheck
Note: VCS supports the previous and the next versions of SF to facilitate product
upgrades.
CP server requirements
VCS 6.0 PR1 clusters (application clusters) support coordination point servers
(CP servers) which are hosted on the following VCS and SFHA versions:
■ VCS 6.0PR1, VCS 6.0, 5.1SP1, or 5.1 single-node cluster
Single-node VCS clusters that run VCS 5.1 SP1 RP1 and later or VCS 6.0 and later
and that host the CP server do not require LLT and GAB to be configured.
■ SFHA 6.0PR1, SFHA 6.0, 5.1SP1, or 5.1 cluster
Make sure that you meet the basic hardware requirements for the VCS/SFHA
cluster to host the CP server.
See the Veritas Storage Foundation High Availability Installation Guide.
Note: While Symantec recommends at least three coordination points for fencing,
a single CP server as coordination point is a supported server-based fencing
configuration. Such single CP server fencing configuration requires that the
coordination point be a highly available CP server that is hosted on an SFHA
cluster.
Make sure you meet the following additional CP server requirements which are
covered in this section before you install and configure CP server:
■ Hardware requirements
■ Operating system requirements
■ Networking requirements (and recommendations)
■ Security requirements
Table 2-2 lists additional requirements for hosting the CP server.
Disk space: To host the CP server on a VCS cluster or SFHA cluster, each host requires the following file system space:
Table 2-3 displays the CP server supported operating systems and versions. An
application cluster can use a CP server that runs any of the following supported
operating systems.
CP server hosted on a VCS single-node cluster or on an SFHA cluster: CP server supports any of the following operating systems:
■ AIX 6.1 and 7.1
■ HP-UX 11i v3
■ Linux: RHEL 5, RHEL 6, SLES 10, or SLES 11
■ Solaris 10
■ Oracle Solaris 11
that results in an I/O fencing scenario, there is no bias in the race due to the
number of hops between the nodes.
For secure communication between the VCS cluster (application cluster) and the
CP server, review the following support matrix:
For secure communications between the VCS cluster and CP server, consider the
following requirements and suggestions:
■ In a secure communication environment, all CP servers that are used by the
application cluster must be configured with security enabled. A configuration
where the application cluster uses some CP servers running with security
enabled and other CP servers running with security disabled is not supported.
■ For non-secure communication between CP server and application clusters,
there is no need to configure Symantec Product Authentication Service. In
non-secure mode, authorization is still provided by CP server for the application
cluster users. The authorization that is performed only ensures that authorized
users can perform appropriate actions as per their user privileges on the CP
server.
For information about establishing secure communications between the application
cluster and CP server, see the Veritas Cluster Server Administrator's Guide.
■ Interactive installation using the script-based installer: You can use one of the following script-based installers:
■ Automated installation using the VCS response files: Use response files to perform unattended installations. You can generate a response file in one of the following ways:
See “Setting up disk-based I/O fencing using installvcs program” on page 127.
See “Setting up server-based I/O fencing using installvcs program” on page 134.
See “Setting up non-SCSI-3 server-based I/O fencing in virtual environments
using installvcs program” on page 135.
■ Create a single-node cluster
See “Creating a single-node cluster using the installer program” on page 259.
■ Add a node to an existing cluster
See “Adding nodes using the VCS installer” on page 215.
■ Perform automated installations using the values that are stored in a
configuration file.
See “Installing VCS using response files” on page 141.
See “Configuring VCS using response files” on page 147.
$CFG{Scalar_variable}="value";
$CFG{Scalar_variable}=123;
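For illustration, here is a minimal sketch of what a few response file entries might look like. The variable names shown, such as $CFG{systems} and $CFG{vcs_clustername}, are representative assumptions; see the response file chapters referenced above for the definitive variable list:
$CFG{opt}{install}=1;
$CFG{systems}=[ "galaxy", "nebula" ];
$CFG{vcs_clustername}="clus1";
$CFG{vcs_clusterid}=12133;
String values take quotes, as shown above, while numeric values such as the cluster ID do not.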
■ VCS clusters such as global clusters, replicated data clusters, or campus clusters
for disaster recovery
See the Veritas Cluster Server Administrator's Guide for disaster recovery
cluster configuration models.
Figure 3-2 illustrates a simple VCS cluster setup with two Solaris x64 systems; each system connects to the public network through its net0 interface.
(Figure panels: a multiple-cluster configuration and a single-cluster configuration, each showing Cluster 1 and Cluster 2.)
Chapter 4
Licensing VCS
This chapter includes the following topics:
Within 60 days of choosing this option, you must install a valid license key
corresponding to the license level entitled or continue with keyless licensing
by managing the server or cluster with a management server, such as Veritas
Operations Manager (VOM). If you do not comply with the above terms,
continuing to use the Symantec product is a violation of your end user license
agreement, and results in warning messages.
For more information about keyless licensing, see the following URL:
http://go.symantec.com/sfhakeyless
If you upgrade to this release from a prior release of the Veritas software, the
product installer does not change the license keys that are already installed. The
existing license keys may not activate new features in this release.
If you upgrade with the product installer, or if you install or upgrade with a method
other than the product installer, you must do one of the following to license the
products:
■ Run the vxkeyless command to set the product level for the products you
have purchased. This option also requires that you manage the server or cluster
with a management server.
See the vxkeyless(1m) manual page.
■ Use the vxlicinst command to install a valid product license key for the
products you have purchased.
See “Installing Veritas product license keys” on page 49.
See the vxlicinst(1m) manual page.
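For example, a sketch of the keyless approach described in the first option above; the product level keyword VCS is illustrative, and vxkeyless displayall lists the levels that are available on your systems:
# vxkeyless set VCS
# vxkeyless display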
You can also use the above options to change the product levels to another level
that you are authorized to use. For example, you can add the replication option
to the installed product. You must ensure that you have the appropriate license
for the product level and options in use.
Note: In order to change from one product group to another, you may need to
perform additional steps.
license. A key may enable the operation of more products than are specified on
the certificate. However, you are legally limited to the number of product licenses
purchased. The product installation procedure describes how to activate the key.
To register and receive a software license key, go to the Symantec Licensing Portal
at the following location:
https://licensing.symantec.com
Make sure you have your Software Product License document. You need
information in this document to retrieve and manage license keys for your
Symantec product. After you receive the license key, you can install the product.
Click the Help link at this site to access the License Portal User Guide and FAQ.
The VRTSvlic package enables product licensing. For information about the
commands that you can use after you install VRTSvlic:
See “Installing Veritas product license keys” on page 49.
You can only install the Symantec software products for which you have purchased
a license. The enclosed software discs might include other products for which you
have not purchased a license.
Even though other products are included on the enclosed software discs, you can
only use the Symantec software products for which you have purchased a license.
To install a new license
◆ Run the following commands. In a cluster environment, run the commands
on each node in the cluster:
# cd /opt/VRTS/bin
# ./vxlicinst -k xxxx-xxxx-xxxx-xxxx-xxxx-xxx
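To confirm that the key is registered, you can display the installed licenses with the vxlicrep utility that this guide also uses later; the report contents vary by product and license type:
# vxlicrep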
Section 2
Preinstallation tasks
■ Obtain license keys if you do not want to use keyless licensing.
See “Obtaining VCS license keys” on page 48.
■ Set up the private network.
See “Setting up the private network” on page 54.
■ Set up ssh on cluster systems.
See “Setting up ssh on cluster systems” on page 281.
■ Set up shared storage for I/O fencing (optional).
See “Setting up shared storage” on page 58.
■ Set the PATH and the MANPATH variables.
See “Setting the PATH variable” on page 63.
See “Setting the MANPATH variable” on page 63.
■ Disable the abort sequence on SPARC systems.
See “Disabling the abort sequence on SPARC systems” on page 63.
■ Review basic instructions to optimize LLT media speeds.
See “Optimizing LLT media speed settings on private NICs” on page 65.
■ Review guidelines to help you set the media speed of the LLT interconnects.
See “Guidelines for setting the media speed of the LLT interconnects” on page 65.
■ Mount the product disc.
See “Mounting the product disc” on page 65.
■ Verify the systems before installation.
See “Performing automated preinstallation check” on page 66.
The following products make extensive use of the private cluster interconnects
for distributed locking:
■ Veritas Storage Foundation Cluster File System (SFCFS)
■ Veritas Storage Foundation for Oracle RAC (SF Oracle RAC)
Symantec recommends network switches for the SFCFS and the SF Oracle RAC
clusters due to their performance characteristics.
Refer to the Veritas Cluster Server Administrator's Guide to review VCS
performance considerations.
Figure 5-1 shows two private networks for use with VCS. (The figure labels the public network, the two private networks, and a crossed link between the systems.)
4 Configure the Ethernet devices that are used for the private network such
that the autonegotiation protocol is not used. You can achieve a more stable
configuration with crossover cables if the autonegotiation protocol is not
used.
{0} ok show-disks
...b) /sbus@6,0/QLGC,isp@2,10000/sd
The example output shows the path to one host adapter. You must include the
path information, without the "/sd" directory, in the nvramrc script. The
path information varies from system to system.
5 Edit the nvramrc script to change the scsi-initiator-id to 5. (The Solaris
OpenBoot 3.x Command Reference Manual contains a full list of nvedit
commands and keystrokes.) For example:
{0} ok nvedit
0: probe-all
1: cd /sbus@6,0/QLGC,isp@2,10000
2: 5 " scsi-initiator-id" integer-property
3: device-end
4: install-console
5: banner
6: <CTRL-C>
6 Store the changes you make to the nvramrc script. The changes you make are
temporary until you store them.
{0} ok nvstore
If you are not sure of the changes you made, you can re-edit the script without
risk before you store it. You can display the contents of the nvramrc script
by entering:
{0} ok nvedit
{0} ok nvquit
7 Instruct the OpenBoot PROM Monitor to use the nvramrc script on the node.
8 Reboot the node. If necessary, halt the system so that you can use the ok
prompt.
9 Verify that the scsi-initiator-id has changed. Go to the ok prompt. Use the
output of the show-disks command to find the paths for the host adapters.
Then, display the properties for the paths. For example:
{0} ok show-disks
...b) /sbus@6,0/QLGC,isp@2,10000/sd
{0} ok cd /sbus@6,0/QLGC,isp@2,10000
{0} ok .properties
scsi-initiator-id 00000005
{0} ok show-disks
...b) /sbus@6,0/QLGC,isp@2,10000/sd
{0} ok cd /sbus@6,0/QLGC,isp@2,10000
{0} ok .properties
scsi-initiator-id 00000007
ok boot -r
4 After all systems have booted, use the format(1m) command to verify that
each system can see all shared devices.
If Volume Manager is used, the same number of external disk devices must
appear, but device names (c#t#d#s#) may differ.
If Volume Manager is not used, then you must meet the following
requirements:
■ The same number of external disk devices must appear.
■ The device names must be identical for all devices on all systems.
% su - root
2 Remove the root role from local users who have been assigned the role.
# roles admin
root
root::::auths=solaris.*;profiles=All;audit_flags=lo\
:no;lock_after_retries=no;min_label=admin_low;clearance=admin_high
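One way to remove the role assignment is sketched below. It assumes the local user name admin from the example above; verify the exact syntax in the Oracle Solaris 11 documentation before you use it:
# usermod -R "" admin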
Note: For more information, see the Oracle documentation for the Oracle Solaris
11 operating system.
Note: After installation, you may want to change the root user into a root role to
allow local users to assume the root role.
See “Changing root user into root role” on page 177.
heartbeat in the cluster. When other cluster members believe that the aborted
node is a failed node, these cluster members may begin corrective action.
Keep the following points in mind:
■ The only action that you must perform following a system abort is to reset the
system to achieve the following:
■ Preserve data integrity
■ Prevent the cluster from taking additional corrective actions
■ Do not resume the processor as cluster membership may have changed and
failover actions may already be in progress.
■ To remove this potential problem on SPARC systems, you should alias the go
function in the OpenBoot eeprom to display a message.
To alias the go function to display a message
1 At the ok prompt, enter:
nvedit
7 Type the nvstore command to commit your changes to the non-volatile RAM
(NVRAM) for use in subsequent reboots.
8 After you perform these commands, at reboot you see this output:
# cd /cdrom/cdrom0/cluster_server
Note: Remember to make backup copies of the configuration files before you edit
them.
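For example, a minimal sketch of backing up the two main configuration files, assuming the default configuration directory:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.save
# cp types.cf types.cf.save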
You also need to use this procedure if you have manually changed the configuration
files before you perform the following actions using the installer:
■ Upgrade VCS
■ Uninstall VCS
For more information about the main.cf and types.cf files, refer to the Veritas
Cluster Server Administrator's Guide.
To display the configuration files in the correct format on a running cluster
◆ Run the following commands to display the configuration files in the correct
format:
# haconf -dump
System names: The system names where you plan to install VCS.
The required license keys: If you decide to use keyless licensing, you do not need to obtain license keys. However, you must set up a management server within 60 days to manage the cluster.
Table 5-3 lists the information you need to configure VCS cluster name and ID.
Table 5-3 Information you need to configure VCS cluster name and ID
A name for the cluster: The cluster name must begin with a letter of the alphabet. The cluster name can contain only the characters "a" through "z", "A" through "Z", the numbers "0" through "9", the hyphen "-", and the underscore "_".
Example: my_cluster
A unique ID number for the cluster: A number in the range of 0-65535. If multiple distinct and separate clusters share the same network, then each cluster must have a unique cluster ID.
Example: 12133
Table 5-4 lists the information you need to configure VCS private heartbeat links.
Table 5-4 Information you need to configure VCS private heartbeat links
Decide how you want to configure LLT: You can configure LLT over Ethernet or LLT over UDP.
Symantec recommends that you configure heartbeat links that use LLT over Ethernet, unless hardware requirements force you to use LLT over UDP. If you want to configure LLT over UDP, make sure you meet the prerequisites.
For option 1 (LLT over Ethernet):
■ The device names of the NICs that the private networks use among systems. A network interface card or an aggregated interface.
Do not use the network interface card that is used for the public network, which is typically hme0 for SPARC and net0 for x64.
For example on a SPARC system: qfe0, qfe1
For example on an x64 system: e1000g1, e1000g2
■ Choose whether to use the same NICs on all systems. If you want to use different NICs, enter the details for each system.
For option 2 (LLT over UDP): For each system, you must have the following details:
■ The device names of the NICs that the private networks use among systems
■ IP address for each NIC
■ UDP port details for each NIC
Table 5-5 lists the information you need to configure virtual IP address of the
cluster (optional).
The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface.
Example: hme0
A virtual IP address of the NIC: You can enter either an IPv4 or an IPv6 address. This virtual IP address becomes a resource for use by the ClusterService group. The "Cluster Virtual IP address" can fail over to another cluster system.
Example IPv4 address: 192.168.1.16
The netmask for the virtual IPv4 address: The subnet that you use with the virtual IPv4 address.
Example: 255.255.240.0
The prefix for the virtual IPv6 address: The prefix length for the virtual IPv6 address.
Example: 64
Table 5-6 lists the information you need to add VCS users.
Example: smith
Example: Administrator
Table 5-7 lists the information you need to configure SMTP email notification
(optional).
Table 5-7 Information you need to configure SMTP email notification (optional)
The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface.
Examples: hme0
The domain-based address of the SMTP server: The SMTP server sends notification emails about the events within the cluster.
Example: smtp.symantecexample.com
To decide the minimum severity of events for SMTP email notification: Events have four levels of severity, and the severity levels are cumulative:
■ Information
VCS sends notifications for important events that exhibit
normal behavior.
■ Warning
VCS sends notifications for events that exhibit any deviation from normal behavior. Notifications include both Warning and Information types of events.
■ Error
VCS sends notifications for faulty behavior. Notifications include Error, Warning, and Information types of events.
■ SevereError
VCS sends notifications for a critical error that can lead to data loss or corruption. Notifications include SevereError, Error, Warning, and Information types of events.
Example: Error
Table 5-8 lists the information you need to configure SNMP trap notification
(optional).
Table 5-8 Information you need to configure SNMP trap notification (optional)
The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface.
Examples: hme0
The port number for the SNMP trap daemon: The default port number is 162.
To decide the minimum severity of events for SNMP trap notification: Events have four levels of severity, and the severity levels are cumulative:
■ Information
VCS sends notifications for important events that exhibit
normal behavior.
■ Warning
VCS sends notifications for events that exhibit any deviation from normal behavior. Notifications include both Warning and Information types of events.
■ Error
VCS sends notifications for faulty behavior. Notifications include Error, Warning, and Information types of events.
■ SevereError
VCS sends notifications for a critical error that can lead to data loss or corruption. Notifications include SevereError, Error, Warning, and Information types of events.
Example: Error
Table 5-9 lists the information you need to configure global clusters (optional).
The name of the public NIC: You can use the same NIC that you used to configure the virtual IP of the cluster. Otherwise, specify appropriate values for the NIC.
The virtual IP address of the NIC: You can enter either an IPv4 or an IPv6 address. You can use the same virtual IP address that you configured earlier for the cluster. Otherwise, specify appropriate values for the virtual IP address.
The netmask for the virtual IPv4 address: You can use the same netmask that you used to configure the virtual IP of the cluster. Otherwise, specify appropriate values for the netmask.
Example: 255.255.240.0
The prefix for the virtual IPv6 address: The prefix length for the virtual IPv6 address.
Example: 64
Veritas product installer: Perform the following steps to start the product installer:
1 Start the installer.
# ./installer
installvcs program: Perform the following steps to start the installvcs program:
# cd /cdrom/cdrom0/cluster_server
# ./installvcs
Do you agree with the terms of the End User License Agreement
as specified in the cluster_server/EULA/<lang>/EULA_VCS_Ux_6.0.pdf
file present on media? [y,n,q,?] y
1 Installs only the minimal required VCS packages that provide basic
functionality of the product.
You must choose this option to configure any optional VCS feature.
5 Enter the names of the systems where you want to install VCS.
For a single-node VCS installation, enter one name for the system.
See “Creating a single-node cluster using the installer program” on page 259.
The installer does the following for the systems:
■ Checks that the local system that runs the installer can communicate with
remote systems.
If the installer finds ssh binaries, it confirms that ssh can operate without
requests for passwords or passphrases.
If the default communication method ssh fails, the installer attempts to
use rsh.
■ Makes sure the systems use one of the supported operating systems.
■ Makes sure that the systems have the required operating system patches.
If the installer reports that any of the patches are not available, install
the patches on the system before proceeding with the VCS installation.
■ Checks for the required file system space and makes sure that any
processes that are running do not conflict with the installation.
If requirements for installation are not met, the installer stops and
indicates the actions that you must perform to proceed with the process.
■ Checks whether any of the packages already exists on a system.
If the current version of any package exists, the installer removes the
package from the installation list for the system. If a previous version of
any package exists, the installer replaces the package with the current
version.
6 Review the list of packages and patches that the installer would install on
each node.
The installer installs the VCS packages and patches on the systems galaxy
and nebula.
7 Select the license type.
Based on what license type you want to use, enter one of the following:
1 You must have a valid license key. Enter the license key at the prompt:
If you plan to configure global clusters, enter the corresponding license keys
when the installer prompts for additional licenses.
2 The keyless license option enables you to install VCS without entering a key.
However, to ensure compliance, keyless licensing requires that you manage
the systems with a management server.
http://go.symantec.com/sfhakeyless
The installer registers the license and completes the installation process.
8 To install the Global Cluster Option, enter y at the prompt.
9 To configure VCS, enter y at the prompt. You can also configure VCS later.
The installer provides an option to collect data about the installation process
each time you complete an installation, upgrade, configuration, or uninstall
of the product. The installer transfers the contents of the install log files to
an internal Symantec site. The information is used only to gather metrics
about how you use the installer. No personal customer data is collected, and
no information will be shared with any other parties. Information gathered
may include the product and the version installed or upgraded, how many
systems were installed, and the time spent in any section of the install process.
11 The installer checks for online updates and provides an installation summary.
12 After the installation, note the location of the installation log files, the
summary file, and the response file for future reference.
The files provide useful information that can assist you with the
configuration and can also assist future configurations.
summary file Lists the packages that are installed on each system.
response file Contains the installation information that can be used to perform
unattended or automated installations on other systems.
# svcs svc:/application/pkg/system-repository
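When the package repository service is available, the output resembles the following sketch; the time stamp shown is illustrative:
STATE          STIME    FMRI
online         10:05:54 svc:/application/pkg/system-repository:default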
■ VRTSvcs
■ VRTSvcsag
■ VRTSvcsea
If you have installed VCS in a virtual environment that is not SCSI-3 PR compliant,
you can configure non-SCSI-3 server-based fencing.
Figure 7-1 illustrates a high-level flowchart to configure I/O fencing for the VCS
cluster.
(The flowchart shows the preparatory and configuration tasks for disk-based fencing, which include checking disks for I/O fencing compliance, and for server-based fencing with CP servers, including the path for VCS in non-SCSI-3 compliant virtual environments.)
After you perform the preparatory tasks, you can use any of the following methods
to configure I/O fencing:
■ Using the installvcs program:
See “Setting up disk-based I/O fencing using installvcs program” on page 127.
See “Setting up server-based I/O fencing using installvcs program” on page 134.
■ Using response files:
See “Response file variables to configure disk-based I/O fencing” on page 160.
See “Response file variables to configure server-based I/O fencing” on page 162.
You can also migrate from one I/O fencing configuration to another.
See the Veritas Cluster Server Administrator's Guide for more details.
■ Configure the CP server cluster in secure mode.
See “Configuring the CP server cluster in secure mode” on page 88.
■ Set up shared storage for the CP server database.
See “Setting up shared storage for the CP server database” on page 89.
3 Decide whether you want to configure the CP server cluster in secure mode.
Symantec recommends configuring the CP server cluster in secure mode to
secure the communication between the CP server and its clients (VCS clusters).
It also secures the HAD communication on the CP server cluster.
4 Set up the hardware and network for your CP server.
CP server setup uses a single system: Install and configure VCS to create a single-node VCS cluster.
During installation, make sure to select all packages for installation. The VRTScps package is installed only if you select to install all packages.
See “Configuring the CP server using the configuration utility” on page 90.
CP server setup uses multiple systems: Install and configure SFHA to create an SFHA cluster. This makes the CP server highly available.
Meet the following requirements for CP server:
■ During installation, make sure to select all packages for installation. The VRTScps package is installed only if you select to install all packages.
■ During configuration, configure disk-based fencing (scsi3 mode).
See the Veritas Storage Foundation and High Availability Installation Guide for instructions on installing and configuring SFHA.
Note: If you already configured the CP server cluster in secure mode during the
VCS configuration, then skip this section.
# installvcs -security
If you have SFHA installed on the CP server, run the following command:
# installsfha -security
For CP servers on a single-node VCS cluster: See “To configure the CP server on a single-node VCS cluster” on page 90.
For CP servers on an SFHA cluster: See “To configure the CP server on an SFHA cluster” on page 94.
# /opt/VRTScps/bin/configure_cps.pl
5 Enter valid virtual IP addresses on which the CP server process should depend:
■ Enter the number of virtual IP addresses you want to configure:
7 Choose whether the communication between the CP server and the VCS
clusters has to be made secure.
If you have not configured the CP server cluster in secure mode, enter n at
the prompt.
Warning: If the CP server cluster is not configured in secure mode, and if you
enter y, then the script immediately exits. You must configure the CP server
cluster in secure mode and rerun the CP server configuration script.
8 Enter the absolute path of the CP server database or press Enter to accept
the default value (/etc/VRTScps/db).
10 The configuration utility proceeds with the configuration process, and creates
a vxcps.conf configuration file.
11 Enter the number of NIC resources that you want to configure. You must use
a public NIC.
Answer the following questions for each NIC resource that you want to
configure.
12 Enter a valid network interface for the virtual IP address for the CP server
process.
13 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the
virtual IP 10.209.83.85 [1 to 2] : 1
Enter the NIC resource you want to associate with the
virtual IP 10.209.83.87 [1 to 2] : 2
If you entered an IPv6 address, enter the prefix details at the prompt.
16 After the configuration process has completed, a success message appears.
For example:
17 Run the hagrp -state command to ensure that the CPSSG service group
has been added.
For example:
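The output resembles the following sketch; the CP server system name cps1 is a placeholder, and the value depends on your configuration:
# hagrp -state CPSSG
#Group     Attribute    System    Value
CPSSG      State        cps1      |ONLINE|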
# /opt/VRTScps/bin/configure_cps.pl [-n]
6 Enter valid virtual IP addresses on which the CP server process should depend:
■ Enter the number of virtual IP addresses you want to configure:
7 Enter the CP server port number or press Enter to accept the default value
(14250).
8 Choose whether the communication between the CP server and the VCS
clusters has to be made secure.
If you have not configured the CP server cluster in secure mode, enter n at
the prompt.
Warning: If the CP server cluster is not configured in secure mode, and if you
enter y, then the script immediately exits. You must configure the CP server
cluster in secure mode and rerun the CP server configuration script.
9 Enter the absolute path of the CP server database or press Enter to accept
the default value (/etc/VRTScps/db).
11 The configuration utility proceeds with the configuration process, and creates
a vxcps.conf configuration file on each node.
The following output is for one node:
12 Enter the number of NIC resources that you want to configure. You must use
a public NIC.
Answer the following questions for each NIC resource that you want to
configure.
13 Confirm whether you use the same NIC name for the virtual IP on all the
systems in the cluster.
14 Enter a valid network interface for the virtual IP address for the CP server
process.
15 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the
virtual IP 10.209.83.85 [1 to 2] : 1
Enter the NIC resource you want to associate with the
virtual IP 10.209.83.87 [1 to 2] : 2
If you entered an IPv6 address, enter the prefix details at the prompt.
18 Enter the name of the disk group for the CP server database.
Enter the name of diskgroup for cps database :
cps_dg
19 Enter the name of the volume that is created on the above disk group.
Enter the name of volume created on diskgroup cps_dg :
cps_volume
21 Run the hagrp -state command to ensure that the CPSSG service group
has been added.
For example:
# hastop -local
2 Edit the main.cf file to add the CPSSG service group on any node. Use the
CPSSG service group in the main.cf as an example:
See “Sample configuration files for CP server” on page 252.
Customize the resources under the CPSSG service group as per your
configuration.
3 Verify the main.cf file using the following command:
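A minimal sketch of such a verification, assuming the standard VCS configuration directory:
# hacf -verify /etc/VRTSvcs/conf/config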
■ For a CP server cluster which is not configured in secure mode, edit the
/etc/vxcps.conf file to set security=0.
# hastart
2 Run the cpsadm command to check if the vxcpserv process is listening on the
configured Virtual IP.
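A sketch of one such check follows; cp_server is a placeholder for the virtual IP address or virtual hostname that you configured for the CP server:
# cpsadm -s cp_server -a ping_cps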
■ Specify the systems where you want to configure VCS.
See “Specifying systems for configuration” on page 105.
■ Configure the virtual IP address of the cluster (optional).
See “Configuring the virtual IP of the cluster” on page 109.
■ Configure the cluster in secure mode (optional).
See “Configuring the cluster in secure mode” on page 111.
■ Add VCS users (required if you did not configure the cluster in secure mode).
See “Adding VCS users” on page 116.
■ Configure SMTP email notification (optional).
See “Configuring SMTP email notification” on page 117.
■ Configure SNMP trap notification (optional).
See “Configuring SNMP trap notification” on page 119.
■ Configure global clusters (optional).
See “Configuring global clusters” on page 121.
Note: You must have enabled the Global Cluster Option when you installed VCS.
Note: If you want to reconfigure VCS, before you start the installer you must stop
all the resources that are under VCS control using the hastop command or the
hagrp -offline command.
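For example, a sketch of how you might take a service group offline or stop VCS before reconfiguration; the group and system names are placeholders:
# hagrp -offline service_group -sys galaxy
# hastop -all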
# ./installer
The installer starts the product installation program with a copyright message
and specifies the directory where the logs are created.
3 From the opening Selection Menu, choose: C for "Configure an Installed
Product."
4 From the displayed list of products to configure, choose the corresponding
number for your product:
Veritas Cluster Server
To configure VCS using the installvcs program
1 Confirm that you are logged in as the superuser.
2 Start the installvcs program.
# /opt/VRTS/install/installvcs -configure
The installer begins with a copyright message and specifies the directory
where the logs are created.
2 Review the output as the installer verifies the systems you specify.
The installer does the following tasks:
■ Checks that the local node running the installer can communicate with
remote nodes
If the installer finds ssh binaries, it confirms that ssh can operate without
requests for passwords or passphrases.
■ Makes sure that the systems are running with the supported operating
system
■ Makes sure that the installer is started from the global zone
■ Checks whether VCS is installed
■ Exits if VCS 6.0 PR1 is not installed
3 Review the installer output about the I/O fencing configuration and confirm
whether you want to configure fencing in enabled mode.
2 If you chose option 1, enter the network interface card details for the private
heartbeat links.
The installer discovers and lists the network interface cards.
Answer the installer prompts. The following example shows different NICs
based on architecture:
■ For Solaris SPARC:
You must not enter the network interface card that is used for the public
network (typically hme0 on SPARC systems).
Enter the NIC for the first private heartbeat link on galaxy:
[b,q,?] qfe0
Would you like to configure a second private heartbeat link?
[y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on galaxy:
[b,q,?] qfe1
Would you like to configure a third private heartbeat link?
[y,n,q,b,?](n)
Enter the NIC for the first private heartbeat link on galaxy:
[b,q,?] e1000g1
Would you like to configure a second private heartbeat link?
[y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on galaxy:
[b,q,?] e1000g2
Would you like to configure a third private heartbeat link?
[y,n,q,b,?](n)
3 If you chose option 2, enter the NIC details for the private heartbeat links.
This step uses examples such as private_NIC1 or private_NIC2 to refer to the
available names of the NICs.
4 Choose whether to use the same NIC details to configure private heartbeat
links on other systems.
Are you using the same NICs for private heartbeat links on all
systems? [y,n,q,b,?] (y)
If you want to use the NIC details that you entered for galaxy, make sure the
same NICs are available on each system. Then, enter y at the prompt.
For LLT over UDP, if you want to use the same NICs on other systems, you
still must enter unique IP addresses on each NIC for other systems.
If the NIC device names are different on some of the systems, enter n. Provide
the NIC details for each system as the program prompts.
5 If you chose option 3, the installer detects NICs on each system and network
links, and sets link priority.
If the installer fails to detect heartbeat links or fails to find any high-priority
links, then choose option 1 or option 2 to manually configure the heartbeat
links.
See step 2 for option 1, or step 3 for option 2.
6 Enter a unique cluster ID:
3 Confirm whether you want to use the discovered public NIC on the first
system.
Do one of the following:
■ If the discovered NIC is the one to use, press Enter.
■ If you want to use a different NIC, type the name of a NIC to use and press
Enter.
4 Confirm whether you want to use the same public NIC on all nodes.
Do one of the following:
■ If all nodes use the same public NIC, enter y.
■ If unique NICs are used, enter n and enter a NIC for each node.
NIC: hme0
IP: 192.168.1.16
Netmask: 255.255.240.0
■ Enter the prefix for the virtual IPv6 address you provided. For
example:
NIC: hme0
IP: 2001:454e:205a:110:203:baff:feee:10
Prefix: 64
# /opt/VRTS/install/installvcs -securitytrust
The installer specifies the location of the log files. It then lists the cluster
information such as cluster name, cluster ID, node names, and service groups.
3 When the installer prompts you for the broker information, specify the IP
address, port number, and the data directory for which you want to establish
trust relationship with the broker.
Specify a valid data directory or press Enter to accept the default directory.
Are you sure that you want to setup trust for the VCS cluster
with the broker 15.193.97.204 and port 14545? [y,n,q] y
The installer sets up trust relationship with the broker for all nodes in
the cluster and displays a confirmation.
The installer specifies the location of the log files, summary file, and
response file and exits.
■ If you entered incorrect details for broker IP address, port number, or
directory name, the installer displays an error. It specifies the location of
the log files, summary file, and response file and exits.
Task Reference
Configure security on one node See “Configuring the first node” on page 113.
Configure security on the See “Configuring the remaining nodes” on page 114.
remaining nodes
# /opt/VRTS/install/installvcs -securityonenode
The installer lists information about the cluster, nodes, and service groups.
If VCS is not configured or if VCS is not running on all nodes of the cluster,
the installer prompts whether you want to continue configuring security. It
then prompts you for the node that you want to configure.
VCS is not running on all systems in this cluster. All VCS systems
must be in RUNNING state. Do you want to continue? [y,n,q] (n) y
Warning: All configurations about cluster users are deleted when you configure
the first node. You can use the /opt/VRTSvcs/bin/hauser command to create
cluster users manually.
3 The installer completes the secure configuration on the node. It specifies the
location of the security configuration files and prompts you to copy these
files to the other nodes in the cluster. The installer also specifies the location
of log files, summary file, and response file.
4 Copy the security configuration files from the /var/VRTSvcs/vcsauth/bkup
directory to temporary directories on the other nodes in the cluster.
# /opt/VRTS/install/installvcs -securityonenode
The installer lists information about the cluster, nodes, and service groups.
If VCS is not configured or if VCS is not running on all nodes of the cluster,
the installer prompts whether you want to continue configuring security. It
then prompts you for the node that you want to configure. Enter 2.
VCS is not running on all systems in this cluster. All VCS systems
must be in RUNNING state. Do you want to continue? [y,n,q] (n) y
The installer completes the secure configuration on the node. It specifies the
location of log files, summary file, and response file.
# /opt/VRTSvcs/bin/haconf -makerw
# /opt/VRTSvcs/bin/CmdServer -stop
cluster clus1 (
SecureClus = 1
)
# touch /etc/VRTSvcs/conf/config/.secure
6 On the first node, start VCS. Then start VCS on the remaining nodes.
# /opt/VRTSvcs/bin/hastart
# /opt/VRTSvcs/bin/CmdServer
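To confirm that the cluster now runs in secure mode, you can check the SecureClus attribute; this is a sketch, and a value of 1 indicates that secure mode is enabled:
# /opt/VRTSvcs/bin/haclus -value SecureClus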
# /opt/VRTSvcs/bin/haconf -makerw
Enter Again:*******
Enter the privilege for user smith (A=Administrator, O=Operator,
G=Guest): [b,q,?] a
6 Review the summary of the newly added users and confirm the information.
If you do not want to configure the SMTP notification, you can skip to the
next configuration option.
See “Configuring SNMP trap notification” on page 119.
3 Provide information to configure SMTP notification.
Provide the following information:
■ Enter the NIC information.
NIC: hme0
If you skip this option and if you had installed a valid HA/DR license, the
installer presents you with an option to configure this cluster as a global cluster.
If you did not install an HA/DR license, the installer proceeds to configure
VCS based on the configuration details you provided.
See “Configuring global clusters” on page 121.
NIC: hme0
Note: If you installed an HA/DR license to set up a replicated data cluster or campus
cluster, skip this installer option.
If you skip this option, the installer proceeds to configure VCS based on the
configuration details you provided.
NIC: hme0
IP: 192.168.1.16
Netmask: 255.255.240.0
NIC: hme0
IP: 2001:454e:205a:110:203:baff:feee:10
Prefix: 64
2 Review the output as the installer stops various processes and performs the
configuration. The installer then restarts VCS and its related processes.
3 Enter y at the prompt to send the installation information to Symantec.
4 After the installer configures VCS successfully, note the location of summary,
log, and response files that installer creates.
The files provide useful information that can assist you with the
configuration and can also assist future configurations.
# vxlicrep
Features :=
Platform = Solaris
Version = 6.0 PR1
Tier = 0
Reserved = 0
Mode = VCS
# vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
# vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
4 Make sure demo licenses are replaced on all cluster nodes before starting
VCS.
# vxlicrep
# hastart
Chapter 9
Configuring VCS clusters
for data integrity
This chapter includes the following topics:
# devfsadm
2 To initialize the disks as VxVM disks, use one of the following methods:
■ Use the interactive vxdiskadm utility to initialize the disks as VxVM disks.
# vxdisksetup -i device_name
# vxdisksetup -i c2t13d0
Repeat this command for each disk you intend to use as a coordinator
disk.
Note: The installer stops and starts VCS to complete I/O fencing configuration.
Make sure to unfreeze any frozen VCS service groups in the cluster for the installer
to successfully stop VCS.
# /opt/VRTS/install/installvcs -fencing
The installvcs program starts with a copyright message and verifies the
cluster information.
Note the location of log files which you can access in the event of any problem
with the configuration process.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether VCS 6.0 PR1 is configured properly.
3 Review the I/O fencing configuration options that the program presents.
Type 2 to configure disk-based I/O fencing.
■ If the check fails, configure and enable VxVM before you repeat this
procedure.
■ If the check passes, then the program prompts you for the coordinator
disk group information.
5 Choose whether to use an existing disk group or create a new disk group to
configure as the coordinator disk group.
The program lists the available disk group names and provides an option to
create a new disk group. Perform one of the following:
■ To use an existing disk group, enter the number corresponding to the disk
group at the prompt.
The program verifies whether the disk group you chose has an odd number
of disks and that the disk group has a minimum of three disks.
■ To create a new disk group, perform the following steps:
■ Enter the number corresponding to the Create a new disk group option.
The program lists the available disks that are in the CDS disk format
in the cluster and asks you to choose an odd number of disks with at
least three disks to be used as coordinator disks.
Symantec recommends that you use three disks as coordination points
for disk-based I/O fencing.
If fewer VxVM CDS disks are available than required, the installer
asks whether you want to initialize more disks as VxVM disks. Choose
the disks you want to initialize as VxVM disks and then use them to
create a new disk group.
■ Enter the numbers corresponding to the disks that you want to use as
coordinator disks.
■ Enter the disk group name.
6 Verify that the coordinator disks you chose meet the I/O fencing requirements.
You must verify that the disks are SCSI-3 PR compatible using the vxfentsthdw
utility and then return to this configuration program.
See “Checking shared disks for I/O fencing” on page 130.
7 After you confirm the requirements, the program creates the coordinator
disk group with the information you provided.
8 Enter the I/O fencing disk policy that you chose to use. For example:
9 Verify and confirm the I/O fencing configuration information that the installer
summarizes.
10 Review the output as the configuration program does the following:
■ Stops VCS and I/O fencing on each node.
■ Configures disk-based I/O fencing and starts the I/O fencing process.
■ Updates the VCS configuration file main.cf if necessary.
■ Copies the /etc/vxfenmode file to a date and time suffixed file
/etc/vxfenmode-date-time. This backup file is useful if any future fencing
configuration fails.
■ Starts VCS on each node to make sure that VCS is cleanly configured
to use the I/O fencing feature.
11 Review the output as the configuration program displays the location of the
log files, the summary files, and the response files.
12 Configure the Coordination Point agent to monitor the coordinator disks.
3 Scan all disk drives and their attributes, update the VxVM device list, and
reconfigure DMP with the new devices. Type:
# vxdisk scandisks
See the Veritas Volume Manager documentation for details on how to add
and configure disks.
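As a quick check, you can list the disks that VxVM now sees along with their disk groups; this is a sketch, and the device names depend on your hardware:
# vxdisk -o alldgs list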
To confirm that the nodes see the same disk, run the vxfenadm -i command
with the path to the disk on each node:
# vxfenadm -i diskpath
For example, from node A:
# vxfenadm -i /dev/rdsk/c1t1d0s2
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a
The same serial number information should appear when you enter the
equivalent command on node B using the /dev/rdsk/c2t1d0s2 path.
On a disk from another manufacturer, Hitachi Data Systems, the output is
different and may resemble:
# vxfenadm -i /dev/rdsk/c3t1d2s2
Vendor id : HITACHI
Product id : OPEN-3 -SUN
Revision : 0117
Serial Number : 0401EB6F0002
If the utility does not show a message that states a disk is ready, the verification
has failed. Failure of verification can be the result of an improperly configured
disk array. The failure can also be due to a bad disk.
If the failure is due to a bad disk, remove and replace it. The vxfentsthdw utility
indicates that a disk can be used for I/O fencing with a message that states the
disk is ready to be configured for I/O fencing on the node.
For more information on how to replace coordinator disks, refer to the Veritas
Cluster Server Administrator's Guide.
To test the disks using vxfentsthdw utility
1 Make sure system-to-system communication functions properly.
See “Setting up inter-system communication” on page 281.
2 On one node, start the vxfentsthdw utility:
# vxfentsthdw [-n]
3 The script warns that the tests overwrite data on the disks. After you review
the overview and the warning, confirm to continue the process and enter the
node names.
Warning: The tests overwrite and destroy data on the disks unless you use
the -r option.
4 Enter the names of the disks that you want to check. Each node may know
the same disk by a different name:
If the serial numbers of the disks are not identical, then the test terminates.
5 Review the output as the utility performs the checks and reports its activities.
6 If a disk is ready for I/O fencing on each node, the utility reports success for
each node. For example, the utility displays the following message for the
node galaxy.
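An illustrative form of that message (the exact wording can vary by release; the
disk path is taken from the earlier examples):
The disk /dev/rdsk/c1t1d0s2 is ready to be configured for I/O fencing on
node galaxy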
7 Run the vxfentsthdw utility for each disk you intend to verify.
1 Start the installvcs program with the -fencing option.
# /opt/VRTS/install/installvcs -fencing
The installvcs program starts with a copyright message and verifies the
cluster information.
2 Confirm that you want to proceed with the I/O fencing configuration at the
prompt.
The program checks that the local node running the script can communicate
with remote nodes and checks whether VCS 6.0 PR1 is configured properly.
3 Review the I/O fencing configuration options that the program presents.
Type 1 to configure server-based I/O fencing.
4 Enter n to confirm that your storage environment does not support SCSI-3
PR.
5 Confirm that you want to proceed with the non-SCSI-3 I/O fencing
configuration at the prompt.
6 Enter the number of CP server coordination points you want to use in your
setup.
7 Enter the following details for each CP server:
■ Enter the virtual IP address or the fully qualified host name.
■ Enter the port address on which the CP server listens for connections.
The default value is 14250. You can enter a different port address. Valid
values are between 49152 and 65535.
The installer assumes that these values are identical from the view of the
VCS cluster nodes that host the applications for high availability.
8 Verify and confirm the CP server information that you provided.
9 Verify and confirm the VCS cluster configuration information.
Review the output as the installer performs the following tasks:
■ Updates the CP server configuration files on each CP server with the
following details:
■ Registers each node of the VCS cluster with the CP server.
■ Adds CP server user to the CP server.
■ Adds VCS cluster to the CP server user.
■ Updates the following configuration files on each node of the VCS cluster
■ /etc/vxfenmode file
■ /etc/vxenviron file
■ /etc/llttab file
10 Review the output as the installer stops VCS on each node, starts I/O fencing
on each node, updates the VCS configuration file main.cf, and restarts VCS
with non-SCSI-3 server-based fencing.
Confirm whether you want to configure the Coordination Point agent on the VCS cluster.
11 Confirm whether you want to send the installation information to Symantec.
12 After the installer configures I/O fencing successfully, note the location of
summary, log, and response files that installer creates.
The files provide useful information which can assist you with the
configuration, and can also assist future configurations.
1 Make sure that the cluster is running with I/O fencing set up:
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haconf -makerw
■ Set the value of the system-level attribute FencingWeight for each node
in the cluster.
For example, in a two-node cluster, where you want to assign galaxy five
times more weight compared to nebula, run the following commands:
# haconf -makerw
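A sketch of the weight assignments that follow (the values 50 and 10 are
illustrative):
# hasys -modify galaxy FencingWeight 50
# hasys -modify nebula FencingWeight 10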
■ Set the value of the group-level attribute Priority for each service group.
For example, run the following command:
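A sketch of that command, where service_group is a placeholder for your group
name and 1 is an illustrative priority value:
# hagrp -modify service_group Priority 1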
Make sure that you assign a parent service group an equal or lower priority
than its child service group. If the parent and the child service groups are
hosted in different subclusters, the subcluster that hosts the child service
group gets higher preference.
■ Save the VCS configuration.
5 To view the fencing node weights that are currently set in the fencing driver,
run the following command:
# vxfenconfig -a
1 Make sure that the cluster is running with I/O fencing set up:
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
3 To disable preferred fencing and use the default race policy, set the value of
the cluster-level attribute PreferredFencingPolicy as Disabled.
# haconf -makerw
# haclus -modify PreferredFencingPolicy Disabled
# haconf -dump -makero
Section 4
Installation using response
files
3 Copy the response file to one of the cluster systems where you want to install
VCS.
See “Sample response file for installing VCS” on page 144.
4 Edit the values of the response file variables as necessary.
See “Response file variables to install VCS” on page 142.
5 Mount the product disc and navigate to the directory that contains the
installation program.
6 Start the installation from the system to which you copied the response file.
For example:
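A sketch of the command, assuming the response file was copied to
/tmp/response_file:
# ./installvcs -responsefile /tmp/response_file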
Sample response file for installing VCS
#
# Configuration Values:
#
our %CFG;
$CFG{accepteula}=1;
$CFG{opt}{install}=1;
$CFG{opt}{installrecpkgs}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
1;
Chapter 11
Performing automated VCS
configuration
This chapter includes the following topics:
Table 11-1 Response file variables specific to configuring Veritas Cluster Server
Note that some optional variables make it necessary to define other optional
variables. For example, all the variables that are related to the cluster service
group (csgnic, csgvip, and csgnetmask) must be defined if any are defined. The
same is true for the SMTP notification (smtpserver, smtprecp, and smtprsev), the
SNMP trap notification (snmpport, snmpcons, and snmpcsev), and the Global
Cluster Option (gconic, gcovip, and gconetmask).
Table 11-2 lists the response file variables that specify the required information
to configure a basic VCS cluster.
Table 11-2 Response file variables specific to configuring a basic VCS cluster
Table 11-3 lists the response file variables that specify the required information
to configure LLT over Ethernet.
Table 11-3 Response file variables specific to configuring private LLT over
Ethernet
Table 11-4 lists the response file variables that specify the required information
to configure LLT over UDP.
Table 11-4 Response file variables specific to configuring LLT over UDP
Table 11-5 lists the response file variables that specify the required information
to configure a virtual IP for the VCS cluster.
Table 11-5 Response file variables specific to configuring virtual IP for VCS
cluster
Table 11-6 lists the response file variables that specify the required information
to configure the VCS cluster in secure mode.
Table 11-6 Response file variables specific to configuring VCS cluster in secure
mode
Table 11-7 lists the response file variables that specify the required information
to configure VCS users.
Table 11-8 lists the response file variables that specify the required information
to configure VCS notifications using SMTP.
Table 11-9 lists the response file variables that specify the required information
to configure VCS notifications using SNMP.
Table 11-10 lists the response file variables that specify the required information
to configure VCS global clusters.
Table 11-10 Response file variables specific to configuring VCS global clusters
Sample response file for configuring VCS
Note: For Solaris x64 Platform Edition, read the values of NICs as e1000g0,
e1000g2, and e1000g3 instead of hme0, qfe0, qfe1 in the sample response file.
#
# Configuration Values:
#
our %CFG;
$CFG{opt}{configure}=1;
$CFG{opt}{gco}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_allowcomms}=1;
$CFG{vcs_clusterid}=13221;
$CFG{vcs_clustername}="clus1";
$CFG{vcs_csgnetmask}="255.255.255.0";
$CFG{vcs_csgnic}{all}="hme0";
$CFG{vcs_csgvip}="10.10.12.1";
$CFG{vcs_gconetmask}="255.255.255.0";
$CFG{vcs_gcovip}="10.10.12.1";
$CFG{vcs_lltlink1}{galaxy}="qfe0";
$CFG{vcs_lltlink1}{nebula}="qfe0";
$CFG{vcs_lltlink2}{galaxy}="qfe1";
$CFG{vcs_lltlink2}{nebula}="qfe1";
$CFG{vcs_smtprecp}=[ qw([email protected]) ];
$CFG{vcs_smtprsev}=[ qw(SevereError) ];
$CFG{vcs_smtpserver}="smtp.symantecexample.com";
$CFG{vcs_snmpcons}=[ qw(neptune) ];
$CFG{vcs_snmpcsev}=[ qw(SevereError) ];
$CFG{vcs_snmpport}=162;
1;
Chapter 12
Performing automated I/O
fencing configuration for
VCS
This chapter includes the following topics:
3 Copy the response file to one of the cluster systems where you want to
configure I/O fencing.
See “Sample response file for configuring disk-based I/O fencing” on page 161.
See “Sample response file for configuring server-based I/O fencing”
on page 164.
4 Edit the values of the response file variables as necessary.
See “Response file variables to configure disk-based I/O fencing” on page 160.
See “Response file variables to configure server-based I/O fencing” on page 162.
5 Start the configuration from the system to which you copied the response
file. For example:
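For instance, assuming the edited response file is at /tmp/response_file:
# /opt/VRTS/install/installvcs -responsefile /tmp/response_file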
Table 12-1 Response file variables specific to configuring disk-based I/O fencing
Note: You must define the fencing_dgname variable to use an existing disk group.
If you want to create a new disk group, you must use both the fencing_dgname
variable and the fencing_newdg_disks variable.
#
# Configuration Values:
#
our %CFG;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
Table 12-2 Coordination point server (CP server) based fencing response file
definitions
CFG {fencing_config_cpagent} Enter '1' or '0' depending upon whether you want to
configure the Coordination Point agent using the
installer or not.
CFG {fencing_cpagentgrp} Name of the service group which will have the
Coordination Point agent resource as part of it.
Note: This field is obsolete if the
fencing_config_cpagent field is given a value of
'0'.
CFG {fencing_ports} The port on which the CP server listens, for each virtual IP address or
fully qualified host name of the CP server.
CFG {fencing_scsi3_disk_policy} The disk policy that the customized fencing uses.
$CFG{fencing_config_cpagent}=0;
$CFG{fencing_cps}=[ qw(10.200.117.145) ];
$CFG{fencing_cps_vips}{"10.200.117.145"}=[ qw(10.200.117.145) ];
$CFG{fencing_dgname}="vxfencoorddg";
$CFG{fencing_disks}=[ qw(emc_clariion0_37 emc_clariion0_13) ];
$CFG{fencing_scsi3_disk_policy}="raw";
$CFG{fencing_ncp}=3;
$CFG{fencing_ndisks}=2;
$CFG{fencing_ports}{"10.200.117.145"}=14250;
$CFG{fencing_reusedg}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_clusterid}=1256;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_option}=1;
CFG {fencing_config_cpagent} Enter '1' or '0' depending upon whether you want to
configure the Coordination Point agent using the
installer or not.
CFG {fencing_cpagentgrp} Name of the service group which will have the
Coordination Point agent resource as part of it.
Note: This field is obsolete if the
fencing_config_cpagent field is given a value of
'0'.
$CFG{fencing_config_cpagent}=0;
$CFG{fencing_cps}=[ qw(10.198.89.251 10.198.89.252 10.198.89.253) ];
$CFG{fencing_cps_vips}{"10.198.89.251"}=[ qw(10.198.89.251) ];
$CFG{fencing_cps_vips}{"10.198.89.252"}=[ qw(10.198.89.252) ];
$CFG{fencing_cps_vips}{"10.198.89.253"}=[ qw(10.198.89.253) ];
$CFG{fencing_ncp}=3;
$CFG{fencing_ndisks}=0;
$CFG{fencing_ports}{"10.198.89.251"}=14250;
$CFG{fencing_ports}{"10.198.89.252"}=14250;
$CFG{fencing_ports}{"10.198.89.253"}=14250;
$CFG{non_scsi3_fencing}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_clusterid}=1256;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_option}=1;
Section 5
Post-installation tasks
■ About enabling LDAP authentication for clusters that run in secure mode
Figure 13-1 depicts the VCS cluster communication with the LDAP servers when
clusters run in secure mode.
The LDAP schema and syntax for LDAP commands (such as, ldapadd, ldapmodify,
and ldapsearch) vary based on your LDAP implementation.
Before adding the LDAP domain in Symantec Product Authentication Service,
note the following information about your LDAP environment:
■ The type of LDAP schema used (the default is RFC 2307)
■ User Object Class (the default is posixAccount)
■ User Object Attribute (the default is uid)
■ User Group Attribute (the default is gidNumber)
■ Group Object Class (the default is posixGroup)
■ Group Object Attribute (the default is cn)
■ Group GID Attribute (the default is gidNumber)
■ Group Membership Attribute (the default is memberUid)
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat showversion
vssat version: 6.1.6.0
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addldapdomain \
--domainname "MYENTERPRISE.symantecdomain.com"\
--server_url "ldap://my_openldap_host.symantecexample.com"\
--user_base_dn "ou=people,dc=symantecdomain,dc=myenterprise,dc=com"\
--user_attribute "cn" --user_object_class "account"\
--user_gid_attribute "gidNumber"\
--group_base_dn "ou=group,dc=symantecdomain,dc=myenterprise,dc=com"\
--group_attribute "cn" --group_object_class "posixGroup"\
--group_gid_attribute "member"\
--admin_user "cn=manager,dc=symantecdomain,dc=myenterprise,dc=com"\
--admin_user_password "password" --auth_type "FLAT"
2 Verify that you can successfully authenticate an LDAP user on the VCS nodes.
You must have a valid LDAP user ID and password to run the command. In
the following example, authentication is verified for the MYENTERPRISE
domain for the LDAP user, vcsadmin1.
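A hedged sketch of that verification; the vssat authenticate option names shown
here are assumptions, so check the vssat help on your system:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat authenticate \
--domain ldap:MYENTERPRISE.symantecdomain.com --prplname vcsadmin1 \
--broker galaxy:14149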
# haconf -makerw
# hauser -add "CN=vcsadmin1/CN=people/\
DC=symantecdomain/DC=myenterprise/\
[email protected]" -priv Administrator
# haconf -dump -makero
If you want to enable group-level authentication, you must run the following
command:
# hauser -addpriv \
ldap_group@ldap_domain AdministratorGroup
# cat /etc/VRTSvcs/conf/config/main.cf
...
...
cluster clus1 (
SecureClus = 1
Administrators = {
"CN=vcsadmin1/CN=people/DC=symantecdomain/DC=myenterprise/
[email protected]" }
AdministratorGroups = {
"CN=symantecusergroups/DC=symantecdomain/DC=myenterprise/
[email protected] " }
)
...
...
# export VCS_DOMAIN=myenterprise.symantecdomain.com
# export VCS_DOMAINTYPE=ldap
Similarly, you can use the same LDAP user credentials to log on to the VCS
node using the VCS Cluster Manager (Java Console).
7 To enable LDAP authentication on other nodes in the cluster, perform the
procedure on each of the nodes in the cluster.
To enable Windows Active Directory authentication for clusters that run in secure
mode
1 Run the LDAP configuration tool atldapconf using the -d option. The -d option
discovers and retrieves an LDAP properties file which is a prioritized attribute
list.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -d \
-s domain_controller_name_or_ipaddress \
-u domain_user -g domain_group
For example:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-d -s 192.168.20.32 -u Administrator -g "Domain Admins"
Search User provided is invalid or Authentication is required to
proceed further.
Please provide authentication information for LDAP server.
2 Run the LDAP configuration tool atldapconf using the -c option. The -c option
creates a CLI file to add the LDAP domain.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-c -d windows_domain_name
For example:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-c -d symantecdomain.com
Attribute list file not provided, using default AttributeList.txt.
CLI file name not provided, using default CLI.txt.
3 Run the LDAP configuration tool atldapconf using the -x option. The -x option
reads the CLI file and executes the commands to add a domain to the AT.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -x
4 List the LDAP domains to verify that the Windows Active Directory server
integration is complete.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat listldapdomains
# export VCS_DOMAIN=symantecdomain.com
# export VCS_DOMAINTYPE=ldap
Similarly, you can use the same LDAP user credentials to log on to the VCS
node using the VCS Cluster Manager (Java Console).
7 To enable LDAP authentication on other nodes in the cluster, perform the
procedure on each of the nodes in the cluster.
root::::type=role;auths=solaris.*;profiles=All;audit_flags=lo\
:no;lock_after_retries=no;min_label=admin_low;clearance=admin_high
3. Assign the root role to a local user who was unassigned the role.
For more information, see the Oracle documentation on Oracle Solaris 11 operating
system.
Chapter 14
Verifying the VCS
installation
This chapter includes the following topics:
cat /etc/vx/.uuids/clusuuid
Verifying LLT
Use the lltstat command to verify that links are active for LLT. If LLT is
configured correctly, this command shows all the nodes in the cluster. The
command also returns information about the links for LLT for the node on which
you typed the command.
Refer to the lltstat(1M) manual page for more information.
To verify LLT
1 Log in as superuser on the node galaxy.
2 Run the lltstat command on the node galaxy to view the status of LLT.
lltstat -n
Each node has two links and each node is in the OPEN state. The asterisk (*)
denotes the node on which you typed the command.
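Illustrative output for the document's two-node cluster (the column layout may
differ slightly by release):
LLT node information:
Node State Links
* 0 galaxy OPEN 2
1 nebula OPEN 2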
If LLT does not operate, the command does not return any LLT links
information. If only one network is connected, the command returns the
following LLT statistics information:
lltstat -n
5 To view additional information about LLT, run the lltstat -nvv command
on each node.
For example, run the following command on the node galaxy in a two-node
cluster:
The command reports the status on the two active nodes in the cluster, galaxy
and nebula.
For each correctly configured node, the information must show the following:
■ A state of OPEN
■ A status for each link of UP
■ An address for each link
However, the output in the example shows different details for the node
nebula. The private network connection is possibly broken or the information
in the /etc/llttab file may be incorrect.
6 To obtain information about the ports open for LLT, type lltstat -p on any
node.
For example, type lltstat -p on the node galaxy in a two-node cluster:
lltstat -p
Verifying GAB
Verify the GAB operation using the gabconfig -a command. This command
returns the GAB port membership information.
The ports indicate the following:
Port b ■ Indicates that the I/O fencing driver is connected to GAB port b.
Note: Port b appears in the gabconfig command output only if you had
configured I/O fencing after you configured VCS.
■ gen a23da40d is a randomly generated number
■ membership 01 indicates that nodes 0 and 1 are connected
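Illustrative gabconfig -a output for such a two-node cluster (the gen values are
randomly generated and will differ on your systems):
GAB Port Memberships
===================================
Port a gen a36e0003 membership 01
Port b gen a23da40d membership 01
Port h gen fd570002 membership 01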
For more information on GAB, refer to the Veritas Cluster Server Administrator's
Guide.
To verify GAB
1 To verify that GAB operates, type the following command on each node:
/sbin/gabconfig -a
Note that port b appears in the gabconfig command output only if you
had configured I/O fencing. You can also use the vxfenadm -d command
to verify the I/O fencing configuration.
■ If GAB does not operate, the command does not return any GAB port
membership information:
■ If only one network is connected, the command returns the following GAB
port membership information:
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A galaxy RUNNING 0
A nebula RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
Note: The example in the following procedure is for SPARC. x64 clusters have
different command output.
# hasys -display
The example shows the output when the command is run on the node galaxy.
The list continues with similar information for nebula (not shown) and any
other nodes in the cluster.
galaxy AgentsStopped 0
galaxy CPUUsage 0
galaxy ConfigInfoCnt 0
galaxy CurrentLimits
galaxy DiskHbStatus
galaxy DynamicLoad 0
galaxy EngineRestarted 0
galaxy FencingWeight 0
galaxy Frozen 0
galaxy GUIIPAddr
galaxy LLTNodeId 0
galaxy Limits
galaxy LoadTimeCounter 0
galaxy LoadWarningLevel 80
galaxy NoAutoDisable 0
galaxy NodeId 0
galaxy OnGrpCnt 1
galaxy PhysicalServer
galaxy SystemLocation
galaxy SystemOwner
galaxy SystemRecipients
galaxy TFrozen 0
galaxy TRSE 0
galaxy UpDownState Up
galaxy UserInt 0
galaxy UserStr
When you use the postcheck option, it can help you troubleshoot the following
VCS-related issues:
■ The heartbeat link does not exist.
■ The heartbeat link cannot communicate.
■ Volume Manager cannot start because the Volboot file is not loaded.
■ Volume Manager cannot start because no license exists.
■ Cluster Volume Manager cannot start because the CVM configuration is
incorrect in the main.cf file. For example, the AutoStartList value is missing
on the nodes.
■ Cluster Volume Manager cannot come online because the node ID in the
/etc/llthosts file is not consistent.
■ Cluster Volume Manager cannot come online because Vxfen is not started.
■ Cluster Volume Manager cannot start because GAB is not configured.
# cd /opt/VRTS/install
# ./uninstallvcs
The program specifies the directory where the logs are created, displays a
copyright notice, and shows a description of the cluster.
3 Enter the names of the systems from which you want to uninstall VCS.
The program performs system verification checks and asks to stop all running
VCS processes.
4 Enter y to stop all the VCS processes.
The program stops the VCS processes and proceeds with uninstalling the
software.
5 Review the output as the uninstallvcs program continues to do the following:
6 Review the output as the uninstaller stops processes, unloads kernel modules,
and removes the packages.
7 Note the location of summary, response, and log files that the uninstaller
creates after removing all the packages.
./uninstallvcs
Warning: Ensure that no VCS cluster (application cluster) uses the CP server that
you want to unconfigure.
Note: You must run the configuration utility only once per CP server (which can
be on a single-node VCS cluster or an SFHA cluster), when you want to remove
the CP server configuration.
[email protected] # /opt/VRTScps/bin/configure_cps.pl
3 Review the warning message and confirm that you want to unconfigure the
CP server.
Are you sure you want to bring down the cp server? (y/n)
(Default:n) :y
4 Review the screen output as the script performs the following steps to remove
the CP server configuration:
■ Stops the CP server
■ Removes the CP server from VCS configuration
7 Answer y to delete the CP server configuration file and log files.
8 Run the hagrp -state command to ensure that the CPSSG service group has
been removed from the node. For example:
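For instance (after a successful removal, the CPSSG group should no longer be
reported on the node):
# hagrp -state CPSSG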
Sample response file for uninstalling VCS
#
# Configuration Values:
#
our %CFG;
$CFG{opt}{uninstall}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
1;
Chapter 17
Manually uninstalling VCS
packages from non-global
zones
This chapter includes the following topics:
Task Reference
Set up Node B to be compatible with Node A. See "Setting up a node to join the
single-node cluster" on page 208.
■ Add Ethernet cards for private heartbeat network for Node B.
■ If necessary, add Ethernet cards for private heartbeat network for Node A.
■ Make the Ethernet cable connections between the two nodes.
See "Installing and configuring Ethernet cards for private network" on page 209.
Connect both nodes to shared storage. See "Configuring the shared storage" on page 210.
■ Bring up VCS on Node A.
■ Edit the configuration file.
See "Bringing up the existing node" on page 210.
If necessary, install VCS on Node B and add a license key. See "Installing the VCS
software manually when adding a node to a single node cluster" on page 211.
Start LLT and GAB on Node B. See "Starting LLT and GAB" on page 211.
■ Start LLT and GAB on Node A.
■ Copy UUID from Node A to Node B.
■ Restart VCS on Node A.
■ Modify service groups for two nodes.
See "Reconfiguring VCS on the existing node" on page 211.
■ If you renamed the LLT and GAB startup files, remove them.
# hastop -local
# sync;sync;init 0
ok boot -r
2 Log in as superuser.
3 Make the VCS configuration writable.
# haconf -makerw
# hagrp -list
8 If you have configured I/O Fencing, GAB, and LLT on the node, stop them.
# gabconfig -a
5 Copy the cluster UUID from the existing node to the new node:
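A sketch of that copy, using the placeholder names from the explanation below
(the uuidconfig.pl options shown reflect typical usage and may differ in your
release):
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy \
-from_sys node_name_in_running_cluster -to_sys new_sys1 ... new_sysn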
Where you are copying the cluster UUID from a node in the cluster
(node_name_in_running_cluster) to systems from new_sys1 through new_sysn
that you want to join the cluster.
6 Start VCS on Node A.
# hastart
# haconf -makerw
# hagrp -list
# gabconfig -a
# hastart
# hastatus
# hagrp -list
■ Updates the following configuration files and copies them on the new node:
/etc/llthosts
/etc/gabtab
/etc/VRTSvcs/conf/config/main.cf
■ Copies the following files from the existing cluster to the new node
/etc/vxfenmode
/etc/vxfendg
/etc/vx/.uuids/clusuuid
/etc/default/llt
/etc/default/gab
/etc/default/vxfen
■ Configures disk-based or server-based fencing depending on the fencing mode
in use on the existing cluster.
At the end of the process, the new node joins the VCS cluster.
Note: If you have configured server-based fencing on the existing cluster, make
sure that the CP server does not contain entries for the new node. If the CP server
already contains entries for the new node, remove these entries before adding the
node to the cluster, otherwise the process may fail with an error.
To add the node to an existing VCS cluster using the VCS installer
1 Log in as the root user on one of the nodes of the existing cluster.
2 Run the VCS installer with the -addnode option.
# cd /opt/VRTS/install
# ./installvcs -addnode
The installer displays the copyright message and the location where it stores
the temporary installation logs.
3 Enter the name of a node in the existing VCS cluster. The installer uses the
node information to identify the existing cluster.
5 Enter the name of the systems that you want to add as new nodes to the
cluster.
The installer checks the installed products and packages on the nodes and
discovers the network interfaces.
6 Enter the name of the network interface that you want to configure as the
first private heartbeat link.
Note: The LLT configuration for the new node must be the same as that of
the existing cluster. If your existing cluster uses LLT over UDP, the installer
asks questions related to LLT over UDP for the new node.
See “Configuring private heartbeat links” on page 106.
Note: At least two private heartbeat links must be configured for high
availability of the cluster.
8 Enter the name of the network interface that you want to configure as the
second private heartbeat link.
Task Reference
■ Back up the configuration file.
■ Check the status of the nodes and the service groups.
See "Verifying the status of nodes and service groups" on page 219.
■ Switch or remove any VCS service groups on the node departing the cluster.
■ Delete the node from VCS configuration.
See "Deleting the departing node from VCS configuration" on page 219.
Modify the llthosts(4) and gabtab(4) files to reflect the change. See "Modifying
configuration files on each remaining node" on page 222.
If the existing cluster is configured to use server-based I/O fencing, remove the
node configuration from the CP server. See "Removing the node configuration
from the CP server" on page 223.
For a cluster that is running in a secure mode, remove the security credentials
from the leaving node. See "Removing security credentials from the leaving node"
on page 224.
On the node departing the cluster:
■ Modify startup scripts for LLT, GAB, and VCS to allow reboot of the node
without affecting the cluster.
■ Unconfigure and unload the LLT and GAB utilities.
■ Remove the VCS packages.
See "Unloading LLT and GAB and removing VCS on the departing node" on page 224.
# cp -p /etc/VRTSvcs/conf/config/main.cf\
/etc/VRTSvcs/conf/config/main.cf.goodcopy
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A galaxy RUNNING 0
A nebula RUNNING 0
A saturn RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 galaxy Y N ONLINE
B grp1 nebula Y N OFFLINE
B grp2 galaxy Y N ONLINE
B grp3 nebula Y N OFFLINE
B grp3 saturn Y N ONLINE
B grp4 saturn Y N ONLINE
The example output from the hastatus command shows that nodes galaxy,
nebula, and saturn are the nodes in the cluster. Also, service group grp3 is
configured to run on node nebula and node saturn, the departing node. Service
group grp4 runs only on node saturn. Service groups grp1 and grp2 do not
run on node saturn.
■ Switch to another node any service groups on the departing node that other
service groups depend on.
To remove or switch service groups from the departing node
1 Switch failover service groups from the departing node. You can switch grp3
from node saturn to node nebula.
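Using the example group and node names:
# hagrp -switch grp3 -to nebula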
2 Check for any dependencies involving any service groups that run on the
departing node; for example, grp4 runs only on the departing node.
# hagrp -dep
3 If the service group on the departing node requires other service groups—if
it is a parent to service groups on other nodes—unlink the service groups.
# haconf -makerw
# hagrp -unlink grp4 grp1
These commands enable you to edit the configuration and to remove the
requirement grp4 has for grp1.
4 Stop VCS on the departing node:
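Using the departing node from the example:
# hastop -sys saturn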
5 Check the status again. The state of the departing node should be EXITED.
Make sure that any service group that you want to fail over is online on other
nodes.
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A galaxy RUNNING 0
A nebula RUNNING 0
A saturn EXITED 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 galaxy Y N ONLINE
B grp1 nebula Y N OFFLINE
B grp2 galaxy Y N ONLINE
B grp3 nebula Y N ONLINE
B grp3 saturn Y Y OFFLINE
B grp4 saturn Y N OFFLINE
6 Delete the departing node from the SystemList of service groups grp3 and
grp4.
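Using the example group and node names:
# hagrp -modify grp3 SystemList -delete saturn
# hagrp -modify grp4 SystemList -delete saturn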
7 For the service groups that run only on the departing node, delete the
resources from the group before you delete the group.
8 Delete the service group that is configured to run on the departing node.
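A sketch for the example group grp4; resource_name is a placeholder for each
resource that hagrp -resources lists:
# hagrp -resources grp4
# hares -delete resource_name
# hagrp -delete grp4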
# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A galaxy RUNNING 0
A nebula RUNNING 0
A saturn EXITED 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B grp1 galaxy Y N ONLINE
B grp1 nebula Y N OFFLINE
B grp2 galaxy Y N ONLINE
B grp3 nebula Y N ONLINE
2 Modify the /etc/llthosts file on each of the remaining nodes to remove the
entry for the departing node.
For example, change:
0 galaxy
1 nebula
2 saturn
To:
0 galaxy
1 nebula
Note: The cpsadm command is used to perform the steps in this procedure. For
detailed information about the cpsadm command, see the Veritas Cluster Server
Administrator's Guide.
3 Remove the VCS user associated with the node you previously removed from
the cluster.
For CP server in non-secure mode:
5 View the list of nodes on the CP server to ensure that the node entry was
removed:
Unloading LLT and GAB and removing VCS on the departing node
Perform the tasks on the node that is departing the cluster.
If you have configured VCS as part of the Storage Foundation and High Availability
products, you may have to delete other dependent packages before you can delete
all of the following ones.
# /sbin/gabconfig -U
# /sbin/lltconfig -U
# modunload -i gab_id
# modunload -i llt_id
4 Disable the startup files to prevent LLT, GAB, or VCS from starting up:
6 To permanently remove the VCS packages from the system, use the pkgrm
command. Start by removing the following packages, which may have been
optionally installed, in the order shown:
# pkgrm VRTSvcsea
# pkgrm VRTSat
# pkgrm VRTSvcsag
# pkgrm VRTScps
# pkgrm VRTSvcs
# pkgrm VRTSamf
# pkgrm VRTSvxfen
# pkgrm VRTSgab
# pkgrm VRTSllt
# pkgrm VRTSspt
# pkgrm VRTSsfcpi60
# pkgrm VRTSperl
# pkgrm VRTSvlic
# rm /etc/llttab
# rm /etc/gabtab
# rm /etc/llthosts
■ Appendix H. Sample VCS cluster setup diagrams for CP server-based I/O fencing
■ installation
■ configuration
■ upgrade
■ uninstallation
■ adding nodes
■ removing nodes
■ etc.
VRTSvcsea Contains the binaries for Veritas high availability agents for DB2,
Sybase, and Oracle. Optional for VCS. Required to use VCS with the high
availability agents for DB2, Sybase, or Oracle.
Discovers configuration
information on a Storage
Foundation managed host. This
information is stored on a central
database, which is not part of this
release. You must download the
database separately at:
http://www.symantec.com/business/
storage-foundation-manager
installvcs [ system1
system2... ]
[ -install | -configure | -uninstall | -license
| -upgrade | -precheck | -requirements | -start | -stop
| -postcheck ]
[ -responsefile response_file ]
[ -logpath log_path ]
[ -tmppath tmp_path ]
[ -tunablesfile tunables_file ]
[ -timeout timeout_value ]
[ -keyfile ssh_key_file ]
[ -hostfile hostfile_path ]
[ -rootpath root_path ]
[ -flash_archive flash_archive_path ]
[ -serial | -rsh | -redirect | -installminpkgs
| -installrecpkgs | -installallpkgs | -minpkgs
| -recpkgs | -allpkgs | -pkgset | -pkgtable | -pkginfo
| -makeresponsefile | -comcleanup | -version | -nolic
| -ignorepatchreqs | -settunables | -security | -securityonenode
Table B-1 provides a consolidated list of the options used with the installvcs
command and uninstallvcs command.
-addnode Add the nodes that you specify to a cluster. The cluster must be online
to use this command option to add nodes.
-allpkgs View a list of all VCS packages. The installvcs program lists the
packages in the correct installation order.
You can use the output to create scripts for command-line installation,
or for installations over a network.
-comcleanup Remove the ssh or rsh configuration added by the installer on the systems.
The option is only required when installation routines that performed
auto-configuration of ssh or rsh are abruptly terminated.
-copyinstallscripts Use this option when you manually install products and want to use
the installation scripts that are stored on the system to perform
product configuration, uninstallation, and licensing tasks without
the product media.
■ ./installer -copyinstallscripts
Copies the installation and uninstallation scripts for all products
in the release to /opt/VRTS/install. It also copies the installation
Perl libraries to /opt/VRTSperl/lib/site_perl/release_name .
■ ./installproduct_name -copyinstallscripts
Copies the installation and uninstallation scripts for the specified
product and any subset products for the product to
/opt/VRTS/install. It also copies the installation Perl libraries to
/opt/VRTSperl/lib/site_perl/release_name .
■ ./installer -rootpath alt_root_path
-copyinstallscripts
The path alt_root_path can be a directory like /rdisk2. In that case,
this command copies installation and uninstallation scripts for
all the products in the release to /rdisk2/opt/VRTS/install. CPI
Perl libraries are copied to
/rdisk2/opt/VRTSperl/lib/site_perl/release_name. For example,
for the 6.0 PR1 release, the release_name is UXRT60.
-fencing Configure I/O fencing after you configure VCS. The script provides
an option to configure disk-based I/O fencing or server-based I/O
fencing.
-hostfile Specify the location of a file that contains the system names for the
installer.
-keyfile Specify a key file for SSH. The option passes -i ssh_key_file with
ssh_key_file each SSH invocation.
-minpkgs View a list of the minimal packages for VCS. The installvcs program
lists the packages in the correct installation order. The list does not
include the optional packages.
You can use the output to create scripts for command-line installation,
or for installations over a network.
■ -allpkgs
■ -minpkgs
■ -recpkgs
-pkgpath Specify that pkg_path contains all packages that the installvcs
pkg_path program is about to install on all systems. The pkg_path is the
complete path of a directory, usually NFS mounted.
-pkgset Discovers and lists the 6.0 PR1 packages installed on the systems that
you specify.
-pkgtable Display the VCS 6.0 PR1 packages in the correct installation order.
-recpkgs View a list of the recommended packages for VCS. The installvcs
program lists the packages in the correct installation order. The list
does not include the optional packages.
You can use the output to create scripts for command-line installation,
or for installations over a network.
-responsefile Perform automated VCS installation using the system and the
response_file configuration information that is stored in a specified file instead of
prompting for information.
-rootpath Specify that root_path is the root location for the installation of all
root_path packages.
-redirect Specify that the installer need not display the progress bar details
during the installation.
-rsh Specify that rsh and rcp are to be used for communication between
systems instead of ssh and scp. This option requires that systems be
preconfigured such that rsh commands between systems execute
without prompting for passwords or confirmations.
-securitytrust Set up a trust relationship between your VCS cluster and a broker.
See “Setting up trust relationships for your VCS cluster” on page 112.
If the installvcs program failed to start up all the VCS processes, you
can use the -stop option to stop all the processes and then use the
-start option to start the processes.
-timeout Specifies the timeout value (in seconds) for each command that the
installer issues during the installation. The default timeout value is
set to 600 seconds.
-tmppath Specify that tmp_path is the working directory for installvcs program.
tmp_path This path is different from the /var/tmp path. This destination is
where the installvcs program performs the initial logging and where
the installvcs program copies the packages on remote systems before
installation.
-upgrade Upgrade the installed packages on the systems that you specify.
-rollingupgrade_phase2 Upgrade the VCS and other agent packages to the latest version during
rolling upgrade Phase 2. Product kernel drivers are rolling-upgraded
to the latest protocol version.
-version Check and display the installed product and version. Identify the
installed and missing packages for the product. Provide a summary
that includes the count of the installed and any missing packages.
Lists the installed patches, hotfixes, and available updates for the
installed product if an Internet connection is available.
uninstallvcs [ system1
system2... ]
[ -uninstall ]
[ -responsefile response_file ]
[ -logpath log_path ]
[ -tmppath tmp_path ]
[ -tunablesfile tunables_file ]
[ -timeout timeout_value ]
[ -keyfile ssh_key_file ]
[ -hostfile hostfile_path ]
[ -rootpath root_path ]
[ -flash_archive flash_archive_path ]
[ -serial | -rsh | -redirect | -makeresponsefile
| -comcleanup | -version | -nolic | -ignorepatchreqs
| -settunables | -security | -securityonenode
| -securitytrust | -addnode | -fencing | -upgrade_kernelpkgs
| -upgrade_nonkernelpkgs | -rolling_upgrade
| -rollingupgrade_phase1
| -rollingupgrade_phase2 ]
File Description
/etc/default/llt This file stores the start and stop environment variables for LLT:
■ LLT_START—Defines the startup behavior for the LLT module after a system reboot. Valid
values include:
1—Indicates that LLT is enabled to start up.
0—Indicates that LLT is disabled to start up.
■ LLT_STOP—Defines the shutdown behavior for the LLT module during a system shutdown.
Valid values include:
1—Indicates that LLT is enabled to shut down.
0—Indicates that LLT is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration.
If you manually configured VCS, make sure you set the values of these environment variables
to 1.
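For instance, after the installer completes, /etc/default/llt typically contains:
LLT_START=1
LLT_STOP=1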
/etc/llthosts The file llthosts is a database that contains one entry per system. This file links the LLT
system ID (in the first column) with the LLT host name. This file must be identical on each node
in the cluster. A mismatch of the contents of the file can cause indeterminate behavior in the
cluster.
For example, the file /etc/llthosts contains the entries that resemble:
0 galaxy
1 nebula
File Description
/etc/llttab The file llttab contains the information that is derived during installation and used by the
utility lltconfig(1M). After installation, this file lists the private network links that correspond
to the specific system. For example, the file /etc/llttab contains the entries that resemble the
following:
set-node galaxy
set-cluster 2
link net1 /dev/net/net1 - ether - -
link net2 /dev/net/net2 - ether - -
The first line identifies the system. The second line identifies the cluster (that is, the cluster
ID you entered during installation). The next two lines begin with the link command. These
lines identify the two network cards that the LLT protocol uses.
If you configured a low priority link under LLT, the file also includes a "link-lowpri" line.
Refer to the llttab(4) manual page for details about how the LLT configuration may be
modified. The manual page describes the ordering of the directives in the llttab file.
Table C-2 lists the GAB configuration files and the information that these files
contain.
File Description
/etc/default/gab This file stores the start and stop environment variables for GAB:
The installer sets the value of these variables to 1 at the end of VCS
configuration.
If you manually configured VCS, make sure you set the values of these
environment variables to 1.
/etc/gabtab After you install VCS, the file /etc/gabtab contains a gabconfig(1)
command that configures the GAB driver for use.
/sbin/gabconfig -c -nN
The -c option configures the driver for use. The -nN specifies that
the cluster is not formed until at least N nodes are ready to form the
cluster. Symantec recommends that you set N to be the total number
of nodes in the cluster.
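For example, in a two-node cluster the /etc/gabtab file contains:
/sbin/gabconfig -c -n2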
Note: Symantec does not recommend the use of the -c -x option for
/sbin/gabconfig. Using -c -x can lead to a split-brain condition.
File Description
/etc/default/amf This file stores the start and stop environment variables for AMF:
The AMF init script uses this /etc/amftab file to configure the
AMF driver. The /etc/amftab file contains the following line by
default:
/opt/VRTSamf/bin/amfconfig -c
include "types.cf"
include "OracleTypes.cf"
include "OracleASMTypes.cf"
cluster vcs02 (
SecureClus = 1
)
system sysA (
)
system sysB (
)
system sysC (
)
group ClusterService (
SystemList = { sysA = 0, sysB = 1, sysC = 2 }
AutoStartList = { sysA, sysB, sysC }
OnlineRetryLimit = 3
OnlineRetryInterval = 120
)
NIC csgnic (
Device = hme0
NetworkHosts = { "10.182.13.1" }
)
NotifierMngr ntfr (
SnmpConsoles = { "jupiter" = SevereError }
SmtpServer = "smtp.example.com"
SmtpRecipients = { "[email protected]" = SevereError }
)
include "types.cf"
cluster vcs03 (
ClusterAddress = "10.182.13.50"
SecureClus = 1
)
system sysA (
)
system sysB (
)
system sysC (
)
group ClusterService (
SystemList = { sysA = 0, sysB = 1, sysC = 2 }
AutoStartList = { sysA, sysB, sysC }
OnlineRetryLimit = 3
OnlineRetryInterval = 120
)
Application wac (
StartProgram = "/opt/VRTSvcs/bin/wacstart"
StopProgram = "/opt/VRTSvcs/bin/wacstop"
MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
RestartLimit = 3
)
IP gcoip (
Device = hme0
Address = "10.182.13.50"
NetMask = "255.255.240.0"
)
NIC csgnic (
Device = hme0
NetworkHosts = { "10.182.13.1" }
)
NotifierMngr ntfr (
SnmpConsoles = { "jupiter" = SevereError }
SmtpServer = "smtp.example.com"
SmtpRecipients = { "[email protected]" = SevereError }
)
// Application wac
// {
// IP gcoip
// {
// NIC csgnic
// }
// }
// }
File Description
/etc/default/vxfen This file stores the start and stop environment variables for I/O fencing:
■ VXFEN_START—Defines the startup behavior for the I/O fencing module after a system
reboot. Valid values include:
1—Indicates that I/O fencing is enabled to start up.
0—Indicates that I/O fencing is disabled to start up.
■ VXFEN_STOP—Defines the shutdown behavior for the I/O fencing module during a system
shutdown. Valid values include:
1—Indicates that I/O fencing is enabled to shut down.
0—Indicates that I/O fencing is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration.
If you manually configured VCS, you must make sure to set the values of these environment
variables to 1.
File Description
■ vxfen_mode
■ scsi3—For disk-based fencing
■ customized—For server-based fencing
■ disabled—To run the I/O fencing driver but not do any fencing operations.
■ vxfen_mechanism
This parameter is applicable only for server-based fencing. Set the value as cps.
■ scsi3_disk_policy
■ dmp—Configure the vxfen module to use DMP devices
The disk policy is dmp by default. If you use iSCSI devices, you must set the disk policy
as dmp.
■ raw—Configure the vxfen module to use the underlying raw character devices
Note: You must use the same SCSI-3 disk policy on all the nodes.
■ security
This parameter is applicable only for server-based fencing.
1—Indicates that communication with the CP server is in secure mode. This setting is the
default.
0—Indicates that communication with the CP server is in non-secure mode.
■ List of coordination points
This list is required only for server-based fencing configuration.
Coordination points in a server-based fencing can include coordinator disks, CP servers, or
a mix of both. If you use coordinator disks, you must create a coordinator disk group with
the coordinator disk names.
Refer to the sample file /etc/vxfen.d/vxfenmode_cps for more information on how to specify
the coordination points and multiple IP addresses for each CP server.
■ single_cp
This parameter is applicable for server-based fencing which uses a single highly available
CP server as its coordination point. It is also applicable when you use a coordinator disk
group with a single disk.
■ autoseed_gab_timeout
This parameter enables GAB automatic seeding of the cluster even when some cluster nodes
are unavailable. This feature requires that I/O fencing is enabled.
0—Turns the GAB auto-seed feature on. Any value greater than 0 indicates the number of
seconds that GAB must delay before it automatically seeds the cluster.
-1—Turns the GAB auto-seed feature off. This setting is the default.
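A minimal sketch of a disk-based /etc/vxfenmode file using the parameters described
above:
vxfen_mode=scsi3
scsi3_disk_policy=dmp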
File Description
/etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node.
The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a
system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all
the coordinator points.
Note: The /etc/vxfentab file is a generated file; do not modify this file.
For disk-based I/O fencing, the /etc/vxfentab file on each node contains a list of all paths to
each coordinator disk. An example of the /etc/vxfentab file in a disk-based fencing configuration
on one node resembles as follows:
■ Raw disk:
/dev/rdsk/c1t1d0s2
/dev/rdsk/c2t1d0s2
/dev/rdsk/c3t1d2s2
■ DMP disk:
/dev/vx/rdmp/c1t1d0s2
/dev/vx/rdmp/c2t1d0s2
/dev/vx/rdmp/c3t1d0s2
For server-based fencing, the /etc/vxfentab file also includes the security settings information.
For server-based fencing with single CP server, the /etc/vxfentab file also includes the single_cp
settings information.
Sample main.cf file for CP server hosted on a single node that runs
VCS
The following is an example of a single CP server node main.cf.
For this CP server single node main.cf, note the following values:
■ Cluster name: cps1
■ Node name: mycps1
include "types.cf"
include "/opt/VRTScps/bin/Quorum/QuorumTypes.cf"
cluster cps1 (
UserNames = { admin = bMNfMHmJNiNNlVNhMK, haris = fopKojNvpHouNn,
"mycps1.symantecexample.com@root@vx" = aj,
"[email protected]" = hq }
Administrators = { admin, haris,
"mycps1.symantecexample.com@root@vx",
"[email protected]" }
SecureClus = 1
HacliUserLevel = COMMANDROOT
)
system mycps1 (
)
group CPSSG (
SystemList = { mycps1 = 0 }
AutoStartList = { mycps1 }
)
IP cpsvip1 (
Critical = 0
Device @mycps1 = hme0
Address = "10.209.3.1"
NetMask = "255.255.252.0"
)
IP cpsvip2 (
Critical = 0
Device @mycps1 = qfe:0
Address = "10.209.3.2"
NetMask = "255.255.252.0"
)
NIC cpsnic1 (
Critical = 0
Device @mycps1 = hme0
PingOptimize = 0
NetworkHosts @mycps1 = { "10.209.3.10" }
)
NIC cpsnic2 (
Critical = 0
Device @mycps1 = qfe:0
PingOptimize = 0
)
Process vxcpserv (
PathName = "/opt/VRTScps/bin/vxcpserv"
ConfInterval = 30
RestartLimit = 3
)
Quorum quorum (
QuorumResources = { cpsvip1, cpsvip2 }
)
// {
// NIC cpsnic1
// }
// IP cpsvip2
// {
// NIC cpsnic2
// }
// Process vxcpserv
// {
// Quorum quorum
// }
// }
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "/opt/VRTScps/bin/Quorum/QuorumTypes.cf"
// cluster: cps1
// CP servers:
// mycps1
// mycps2
cluster cps1 (
UserNames = { admin = ajkCjeJgkFkkIskEjh,
"mycps1.symantecexample.com@root@vx" = JK,
"mycps2.symantecexample.com@root@vx" = dl }
Administrators = { admin, "mycps1.symantecexample.com@root@vx",
"mycps2.symantecexample.com@root@vx" }
SecureClus = 1
)
system mycps1 (
)
system mycps2 (
)
group CPSSG (
SystemList = { mycps1 = 0, mycps2 = 1 }
AutoStartList = { mycps1, mycps2 } )
DiskGroup cpsdg (
DiskGroup = cps_dg
)
IP cpsvip1 (
Critical = 0
Device @mycps1 = hme0
Device @mycps2 = hme0
Address = "10.209.81.88"
NetMask = "255.255.252.0"
)
IP cpsvip2 (
Critical = 0
Device @mycps1 = qfe:0
Device @mycps2 = qfe:0
Address = "10.209.81.89"
NetMask = "255.255.252.0"
)
Mount cpsmount (
MountPoint = "/etc/VRTScps/db"
BlockDevice = "/dev/vx/dsk/cps_dg/cps_volume"
FSType = vxfs
FsckOpt = "-y"
)
NIC cpsnic1 (
Critical = 0
Device @mycps1 = hme0
Device @mycps2 = hme0
PingOptimize = 0
NetworkHosts @mycps1 = { "10.209.81.10" }
)
NIC cpsnic2 (
Critical = 0
Device @mycps1 = qfe:0
Device @mycps2 = qfe:0
PingOptimize = 0
)
Process vxcpserv (
PathName = "/opt/VRTScps/bin/vxcpserv"
)
Quorum quorum (
QuorumResources = { cpsvip1, cpsvip2 }
)
Volume cpsvol (
Volume = cps_volume
DiskGroup = cps_dg
)
// resource dependency tree
//
// group CPSSG
// {
// IP cpsvip1
// {
// NIC cpsnic1
// }
// IP cpsvip2
// {
// NIC cpsnic2
// }
// Process vxcpserv
// {
// Quorum quorum
// Mount cpsmount
// {
// Volume cpsvol
// {
// DiskGroup cpsdg
// }
// }
// }
// }
Task                                                            Reference
Prepare for installation.                                       See “Preparing for a single node installation” on page 260.
Install the VCS software on the system using the installer.     See “Starting the installer for the single node cluster” on page 260.
Enter a single system name. While you configure, the installer asks if you want to enable LLT and GAB:
If you plan to run VCS on a single node without any need for adding a cluster node online, you have an option to proceed without starting GAB and LLT.
Starting GAB and LLT is recommended.
Do you want to start GAB and LLT? [y,n,q,?] (y)
Answer n if you want to use the single node cluster as a stand-alone cluster.
Answer y if you plan to incorporate the single node cluster into a multi-node
cluster in the future.
Continue with the installation.
2 Verify that the had and hashadow daemons are running in single-node mode:
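One way to check this, assuming VCS was started with the -onenode option and installed under /opt/VRTSvcs/bin, is to list the daemons with ps; the output below is illustrative:
# ps -ef | grep had
root   301     1  0 14:30:40 ?   0:02 /opt/VRTSvcs/bin/had -onenode
root   303     1  0 14:30:41 ?   0:00 /opt/VRTSvcs/bin/hashadow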
■ Display the content of the /etc/llttab file on the first node galaxy:
set-node galaxy
set-cluster 1
link link1 /dev/udp - udp 50000 - 192.168.9.1 192.168.9.255
link link2 /dev/udp - udp 50001 - 192.168.10.1 192.168.10.255
Verify the subnet mask using the ifconfig command to ensure that the two
links are on separate subnets.
■ Display the content of the /etc/llttab file on the second node nebula:
set-node nebula
set-cluster 1
link link1 /dev/udp - udp 50000 - 192.168.9.2 192.168.9.255
link link2 /dev/udp - udp 50001 - 192.168.10.2 192.168.10.255
Verify the subnet mask using the ifconfig command to ensure that the two
links are on separate subnets.
Field          Description
tag-name       A unique string that is used as a tag by LLT; for example link1, link2,....
device         The device path of the UDP protocol; for example /dev/udp.
node-range     Nodes using the link. "-" indicates all cluster nodes are to be configured for this link.
udp-port       Unique UDP port in the range of 49152-65535 for the link.
MTU            "-" is the default, which has a value of 8192. The value may be increased or decreased depending on the configuration. Use the lltstat -l command to display the current value.
bcast-address  ■ For clusters with enabled broadcasts, specify the value of the subnet broadcast address.
               ■ "-" is the default for clusters spanning routers.

Field          Description
link tag-name  The string that LLT uses to identify the link; for example link1, link2,....
address        IP address assigned to the link for the peer node.
To check which ports are defined as defaults for a node, examine the file
/etc/services. You should also use the netstat command to list the UDP ports
currently in use. For example:
# netstat -a | more
UDP
Local Address Remote Address State
-------------------- -------------------- -------
*.sunrpc Idle
*.* Unbound
*.32771 Idle
*.32776 Idle
*.32777 Idle
*.name Idle
*.biff Idle
*.talk Idle
*.32779 Idle
.
.
.
*.55098 Idle
*.syslog Idle
*.58702 Idle
*.* Unbound
Look in the UDP section of the output; the UDP ports that are listed under Local
Address are already in use. If a port is listed in the /etc/services file, its associated
name is displayed rather than the port number in the output.
For example:
■ For the first network interface on the node galaxy:
# cat /etc/llttab
set-node nodexyz
set-cluster 100
Figure E-1 A typical configuration of direct-attached links that use LLT over UDP
[Figure: two panels, Solaris SPARC (interface qfe1) and Solaris x64 (interface e1000g1), each showing Node0 and Node1 connected through switches. In both panels link2 is a UDP endpoint on UDP port 50001, with IP address 192.1.3.1 on Node0 and 192.1.3.2 on Node1.]
The configuration that the /etc/llttab file for Node 0 represents has directly
attached crossover links. It might also have the links that are connected through
a hub or switch. These links do not cross routers.
LLT sends broadcast requests to peer nodes to discover their addresses. So the addresses of peer nodes do not need to be specified in the /etc/llttab file using the set-addr command. For direct-attached links, you do need to set the broadcast address of the links in the /etc/llttab file. Verify that the IP addresses and broadcast addresses are set correctly by using the ifconfig -a command.
set-node Node0
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address bcast-address
link link1 /dev/udp - udp 50000 - 192.1.2.1 192.1.2.255
link link2 /dev/udp - udp 50001 - 192.1.3.1 192.1.3.255
set-node Node1
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address bcast-address
link link1 /dev/udp - udp 50000 - 192.1.2.2 192.1.2.255
link link2 /dev/udp - udp 50001 - 192.1.3.2 192.1.3.255
[Figure: a typical configuration of links crossing IP routers (Solaris x64 panel; interface e1000g1, with qfe1 on Solaris SPARC). Node0 on site A and Node1 on site B connect through routers; link2 is a UDP endpoint on UDP port 50001, with IP address 192.1.2.1 on Node0 and 192.1.4.1 on Node1.]
The configuration that the following /etc/llttab file represents for Node 1 has
links crossing IP routers. Notice that IP addresses are shown for each link on each
peer node. In this configuration broadcasts are disabled. Hence, the broadcast
address does not need to be set in the link command of the /etc/llttab file.
set-node Node1
set-cluster 1
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 192.1.1.1
set-addr 0 link2 192.1.2.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3
set-node Node0
set-cluster 1
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 192.1.3.1
set-addr 1 link2 192.1.4.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3
■ Make sure that each NIC has an IPv6 address that is configured before
configuring LLT.
■ Make sure the IPv6 addresses in the /etc/llttab files are consistent with the
IPv6 addresses of the network interfaces.
■ Make sure that each link has a unique UDP port that is not a well-known port.
See “Selecting UDP ports” on page 274.
■ For the links that cross an IP router, disable multicast features and specify the
IPv6 address of each link manually in the /etc/llttab file.
See “Sample configuration: links crossing IP routers” on page 277.
Field          Description
tag-name       A unique string that is used as a tag by LLT; for example link1, link2,....
device         The device path of the UDP protocol; for example /dev/udp6.
node-range     Nodes using the link. "-" indicates all cluster nodes are to be configured for this link.
udp-port       Unique UDP port in the range of 49152-65535 for the link.
MTU            "-" is the default, which has a value of 8192. The value may be increased or decreased depending on the configuration. Use the lltstat -l command to display the current value.

Field          Description
link tag-name  The string that LLT uses to identify the link; for example link1, link2,....
address        IPv6 address assigned to the link for the peer node.
To check which ports are defined as defaults for a node, examine the file
/etc/services. You should also use the netstat command to list the UDP ports
currently in use. For example:
# netstat -a | more
UDP: IPv4
Local Address Remote Address State
-------------------- -------------------- ----------
*.sunrpc Idle
*.* Unbound
*.32772 Idle
*.* Unbound
*.32773 Idle
*.lockd Idle
*.32777 Idle
*.32778 Idle
*.32779 Idle
*.32780 Idle
*.servicetag Idle
*.syslog Idle
*.16161 Idle
*.32789 Idle
*.177 Idle
*.32792 Idle
*.32798 Idle
*.snmpd Idle
*.32802 Idle
*.* Unbound
*.* Unbound
*.* Unbound
UDP: IPv6
Local Address Remote Address State If
------------------------- ------------------------- ---------- -----
*.servicetag Idle
*.177 Idle
Look in the UDP section of the output; the UDP ports that are listed under Local
Address are already in use. If a port is listed in the /etc/services file, its associated
name is displayed rather than the port number in the output.
Figure E-3 A typical configuration of direct-attached links that use LLT over UDP
[Figure: two panels, Solaris SPARC and Solaris x64, each showing Node0 and Node1 connected through switches. In both panels link2 is a UDP endpoint on UDP port 50001, with IPv6 address fe80::21a:64ff:fe92:1b47 on Node0 and fe80::21a:64ff:fe92:1a93 on Node1.]
The configuration that the /etc/llttab file for Node 0 represents has directly
attached crossover links. It might also have the links that are connected through
a hub or switch. These links do not cross routers.
LLT uses IPv6 multicast requests for peer node address discovery. So the addresses
of peer nodes do not need to be specified in the /etc/llttab file using the set-addr
command. Use the ifconfig -a command to verify that the IPv6 address is set
correctly.
set-node Node0
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address mcast-address
link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1b46 -
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1b47 -
set-node Node1
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU \
IP-address mcast-address
link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1a92 -
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1a93 -
[Figure: a typical configuration of links crossing IP routers (Solaris x64 panel). Node0 on site A and Node1 on site B connect through routers; link2 is a UDP endpoint on UDP port 50001, and its IPv6 addresses on the two nodes are fe80::21a:64ff:fe92:1a93 and fe80::21a:64ff:fe92:1b47.]
The configuration that the following /etc/llttab file represents for Node 1 has
links crossing IP routers. Notice that IPv6 addresses are shown for each link on
each peer node. In this configuration multicasts are disabled.
set-node Node1
set-cluster 1
#set address of each link for all peer nodes in the cluster
set-node Node0
set-cluster 1
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 fe80::21a:64ff:fe92:1a92
set-addr 1 link2 fe80::21a:64ff:fe92:1a93
set-addr 2 link1 fe80::21a:64ff:fe92:1d70
set-addr 2 link2 fe80::21a:64ff:fe92:1d71
set-addr 3 link1 fe80::209:6bff:fe1b:1c94
set-addr 3 link2 fe80::209:6bff:fe1b:1c95
LLT over UDP sample /etc/llttab
The following is a sample of LLT over UDP in the /etc/llttab file:
set-node galaxy
set-cluster clus1
link e1000g1 /dev/udp - udp 50000 - 192.168.10.1 -
link e1000g2 /dev/udp - udp 50001 - 192.168.11.1 -
link-lowpri e1000g0 /dev/udp - udp 50004 - 10.200.58.205 -
set-addr 1 e1000g1 192.168.10.2
The ssh shell provides strong authentication and secure communications over insecure channels. It is intended to replace rlogin, rsh, and rcp.
Configuring ssh
The procedure to configure ssh uses OpenSSH example file names and commands.
Note: You can configure ssh in other ways. Regardless of how ssh is configured,
complete the last step in the example to verify the configuration.
To configure ssh
1 Log in as root on the source system from which you want to install the Veritas
product.
2 To generate a DSA key pair on the source system, type the following:
# ssh-keygen -t dsa
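Output similar to the following appears; the prompts are typical of OpenSSH and assume that root's home directory is / (press Enter to accept the default key location):
Generating public/private dsa key pair.
Enter file in which to save the key (//.ssh/id_dsa):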
4 Do not enter a passphrase. Press Enter at the Enter passphrase prompt, and press Enter again at the Enter same passphrase again prompt.
5 Make sure the /.ssh directory exists on all the target installation systems. If that directory is absent, create it on the target system and set the write permission to root only:
# mkdir /.ssh
# chmod go-w /
# chmod 700 /.ssh
# chmod go-rwx /.ssh
6 Make sure the secure file transfer program (SFTP) is enabled on all the target
installation systems. To enable SFTP, the /etc/ssh/sshd_config file must
contain the following two lines:
PermitRootLogin yes
Subsystem sftp /usr/lib/ssh/sftp-server
7 If the lines are not there, add them and restart SSH. To restart SSH on Solaris
10, type the following command:
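For example, the following restarts the SSH service through SMF; the short name ssh resolves to svc:/network/ssh:default on a default Solaris 10 installation:
# svcadm restart ssh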
8 To copy the public DSA key, /.ssh/id_dsa.pub to each target system, type the
following commands:
# sftp target_sys
If you run this step for the first time on a system, output similar to the
following appears:
Connecting to target_sys...
The authenticity of host 'target_sys (10.182.00.00)'
can't be established. DSA key fingerprint is
fb:6f:9e:61:91:9e:44:6b:87:86:ef:68:a6:fd:87:7d.
Are you sure you want to continue connecting (yes/no)?
13 To begin the ssh session on the target system, type the following command:
# ssh target_sys
15 After you log in, enter the following command to append the public key from the id_dsa.pub file to the authorized_keys file:
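For example, assuming the public key was copied to /id_dsa.pub on the target system in the earlier sftp step:
# cat /id_dsa.pub >> /.ssh/authorized_keys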
16 Delete the id_dsa.pub public key file. Before you delete this public key file,
make sure to complete the following tasks:
■ The file is copied to the target (host) system
■ The file is added to the authorized keys file
To delete the id_dsa.pub public key file, type the following command:
# rm /id_dsa.pub
18 When you install from a source system that is also an installation target, add the local system id_dsa.pub key to the local /.ssh/authorized_keys file. The installation can fail if the installation source system is not authenticated.
This step is shell-specific and is valid only while the shell is active. You must
execute the procedure again if you close the shell during the session.
20 To verify that you can connect to the target system, type the following
command:
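For example, assuming the target system is named target_sys as in the earlier steps, any remote command such as uname serves as a check:
# ssh -l root target_sys uname -a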
The commands should execute on the remote system without any requests
for a passphrase or password from the system.
Appendix G
Troubleshooting VCS
installation
This appendix includes the following topics:
■ The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
■ Issues during fencing startup on VCS cluster nodes set up for server-based
fencing
To comply with the terms of the EULA, and remove these messages, you must do
one of the following within 60 days:
■ Install a valid license key corresponding to the functionality in use on the host.
After you install the license key, you must validate the license key using the
following command:
# /opt/VRTS/bin/vxkeyless
# ./installer -stop
# ./installer -start
Installer cannot create UUID for the cluster
You may see the error message during VCS configuration, upgrade, or when you
add a node to the cluster using the installer.
Workaround: To start VCS, you must run the uuidconfig.pl script manually to
configure the UUID on each cluster node.
To configure the cluster UUID when you create a cluster manually
◆ On one node in the cluster, run the following command to populate the cluster UUID on each node in the cluster.
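A sketch of the command, assuming the script is installed at its default location under /opt/VRTSvcs/bin:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure nodeA nodeB ... nodeN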
Where nodeA, nodeB, through nodeN are the names of the cluster nodes.
address - 00:04:23:AC:24:2D
LLT lltconfig ERROR V-14-2-15664 LLT could not
configure any link
Check the log files that get generated in the /var/svc/log directory for any errors.
Recommended action: Ensure that all systems on the network have a unique clusterid-nodeid pair. You can use the lltdump -f device -D command to get the list of unique clusterid-nodeid pairs connected to the network. This utility is available only for LLT over Ethernet.
The disk array does not support returning success for a SCSI TEST UNIT READY
command when another host has the disk reserved using SCSI-3 persistent
reservations. This happens with the Hitachi Data Systems 99XX arrays if bit 186
of the system mode option is not enabled.
cpsadm command on the VCS cluster gives connection error
If you receive a connection error message after issuing the cpsadm command on the VCS cluster, perform the following actions:
■ Ensure that the CP server is reachable from all the VCS cluster nodes.
■ Check that the VCS cluster nodes use the correct CP server virtual IP or virtual hostname and the correct port number. Check the /etc/vxfenmode file.
■ Ensure that the running CP server is using the same virtual IP/virtual hostname and port number.
Table G-1 Fencing startup issues on VCS cluster (client cluster) nodes
(continued)
Authorization failure
Authorization failure occurs when the VCS cluster (client cluster) nodes or users are not added in the CP server configuration. Fencing on the VCS cluster node is therefore not allowed to access the CP server and register itself with the CP server. Fencing fails to come up if it fails to register with a majority of the coordination points.
To resolve this issue, add the VCS cluster node and user in the CP server configuration and restart fencing.
Authentication failure
If you had configured secure communication between the CP server and the VCS cluster (client cluster) nodes, authentication failure can occur due to the following causes:
[Figure: two unique client clusters, a VCS client cluster (Cluster-1, UUID1) and an SFRAC client cluster (Cluster-2, UUID2), each with two nodes on GigE interconnects (NIC 1, NIC 2) and HBAs. Both clusters use vxfenmode=customized and vxfen_mechanism=cps with three coordination points: cps1=[mycps1.company.com]=14250, cps2=[mycps2.company.com]=14250, and cps3=[mycps3.company.com]=14250. Three single-node CP servers (mycps1.company.com, mycps2.company.com, and mycps3.company.com) each run vxcpserv on a virtual IP (VIP 1, VIP 2, VIP 3) with the CPS database at /etc/VRTScps/db, and are reachable over the public network (Intranet/Internet) through Ethernet switches.]
Figure H-2 Client cluster served by highly available CP server and 2 SCSI-3 disks
[Figure: a two-node client cluster (Cluster-1) with GigE private interconnects (NIC 1, NIC 2) over a VLAN private network and HBAs to the SAN. On the client cluster: vxfenmode=customized, vxfen_mechanism=cps, cps1=[VIP]:14250, vxfendg=vxfencoorddg. The coordination points are two SCSI-3 LUNs reached over the SAN through FC switches and one CP server (cp1=[VIP]:14250) hosted on a two-node SFHA cluster (CPS-Primary and CPS-standby nodes, mycps1.company.com and mycps2.company.com); vxcpserv runs on a virtual IP and the CPS database is at /etc/VRTScps/db on shared disks. The coordinator disk group specified in /etc/vxfenmode should have these 2 disks.]
The two SCSI-3 disks (one from each site) are part of disk group vxfencoorddg.
The third coordination point is a CP server on a single node VCS cluster.
Figure H-3 Two node campus cluster served by remote CP server and 2 SCSI-3 disks
[Figure: a two-node campus cluster with node 1 at SITE 1 and node 2 at SITE 2, serving client applications at both sites. Each node has NIC 1, NIC 2, HBA 1, and HBA 2, connects to the LAN through Ethernet switches, and reaches the storage arrays over the SAN through FC switches; the sites are linked by DWDM over dark fibre. Each site's storage array provides a coordinator LUN and data LUNs. SITE 3 hosts the CP server on a single-node VCS cluster (vxcpserv on a VIP, CPS database at /etc/VRTScps/db, cps1=[VIP]:14250 (port no.)). On the client cluster: vxfenmode=customized, vxfen_mechanism=cps, cps1=[VIP]:14250, vxfendg=vxfencoorddg. The coordinator disk group specified in /etc/vxfenmode should have one SCSI-3 disk from site 1 and another from site 2. Legends: private interconnects (GigE), public links (GigE), dark fiber connections, SAN 1 connections, SAN 2 connections.]
Figure H-4 Multiple client clusters served by highly available CP server and 2 SCSI-3 disks
[Figure: two client clusters, a VCS client cluster (Cluster-1) and an SFRAC client cluster (Cluster-2), each with two nodes on GigE private interconnects (NIC 1, NIC 2) over VLAN private networks and HBAs to the SAN. Both clusters use vxfenmode=customized, vxfen_mechanism=cps, cps1=[VIP]:14250, and vxfendg=vxfencoorddg. The coordination points are two SCSI-3 LUNs (c2t0d0s2 and c2t1d0s2) reached over the SAN through FC switches and one CP server hosted on a two-node SFHA cluster (CPS-Primary and CPS-standby nodes, mycps1.company.com and mycps2.company.com); vxcpserv runs on a virtual IP and the CPS database is at /etc/VRTScps/db on shared disks. The coordinator disk group specified in /etc/vxfenmode should have these 2 disks.]
Appendix I
Reconciling major/minor
numbers for NFS shared
disks
This appendix includes the following topics:
# ls -lL block_device
# ls -lL /dev/dsk/c1t1d0s2
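Output on each node might resemble the following when the numbers already match (the owner, permissions, and date are illustrative):
brw-r-----   1 root     sys       32,  1 Dec  3 11:50 /dev/dsk/c1t1d0s2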
Note that the major numbers (32) and the minor numbers (1) match,
satisfactorily meeting the requirement for NFS file systems.
To reconcile the major numbers that do not match on disk partitions
1 Reconcile the major and minor numbers, if required. For example, if the
output in the previous section resembles the following, perform the
instructions beginning step 2:
Output on Node A:
Output on Node B:
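For illustration, assuming the major numbers referenced in step 3 (32 on Node A, 36 on Node B), mismatched listings might resemble:
Node A:  brw-r-----   1 root     sys       32,  1 Dec  3 11:50 /dev/dsk/c1t1d0s2
Node B:  brw-r-----   1 root     sys       36,  1 Dec  3 11:55 /dev/dsk/c1t1d0s2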
# export PATH=$PATH:/usr/sbin:/sbin:/opt/VRTS/bin
3 Attempt to change the major number on System B (now 36) to match that of
System A (32). Use the command:
# haremajor -sd 32
6 Notice that the number 32 (the major number on Node A) is not available on Node B. Run the haremajor command on Node B and change its major number to 128, a number that is available on Node B:
# haremajor -sd 128
7 Run the same command on Node A. If the command fails on Node A, the
output lists the available numbers. Rerun the command on both nodes, setting
the major number to one available to both.
8 Reboot each system on which the command succeeds.
9 Proceed to reconcile the major numbers for your next partition.
To reconcile the minor numbers that do not match on disk partitions
1 In the example, the minor numbers are 1 and 3 and are reconciled by setting
to 30 on each node.
2 Type the following command on both nodes using the name of the block
device:
# ls -l /dev/dsk/c1t1d0s2
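Illustrative output; the controller path in this example is chosen to match the instance listing shown below:
lrwxrwxrwx   1 root     root      83 Dec  3 11:50 /dev/dsk/c1t1d0s2 -> ../../devices/sbus@1f,0/QLGC,isp@0,10000/sd@1,0:c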
The device name begins with the slash that follows the word devices, and continues up to, but does not include, the colon.
"/sbus@1f,0/QLGC,isp@0,10000/sd@0,0" 0 "sd"
"/sbus@1f,0/QLGC,isp@0,10000/sd@1,0" 1 "sd"
"/sbus@1f,0/QLGC,isp@0,10000/sd@2,0" 2 "sd"
"/sbus@1f,0/QLGC,isp@0,10000/sd@3,0" 3 "sd"
.
.
"/sbus@1f,0/SUNW,fas@e,8800000/sd@d,0" 27 "sd"
"/sbus@1f,0/SUNW,fas@e,8800000/sd@e,0" 28 "sd"
"/sbus@1f,0/SUNW,fas@e,8800000/sd@f,0" 29 "sd"
# reboot -- -rv
# export PATH=$PATH:/usr/sbin:/sbin:/opt/VRTS/bin
2 To list the devices, use the ls -lL block_device command on each node:
# ls -lL /dev/vx/dsk/shareddg/vol3
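Output might resemble the following (the minor number and date are illustrative); the major number shown is the one to reconcile:
brw-------   1 root     root    32,43000 Mar 22 16:41 /dev/vx/dsk/shareddg/vol3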
4 Use the following command on each node that exports an NFS file system. The command displays the major numbers for vxio and vxspec that Veritas Volume Manager uses. Note that other major numbers are also displayed, but only vxio and vxspec are of concern for reconciliation:
# grep vx /etc/name_to_major
Output on Node A:
vxdmp 30
vxio 32
vxspec 33
vxfen 87
vxglm 91
Output on Node B:
vxdmp 30
vxio 36
vxspec 37
vxfen 87
vxglm 91
5 To change Node B’s major numbers for vxio and vxspec to match those of
Node A, use the command:
# haremajor -vx 32 33
If the command succeeds, proceed to step 8. If this command fails, you receive
a report similar to the following:
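The report lists the major numbers that are already in use and those that are available on the node; the exact wording depends on your systems, for example:
Error: Preexisting major number 32
These are available numbers on this system: 128...
Check /etc/name_to_major on all systems for available numbers.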
6 If you receive this report, use the haremajor command on Node A to change
the major number (32/33) to match that of Node B (36/37). For example, enter:
# haremajor -vx 36 37
If the command fails again, you receive a report similar to the following:
7 If you receive the second report, choose the larger of the two available
numbers (in this example, 128). Use this number in the haremajor command
to reconcile the major numbers. Type the following command on both nodes:
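For example, assuming 128 and 129 are the numbers chosen for vxio and vxspec (haremajor -vx takes both major numbers, as in the earlier steps):
# haremajor -vx 128 129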
Index

A
abort sequence 63
about
    global clusters 23
adding
    users 116
adding node
    to a one-node cluster 207

B
block device
    partitions
        example file name 301
    volumes
        example file name 301

C
cluster
    creating a single-node cluster
        installer 259
    four-node configuration 20
    removing a node from 218
    verifying operation 185
Cluster Manager 25
cold start
    running VCS 22
commands
    format 61
    gabconfig 183
    hastatus 185
    hasys 185
    lltconfig 241
    lltstat 180
    vxdisksetup (initializing disks) 127
    vxlicinst 124–125
    vxlicrep 123
communication channels 21
communication disk 21
configuring
    hardware 32
    private network 54
    rsh 57
    ssh 57, 281
    switches 54
configuring VCS
    adding users 116
    event notification 117, 119
    global clusters 121
    required information 68
    script-based installer 103
    secure mode 111
    starting 104
controllers
    private Ethernet 54
    SCSI 58
coordinator disks
    DMP devices 28
    for I/O fencing 28

D
data disks
    for I/O fencing 28
disk space
    directories 32
    language pack 32
    required 32
disks
    adding and initializing 127
    testing with vxfentsthdw 130
    verifying node access 132
documentation
    accessing 177

E
eeprom
    parameters 54
Ethernet controllers 54

F
FC-AL controllers 61
fibre channel 32
functions
    go 63

G
GAB
    description 21
    port membership information 183
    verifying 183
gabconfig command 183
    -a (verifying GAB) 183
gabtab file
    verifying after installation 241
global clusters 23
    configuration 121

H
hardware
    configuration 20
    configuring network and storage 32
hastatus -summary command 185
hasys -display command 185
hubs 54

I
I/O fencing
    checking disks 130
    shared storage 130
I/O fencing requirements
    non-SCSI-3 37
installer program
    uninstalling language packages 195
installing
    post 122
    required disk space 32
installing VCS
    required information 68
installvcs
    options 40
installvcs prompts
    b 41
    n 41
    y 41

J
Java Console 25

L
language packages 195
    disk space 32
license keys
    adding with vxlicinst 124
    obtaining 48
    replacing demo key 125
licenses
    information about 123
links
    private network 241
LLT
    description 21
    interconnects 65
    verifying 180
lltconfig command 241
llthosts file
    verifying after installation 241
lltstat command 180
llttab file
    verifying after installation 241

M
MAC addresses 54
main.cf file
    contents after installation 247
main.cf files 252
major and minor numbers
    checking 302, 305
    shared devices 301
MANPATH variable
    setting 63
media speed 65
    optimizing 65
membership information 183
mounting
    software disc 65

N
network partition
    preexisting 22
    protecting against 20
Network partitions
    protecting against 21
network switches 54
NFS 19
NFS services
    shared storage 301
non-SCSI-3 I/O fencing
    requirements 37
non-SCSI3 fencing
    setting up 135
    using installvcs program 135

O
optimizing
    media speed 65
overview
    VCS 19

P
parameters
    eeprom 54
PATH variable
    setting 63
    VCS commands 180
persistent reservations
    SCSI-3 58
port a
    membership 183
port h
    membership 183
port membership information 183
prerequisites
    uninstalling 193
private network
    configuring 54

R
RAM
    installation requirement 32
removing a system from a cluster 218
requirements
    Ethernet controllers 32
    fibre channel 32
    hardware 32
    RAM 32
    SCSI host bus adapter 32
response files 42
rsh 105, 281
    configuration 57

S
script-based installer
    VCS configuration overview 103
SCSI driver
    determining instance numbers 303
SCSI host bus adapter 32
SCSI-3
    persistent reservations 58
seeding 22
    automatic 22
    manual 22
setting
    MANPATH variable 63
    PATH variable 63
shared storage
    Fibre Channel
        setting up 61
    NFS services 301
single-node cluster
    adding a node to 207
single-system cluster
    creating 259
SMTP email notification 117
SNMP trap notification 119
ssh 105, 281
    configuration 57
    configuring 281
starting configuration
    installvcs program 105
    Veritas product installer 105
storage
    fully shared vs. distributed 20
    setting up shared fibre 61
    shared 20
switches 54
Symantec Product Authentication Service 111
system communication using rsh
    ssh 281
system state attribute value 185

U
uninstalling
    prerequisites 193
uninstalling language packages 195

V
variables
    MANPATH 63
    PATH 63
VCS
    basics 19
    command directory path variable 180
    configuration files
        main.cf 245
    configuring 103
    documentation 177
    notifications 23
    replicated states on each system 20
VCS features 23
VCS installation
    verifying
        cluster operations 180
        GAB operations 180
        LLT operations 180
VCS notifications
    SMTP notification 23
    SNMP notification 23
Veritas Operations Manager 25
Volume Manager
    Fibre Channel 61
vxdisksetup command 127
vxlicinst command 124
vxlicrep command 123