Oracle Solaris Cluster Software Installation Guide
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual
property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software,
unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is
applicable:
U.S. GOVERNMENT END USERS. Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or
documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and
agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system,
integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the
programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently
dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall
be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any
liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered
trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and
its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation
and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
The Oracle Solaris Cluster Software Installation Guide contains guidelines and procedures for
installing the Oracle Solaris Cluster 4.0 software on both SPARC based systems and x86 based
systems.
Note This Oracle Solaris Cluster release supports systems that use the SPARC and x86 families
of processor architectures. In this document, x86 refers to the larger family of x86 compatible
products. Information in this document pertains to all platforms unless otherwise specified.
This document is intended for experienced system administrators with extensive knowledge of
Oracle software and hardware. Do not use this document as a presales guide. You should have
already determined your system requirements and purchased the appropriate equipment and
software before reading this document.
The instructions in this book assume knowledge of the Oracle Solaris operating system and
expertise with the volume manager software that is used with Oracle Solaris Cluster software.
Bash is the default shell for Oracle Solaris 11. Machine names shown with the Bash shell prompt
are displayed for clarity.
Typographic Conventions
The following table describes the typographic conventions that are used in this book.
AaBbCc123
    The names of commands, files, and directories, and onscreen computer output.
    Examples: Edit your .login file. Use ls -a to list all files. machine_name% you have mail.

aabbcc123
    Placeholder: replace with a real name or value.
    Example: The command to remove a file is rm filename.

AaBbCc123
    Book titles, new terms, and terms to be emphasized.
    Examples: Read Chapter 6 in the User's Guide. A cache is a copy that is stored locally. Do not save the file.
    Note: Some emphasized items appear bold online.

Shell Prompt

C shell    machine_name%
Related Documentation
Information about related Oracle Solaris Cluster topics is available in the documentation that is
listed in the following table. All Oracle Solaris Cluster documentation is available at
http://www.oracle.com/technetwork/indexes/documentation/index.html.
Topic: Hardware installation and administration
Documentation: Oracle Solaris Cluster 4.0 Hardware Administration Manual; individual hardware administration guides

Topic: Data service installation and administration
Documentation: Oracle Solaris Cluster Data Services Planning and Administration Guide and individual data service guides

Topic: Data service development
Documentation: Oracle Solaris Cluster Data Services Developer's Guide
Getting Help
If you have problems installing or using Oracle Solaris Cluster, contact your service provider
and provide the following information.
Your name and email address (if available)
Your company name, address, and phone number
The model number and serial number of your systems
The release number of the operating environment (for example, Oracle Solaris 11)
The release number of Oracle Solaris Cluster (for example, Oracle Solaris Cluster 4.0)
Use the following commands to gather information about your system for your service
provider.
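For example, the following commands are commonly used on Oracle Solaris 11 systems to collect this information. Treat them as suggestions to adapt, not an exhaustive or authoritative list.

    # Oracle Solaris release and installed package versions
    pkg list entire
    # Processor and platform details
    psrinfo -v
    prtconf -v
    # Oracle Solaris Cluster release information (assumes the cluster framework is installed)
    /usr/cluster/bin/clnode show-rev -v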
This chapter provides planning information and guidelines specific to an Oracle Solaris Cluster
4.0 configuration.
Task: Set up cluster hardware.
Instructions: Oracle Solaris Cluster 4.0 Hardware Administration Manual; documentation that shipped with your server and storage devices

Task: Plan global-cluster software installation.
Instructions: Chapter 1, Planning the Oracle Solaris Cluster Configuration

Task: Establish a new global cluster or a new global-cluster node.
Instructions: Establishing a New Global Cluster or New Global-Cluster Node on page 62

Task: Configure Solaris Volume Manager software.
Instructions: Configuring Solaris Volume Manager Software on page 129; Solaris Volume Manager Administration Guide

Task: Configure cluster file systems, if used.
Instructions: How to Create Cluster File Systems on page 143
Task: Plan, install, and configure resource groups and data services. Create highly available local file systems, if used.
Instructions: Oracle Solaris Cluster Data Services Planning and Administration Guide

Task: Develop custom data services.
Instructions: Oracle Solaris Cluster Data Services Developer's Guide
For more information about Oracle Solaris software, see your Oracle Solaris installation
documentation.
See How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software
(Automated Installer) on page 84 for details about the scinstall AI installation method. See
your Oracle Solaris installation documentation for details about standard Oracle Solaris
installation methods and what configuration choices you must make during installation of the
OS.
Loopback file system (LOFS) You must disable LOFS on each voting cluster node if the cluster
meets both of the following conditions: HA for NFS is configured on a highly available local
file system, and the automountd daemon is running. If the cluster meets only one of these
conditions, you can safely enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the
automounter map all files that are part of the highly available local file system that is
exported by HA for NFS.
Power-saving shutdown Automatic power-saving shutdown is not supported in Oracle
Solaris Cluster configurations and should not be enabled. See the poweradm(1M) man page
for more information.
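A minimal sketch of how you might disable automatic power management, assuming the Oracle Solaris 11 poweradm(1M) interface; verify the exact property names and subcommands against the man page:

    # Turn off automatic power management and apply the change
    poweradm set administrative-authority=none
    poweradm update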
Network Auto-Magic (NWAM) The Oracle Solaris Network Auto-Magic (NWAM)
feature activates a single network interface and disables all others. For this reason, NWAM
cannot coexist with the Oracle Solaris Cluster software and you must disable it before you
configure or run your cluster.
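One way to disable NWAM, offered here as a sketch rather than a required procedure, is to switch the node to the fixed network configuration profile before you configure the cluster:

    # Activate the DefaultFixed profile, which disables reactive (NWAM) configuration
    netadm enable -p ncp DefaultFixed
    # Confirm which profile is active
    netadm list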
IP Filter Oracle Solaris Cluster relies on IP network multipathing (IPMP) for public
network monitoring. Any IP Filter configuration must be made in accordance with IPMP
configuration guidelines and restrictions concerning IP Filter.
fssnap Oracle Solaris Cluster software does not support the fssnap command, which is a
feature of UFS. However, you can use the fssnap command on local systems that are not
controlled by Oracle Solaris Cluster software. The following restrictions apply to fssnap
support:
The fssnap command is supported on local file systems that are not managed by Oracle
Solaris Cluster software.
The fssnap command is not supported on cluster file systems.
The fssnap command is not supported on local file systems under the control of
HAStoragePlus.
Note The lofi device that is created for the global-devices namespace is restricted to
that use only. Do not use this device for any other purpose, and never unmount the
device.
/var The Oracle Solaris Cluster software occupies a negligible amount of space in the /var
file system at installation time. However, you need to set aside ample space for log files. Also,
more messages might be logged on a clustered node than would be found on a typical
stand-alone server. Therefore, allow at least 100 Mbytes for the /var file system.
swap The combined amount of swap space that is allocated for Oracle Solaris and Oracle
Solaris Cluster software must be no less than 750 Mbytes. For best results, add at least
512 Mbytes for Oracle Solaris Cluster software to the amount that is required by the Oracle
Solaris OS. In addition, allocate any additional swap amount that is required by applications
that are to run on the Oracle Solaris host.
Note If you create an additional swap file, do not create the swap file on a global device. Use
only a local disk as a swap device for the host.
Volume manager Create a 20-Mbyte partition on slice 6 for volume manager use.
To support Solaris Volume Manager, you can create this partition on one of the following
locations:
A local disk other than the ZFS root pool
The ZFS root pool, if the ZFS root pool is on a partition rather than a disk
Set aside a slice for this purpose on each local disk. However, if you have only one local disk on
an Oracle Solaris host, you might need to create three state database replicas in the same slice
for Solaris Volume Manager software to function properly. See Solaris Volume Manager
Administration Guide for more information.
To meet these requirements, you must customize the partitioning if you are performing
interactive installation of the Oracle Solaris OS.
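As an illustration of the single-disk replica guideline above, the following sketch places three Solaris Volume Manager state database replicas in one slice; the disk and slice names (c0t0d0s6) are hypothetical, so substitute the slice that you set aside:

    # Create three state database replicas in slice 6 of the local disk
    metadb -af -c 3 c0t0d0s6
    # Verify the replicas
    metadb -i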
High-interrupt load, for example, due to network or disk I/O. Under extreme load,
virtual switches can preclude system threads from running for a long time, including
virtual-switch threads.
Real-time threads that are overly aggressive in retaining CPU resources. Real-time
threads run at a higher priority than virtual-switch threads, which can restrict CPU
resources for virtual-switch threads for an extended time.
Non-shared storage For non-shared storage, such as for Oracle VM Server for SPARC
guest-domain OS images, you can use any type of virtual device. You can back such virtual
devices by any implementation in the I/O domain, such as files or volumes. However, do not copy
files or clone volumes in the I/O domain for the purpose of mapping them into different
guest domains of the same cluster. Such copying or cloning would lead to problems because
the resulting virtual devices would have the same device identity in different guest domains.
Always create a new file or device in the I/O domain, which would be assigned a unique
device identity, then map the new file or device into a different guest domain.
Exporting storage from I/O domains If you configure a cluster that is composed of
Oracle VM Server for SPARC I/O domains, do not export its storage devices to other guest
domains that also run Oracle Solaris Cluster software.
Oracle Solaris I/O multipathing Do not run Oracle Solaris I/O multipathing software
(MPxIO) from guest domains. Instead, run Oracle Solaris I/O multipathing software in the
I/O domain and export it to the guest domains.
For more information about Oracle VM Server for SPARC, see the Oracle VM Server for
SPARC 2.1 Administration Guide.
For detailed information about Oracle Solaris Cluster components, see the Oracle Solaris
Cluster Concepts Guide.
Licensing
Ensure that you have available all necessary license certificates before you begin software
installation. Oracle Solaris Cluster software does not require a license certificate, but each node
installed with Oracle Solaris Cluster software must be covered under your Oracle Solaris
Cluster software license agreement.
For licensing requirements for volume-manager software and applications software, see the
installation documentation for those products.
Software Updates
After installing each software product, you must also install any required software updates. For
proper cluster operation, ensure that all cluster nodes maintain the same update level.
For general guidelines and procedures for applying software updates, see Chapter 11,
Updating Your Software, in Oracle Solaris Cluster System Administration Guide.
Public-Network IP Addresses
For information about the use of public networks by the cluster, see Public Network Adapters
and IP Network Multipathing in Oracle Solaris Cluster Concepts Guide.
You must set up a number of public-network IP addresses for various Oracle Solaris Cluster
components. The number of addresses that you need depends on which components you
include in your cluster configuration. Each Oracle Solaris host in the cluster configuration must
have at least one public-network connection to the same set of public subnets.
The following table lists the components that need public-network IP addresses assigned. Add
these IP addresses to the following locations:
Any naming services that are used
The local /etc/inet/hosts file on each global-cluster node, after you install Oracle Solaris
software
The local /etc/inet/hosts file on any exclusive-IP non-global zone
TABLE 1-2 Oracle Solaris Cluster Components That Use Public-Network IP Addresses (columns: Component, Number of IP Addresses Needed)
For more information about planning IP addresses, see Chapter 1, Planning the Network
Deployment, in Oracle Solaris Administration: IP Services.
Console-Access Devices
You must have console access to all cluster nodes. A service processor (SP) is used to
communicate between the administrative console and the global-cluster node consoles.
For more information about console access, see the Oracle Solaris Cluster Concepts Guide.
You can use the Oracle Solaris pconsole utility to connect to the cluster nodes. The utility also
provides a master console window from which you can propagate your input to all connections
that you opened. For more information, see the pconsole(1) man page that is available when
you install the Oracle Solaris 11 terminal/pconsole package.
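As a sketch of the workflow described above, you might install the package and then open console connections to all nodes at once; the node names (phys-schost-1, phys-schost-2) are placeholders for your own cluster nodes:

    # Install the pconsole utility on the administrative console
    pkg install terminal/pconsole
    # Open a console window per node plus a master window that broadcasts input
    pconsole phys-schost-1 phys-schost-2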
Logical addresses Each data-service resource group that uses a logical address must have a
hostname specified for each public network from which the logical address can be accessed.
For additional information about data services and resources, also see the Oracle Solaris
Cluster Concepts Guide.
IPv4 Oracle Solaris Cluster software supports IPv4 addresses on the public network.
IPv6 Oracle Solaris Cluster software supports IPv6 addresses on the public network for
both failover and scalable data services.
IPMP groups Each public-network adapter that is used for data-service traffic must
belong to an IP network multipathing (IPMP) group. If a public-network adapter is not used
for data-service traffic, you do not have to configure it in an IPMP group.
Unless there are one or more non-link-local IPv6 public network interfaces in the public
network configuration, the scinstall utility automatically configures a multiple-adapter
IPMP group for each set of public-network adapters in the cluster that uses the same subnet.
These groups are link-based with transitive probes.
If the configuration contains any non-link-local IPv6 public network interfaces, you must
manually configure in IPMP groups all interfaces that will be used for data-service traffic.
You can configure the IPMP groups either before or after the cluster is established.
The scinstall utility ignores adapters that are already configured in an IPMP group. You
can use probe-based IPMP groups or link-based IPMP groups in a cluster. Probe-based
IPMP groups, which test the target IP address, provide the most protection by recognizing
more conditions that might compromise availability.
If any adapter in an IPMP group that the scinstall utility configures will not be used for
data-service traffic, you can remove that adapter from the group.
For guidelines on IPMP groups, see Chapter 14, Introducing IPMP, in Oracle Solaris
Administration: Network Interfaces and Network Virtualization. To modify IPMP groups
after cluster installation, follow the guidelines in How to Administer IP Network
Multipathing Groups in a Cluster in Oracle Solaris Cluster System Administration Guide
and procedures in Chapter 15, Administering IPMP, in Oracle Solaris Administration:
Network Interfaces and Network Virtualization.
Local MAC address support All public-network adapters must use network interface
cards (NICs) that support local MAC address assignment. Local MAC address assignment is
a requirement of IPMP.
local-mac-address setting The local-mac-address? variable must use the default value
true for Ethernet adapters. Oracle Solaris Cluster software does not support a
local-mac-address? value of false for Ethernet adapters.
For more information about public-network interfaces, see Oracle Solaris Cluster Concepts
Guide.
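For the manual IPMP configuration described above, the following is a minimal sketch that uses the Oracle Solaris 11 ipadm command; the interface names (net0, net1), group name (sc_ipmp0), and address are examples only, and your addressing step will differ:

    # Create IP interfaces for the two public-network adapters
    ipadm create-ip net0
    ipadm create-ip net1
    # Create an IPMP group and add both adapters to it
    ipadm create-ipmp sc_ipmp0
    ipadm add-ipmp -i net0 -i net1 sc_ipmp0
    # Assign a data address to the IPMP group
    ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4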
Consider the following points when you plan the use of a quorum server in an Oracle Solaris
Cluster configuration.
Network connection The quorum-server computer connects to your cluster through the
public network.
Supported hardware The supported hardware platforms for a quorum server are the
same as for a global-cluster node.
Operating system Oracle Solaris software requirements for Oracle Solaris Cluster
software apply as well to Quorum Server software.
Restriction for non-global zones In the Oracle Solaris Cluster 4.0 release, a quorum
server cannot be installed and configured in a non-global zone.
Service to multiple clusters You can configure a quorum server as a quorum device to
more than one cluster.
Mixed hardware and software You do not have to configure a quorum server on the same
hardware and software platform as the cluster or clusters for which it provides quorum. For
example, a SPARC based machine that runs the Oracle Solaris 10 OS can be configured as a
quorum server for an x86 based cluster that runs the Oracle Solaris 11 OS.
Spanning tree algorithm You must disable the spanning tree algorithm on the Ethernet
switches for the ports that are connected to the cluster public network where the quorum
server will run.
Using a cluster node as a quorum server You can configure a quorum server on a cluster
node to provide quorum for clusters other than the cluster that the node belongs to.
However, a quorum server that is configured on a cluster node is not highly available.
NFS Guidelines
Consider the following points when you plan the use of Network File System (NFS) in an Oracle
Solaris Cluster configuration:
NFS client No Oracle Solaris Cluster node can be an NFS client of an HA for NFS exported
file system that is being mastered on a node in the same cluster. Such cross-mounting of HA
for NFS is prohibited. Use the cluster file system to share files among global-cluster nodes.
NFSv3 protocol If you are mounting file systems on the cluster nodes from external NFS
servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client
mounts and the HA for NFS data service on the same cluster node. If you do, certain HA for
NFS data-service activities might cause the NFS daemons to stop and restart, interrupting
NFS services. However, you can safely run the HA for NFS data service if you use the NFSv4
protocol to mount external NFS file systems on the cluster nodes.
Locking Applications that run locally on the cluster must not lock files on a file system that
is exported through NFS. Otherwise, local locking (for example, flock(3UCB) or
fcntl(2)) might interfere with the ability to restart the lock manager (lockd(1M)). During
restart, a blocked local process might be granted a lock that is intended to be
reclaimed by a remote client. This situation would cause unpredictable behavior.
NFS security features Oracle Solaris Cluster software does not support the following
options of the share_nfs(1M) command:
secure
sec=dh
However, Oracle Solaris Cluster software does support the following security feature for
NFS:
The use of secure ports for NFS. You enable secure ports for NFS by adding the entry set
nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
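As a small illustration of the supported security feature above, the entry appears in /etc/system exactly as follows; a reboot is typically required for /etc/system changes to take effect:

    set nfssrv:nfs_portmon=1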
Fencing Zone clusters support fencing for all supported NAS devices, shared disks, and
storage arrays.
Service Restrictions
Observe the following service restrictions for Oracle Solaris Cluster configurations:
Routers Do not configure cluster nodes as routers (gateways) due to the following
reasons:
Routing protocols might inadvertently broadcast the cluster interconnect as a publicly
reachable network to other routers, despite the setting of the IFF_PRIVATE flag on the
interconnect interfaces.
Routing protocols might interfere with the failover of IP addresses across cluster nodes,
which impacts client accessibility.
Routing protocols might compromise proper functionality of scalable services by
accepting client network packets and dropping them, instead of forwarding the packets
to other cluster nodes.
NIS+ servers Do not configure cluster nodes as NIS or NIS+ servers. There is no data
service available for NIS or NIS+. However, cluster nodes can be NIS or NIS+ clients.
Install servers Do not use an Oracle Solaris Cluster configuration to provide a highly
available installation service on client systems.
RARP Do not use an Oracle Solaris Cluster configuration to provide an rarpd service.
Remote procedure call (RPC) program numbers If you install an RPC service on the
cluster, the service must not use any of the following program numbers:
100141
100142
100248
These numbers are reserved for the Oracle Solaris Cluster daemons rgmd_receptionist,
fed, and pmfd, respectively.
If the RPC service that you install also uses one of these program numbers, you must change
that RPC service to use a different program number.
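One way to check for a conflict, offered as a suggestion rather than a required step, is to list the RPC services that are registered on a node and look for the reserved program numbers:

    # List registered RPC programs and flag any that use the reserved numbers
    rpcinfo -p | egrep '100141|100142|100248'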
Scheduling classes Oracle Solaris Cluster software does not support the running of
high-priority process scheduling classes on cluster nodes. Do not run either of the following
types of processes on cluster nodes:
Processes that run in the time-sharing scheduling class with a high priority
Processes that run in the real-time scheduling class
Oracle Solaris Cluster software relies on kernel threads that do not run in the real-time
scheduling class. Other time-sharing processes that run at higher-than-normal priority or
real-time processes can prevent the Oracle Solaris Cluster kernel threads from acquiring
needed CPU cycles.
See the Oracle Solaris Cluster Concepts Guide for further information about cluster time. For
more information about NTP, see the ntpd(1M) man page that is delivered in the Oracle Solaris
11 service/network/ntp package.
Global-Cluster Name
Specify a name for the global cluster during Oracle Solaris Cluster configuration. The global
cluster name should be unique throughout the enterprise.
For information about naming a zone cluster, see Zone Clusters on page 30.
In single-host cluster installations, the default cluster name is the name of the voting node.
During Oracle Solaris Cluster configuration, you specify the names of all voting nodes that you
are installing in the global cluster. Node names must be unique throughout the cluster.
A node ID number is assigned to each cluster node for intracluster use, beginning with the
number 1. Node ID numbers are assigned to each cluster node in the order that the node
becomes a cluster member. If you configure all cluster nodes in one operation, the node from
which you run the scinstall utility is the last node assigned a node ID number. You cannot
change a node ID number after it is assigned to a cluster node.
A node that becomes a cluster member is assigned the lowest available node ID number. If a
node is removed from the cluster, its node ID becomes available for assignment to a new node.
For example, if in a four-node cluster the node that is assigned node ID 3 is removed and a new
node is added, the new node is assigned node ID 3, not node ID 5.
If you want the assigned node ID numbers to correspond to certain cluster nodes, configure the
cluster nodes one node at a time in the order that you want the node ID numbers to be assigned.
For example, to have the cluster software assign node ID 1 to phys-schost-1, configure that
node as the sponsoring node of the cluster. If you next add phys-schost-2 to the cluster
established by phys-schost-1, phys-schost-2 is assigned node ID 2.
For information about node names in a zone cluster, see Zone Clusters on page 30.
Note You do not need to configure a private network for a single-host global cluster. The
scinstall utility automatically assigns the default private-network address and netmask even
though a private network is not used by the cluster.
Oracle Solaris Cluster software uses the private network for internal communication among
nodes and among non-global zones that are managed by Oracle Solaris Cluster software. An
Oracle Solaris Cluster configuration requires at least two connections to the cluster
interconnect on the private network. When you configure Oracle Solaris Cluster software on
the first node of the cluster, you specify the private-network address and netmask in one of the
following ways:
Accept the default private-network address (172.16.0.0) and default netmask
(255.255.240.0). This IP address range supports a combined maximum of 64 voting nodes
and non-global zones, a maximum of 12 zone clusters, and a maximum of 10 private
networks.
Note The maximum number of voting nodes that an IP address range can support does not
reflect the maximum number of voting nodes that the hardware or software configuration
can currently support.
Specify a different allowable private-network address and accept the default netmask.
Accept the default private-network address and specify a different netmask.
Specify both a different private-network address and a different netmask.
If you choose to specify a different netmask, the scinstall utility prompts you for the number
of nodes and the number of private networks that you want the IP address range to support. The
utility also prompts you for the number of zone clusters that you want to support. The number
of global-cluster nodes that you specify should also include the expected number of unclustered
non-global zones that will use the private network.
The utility calculates the netmask for the minimum IP address range that will support the
number of nodes, zone clusters, and private networks that you specified. The calculated
netmask might support more than the supplied number of nodes, including non-global zones,
zone clusters, and private networks. The scinstall utility also calculates a second netmask that
would be the minimum to support twice the number of nodes, zone clusters, and private
networks. This second netmask would enable the cluster to accommodate future growth
without the need to reconfigure the IP address range.
The utility then asks you what netmask to choose. You can specify either of the calculated
netmasks or provide a different one. The netmask that you specify must minimally support the
number of nodes and private networks that you specified to the utility.
Note Changing the cluster private IP address range might be necessary to support the addition
of voting nodes, non-global zones, zone clusters, or private networks.
To change the private-network address and netmask after the cluster is established, see How to
Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris
Cluster System Administration Guide. You must bring down the cluster to make these changes.
However, the cluster can remain in cluster mode if you use the cluster set-netprops
command to change only the netmask. For any zone cluster that is already configured in the
cluster, the private IP subnets and the corresponding private IP addresses that are allocated for
that zone cluster will also be updated.
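A minimal sketch of a netmask-only change made while the cluster remains in cluster mode; the property names shown (private_netmask) and the example value are assumptions to verify against the cluster(1CL) man page:

    # Change only the netmask of the cluster private network
    cluster set-netprops -p private_netmask=255.255.248.0
    # Verify the private-network settings
    cluster show-netprops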
If you specify a private-network address other than the default, the address must meet the
following requirements:
Address and netmask sizes The private network address cannot be smaller than the
netmask. For example, you can use a private network address of 172.16.10.0 with a
netmask of 255.255.255.0. However, you cannot use a private network address of
172.16.10.0 with a netmask of 255.255.0.0.
Acceptable addresses The address must be included in the block of addresses that
RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain
copies of RFCs or view RFCs online at http://www.rfcs.org.
Use in multiple clusters You can use the same private-network address in more than one
cluster provided that the clusters are on different private networks. Private IP network
addresses are not accessible from outside the physical cluster.
Oracle VM Server for SPARC - When guest domains are created on the same physical
machine and are connected to the same virtual switch, the private network is shared by such
guest domains and is visible to all these domains. Proceed with caution before you specify a
private-network IP address range to the scinstall utility for use by a cluster of guest
domains. Ensure that the address range is not already in use by another guest domain that
exists on the same physical machine and shares its virtual switch.
VLANs shared by multiple clusters Oracle Solaris Cluster configurations support the
sharing of the same private-interconnect VLAN among multiple clusters. You do not have
to configure a separate VLAN for each cluster. However, for the highest level of fault
isolation and interconnect resilience, limit the use of a VLAN to a single cluster.
IPv6 Oracle Solaris Cluster software does not support IPv6 addresses for the private
interconnect. The system does configure IPv6 addresses on the private-network adapters to
support scalable services that use IPv6 addresses. However, internode communication on
the private network does not use these IPv6 addresses.
Private Hostnames
The private hostname is the name that is used for internode communication over the
private-network interface. Private hostnames are automatically created during Oracle Solaris
Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the
naming convention clusternodenode-id-priv, where node-id is the numeral of the internal
node ID. During Oracle Solaris Cluster configuration, the node ID number is automatically
assigned to each voting node when the node becomes a cluster member. A voting node of the
global cluster and a node of a zone cluster can both have the same private hostname, but each
hostname resolves to a different private-network IP address.
After a global cluster is configured, you can rename its private hostnames by using the
clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster
node.
The creation of a private hostname for a non-global zone is optional. There is no required
naming convention for the private hostname of a non-global zone.
Cluster Interconnect
The cluster interconnects provide the hardware pathways for private-network communication
between cluster nodes. Each interconnect consists of a cable that is connected in one of the
following ways:
Between two transport adapters
Between a transport adapter and a transport switch
For more information about the purpose and function of the cluster interconnect, see Cluster
Interconnect in Oracle Solaris Cluster Concepts Guide.
Note You do not need to configure a cluster interconnect for a single-host cluster. However, if
you anticipate eventually adding more voting nodes to a single-host cluster configuration, you
might want to configure the cluster interconnect for future use.
During Oracle Solaris Cluster configuration, you specify configuration information for one or
two cluster interconnects.
If the number of available adapter ports is limited, you can use tagged VLANs to share the
same adapter with both the private and public network. For more information, see the
guidelines for tagged VLAN adapters in Transport Adapters on page 27.
You can set up from one to six cluster interconnects in a cluster. While a single cluster
interconnect reduces the number of adapter ports that are used for the private interconnect,
it provides no redundancy and less availability. If a single interconnect fails, the cluster is at a
higher risk of having to perform automatic recovery. Whenever possible, install two or more
cluster interconnects to provide redundancy and scalability, and therefore higher
availability, by avoiding a single point of failure.
You can configure additional cluster interconnects, up to six interconnects total, after the
cluster is established by using the clsetup utility.
For guidelines about cluster interconnect hardware, see Interconnect Requirements and
Restrictions in Oracle Solaris Cluster 4.0 Hardware Administration Manual. For general
information about the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster
Concepts Guide.
Transport Adapters
For the transport adapters, such as ports on network interfaces, specify the transport adapter
names and transport type. If your configuration is a two-host cluster, you also specify whether
your interconnect is a point-to-point connection (adapter to adapter) or uses a transport
switch.
Consider the following guidelines and restrictions:
IPv6 Oracle Solaris Cluster software does not support IPv6 communications over the
private interconnects.
Local MAC address assignment All private network adapters must use network interface
cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which
are required on private-network adapters to support IPv6 public-network addresses for
scalable data services, are derived from the local MAC addresses.
Tagged VLAN adapters Oracle Solaris Cluster software supports tagged Virtual Local
Area Networks (VLANs) to share an adapter between the private cluster interconnect and
the public network. You must use the dladm create-vlan command to configure the
adapter as a tagged VLAN adapter before you configure it with the cluster.
To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter by its
VLAN virtual device name. This name is composed of the adapter name plus the VLAN
instance number. The VLAN instance number is derived from the formula (1000*V)+N,
where V is the VID number and N is the PPA.
As an example, for VID 73 on adapter net2, the VLAN instance number would be
calculated as (1000*73)+2. You would therefore specify the adapter name as net73002 to
indicate that it is part of a shared virtual LAN.
For information about configuring VLAN in a cluster, see Configuring VLANs as Private
Interconnect Networks in Oracle Solaris Cluster 4.0 Hardware Administration Manual. For
information about creating and administering VLANs, see the dladm(1M) man page and
Chapter 13, Administering VLANs, in Oracle Solaris Administration: Network Interfaces
and Network Virtualization.
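Continuing the VID 73 example above, the following sketch creates the tagged VLAN link before you configure it with the cluster; the underlying link name net2 comes from the example, and the VLAN link name follows the (1000*V)+N convention:

    # Create a tagged VLAN link for VID 73 over physical link net2
    dladm create-vlan -l net2 -v 73 net73002
    # Verify the new VLAN link
    dladm show-vlan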
SPARC: Oracle VM Server for SPARC guest domains Specify adapter names by their
virtual names, vnetN, such as vnet0 and vnet1. Virtual adapter names are recorded in the
/etc/path_to_inst file.
Logical network interfaces Logical network interfaces are reserved for use by Oracle
Solaris Cluster software.
Transport Switches
If you use transport switches, such as a network switch, specify a transport switch name for each
interconnect. You can use the default name switchN, where N is a number that is automatically
assigned during configuration, or create another name.
Also specify the switch port name or accept the default name. The default port name is the same
as the internal node ID number of the Oracle Solaris host that hosts the adapter end of the cable.
However, you cannot use the default port name for certain adapter types.
Clusters with three or more voting nodes must use transport switches. Direct connection
between voting cluster nodes is supported only for two-host clusters. If your two-host cluster is
direct connected, you can still specify a transport switch for the interconnect.
Tip If you specify a transport switch, you can more easily add another voting node to the
cluster in the future.
Global Fencing
Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk
during split-brain situations. By default, the scinstall utility in Typical Mode leaves global
fencing enabled, and each shared disk in the configuration uses the default global fencing
setting of prefer3. With the prefer3 setting, the SCSI-3 protocol is used.
If any device is unable to use the SCSI-3 protocol, the pathcount setting should be used instead,
where the fencing protocol for the shared disk is chosen based on the number of DID paths that
are attached to the disk. Non-SCSI-3 capable devices are limited to two DID device paths within
the cluster. Fencing can be turned off for devices which do not support either SCSI-3 or SCSI-2
fencing. However, data integrity for such devices cannot be guaranteed during split-brain
situations.
In Custom Mode, the scinstall utility prompts you whether to disable global fencing. For
most situations, respond No to keep global fencing enabled. However, you can disable global
fencing in certain situations.
Caution If you disable fencing in situations other than the ones described, your data might
be vulnerable to corruption during application failover. Examine this possibility of data
corruption carefully when you consider turning off fencing.
The situations in which you can disable global fencing are as follows:
The shared storage does not support SCSI reservations.
If you turn off fencing for a shared disk that you then configure as a quorum device, the
device uses the software quorum protocol. This is true regardless of whether the disk
supports SCSI-2 or SCSI-3 protocols. Software quorum is a protocol in Oracle Solaris
Cluster software that emulates a form of SCSI Persistent Group Reservations (PGR).
You want to enable systems that are outside the cluster to gain access to storage that is
attached to the cluster.
If you disable global fencing during cluster configuration, fencing is turned off for all shared
disks in the cluster. After the cluster is configured, you can change the global fencing protocol
or override the fencing protocol of individual shared disks. However, to change the fencing
protocol of a quorum device, you must first unconfigure the quorum device. Then set the new
fencing protocol of the disk and reconfigure it as a quorum device.
For more information about fencing behavior, see Failfast Mechanism in Oracle Solaris
Cluster Concepts Guide. For more information about setting the fencing protocol of individual
shared disks, see the cldevice(1CL) man page. For more information about the global fencing
setting, see the cluster(1CL) man page.
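The following sketch shows how the global setting and a per-disk override described above might be changed after the cluster is configured; the DID device name d3 is a placeholder, and the property names should be checked against the cluster(1CL) and cldevice(1CL) man pages:

    # Set the global fencing protocol for the cluster
    cluster set -p global_fencing=prefer3
    # Override the fencing protocol of one shared disk, for example to disable fencing
    cldevice set -p default_fencing=nofencing d3
    # Display the current settings of the disk
    cldevice show d3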
Quorum Devices
Oracle Solaris Cluster configurations use quorum devices to maintain data and resource
integrity. If the cluster temporarily loses connection to a voting node, the quorum device
prevents amnesia or split-brain problems when the voting cluster node attempts to rejoin the
cluster. For more information about the purpose and function of quorum devices, see Quorum
and Quorum Devices in Oracle Solaris Cluster Concepts Guide.
During Oracle Solaris Cluster installation of a two-host cluster, you can choose to have the
scinstall utility automatically configure an available shared disk in the configuration as a
quorum device. The scinstall utility assumes that all available shared disks are supported as
quorum devices.
If you want to use a quorum server or a Sun ZFS Storage Appliance NAS device from Oracle as
the quorum device, you configure it after scinstall processing is completed.
After installation, you can also configure additional quorum devices by using the clsetup
utility.
Note You do not need to configure quorum devices for a single-host cluster.
If your cluster configuration includes third-party shared storage devices that are not supported
for use as quorum devices, you must use the clsetup utility to configure quorum manually.
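As an example of manual quorum configuration, the following sketch uses the clquorum(1CL) command, which covers the same tasks as the clsetup quorum menus; the DID device name d3 is a placeholder for a shared disk in your configuration:

    # Configure DID device d3 as a shared-disk quorum device
    clquorum add d3
    # Verify the quorum configuration and vote counts
    clquorum show
    clquorum status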
Minimum A two-host cluster must have at least one quorum device, which can be a
shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are
optional.
Odd-number rule If more than one quorum device is configured in a two-host cluster or
in a pair of hosts directly connected to the quorum device, configure an odd number of
quorum devices. This configuration ensures that the quorum devices have completely
independent failure pathways.
Distribution of quorum votes For highest availability of the cluster, ensure that the total
number of votes that are contributed by quorum devices is less than the total number of
votes that are contributed by voting nodes. Otherwise, the nodes cannot form a cluster if all
quorum devices are unavailable even if all nodes are functioning.
Connection You must connect a quorum device to at least two voting nodes.
SCSI fencing protocol When a SCSI shared-disk quorum device is configured, its fencing
protocol is automatically set to SCSI-2 in a two-host cluster or SCSI-3 in a cluster with three
or more voting nodes.
Changing the fencing protocol of quorum devices For SCSI disks that are configured as a
quorum device, you must unconfigure the quorum device before you can enable or disable
its SCSI fencing protocol.
Software quorum protocol You can configure supported shared disks that do not support
SCSI protocol, such as SATA disks, as quorum devices. You must disable fencing for such
disks. The disks would then use the software quorum protocol, which emulates SCSI PGR.
The software quorum protocol would also be used by SCSI-shared disks if fencing is
disabled for such disks.
Replicated devices Oracle Solaris Cluster software does not support replicated devices as
quorum devices.
ZFS storage pools Do not add a configured quorum device to a ZFS storage pool. When a
configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk
and quorum configuration information is lost. The disk can then no longer provide a
quorum vote to the cluster.
After a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can
unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a
quorum device.
For more information about quorum devices, see Quorum and Quorum Devices in Oracle
Solaris Cluster Concepts Guide.
Zone Clusters
A zone cluster is a cluster of non-global Oracle Solaris zones. All nodes of a zone cluster are
configured as non-global zones of the solaris brand that are set with the cluster attribute. No
other brand type is permitted in a zone cluster. The isolation that is provided by the Oracle
Solaris Zones feature enables you to run supported services on the zone cluster in a similar way
as running the services in a global cluster.
Consider the following points when you plan the creation of a zone cluster:
Global-Cluster Requirements and Guidelines on page 31
Zone-Cluster Requirements and Guidelines on page 31
Non-global zone name The name of the non-global zone that hosts each zone-cluster node is automatically
derived from, and identical to, the name that you assign to the zone cluster when you create
the cluster. For example, if you create a zone cluster that is named zc1, the corresponding
non-global zone name on each host that supports the zone cluster is also zc1.
Cluster name Each zone-cluster name must be unique throughout the cluster of machines
that host the global cluster. The zone-cluster name cannot also be used by a non-global zone
elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a
global-cluster node. You cannot use all or global as a zone-cluster name, because these
are reserved names.
Public-network IP addresses You can optionally assign a specific public-network IP
address to each zone-cluster node.
Note If you do not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone
cluster. The cluster uses the IP address of the zone cluster node when communicating
with the NAS device, so not having an IP address prevents cluster support for fencing
NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
Oracle Solaris Cluster software does not require any specific disk layout or file system size.
Consider the following points when you plan your layout for global devices:
Mirroring You must mirror all global devices for the global device to be considered highly
available. You do not need to use software mirroring if the storage device provides hardware
RAID as well as redundant paths to disks.
Disks When you mirror, lay out file systems so that the file systems are mirrored across
disk arrays.
Availability You must physically connect a global device to more than one voting node in
the cluster for the global device to be considered highly available. A global device with
multiple physical connections can tolerate a single-node failure. A global device with only
one physical connection is supported, but the global device becomes inaccessible from other
voting nodes if the node with the connection is down.
Note You can alternatively configure highly available local file systems. This can provide better
performance to support a data service with high I/O, or to permit use of certain file system
features that are not supported in a cluster file system. For more information, see Enabling
Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and
Administration Guide.
Consider the following points when you plan cluster file systems:
Quotas Quotas are not supported on cluster file systems. However, quotas are supported
on highly available local file systems.
Zone clusters You cannot configure cluster file systems that use UFS for use in a zone
cluster. Use highly available local file systems instead.
Loopback file system (LOFS) During cluster creation, LOFS is enabled by default. You
must manually disable LOFS on each voting cluster node if the cluster meets both of the
following conditions:
HA for NFS is configured on a highly available local file system.
The automountd daemon is running.
If the cluster meets both of these conditions, you must disable LOFS to avoid switchover
problems or other failures. If the cluster meets only one of these conditions, you can safely
enable LOFS.
If you require both LOFS and the automountd daemon to be enabled, exclude from the
automounter map all files that are part of the highly available local file system that is
exported by HA for NFS.
Process accounting log files Do not locate process accounting log files on a cluster file
system or on a highly available local file system. A switchover would be blocked by writes to
the log file, which would cause the node to hang. Use only a local file system to contain
process accounting log files.
Communication endpoints The cluster file system does not support any of the file system
features of Oracle Solaris software by which one would put a communication endpoint in
the file system namespace. Therefore, do not attempt to use the fattach command from
any node other than the local node.
Although you can create a UNIX domain socket whose name is a path name into the
cluster file system, the socket would not survive a node failover.
Any FIFOs or named pipes that you create on a cluster file system would not be globally
accessible.
Device special files Neither block special files nor character special files are supported in a
cluster file system. To specify a path name to a device node in a cluster file system, create a
symbolic link to the device name in the /dev directory. Do not use the mknod command for
this purpose.
atime Cluster file systems do not maintain atime.
ctime When a file on a cluster file system is accessed, the update of the file's ctime might be
delayed.
Installing applications - If you want the binaries of a highly available application to reside
on a cluster file system, wait to install the application until after the cluster file system is
configured.
Note You can alternatively configure this and other types of file systems as highly available
local file systems. For more information, see Enabling Highly Available Local File Systems in
Oracle Solaris Cluster Data Services Planning and Administration Guide.
Follow the guidelines in the following list of mount options to determine what mount options
to use when you create your UFS cluster file systems.
global
Required. This option makes the file system globally visible to all nodes in the cluster.
logging
Required. This option enables logging.
forcedirectio
Conditional. This option is required only for cluster file systems that will host Oracle RAC
RDBMS data files, log files, and control files.
onerror=panic
Required. You do not have to explicitly specify the onerror=panic mount option in the
/etc/vfstab file. This mount option is already the default value if no other onerror mount
option is specified.
Note Only the onerror=panic mount option is supported by Oracle Solaris Cluster
software. Do not use the onerror=umount or onerror=lock mount options. These mount
options are not supported on cluster file systems for the following reasons:
Use of the onerror=umount or onerror=lock mount option might cause the cluster file
system to lock or become inaccessible. This condition might occur if the cluster file
system experiences file corruption.
The onerror=umount or onerror=lock mount option might cause the cluster file system
to become unmountable. This condition might thereby cause applications that use the
cluster file system to hang or prevent the applications from being killed.
syncdir
Optional. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior
for the write() system call. If a write() succeeds, then this mount option ensures that
sufficient space is on the disk.
If you do not specify syncdir, the same behavior occurs that is seen with UFS file systems.
When you do not specify syncdir, performance of writes that allocate disk blocks, such as
when appending data to a file, can significantly improve. However, in some cases, without
syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file.
You see ENOSPC on close only during a very short time after a failover. With syncdir, as with
POSIX behavior, the out-of-space condition would be discovered before the close.
See the mount_ufs(1M) man page for more information about UFS mount options.
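A sketch of what a corresponding /etc/vfstab entry might look like for a UFS cluster file system; the Solaris Volume Manager device names and the mount point are hypothetical:

    # device to mount        device to fsck           mount point     FS type  fsck pass  mount at boot  mount options
    /dev/md/oradg/dsk/d1     /dev/md/oradg/rdsk/d1    /global/oracle  ufs      2          yes            global,logging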
Oracle Solaris Cluster software uses volume manager software to group disks into device groups
that can then be administered as one unit. You must install Solaris Volume Manager software
on all voting nodes of the cluster.
See your volume manager documentation and Configuring Solaris Volume Manager
Software on page 129 for instructions about how to install and configure the volume manager
software. For more information about the use of volume management in a cluster
configuration, see Multihost Devices in Oracle Solaris Cluster Concepts Guide and Device
Groups in Oracle Solaris Cluster Concepts Guide.
See your volume manager software documentation for disk layout recommendations and any
additional restrictions.
You must use the hosts that can master a disk set as mediators for that disk set. If you
have a campus cluster, you can also configure a third node or a non-clustered host on the
cluster network as a third mediator host to improve availability.
Mediators cannot be configured for disk sets that do not meet the two-string and
two-host requirements.
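For example, assuming a disk set named dg-schost-1 that both phys-schost-1 and phys-schost-2 can master (all names are placeholders), you might add both hosts as mediator hosts with the metaset command:
phys-schost-1# metaset -s dg-schost-1 -a -m phys-schost-1 phys-schost-2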
Mirroring Guidelines
This section provides the following guidelines for planning the mirroring of your cluster
configuration:
Guidelines for Mirroring Multihost Disks on page 39
Guidelines for Mirroring the ZFS Root Pool on page 40
For more information about multihost disks, see Multihost Devices in Oracle Solaris Cluster
Concepts Guide.
For maximum availability, mirror root (/), /usr, /var, /opt, and swap on the local disks.
However, Oracle Solaris Cluster software does not require that you mirror the ZFS root pool.
Consider the following points when you decide whether to mirror the ZFS root pool:
Boot disk - You can set up the mirror to be a bootable root pool. You can then boot from the mirror if the primary boot disk fails.
Backups - Regardless of whether you mirror the root pool, you also should perform regular backups of root. Mirroring alone does not protect against administrative errors. Only a backup plan enables you to restore files that have been accidentally altered or deleted.
Quorum devices - Do not use a disk that was configured as a quorum device to mirror a root pool.
Separate controllers - Highest availability includes mirroring the root pool on a separate controller.
This chapter provides the following procedures to install Oracle Solaris Cluster 4.0 software on
global-cluster voting nodes.
How to Prepare for Cluster Software Installation on page 42
How to Install Oracle Solaris Software on page 43
How to Install pconsole Software on an Administrative Console on page 47
How to Install and Configure Oracle Solaris Cluster Quorum Server Software on page 49
How to Configure Internal Disk Mirroring on page 51
SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains on
page 52
How to Install Oracle Solaris Cluster Framework and Data Service Software Packages on
page 53
How to Install the Availability Suite Feature of Oracle Solaris 11 on page 57
How to Set Up the Root Environment on page 58
How to Configure Solaris IP Filter on page 59
The following task map lists the tasks that you perform to install software on multiple-host or
single-host global clusters. Complete the procedures in the order that is indicated.
Task: Plan the layout of your cluster configuration and prepare to install software.
Instructions: How to Prepare for Cluster Software Installation on page 42

Task: Install the Oracle Solaris OS on all nodes and optionally on an administrative console and a quorum server. Optionally, enable Oracle Solaris I/O multipathing on the nodes.
Instructions: How to Install Oracle Solaris Software on page 43

Task: (Optional) Install pconsole software on an administrative console.
Instructions: How to Install pconsole Software on an Administrative Console on page 47

Task: (Optional) Install and configure a quorum server.
Instructions: How to Install and Configure Oracle Solaris Cluster Quorum Server Software on page 49

Task: (Optional) Configure internal disk mirroring.
Instructions: How to Configure Internal Disk Mirroring on page 51

Task: (Optional) Install Oracle VM Server for SPARC software and create domains.
Instructions: SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains on page 52

Task: Install Oracle Solaris Cluster software and any data services that you will use.
Instructions: How to Install Oracle Solaris Cluster Framework and Data Service Software Packages on page 53

Task: (Optional) Install and configure the Availability Suite feature of Oracle Solaris software.
Instructions: How to Install the Availability Suite Feature of Oracle Solaris 11 on page 57

Task: (Optional) Configure Oracle Solaris IP Filter.
Instructions: How to Configure Solaris IP Filter on page 59
2 Read the following manuals for information that can help you plan your cluster configuration
and prepare your installation strategy.
Oracle Solaris Cluster 4.0 Release Notes - Restrictions, bug workarounds, and other late-breaking information.
Oracle Solaris Cluster Concepts Guide - Overview of the Oracle Solaris Cluster product.
Oracle Solaris Cluster Software Installation Guide (this manual) - Planning guidelines and procedures for installing and configuring Oracle Solaris, Oracle Solaris Cluster, and volume manager software.
Oracle Solaris Cluster Data Services Planning and Administration Guide - Planning guidelines and procedures to install and configure data services.
Caution Plan your cluster installation completely. Identify requirements for all data services
and third-party products before you begin Oracle Solaris and Oracle Solaris Cluster software
installation. Failure to do so might result in installation errors that require you to completely
reinstall the Oracle Solaris and Oracle Solaris Cluster software.
Next Steps If you want to install a machine as a quorum server to use as the quorum device in your
cluster, go next to How to Install and Configure Oracle Solaris Cluster Quorum Server
Software on page 49.
Otherwise, if you want to use an administrative console to communicate with the cluster
nodes, go to How to Install pconsole Software on an Administrative Console on page 47.
Otherwise, choose the Oracle Solaris installation procedure to use.
To configure Oracle Solaris Cluster software by using the scinstall(1M) utility, go to
How to Install Oracle Solaris Software on page 43 to first install Oracle Solaris
software.
To install and configure both Oracle Solaris and Oracle Solaris Cluster software in the
same operation (Automated Installer method), go to How to Install and Configure
Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer) on page 84.
1. (Optional) An administrative console that you will install with pconsole software. For
more information, see How to Install pconsole Software on an Administrative Console
on page 47.
2. (Optional) A quorum server. For more information, see How to Install and Configure
Oracle Solaris Cluster Quorum Server Software on page 49.
3. Each node in the global cluster, if you will not use the scinstall custom Automated
Installer method to install software. For more information about Automated Installer
installation of a cluster, see How to Install and Configure Oracle Solaris and Oracle Solaris
Cluster Software (Automated Installer) on page 84.
If your nodes are already installed with the Oracle Solaris OS but do not meet Oracle Solaris
Cluster installation requirements, you might need to reinstall the Oracle Solaris software.
Follow the steps in this procedure to ensure subsequent successful installation of the Oracle
Solaris Cluster software. See Planning the Oracle Solaris OS on page 12 for information
about required root-disk partitioning and other Oracle Solaris Cluster installation
requirements.
Note You must install all nodes in a cluster with the same version of the Oracle Solaris OS.
You can use any method that is normally used to install the Oracle Solaris software. During
Oracle Solaris software installation, perform the following steps:
Create any other file system partitions that you need, as described in System Disk Partitions on page 14.
b. (Cluster nodes) For ease of administration, set the same root password on each node.
4 (Cluster nodes) If you will use role-based access control (RBAC) instead of the superuser role to
access the cluster nodes, set up an RBAC role that provides authorization for all Oracle Solaris
Cluster commands.
This series of installation procedures requires the following Oracle Solaris Cluster RBAC
authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in Oracle Solaris Administration: Security Services
for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the
RBAC authorization that each Oracle Solaris Cluster subcommand requires.
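As an illustrative sketch only (the role name and user name are placeholders, and your site security policy might differ), you could create such a role and assign it to an administrative user as follows:
phys-schost# roleadd -A solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read cladmin
phys-schost# passwd cladmin
phys-schost# usermod -R cladmin jdoe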
5 (Cluster nodes) If you are adding a node to an existing cluster, add mount points for cluster file
systems to the new node.
a. From the active cluster node, display the names of all cluster file systems.
phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
b. On the new node, create a mount point for each cluster file system in the cluster.
phys-schost-new# mkdir -p mountpoint
For example, if the mount command returned the file system name /global/dg-schost-1,
run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
6 Install any required Oracle Solaris OS software updates and hardware-related firmware and
updates.
Include those updates for storage array support. Also download any needed firmware that is
contained in the hardware updates.
See Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration
Guide for installation instructions.
8 (Cluster nodes) Update the /etc/inet/hosts file on each node with all public IP addresses that
are used in the cluster.
Perform this step regardless of whether you are using a naming service.
Note During establishment of a new cluster or new cluster node, the scinstall utility
automatically adds the public IP address of each node that is being configured to the
/etc/inet/hosts file.
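For illustration only (all addresses and hostnames are placeholders), the entries might resemble the following:
192.168.10.101  phys-schost-1
192.168.10.102  phys-schost-2
192.168.10.110  schost-lh-nfs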
10 (Optional) (Cluster nodes) If the Oracle Solaris Cluster software is not already installed and you
want to use Oracle Solaris I/O multipathing, enable multipathing on each node.
Caution If the Oracle Solaris Cluster software is already installed, do not issue this command.
Running the stmsboot command on an active cluster node might cause Oracle Solaris services
to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for
using the stmsboot command in an Oracle Solaris Cluster environment.
phys-schost# /usr/sbin/stmsboot -e
-e Enables Oracle Solaris I/O multipathing.
See the stmsboot(1M) man page for more information.
Next Steps If you want to use the pconsole utility, go to How to Install pconsole Software on an
Administrative Console on page 47.
If you want to use a quorum server, go to How to Install and Configure Oracle Solaris Cluster
Quorum Server Software on page 49.
If your cluster nodes support the mirroring of internal hard drives and you want to configure
internal disk mirroring, go to How to Configure Internal Disk Mirroring on page 51.
SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install
Oracle VM Server for SPARC Software and Create Domains on page 52.
Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.
If you already installed the Oracle Solaris OS on the cluster nodes, go to How to Install
Oracle Solaris Cluster Framework and Data Service Software Packages on page 53.
If you want to use the scinstall custom Automated Installer (AI) method to install both
Oracle Solaris OS and Oracle Solaris Cluster software on the cluster nodes, go to How to
Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated
Installer) on page 84.
See Also See the Oracle Solaris Cluster System Administration Guide for procedures to perform dynamic
reconfiguration tasks in an Oracle Solaris Cluster configuration.
Note You are not required to use an administrative console. If you do not use an administrative
console, perform administrative tasks from one designated node in the cluster.
You cannot use this software to connect to Oracle VM Server for SPARC guest domains.
This procedure describes how to install the Parallel Console Access (pconsole) software on an
administrative console. The pconsole utility is part of the Oracle Solaris 11
terminal/pconsole package.
The pconsole utility creates a host terminal window for each remote host that you specify on
the command line. The utility also opens a central, or master, console window that you can use
to send input to all nodes at one time. For additional information, see the pconsole(1) man
page that is installed with the terminal/pconsole package.
You can use any desktop machine that runs a version of the Oracle Solaris OS that is supported
by Oracle Solaris Cluster 4.0 software as an administrative console.
Before You Begin Ensure that a supported version of the Oracle Solaris OS and any Oracle Solaris software
updates are installed on the administrative console.
When you install the Oracle Solaris Cluster man page packages on the administrative console,
you can view them from the administrative console before you install Oracle Solaris Cluster
software on the cluster nodes or on a quorum server.
5 (Optional) For convenience, set the directory paths on the administrative console.
b. If you installed any other man page package, ensure that the /usr/cluster/bin/ directory
is in the PATH.
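For example, in a Bourne-compatible shell you might set the paths as follows; /usr/cluster/man is the conventional location of the cluster man pages if you installed them:
adminconsole# PATH=$PATH:/usr/cluster/bin; export PATH
adminconsole# MANPATH=$MANPATH:/usr/cluster/man; export MANPATH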
See the procedures Logging Into the Cluster Remotely in Oracle Solaris Cluster System
Administration Guide and How to Connect Securely to Cluster Consoles in Oracle Solaris
Cluster System Administration Guide for additional information about how to use the pconsole
utility. Also see the pconsole(1) man page that is installed as part of the Oracle Solaris 11
terminal/pconsole package.
Next Steps If you want to use a quorum server, go to How to Install and Configure Oracle Solaris Cluster
Quorum Server Software on page 49.
If your cluster nodes support the mirroring of internal hard drives and you want to configure
internal disk mirroring, go to How to Configure Internal Disk Mirroring on page 51.
SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install
Oracle VM Server for SPARC Software and Create Domains on page 52.
Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.
If you already installed the Oracle Solaris OS on the cluster nodes, go to How to Install
Oracle Solaris Cluster Framework and Data Service Software Packages on page 53.
If you want to use the scinstall custom Automated Installer (AI) method to install both
Oracle Solaris OS and Oracle Solaris Cluster software on the cluster nodes, go to How to
Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated
Installer) on page 84
1 Become superuser on the machine on which you want to install the Oracle Solaris Cluster
Quorum Server software.
For information about setting the solaris publisher, see Set the Publisher Origin To the File
Repository URI in Copying and Creating Oracle Solaris 11 Package Repositories.
4 (Optional) Add the Oracle Solaris Cluster Quorum Server binary location to your PATH
environment variable.
quorumserver# PATH=$PATH:/usr/cluster/bin
5 Configure the quorum server by adding the following entry to the /etc/scqsd/scqsd.conf file
to specify configuration information about the quorum server.
Identify the quorum server by specifying the port number and optionally the instance name.
If you provide an instance name, that name must be unique among your quorum servers.
If you do not provide an instance name, always refer to this quorum server by the port on
which it listens.
The format for the entry is as follows:
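The format line itself is not reproduced above; as a sketch, based on the conventional installed location of the quorum server daemon, an entry has the following general form, and the second line shows a concrete example (the instance name and port number are placeholders):
/usr/cluster/lib/sc/scqsd [-d quorum-directory] [-i instance-name] -p port
/usr/cluster/lib/sc/scqsd -d /var/scqsd -i qs1 -p 9000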
-d quorum-directory
The directory in which the quorum server stores quorum data. The quorum server process creates one file per cluster in this directory to store cluster-specific quorum information.
By default, the value of this option is /var/scqsd. This directory must be unique for each quorum server that you configure.
-i instance-name
A unique name that you choose for the quorum-server instance.
-p port
The port number on which the quorum server listens for requests from the cluster.
6 (Optional) To serve more than one cluster but use a different port number or instance, configure
an additional entry for each additional instance of the quorum server that you need.
quorum-server
Identifies the quorum server. You can use the port number on which the quorum server
listens. If you provided an instance name in the configuration file, you can use that name
instead.
To start a single quorum server, provide either the instance name or the port number.
To start all quorum servers when you have multiple quorum servers configured, use the +
operand.
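The start command itself is not reproduced above; a minimal sketch using the clquorumserver utility follows (9000 is a placeholder port number):
quorumserver# /usr/cluster/bin/clquorumserver start 9000
quorumserver# /usr/cluster/bin/clquorumserver start +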
Troubleshooting Oracle Solaris Cluster Quorum Server software consists of the following packages:
ha-cluster/service/quorum-server
ha-cluster/service/quorum-server/locale
ha-cluster/service/quorum-server/manual
ha-cluster/service/quorum-server/manual/locale
These packages are contained in the
ha-cluster/group-package/ha-cluster-quorum-server-full and
ha-cluster/group-package/ha-cluster-quorum-server-l10n group packages.
The installation of these packages adds software to the /usr/cluster and /etc/scqsd
directories. You cannot modify the location of the Oracle Solaris Cluster Quorum Server
software.
If you receive an installation error message regarding the Oracle Solaris Cluster Quorum Server
software, verify that the packages were properly installed.
Next Steps If your cluster nodes support the mirroring of internal hard drives and you want to configure
internal disk mirroring, go to How to Configure Internal Disk Mirroring on page 51.
SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install
Oracle VM Server for SPARC Software and Create Domains on page 52.
Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.
If you already installed the Oracle Solaris OS on the cluster nodes, go to How to Install
Oracle Solaris Cluster Framework and Data Service Software Packages on page 53.
If you want to use the scinstall custom Automated Installer (AI) method to install both
Oracle Solaris OS and Oracle Solaris Cluster software on the cluster nodes, go to How to
Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated
Installer) on page 84.
Note Do not perform this procedure under either of the following circumstances:
Your servers do not support the mirroring of internal hard drives.
You have already established the cluster.
Instead, perform Mirroring Internal Disks on Servers that Use Internal Hardware Disk
Mirroring or Integrated Mirroring in Oracle Solaris Cluster 4.0 Hardware Administration
Manual.
Before You Begin Ensure that the Oracle Solaris operating system and any necessary updates are installed.
1 Become superuser.
Next Steps SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install
Oracle VM Server for SPARC Software and Create Domains on page 52.
Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.
If you already installed the Oracle Solaris OS on the cluster nodes, go to How to Install
Oracle Solaris Cluster Framework and Data Service Software Packages on page 53.
If you want to use the scinstall custom Automated Installer (AI) method to install both
Oracle Solaris OS and Oracle Solaris Cluster software on the cluster nodes, go to How to
Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (Automated
Installer) on page 84
2 Install Oracle VM Server for SPARC software and configure domains by following the procedures
in Chapter 2, Installing and Enabling Software, in Oracle VM Server for SPARC 2.1
Administration Guide.
Observe the following special instructions:
If you create guest domains, adhere to the Oracle Solaris Cluster guidelines for creating
guest domains in a cluster.
Use the mode=sc option for all virtual switch devices that connect the virtual network
devices that are used as the cluster interconnect.
For shared storage, map only the full SCSI disks into the guest domains.
Next Steps If your server supports the mirroring of internal hard drives and you want to configure internal
disk mirroring, go to How to Configure Internal Disk Mirroring on page 51.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle
Solaris Cluster Framework and Data Service Software Packages on page 53.
Note If your physically clustered machines are configured with Oracle VM Server for
SPARC, install Oracle Solaris Cluster software only in I/O domains or guest domains.
Note You cannot add or remove individual packages that are part of the ha-cluster-minimal
group package except by complete reinstallation or uninstallation. See How to Unconfigure
Oracle Solaris Cluster Software to Correct Installation Problems on page 163 and How to
Uninstall Oracle Solaris Cluster Software From a Cluster Node in Oracle Solaris Cluster System
Administration Guide for procedures to remove the cluster framework packages.
However, you can add or remove other, optional packages without removing the
ha-cluster-minimal group package.
The following list shows each feature and the group packages that contain it:
Framework: ha-cluster-full, ha-cluster-framework-full, ha-cluster-data-services-full, ha-cluster-minimal, ha-cluster-framework-minimal
Agents: ha-cluster-full, ha-cluster-data-services-full
Localization: ha-cluster-full, ha-cluster-framework-full
Framework man pages: ha-cluster-full, ha-cluster-framework-full
Data Service man pages: ha-cluster-full, ha-cluster-data-services-full
Agent Builder: ha-cluster-full, ha-cluster-framework-full
Generic Data Service: ha-cluster-full, ha-cluster-framework-full, ha-cluster-data-services-full
1 If you are using a cluster administrative console, display a console screen for each node in the
cluster.
If pconsole software is installed and configured on your administrative console, use the
pconsole utility to display the individual console screens.
As superuser, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] [...] &
The pconsole utility also opens a master window from which you can send your input to all
individual console windows at the same time.
If you do not use the pconsole utility, connect to the consoles of each node individually.
5 Set up the repository for the Oracle Solaris Cluster software packages.
If the cluster nodes have direct access or web proxy access to the Internet, perform the
following steps.
a. Go to http://pkg-register.oracle.com.
e. Download the key and certificate files and install them as described in the returned
certification page.
f. Configure the ha-cluster publisher with the downloaded SSL keys and set the location of
the Oracle Solaris Cluster 4.0 repository.
In the following example, the repository location is
https://pkg.oracle.com/repository-location/.
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
-O https://pkg.oracle.com/repository-location/ ha-cluster
-k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem
Specifies the full path to the downloaded SSL key file.
-c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem
Specifies the full path to the downloaded certificate file.
-O https://pkg.oracle.com/repository-location/
Specifies the URL to the Oracle Solaris Cluster 4.0 package repository.
For more information, see the pkg(1) man page.
If you are using an ISO image of the software, perform the following steps.
a. Download the Oracle Solaris Cluster 4.0 ISO image from Oracle Software Delivery Cloud
at http://edelivery.oracle.com/.
Note A valid Oracle license is required to access Oracle Software Delivery Cloud.
Oracle Solaris Cluster software is part of the Oracle Solaris Product Pack. Follow online
instructions to complete selection of the media pack and download the software.
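b. Make the downloaded ISO image available to the node. The following is only a sketch; the image path and the lofi device number are placeholders, and the mount point must match the repository location that you set in the next step:
# lofiadm -a /var/tmp/osc-4_0-repo-full.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt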
c. Set the location of the Oracle Solaris Cluster 4.0 package repository.
# pkg set-publisher -g file:///mnt/repo ha-cluster
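With the ha-cluster publisher configured by either method, you would then install the group package that you selected from the list earlier in this procedure; for example, a sketch that installs the full group package:
# /usr/bin/pkg install ha-cluster-full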
Next Steps If you want to use the Availability Suite feature of Oracle Solaris 11, install the Availability Suite
software. Go to How to Install the Availability Suite Feature of Oracle Solaris 11 on page 57.
Otherwise, to set up the root user environment, go to How to Set Up the Root Environment
on page 58.
1 Become superuser.
3 Install the IPS package for the Availability Suite feature of the Oracle Solaris 11 software.
# /usr/bin/pkg install storage/avs
Next Steps To set up the root user environment, go to How to Set Up the Root Environment on page 58.
Note In an Oracle Solaris Cluster configuration, user initialization files for the various shells
must verify that they are run from an interactive shell. The files must verify this before they
attempt to output to the terminal. Otherwise, unexpected behavior or interference with data
services might occur. See Customizing a User's Work Environment in Oracle Solaris
Administration: Common Tasks for more information.
Note Always make /usr/cluster/bin the first entry in the PATH. This placement ensures that
Oracle Solaris Cluster commands take precedence over any other binaries that have the same
name, thus avoiding unexpected behavior.
See your Oracle Solaris OS documentation, volume manager documentation, and other
application documentation for additional file paths to set.
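The following fragment is only a sketch of such an initialization file, assuming a Bourne-compatible shell; the interactive-shell test and the greeting text are illustrative:
# Produce terminal output only when the shell is interactive.
if [ -n "$PS1" ]; then
    echo "Connected to $(hostname)"
fi
# Keep Oracle Solaris Cluster commands first in the search path.
PATH=/usr/cluster/bin:$PATH
export PATH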
3 (Optional) For ease of administration, set the same root password on each node, if you have not
already done so.
Next Steps If you want to use Solaris IP Filter, go to How to Configure Solaris IP Filter on page 59.
Otherwise, configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a
New Global Cluster or New Global-Cluster Node on page 62.
Note Only use Solaris IP Filter with failover data services. The use of Solaris IP Filter with
scalable data services is not supported.
For more information about the Solaris IP Filter feature, see Part III, IP Security, in Oracle
Solaris Administration: IP Services.
Before You Begin Read the guidelines and restrictions to follow when you configure Solaris IP Filter in a cluster.
See the IP Filter bullet item in Oracle Solaris OS Feature Restrictions on page 13.
1 Become superuser.
To unblock cluster interconnect traffic, add the following rules. The subnets used are
examples only. Derive the subnets to use by running the ipadm show-addr | grep interface
command.
# Unblock cluster traffic on 172.16.0.128/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.0.128/25 to any
pass out quick proto tcp/udp from 172.16.0.128/25 to any
# Unblock cluster traffic on 172.16.1.0/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.1.0/25 to any
pass out quick proto tcp/udp from 172.16.1.0/25 to any
# Unblock cluster traffic on 172.16.4.0/23 (clprivnet0 subnet)
pass in quick proto tcp/udp from 172.16.4.0/23 to any
pass out quick proto tcp/udp from 172.16.4.0/23 to any
You can specify either the adapter name or the IP address for a cluster private network. For
example, the following rule specifies a cluster private network by its adapter's name:
# Allow all traffic on cluster private networks.
pass in quick on net1 all
...
Oracle Solaris Cluster software fails over network addresses from node to node. No special
procedure or code is needed at the time of failover.
All filtering rules that reference IP addresses of logical hostname and shared address
resources must be identical on all cluster nodes.
Rules on a standby node will reference a nonexistent IP address. This rule is still part of the
IP filter's active rule set and will become effective when the node receives the address after a
failover.
All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a
rule is interface-specific, the same rule must also exist for all other interfaces in the same
IPMP group.
For more information about Solaris IP Filter rules, see the ipf(4) man page.
Next Steps Configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New
Global Cluster or New Global-Cluster Node on page 62.
This chapter provides procedures for how to establish a global cluster or a new global-cluster
node.
Note To create a zone cluster, see Configuring a Zone Cluster on page 147. You must
establish a global cluster before you can create a zone cluster.
The following task maps list the tasks to perform for either a new global cluster or a node added
to an existing global cluster. Complete the procedures in the order that is indicated.
Task Map: Establish a New Global Cluster
Task Map: Add a Node to an Existing Global Cluster
Method: Use the scinstall utility to establish the cluster.
Instructions: Configuring Oracle Solaris Cluster Software on All Nodes (scinstall) on page 64

Method: Use an XML configuration file to establish the cluster.
Instructions: How to Configure Oracle Solaris Cluster Software on All Nodes (XML) on page 72

Method: Set up an Automated Installer (AI) install server. Then use the scinstall AI option to install the software on each node and establish the cluster.
Instructions: Installing and Configuring Oracle Solaris and Oracle Solaris Cluster Software (Automated Installer) on page 80

Method: Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed.
Instructions: How to Configure Quorum Devices on page 113

Method: Validate the quorum configuration.
Instructions: How to Verify the Quorum Configuration and Installation Mode on page 118

Method: (Optional) Change a node's private hostname.
Instructions: How to Change Private Hostnames on page 120

Method: Create or modify the NTP configuration file, if not already configured.
Instructions: Configuring Network Time Protocol (NTP) on page 120

Method: If using a volume manager, install the volume management software.
Instructions: Chapter 4, Configuring Solaris Volume Manager Software

Method: Create cluster file systems or highly available local file systems as needed.
Instructions: Chapter 5, Creating a Cluster File System, or Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide

Method: Install third-party applications, register resource types, set up resource groups, and configure data services.
Instructions: Oracle Solaris Cluster Data Services Planning and Administration Guide; documentation that is supplied with the application software

Method: Take a baseline recording of the finished cluster configuration.
Instructions: How to Record Diagnostic Data of the Cluster Configuration on page 127
Method: Use the clsetup command to add the new node to the cluster authorized-nodes list. If necessary, also configure the cluster interconnect and reconfigure the private network address range.
Instructions: How to Prepare the Cluster for Additional Global-Cluster Nodes on page 92

Method: Reconfigure the cluster interconnect and the private network address range as needed to accommodate the added node.
Instructions: How to Change the Private Network Configuration When Adding Nodes or Private Networks on page 94

Method: Configure Oracle Solaris Cluster software on the new node by using the scinstall utility.
Instructions: Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall) on page 99

Method: Configure Oracle Solaris Cluster software on the new node by using an XML configuration file.
Instructions: How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File) on page 106

Method: Update the quorum configuration information.
Instructions: How to Update Quorum Devices After Adding a Node to a Global Cluster on page 111

Method: Validate the quorum configuration.
Instructions: How to Verify the Quorum Configuration and Installation Mode on page 118

Method: (Optional) Change a node's private hostname.
Instructions: How to Change Private Hostnames on page 120

Method: Modify the NTP configuration.
Instructions: Configuring Network Time Protocol (NTP) on page 120

Method: If using a volume manager, install the volume management software.
Instructions: Chapter 4, Configuring Solaris Volume Manager Software

Method: Create cluster file systems or highly available local file systems as needed.
Instructions: Chapter 5, Creating a Cluster File System, or Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide

Method: Install third-party applications, register resource types, set up resource groups, and configure data services.
Instructions: Oracle Solaris Cluster Data Services Planning and Administration Guide; documentation that is supplied with the application software

Method: Take a baseline recording of the finished cluster configuration.
Instructions: How to Record Diagnostic Data of the Cluster Configuration on page 127
Complete one of the following cluster configuration worksheets to plan your Typical mode or
Custom mode installation:
Typical Mode Worksheet If you will use Typical mode and accept all defaults, complete
the following worksheet.
Cluster Name: What is the name of the cluster that you want to establish?
Cluster Nodes: List the names of the other cluster nodes planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
Cluster Transport Adapters and Cables: What are the names of the two cluster-transport adapters that attach the node to the private interconnect? First: _____ Second: _____
Quorum Configuration (two-node cluster only): Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.) Yes | No
Check: Do you want to interrupt cluster creation for cluster check errors? Yes | No
Custom Mode Worksheet If you will use Custom mode and customize the configuration
data, complete the following worksheet.
Note If you are installing a single-node cluster, the scinstall utility automatically assigns
the default private network address and netmask, even though the cluster does not use a
private network.
Cluster Name: What is the name of the cluster that you want to establish?
Cluster Nodes: List the names of the other cluster nodes planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
Minimum Number of Private Networks (multiple-node cluster only): Should this cluster use at least two private networks? Yes | No
Point-to-Point Cables (multiple-node cluster only): If this is a two-node cluster, does this cluster use switches? Yes | No
Cluster Transport Adapters and Cables (multiple-node cluster only): Node name (the node from which you run scinstall): _____
  Where does each transport adapter connect to (a switch or another adapter)? Switch defaults: switch1 and switch2. First: _____ Second: _____
  If a transport switch, do you want to use the default port name? First: Yes | No  Second: Yes | No
  If no, what is the name of the port that you want to use? First: _____ Second: _____
  Do you want to use autodiscovery to list the available adapters for the other nodes? Yes | No
  If no, supply the following information for each additional node:
    Where does each transport adapter connect to (a switch or another adapter)? Defaults: switch1 and switch2. First: _____ Second: _____
    If a transport switch, do you want to use the default port name? First: Yes | No  Second: Yes | No
    If no, what is the name of the port that you want to use? First: _____ Second: _____
Network Address for the Cluster Transport (multiple-node cluster only): Do you want to accept the default network address (172.16.0.0)? Yes | No
  If no, what are the maximum numbers of nodes, private networks, and zone clusters that you expect to configure in the cluster? _____ nodes  _____ networks  _____ zone clusters
  Which netmask do you want to use? (Choose from the values calculated by scinstall or supply your own.) ___.___.___.___
Global Fencing: Do you want to disable global fencing? (Answer No unless the shared storage does not support SCSI reservations or unless you want systems that are outside the cluster to access the shared storage.) First: Yes | No  Second: Yes | No
Quorum Configuration (two-node cluster only): Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.) First: Yes | No  Second: Yes | No
Check (multiple-node cluster only): Do you want to interrupt cluster creation for cluster check errors? Yes | No
Check (single-node cluster only): Do you want to run the cluster check utility to validate the cluster? Yes | No
Automatic Reboot (single-node cluster only): Do you want scinstall to automatically reboot the node after installation? Yes | No
Note This procedure uses the interactive form of the scinstall command. For information
about how to use the noninteractive forms of the scinstall command, such as when
developing installation scripts, see the scinstall(1M) man page.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key
more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of
related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a
question. Press Return to enter the response that is in brackets without typing it.
Ensure that Oracle Solaris Cluster software packages and updates are installed on each node.
See How to Install Oracle Solaris Cluster Framework and Data Service Software Packages
on page 53.
Ensure that any adapters that you want to use as tagged VLAN adapters are configured and
that you have their VLAN IDs.
Have available your completed Typical Mode or Custom Mode installation worksheet. See
Configuring Oracle Solaris Cluster Software on All Nodes (scinstall) on page 64.
1 If you are using switches in the private interconnect of your new cluster, ensure that Neighbor
Discovery Protocol (NDP) is disabled.
Follow the procedures in the documentation for your switches to determine whether NDP is
enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private
interconnect. If NDP sends any packets to a private adapter when the private interconnect is
being checked for traffic, the software will assume that the interconnect is not private and
cluster configuration will be interrupted. NDP must therefore be disabled during cluster
creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if
you want to use that feature.
3 Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is
necessary for cluster configuration.
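a. Determine whether TCP wrappers for RPC are enabled. The following check is only a minimal sketch; it lists the property that controls the feature, and a value of true means that TCP wrappers are enabled.
# svccfg -s rpc/bind listprop config/enable_tcpwrappers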
b. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC
bind service.
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
Create the IPMP groups you need before you establish the cluster.
After the cluster is established, use the ipadm command to edit the IPMP groups.
For more information, see Configuring IPMP Groups in Oracle Solaris Administration:
Network Interfaces and Network Virtualization.
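As an illustration only (the datalink names, group name, and address are placeholders), an IPMP group for a public network might be created as follows:
phys-schost# ipadm create-ip net0
phys-schost# ipadm create-ip net1
phys-schost# ipadm create-ipmp sc_ipmp0
phys-schost# ipadm add-ipmp -i net0 -i net1 sc_ipmp0
phys-schost# ipadm create-addr -T static -a 192.168.10.101/24 sc_ipmp0/v4addr1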
6 Type the option number for Create a New Cluster or Add a Cluster Node and press the Return
key.
*** Main Menu ***
Option: 1
The New Cluster and Cluster Node Menu is displayed.
7 Type the option number for Create a New Cluster and press the Return key.
The Typical or Custom Mode menu is displayed.
8 Type the option number for either Typical or Custom and press the Return key.
The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to
continue.
9 Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The
cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris
Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
10 Verify on each node that multiuser services for the Service Management Facility (SMF) are
online.
If services are not yet online for a node, wait until the state changes to online before you proceed
to the next step.
phys-schost# svcs multi-user-server node
STATE STIME FMRI
online 17:52:55 svc:/milestone/multi-user-server:default
11 From one node, verify that all nodes have joined the cluster.
phys-schost# clnode status
Output resembles the following.
reboot_on_path_failure=enabled
Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
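The command that applies this setting is not reproduced above; a minimal sketch using the clnode utility follows (the plus sign applies the property to all nodes):
phys-schost# clnode set -p reboot_on_path_failure=enabled +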
14 If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the
/etc/hosts.allow file on each cluster node.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode
communication over RPC for cluster administration utilities.
a. On each node, display the IP addresses for all clprivnet0 devices on the node.
# /usr/sbin/ipadm show-addr
ADDROBJ TYPE STATE ADDR
clprivnet0/N static ok ip-address/netmask-length
...
b. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0
devices in the cluster.
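As an illustration (the addresses are placeholders; use the clprivnet0 addresses that ipadm reported, and rpcbind is the daemon name that TCP wrappers check for the RPC bind service), an /etc/hosts.allow entry might resemble the following:
rpcbind: 172.16.4.1 172.16.4.2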
15 If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file
system, exclude from the automounter map all shares that are part of the highly available local
file system that is exported by HA for NFS.
See Administrative Tasks Involving Maps in Oracle Solaris Administration: Network Services
for more information about modifying the automounter map.
Troubleshooting Unsuccessful configuration If one or more nodes cannot join the cluster, or if the wrong
configuration information was specified, first attempt to perform this procedure again. If that
does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris
Cluster Software to Correct Installation Problems on page 163 on each misconfigured node to
remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster
software packages. Then perform this procedure again.
Next Steps If you installed a single-node cluster, cluster establishment is complete. Go to Creating
Cluster File Systems on page 143 to install volume management software and configure the
cluster.
If you installed a multiple-node cluster and chose automatic quorum configuration,
postinstallation setup is complete. Go to How to Verify the Quorum Configuration and
Installation Mode on page 118.
If you installed a multiple-node cluster and declined automatic quorum configuration,
perform postinstallation setup. Go to How to Configure Quorum Devices on page 113.
If you intend to configure any quorum devices in your cluster, go to How to Configure
Quorum Devices on page 113.
1 Ensure that the Oracle Solaris Cluster 4.0 software is not yet configured on each potential cluster
node.
a. Become superuser on a potential node that you want to configure in the new cluster.
b. Determine whether the Oracle Solaris Cluster software is already configured on the
potential node.
phys-schost# /usr/sbin/clinfo -n
If the command returns the node ID number, do not perform this procedure.
The return of a node ID indicates that the Oracle Solaris Cluster software is already
configured on the node.
c. Repeat Step a and Step b on each remaining potential node that you want to configure in the
new cluster.
If the Oracle Solaris Cluster software is not yet configured on any of the potential cluster
nodes, proceed to Step 2.
2 Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is
necessary for cluster configuration.
b. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC
bind service.
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
3 If you are using switches in the private interconnect of your new cluster, ensure that Neighbor
Discovery Protocol (NDP) is disabled.
Follow the procedures in the documentation for your switches to determine whether NDP is
enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private
interconnect. If NDP sends any packets to a private adapter when the private interconnect is
being checked for traffic, the software will assume that the interconnect is not private and
cluster configuration will be interrupted. NDP must therefore be disabled during cluster
creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if
you want to use that feature.
4 If you are duplicating an existing cluster that runs the Oracle Solaris Cluster 4.0 software, use a
node in that cluster to create a cluster configuration XML file.
a. Become superuser on an active member of the cluster that you want to duplicate.
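b. Export the existing cluster configuration information to a file. The command shown here mirrors the usage in the example later in this procedure; clconfigfile is a placeholder file name.
phys-schost# cluster export -o clconfigfile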
c. Copy the configuration file to the potential node from which you will configure the new
cluster.
You can store the file in any directory that is accessible to the other hosts that you will
configure as cluster nodes.
5 Become superuser on the potential node from which you will configure the new cluster.
8 From the potential node that contains the cluster configuration XML file, create the cluster.
phys-schost# cluster create -i clconfigfile
-i clconfigfile
Specifies the name of the cluster configuration XML file to use as the input source.
9 Verify on each node that multiuser services for the Service Management Facility (SMF) are
online.
If services are not yet online for a node, wait until the state changes to online before you proceed
to the next step.
phys-schost# svcs multi-user-server node
STATE STIME FMRI
online 17:52:55 svc:/milestone/multi-user-server:default
10 From one node, verify that all nodes have joined the cluster.
phys-schost# clnode status
Output resembles the following.
12 If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the
/etc/hosts.allow file on each cluster node.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode
communication over RPC for cluster administration utilities.
a. On each node, display the IP addresses for all clprivnet0 devices on the node.
# /usr/sbin/ipadm show-addr
ADDROBJ TYPE STATE ADDR
clprivnet0/N static ok ip-address/netmask-length
...
b. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0
devices in the cluster.
13 If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file
system, exclude from the automounter map all shares that are part of the highly available local
file system that is exported by HA for NFS.
See Administrative Tasks Involving Maps in Oracle Solaris Administration: Network Services
for more information about modifying the automounter map.
14 To duplicate quorum information from an existing cluster, configure the quorum device by
using the cluster configuration XML file.
You must configure a quorum device if you created a two-node cluster. If you choose not to use
the cluster configuration XML file to create a required quorum device, go instead to How to
Configure Quorum Devices on page 113.
a. If you are using a quorum server for the quorum device, ensure that the quorum server is set
up and running.
Follow instructions in How to Install and Configure Oracle Solaris Cluster Quorum Server
Software on page 49.
b. If you are using a NAS device for the quorum device, ensure that the NAS device is set up and
operational.
ii. Follow instructions in your device's documentation to set up the NAS device.
c. Ensure that the quorum configuration information in the cluster configuration XML file
reflects valid values for the cluster that you created.
d. If you made changes to the cluster configuration XML file, validate the file.
phys-schost# xmllint --valid --noout clconfigfile
16 Close access to the cluster configuration by machines that are not configured cluster members.
phys-schost# claccess deny-all
17 (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.
Example 3-2 Configuring Oracle Solaris Cluster Software on All Nodes By Using an XML File
The following example duplicates the cluster configuration and quorum configuration of an
existing two-node cluster to a new two-node cluster. The new cluster is installed with the Solaris
11 OS. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to
the cluster configuration XML file clusterconf.xml. The node names of the new cluster are
phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the
new cluster is d3.
The prompt name phys-newhost-N in this example indicates that the command is performed
on both cluster nodes.
phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
phys-oldhost-1# cluster export -o clusterconf.xml
Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values
The following list describes the cluster components that you can create from a cluster
configuration XML file after the cluster is established, together with the man page for the
command that you use to duplicate each component:
Device groups: Solaris Volume Manager: cldevicegroup(1CL)
For Solaris Volume Manager, first create the disk sets that you specify in the cluster
configuration XML file.
Resource Group Manager components
Resources: clresource(1CL)
Shared address resources: clressharedaddress(1CL)
Logical hostname resources: clreslogicalhostname(1CL)
Resource types: clresourcetype(1CL)
Resource groups: clresourcegroup(1CL)
Troubleshooting Unsuccessful configuration If one or more nodes cannot join the cluster, or if the wrong
configuration information was specified, first attempt to perform this procedure again. If that
does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris
Cluster Software to Correct Installation Problems on page 163 on each misconfigured node to
remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster
software packages. Then perform this procedure again.
Next Steps Go to How to Verify the Quorum Configuration and Installation Mode on page 118.
See Installing With the Text Installer in Installing Oracle Solaris 11 Systems for more
information about interactive installation of Oracle Solaris software.
The scinstall utility runs in two modes of installation, Typical or Custom. For the Typical
installation of Oracle Solaris Cluster software, scinstall automatically specifies the following
configuration defaults.
Private-network address: 172.16.0.0
Private-network netmask: 255.255.240.0
Cluster-transport adapters: Exactly two adapters
Cluster-transport switches: switch1 and switch2
Global fencing: Enabled
Installation security (DES): Limited
Complete one of the following cluster configuration worksheets to plan your Typical mode or
Custom mode installation:
Typical Mode Worksheet If you will use Typical mode and accept all defaults, complete
the following worksheet.
Custom Automated Installer Boot Image ISO File: What is the full path name of the Automated Installer boot image ISO file?
Custom Automated Installer User root Password: What is the password for the root account of the cluster nodes?
Select the Oracle Solaris Cluster components that you want to install. (Select one or more group packages to install.)
  Do you want to select any individual components that are contained in these group packages? Yes | No
Cluster Name: What is the name of the cluster that you want to establish?
Cluster Nodes: List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
  Confirm that the auto-discovered MAC address for each node is correct.
Quorum Configuration (two-node cluster only): Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.) First: Yes | No  Second: Yes | No
Custom Mode Worksheet If you will use Custom mode and customize the configuration
data, complete the following worksheet.
Note If you are installing a single-node cluster, the scinstall utility automatically uses the
default private network address and netmask, even though the cluster does not use a private
network.
Custom Automated Installer Boot Image ISO File: What is the full path name of the Automated Installer boot image ISO file?
Custom Automated Installer User root Password: What is the password for the root account of the cluster nodes?
Select the Oracle Solaris Cluster components that you want to install. (Select one or more group packages to install.)
  Do you want to select any individual components that are contained in these group packages? Yes | No
Cluster Name: What is the name of the cluster that you want to establish?
Cluster Nodes: List the names of the cluster nodes that are planned for the initial cluster configuration. (For a single-node cluster, press Control-D alone.)
  Confirm that the auto-discovered MAC address for each node is correct.
Network Address for the Cluster Transport (multiple-node cluster only): Do you want to accept the default network address (172.16.0.0)? Yes | No
  If no, what are the maximum numbers of nodes, private networks, and zone clusters that you expect to configure in the cluster? _____ nodes  _____ networks  _____ zone clusters
  Which netmask do you want to use? (Choose from the values calculated by scinstall or supply your own.) ___.___.___.___
Minimum Number of Private Networks (multiple-node cluster only): Should this cluster use at least two private networks? Yes | No
If a transport switch, do you want to use the default port name? First: Yes | No  Second: Yes | No
  If no, what is the name of the port that you want to use? First: _____ Second: _____
If a transport switch, do you want to use the default port name? First: Yes | No  Second: Yes | No
  If no, what is the name of the port that you want to use? First: _____ Second: _____
Global Fencing: Do you want to disable global fencing? (Answer No unless the shared storage does not support SCSI reservations or unless you want systems that are outside the cluster to access the shared storage.) First: Yes | No  Second: Yes | No
Quorum Configuration (two-node cluster only): Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a quorum server as a quorum device.) First: Yes | No  Second: Yes | No
How to Install and Configure Oracle Solaris and Oracle Solaris Cluster
Software (Automated Installer)
This procedure describes how to set up and use the scinstall(1M) custom Automated
Installer installation method. This method installs both Oracle Solaris OS and Oracle Solaris
Cluster framework and data services software on all global-cluster nodes in the same operation
and establishes the cluster. These nodes can be physical machines or (SPARC only) Oracle VM
Server for SPARC I/O domains or guest domains, or a combination of any of these types of
nodes.
Note If your physically clustered machines are configured with Oracle VM Server for SPARC,
install the Oracle Solaris Cluster software only in I/O domains or guest domains.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key
more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of
related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a
question. Press Return to enter the response that is in brackets without typing it.
The following table lists the group packages for the Oracle Solaris Cluster 4.0 software that
you can choose during an AI installation and the principal features that each group package
contains. You must install at least the ha-cluster-framework-minimal group package.
Each feature and the group packages that contain it:
Framework: ha-cluster-framework-full, ha-cluster-data-services-full, ha-cluster-framework-minimal
Agents: ha-cluster-data-services-full
Localization: ha-cluster-framework-full
Agent Builder: ha-cluster-framework-full
Have available your completed Typical Mode or Custom Mode installation worksheet. See
Installing and Configuring Oracle Solaris and Oracle Solaris Cluster Software (Automated
Installer) on page 80.
1 Set up your Automated Installer (AI) install server and DHCP server.
Ensure that the AI install server meets the following requirements.
The install server is on the same subnet as the cluster nodes.
The install server is not itself a cluster node.
The install server runs a release of the Oracle Solaris OS that is supported by the Oracle
Solaris Cluster software.
Each new cluster node is configured as a custom AI installation client that uses the custom
AI directory that you set up for Oracle Solaris Cluster installation.
Follow the appropriate instructions for your software platform and OS version to set up the AI
install server and DHCP server. See Chapter 8, Setting Up an Install Server, in Installing Oracle
Solaris 11 Systems and Part II, DHCP, in Oracle Solaris Administration: IP Services.
3 On the AI install server, install the Oracle Solaris Cluster AI support package.
5 Choose the Install and Configure a Cluster From This Automated Installer Install Server menu
item.
*** Main Menu ***
* 1) Install and configure a cluster from this Automated Installer install server
* 2) Print release information for this Automated Installer install server
Option: 1
6 Follow the menu prompts to supply your answers from the configuration planning worksheet.
9 If you are using a cluster administrative console, display a console screen for each node in the
cluster.
If pconsole software is installed and configured on your administrative console, use the
pconsole utility to display the individual console screens.
As superuser, use the following command to start the pconsole utility:
adminconsole# pconsole host[:port] [...] &
The pconsole utility also opens a master window from which you can send your input to all
individual console windows at the same time.
If you do not use the pconsole utility, connect to the consoles of each node individually.
10 Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is
necessary for cluster configuration.
b. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC
bind service.
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
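To confirm the setting before or after you change it, you can query the property directly. The following listprop command is a minimal sketch of such a check:
# svccfg -s rpc/bind listprop config/enable_tcpwrappers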
Note You cannot use this method if you want to customize the Oracle Solaris installation. If
you choose the Oracle Solaris interactive installation, the Automated Installer is bypassed and
Oracle Solaris Cluster software is not installed and configured. To customize Oracle Solaris
during installation, instead follow instructions in How to Install Oracle Solaris Software on
page 43, then install and configure the cluster by following instructions in How to Install
Oracle Solaris Cluster Framework and Data Service Software Packages on page 53.
SPARC:
Note Surround the dash (-) in the command with a space on each side.
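For example, a network boot of an AI client from the OpenBoot PROM typically takes the following form. This is a sketch that assumes the standard net device alias and a DHCP-based AI setup:
ok boot net:dhcp - install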
x86:
Note If you do not select the Automated Install entry within 20 seconds, installation
proceeds using the default interactive text installer method, which will not install and
configure the Oracle Solaris Cluster software.
On each node, a new boot environment (BE) is created and Automated Installer installs
the Oracle Solaris OS and Oracle Solaris Cluster software. When the installation is
successfully completed, each node is fully installed as a new cluster node. Oracle Solaris
Cluster installation output is logged in a
/var/cluster/logs/install/scinstall.log.N file on each node.
12 Verify on each node that multiuser services for the Service Management Facility (SMF) are
online.
If services are not yet online for a node, wait until the state changes to online before you proceed
to the next step.
phys-schost# svcs multi-user-server node
STATE STIME FMRI
online 17:52:55 svc:/milestone/multi-user-server:default
13 On each node, activate the installed BE and boot into cluster mode.
Note Do not use the reboot or halt command. These commands do not activate a new BE.
SPARC:
ok boot
x86:
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press
Enter.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
14 If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file
system, exclude from the automounter map all shares that are part of the highly available local
file system that is exported by HA for NFS.
See Administrative Tasks Involving Maps in Oracle Solaris Administration: Network Services
for more information about modifying the automounter map.
16 If you performed a task that requires a cluster reboot, reboot the cluster.
The following tasks require a reboot:
Installing software updates that require a node or cluster reboot
Making configuration changes that require a reboot to become active
Note Do not reboot the first-installed node of the cluster until after the cluster is shut down.
Until cluster installation mode is disabled, only the first-installed node, which established
the cluster, has a quorum vote. In an established cluster that is still in installation mode, if
the cluster is not shut down before the first-installed node is rebooted, the remaining cluster
nodes cannot obtain quorum. The entire cluster then shuts down.
Cluster nodes remain in installation mode until the first time that you run the clsetup
command. You run this command during the procedure How to Configure Quorum
Devices on page 113.
SPARC:
ok boot
x86:
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press
Enter.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
The cluster is established when all nodes have successfully booted into the cluster. Oracle
Solaris Cluster installation output is logged in a
/var/cluster/logs/install/scinstall.log.N file.
17 From one node, verify that all nodes have joined the cluster.
phys-schost# clnode status
Output resembles the following.
18 If you plan to enable RPC use of TCP wrappers, add all clprivnet0 IP addresses to the
/etc/hosts.allow file on each cluster node.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode
communication over RPC for cluster administration utilities.
a. On each node, display the IP addresses for all clprivnet0 devices on the node.
# /usr/sbin/ipadm show-addr
ADDROBJ TYPE STATE ADDR
clprivnet0/N static ok ip-address/netmask-length
...
b. On each cluster node, add to the /etc/hosts.allow file the IP addresses of all clprivnet0
devices in the cluster.
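The entries use the standard TCP wrappers syntax for the rpcbind daemon. The following is a minimal sketch with hypothetical clprivnet0 addresses:
rpcbind: 172.16.0.65 172.16.0.66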
19 (Optional) On each node, enable automatic node reboot if all monitored shared-disk paths fail.
reboot_on_path_failure=enable
Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
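The property is set with the clnode command. The following is a minimal sketch, assuming that the + operand is used to apply the setting to all nodes:
phys-schost# clnode set -p reboot_on_path_failure=enabled +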
Next Steps 1. Perform all of the following procedures that are appropriate for your cluster
configuration.
How to Configure Internal Disk Mirroring on page 51
SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains on
page 52
How to Set Up the Root Environment on page 58
How to Configure Solaris IP Filter on page 59
Troubleshooting Disabled scinstall option If the AI option of the scinstall command is not preceded by an
asterisk, the option is disabled. This condition indicates that AI setup is not complete or that the
setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1
through Step 7 to correct the AI setup, then restart the scinstall utility.
1 Add the name of the new node to the cluster's authorized-nodes list.
d. Choose the Specify the Name of a Machine Which May Add Itself menu item.
e. Follow the prompts to add the node's name to the list of recognized machines.
The clsetup utility displays the message Command completed successfully if the task is
completed without error.
2 If you are adding a node to a single-node cluster, ensure that two cluster interconnects already
exist by displaying the interconnect configuration.
phys-schost# clinterconnect show
You must have at least two cables or two adapters configured before you can add a node.
If the output shows configuration information for two cables or for two adapters, proceed to
Step 3.
If the output shows no configuration information for either cables or adapters, or shows
configuration information for only one cable or adapter, configure new cluster
interconnects.
f. Verify that the cluster now has two cluster interconnects configured.
phys-schost# clinterconnect show
The command output should show configuration information for at least two cluster
interconnects.
3 Ensure that the private-network configuration can support the nodes and private networks that
you are adding.
a. Display the maximum numbers of nodes, private networks, and zone clusters that the
current private-network configuration supports.
phys-schost# cluster show-netprops
The output looks similar to the following:
private_netaddr: 172.16.0.0
private_netmask: 255.255.240.0
max_nodes: 64
max_privatenets: 10
max_zoneclusters: 12
b. Determine whether the current private-network configuration can support the increased
number of nodes, including non-global zones, and private networks.
If the current IP address range is sufficient, you are ready to install the new node.
Go to How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster
Nodes (scinstall) on page 101.
If the current IP address range is not sufficient, reconfigure the private IP address range.
Go to How to Change the Private Network Configuration When Adding Nodes or
Private Networks on page 94. You must shut down the cluster to change the private IP
address range. This involves switching each resource group offline, disabling all
resources in the cluster, then rebooting into noncluster mode before you reconfigure the
IP address range.
Next Steps Configure Oracle Solaris Cluster software on the new cluster nodes. Go to How to Configure
Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall) on page 101
or How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes
(XML File) on page 106.
You can also use this procedure to decrease the private IP address range.
Note This procedure requires you to shut down the entire cluster. If you need to change only
the netmask, for example, to add support for zone clusters, do not perform this procedure.
Instead, run the following command from a global-cluster node that is running in cluster mode
to specify the expected number of zone clusters:
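For example, a command of the following form sets the property (shown here as a sketch, where N is the expected number of zone clusters):
phys-schost# cluster set-netprops -p num_zoneclusters=N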
This command does not require you to shut down the cluster.
c. Follow the prompts to take offline all resource groups and to put them in the unmanaged
state.
d. When all resource groups are offline, type q to return to the Resource Group Menu.
d. When all resources are disabled, type q to return to the Resource Group Menu.
6 Verify that all resources on all nodes are Offline and that all resource groups are in the
Unmanaged state.
# cluster status -t resource,resourcegroup
-t Limits output to the specified cluster object
resource Specifies resources
resourcegroup Specifies resource groups
SPARC:
ok boot -x
x86:
a. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and
type e to edit its commands.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
b. In the boot parameters screen, use the arrow keys to select the kernel entry and type e
to edit the entry.
c. Add -x to the command to specify that the system boot into noncluster mode.
d. Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
Note This change to the kernel boot parameter command does not persist over the
system boot. The next time you reboot the node, it will boot into cluster mode. To boot
into noncluster mode instead, perform these steps to again add the -x option to the
kernel boot parameter command.
10 Choose the Change Network Addressing and Ranges for the Cluster Transport menu item.
The clsetup utility displays the current private network configuration, then asks if you would
like to change this configuration.
11 To change either the private network IP address or the IP address range, type yes and press the
Return key.
The clsetup utility displays the default private network IP address, 172.16.0.0, and asks if it is
okay to accept this default.
To accept the default private network IP address and proceed to changing the IP address
range, type yes and press the Return key.
a. Type no in response to the clsetup utility question about whether it is okay to accept the
default address, then press the Return key.
The clsetup utility will prompt for the new private-network IP address.
To accept the default IP address range, type yes and press the Return key.
a. Type no in response to the clsetup utility's question about whether it is okay to accept
the default address range, then press the Return key.
When you decline the default netmask, the clsetup utility prompts you for the number
of nodes, private networks, and zone clusters that you expect to configure in the
cluster.
b. Provide the number of nodes, private networks, and zone clusters that you expect to
configure in the cluster.
From these numbers, the clsetup utility calculates two proposed netmasks:
The first netmask is the minimum netmask to support the number of nodes, private
networks, and zone clusters that you specified.
The second netmask supports twice the number of nodes, private networks, and zone
clusters that you specified, to accommodate possible future growth.
c. Specify either of the calculated netmasks, or specify a different netmask that supports
the expected number of nodes, private networks, and zone clusters.
14 Type yes in response to the clsetup utility's question about proceeding with the update.
SPARC:
ok boot
x86:
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press
Enter.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
e. When all resources are re-enabled, type q to return to the Resource Group Menu.
b. Follow the prompts to put each resource group into the managed state and then bring the
resource group online.
20 When all resource groups are back online, exit the clsetup utility.
Type q to back out of each submenu, or press Control-C.
Next Steps To add a node to an existing cluster, go to one of the following procedures:
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes
(scinstall) on page 101
How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software
(Automated Installer) on page 84
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes
(XML File) on page 106
Complete one of the following configuration planning worksheets. See Planning the Oracle
Solaris OS on page 12 and Planning the Oracle Solaris Cluster Environment on page 16 for
planning guidelines.
Typical Mode Worksheet If you will use Typical mode and accept all defaults, complete
the following worksheet.
Cluster Name               What is the name of the cluster that you want the node to join?
Check                      Do you want to run the cluster check validation utility?                          Yes | No
Autodiscovery of Cluster   Do you want to use autodiscovery to configure the cluster transport?              Yes | No
Transport                  If no, supply the following additional information:
Point-to-Point Cables      Does the node that you are adding to the cluster make this a two-node cluster?    Yes | No
                           Where does each transport adapter connect to (a switch or another adapter)?       First:            Second:
                           (Switch defaults: switch1 and switch2)
                           For transport switches, do you want to use the default port name?                 First: Yes | No   Second: Yes | No
                           If no, what is the name of the port that you want to use?                         First:            Second:
Automatic Reboot           Do you want scinstall to automatically reboot the node after installation?        Yes | No
Custom Mode Worksheet If you will use Custom mode and customize the configuration
data, complete the following worksheet.
Cluster Name               What is the name of the cluster that you want the node to join?
Check                      Do you want to run the cluster check validation utility?                          Yes | No
Autodiscovery of Cluster   Do you want to use autodiscovery to configure the cluster transport?              Yes | No
Transport                  If no, supply the following additional information:
Point-to-Point Cables      Does the node that you are adding to the cluster make this a two-node cluster?    Yes | No
                           If a transport switch, do you want to use the default port name?                  First: Yes | No   Second: Yes | No
                           If no, what is the name of the port that you want to use?                         First:            Second:
Automatic Reboot Do you want scinstall to automatically reboot the node after installation? Yes | No
Note This procedure uses the interactive form of the scinstall command. For information
about how to use the noninteractive forms of the scinstall command, such as when
developing installation scripts, see the scinstall(1M) man page.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key
more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of
related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a
question. Press Return to enter the response that is in brackets without typing it.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains
as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each
physical machine and that the domains meet Oracle Solaris Cluster requirements. See
SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains on
page 52.
Ensure that Oracle Solaris Cluster software packages and updates are installed on the node.
See How to Install Oracle Solaris Cluster Framework and Data Service Software Packages
on page 53.
Ensure that the cluster is prepared for the addition of the new node. See How to Prepare the
Cluster for Additional Global-Cluster Nodes on page 92.
Have available your completed Typical Mode or Custom Mode installation worksheet. See
Configuring Oracle Solaris Cluster Software on Additional Global-Cluster Nodes
(scinstall) on page 99.
2 Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is
necessary for cluster configuration.
b. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC
bind service.
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
Create the IPMP groups you need before you establish the cluster.
After the cluster is established, use the ipadm command to edit the IPMP groups.
For more information, see Configuring IPMP Groups in Oracle Solaris Administration:
Network Interfaces and Network Virtualization.
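As a minimal sketch, assuming two underlying interfaces named net0 and net1 and a group named ipmp0 (all hypothetical names), an IPMP group could be created as follows:
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net0 -i net1 ipmp0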
5 Type the option number for Create a New Cluster or Add a Cluster Node and press the Return
key.
*** Main Menu ***
Option: 1
The New Cluster and Cluster Node Menu is displayed.
6 Type the option number for Add This Machine as a Node in an Existing Cluster and press the
Return key.
7 Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility configures the node and boots the node into the cluster.
8 Repeat this procedure on any other node to add to the cluster until all additional nodes are fully
configured.
9 Verify on each node that multiuser services for the Service Management Facility (SMF) are
online.
If services are not yet online for a node, wait until the state changes to online before you proceed
to the next step.
phys-schost# svcs multi-user-server node
STATE STIME FMRI
online 17:52:55 svc:/milestone/multi-user-server:default
10 From an active cluster member, prevent any other nodes from joining the cluster.
phys-schost# claccess deny-all
Alternately, you can use the clsetup utility. See How to Add a Node to an Existing Cluster in
Oracle Solaris Cluster System Administration Guide for procedures.
11 From one node, verify that all nodes have joined the cluster.
phys-schost# clnode status
Output resembles the following.
12 If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added
nodes are added to the /etc/hosts.allow file on each cluster node.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode
communication over RPC for cluster administration utilities.
b. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0
devices in the cluster.
14 (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.
15 If you intend to use the HA for NFS data service (HA for NFS) on a highly available local file
system, exclude from the automounter map all shares that are part of the highly available local
file system that is exported by HA for NFS.
See Administrative Tasks Involving Maps in Oracle Solaris Administration: Network Services
for more information about modifying the automounter map.
Updating "/etc/hostname.hme0".
Verifying that power management is NOT configured ... done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done
Log file - /var/cluster/logs/install/scinstall.log.6952
Rebooting ...
Troubleshooting Unsuccessful configuration If one or more nodes cannot join the cluster, or if the wrong
configuration information was specified, first attempt to perform this procedure again. If that
does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris
Cluster Software to Correct Installation Problems on page 163 on each misconfigured node to
remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster
software packages. Then perform this procedure again.
Next Steps If you added a node to an existing cluster that uses a quorum device, go to How to Update
Quorum Devices After Adding a Node to a Global Cluster on page 111.
This procedure configures the following cluster components on the new node:
Cluster node membership
Cluster interconnect
Global devices
If the Oracle Solaris software is already installed on the node, you must ensure that the
Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software
and any other software that you intend to install on the cluster. See How to Install Oracle
Solaris Software on page 43 for more information about installing the Oracle Solaris
software to meet Oracle Solaris Cluster software requirements.
Ensure that NWAM is disabled. See How to Install Oracle Solaris Cluster Framework and
Data Service Software Packages on page 53 for instructions.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains
as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each
physical machine and that the domains meet Oracle Solaris Cluster requirements. See
SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains on
page 52.
Ensure that Oracle Solaris Cluster software packages and any necessary updates are installed
on the node. See How to Install Oracle Solaris Cluster Framework and Data Service
Software Packages on page 53.
Ensure that the cluster is prepared for the addition of the new node. See How to Prepare the
Cluster for Additional Global-Cluster Nodes on page 92.
1 Ensure that the Oracle Solaris Cluster software is not yet configured on the potential node that
you want to add to a cluster.
b. Determine whether the Oracle Solaris Cluster software is configured on the potential node.
phys-schost-new# /usr/sbin/clinfo -n
If the command returns a node ID number, the Oracle Solaris Cluster software is already
configured on the node.
Before you can add the node to a different cluster, you must remove the existing cluster
configuration information.
SPARC:
ok boot -x
x86:
i. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry
and type e to edit its commands.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
ii. In the boot parameters screen, use the arrow keys to select the kernel entry and type
e to edit the entry.
iii. Add -x to the command to specify that the system boot into noncluster mode.
iv. Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
Note This change to the kernel boot parameter command does not persist over the
system boot. The next time you reboot the node, it will boot into cluster mode. To
boot into noncluster mode instead, perform these steps to again add the -x option to
the kernel boot parameter command.
d. Unconfigure the Oracle Solaris Cluster software from the potential node.
phys-schost-new# /usr/cluster/bin/clnode remove
2 If you are duplicating a node that runs the Oracle Solaris Cluster 4.0 software, create a cluster
configuration XML file.
c. Copy the cluster configuration XML file to the potential node that you will configure as a new
cluster node.
4 Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is
necessary for cluster configuration.
b. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers and refresh the RPC
bind service.
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
8 If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added
nodes are added to the /etc/hosts.allow file on each cluster node.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode
communication over RPC for cluster administration utilities.
b. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0
devices in the cluster.
9 (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.
Troubleshooting Unsuccessful configuration If one or more nodes cannot join the cluster, or if the wrong
configuration information was specified, first attempt to perform this procedure again. If that
does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris
Cluster Software to Correct Installation Problems on page 163 on each misconfigured node to
remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster
software packages. Then perform this procedure again.
Next Steps If you added a node to a cluster that uses a quorum device, go to How to Update Quorum
Devices After Adding a Node to a Global Cluster on page 111.
Any newly configured SCSI quorum devices will be set to SCSI-3 reservations.
Before You Begin Ensure that you have completed installation of the Oracle Solaris Cluster software on the added
node.
8 On each node, verify that the cldevice populate command has completed processing before
you attempt to add a quorum device.
The cldevice populate command executes remotely on all nodes, even though the command
is issued from just one node. To determine whether the cldevice populate command has
completed processing, run the following command on each node of the cluster:
phys-schost# ps -ef | grep scgdevs
a. (Optional) If you want to choose a new shared device to configure as a quorum device,
display all devices that the system checks and choose the shared device from the output.
phys-schost# cldevice list -v
Output resembles the following:
Example 3–4 Updating SCSI Quorum Devices After Adding a Node to a Two-Node Cluster
The following example identifies the original SCSI quorum device d2, removes that quorum
device, lists the available shared devices, updates the global-device namespace, configures d3 as
a new SCSI quorum device, and verifies the new device.
Next Steps Go to How to Verify the Quorum Configuration and Installation Mode on page 118.
If you chose automatic quorum configuration when you established the cluster, do not perform
this procedure. Instead, proceed to How to Verify the Quorum Configuration and Installation
Mode on page 118.
Perform this procedure one time only, after the new cluster is fully formed. Use this procedure
to assign quorum votes and then to remove the cluster from installation mode.
Before You Begin Quorum servers To configure a quorum server as a quorum device, do the following:
Install the Oracle Solaris Cluster Quorum Server software on the quorum server host
machine and start the quorum server. For information about installing and starting the
quorum server, see How to Install and Configure Oracle Solaris Cluster Quorum Server
Software on page 49.
Ensure that network switches that are directly connected to cluster nodes meet one of
the following criteria:
The switch supports Rapid Spanning Tree Protocol (RSTP).
Fast port mode is enabled on the switch.
1 If both of the following conditions apply, ensure that the correct prefix length is set for the
public-network addresses.
You intend to use a quorum server.
The public network uses variable-length subnet masking, also called classless inter-domain
routing (CIDR).
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
ipmp0/v4 static ok 10.134.94.58/24
Note If you use a quorum server but the public network uses classful subnets as defined in RFC
791, you do not need to perform this step.
4 To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and
choose the device to configure.
a. From one node of the cluster, display a list of all the devices that the system checks.
You do not need to be logged in as superuser to run this command.
phys-schost-1# cldevice list -v
Output resembles the following:
b. Ensure that the output shows all connections between cluster nodes and storage devices.
c. Determine the global device ID of each shared disk that you are configuring as a quorum
device.
Note Any shared disk that you choose must be qualified for use as a quorum device. See
Quorum Devices on page 29 for further information about choosing quorum devices.
Use the cldevice output from Step a to identify the device ID of each shared disk that you
are configuring as a quorum device. For example, the output in Step a shows that global
device d3 is shared by phys-schost-1 and phys-schost-2.
5 To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for
that shared disk.
If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for
that disk. Go to Step 6.
If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to
Step c.
If fencing for the disk is set to global, determine whether fencing is also disabled
globally. Proceed to Step b.
Alternatively, you can simply disable fencing for the individual disk, which overrides for
that disk whatever value the global_fencing property is set to. Skip to Step c to disable
fencing for the individual disk.
If global fencing is set to pathcount or prefer3, disable fencing for the shared disk.
Proceed to Step c.
Note If an individual disk has its default_fencing property set to global, the fencing for
that individual disk is disabled only while the cluster-wide global_fencing property is set
to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value
that enables fencing, then fencing becomes enabled for all disks whose default_fencing
property is set to global.
Note If the Main Menu is displayed instead, the initial cluster setup was already successfully
performed. Skip to Step 11.
If your cluster is a two-node cluster, you must configure at least one shared quorum device.
Type Yes to configure one or more quorum devices.
If your cluster has three or more nodes, quorum device configuration is optional.
Type No if you do not want to configure additional quorum devices. Then skip to Step 10.
9 Specify the name of the device to configure as a quorum device and provide any required
additional information.
For a quorum server, also specify the following information:
The IP address of the quorum server host
The port number that is used by the quorum server to communicate with the cluster
nodes
Next Steps Verify the quorum configuration and that installation mode is disabled. Go to How to Verify
the Quorum Configuration and Installation Mode on page 118.
Troubleshooting Interrupted clsetup processing If the quorum setup process is interrupted or fails to be
completed successfully, rerun clsetup.
Changes to quorum vote count If you later increase or decrease the number of node
attachments to a quorum device, the quorum vote count is not automatically recalculated. You
can reestablish the correct quorum vote by removing each quorum device and then adding it
back into the configuration, one quorum device at a time. For a two-node cluster, temporarily
add a new quorum device before you remove and add back the original quorum device. Then
remove the temporary quorum device. See the procedure How to Modify a Quorum Device
Node List in Chapter 6, Administering Quorum, in Oracle Solaris Cluster System
Administration Guide.
Unreachable quorum device If you see messages on the cluster nodes that a quorum device is
unreachable or if you see failures of cluster nodes with the message CMM: Unable to acquire
the quorum device, there might be a problem with the quorum device or the path to it. Check
that both the quorum device and the path to it are functional.
If the problem persists, use a different quorum device. Or, if you want to use the same quorum
device, increase the quorum timeout to a high value, as follows:
Note For Oracle Real Application Clusters (Oracle RAC), do not change the default quorum
timeout of 25 seconds. In certain split-brain scenarios, a longer timeout period might lead to the
failure of Oracle RAC VIP failover, due to the VIP resource timing out. If the quorum device
being used does not conform to the default 25-second timeout, use a different quorum
device.
1. Become superuser.
2. On each cluster node, edit the /etc/system file as superuser to set the timeout to a
high value.
The following example sets the timeout to 700 seconds.
phys-schost# vi /etc/system
...
set cl_haci:qd_acquisition_timer=700
3. From one node, shut down the cluster.
phys-schost-1# cluster shutdown -g0 -y
4. Boot each node back into the cluster.
Changes to the /etc/system file are initialized after the reboot.
1 From any global-cluster node, verify the device and node quorum configurations.
phys-schost$ clquorum list
Output lists each quorum device and each node.
Next Steps Determine from the following list the next task to perform that applies to your cluster
configuration. If you need to perform more than one task from this list, go to the first of those
tasks in this list.
If you want to change any private hostnames, go to How to Change Private Hostnames on
page 120.
If you want to install or modify the NTP configuration file, go to Configuring Network
Time Protocol (NTP) on page 120.
If you want to install a volume manager, go to Chapter 4, Configuring Solaris Volume
Manager Software, to install volume management software.
If you want to create cluster file systems, go to How to Create Cluster File Systems on
page 143.
To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
When your cluster is fully configured, validate the configuration. Go to How to Validate
the Cluster on page 123.
Before you put the cluster into production, make a baseline recording of the cluster
configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the
Cluster Configuration on page 127.
An archived backup of your cluster configuration facilitates easier recovery of your cluster
configuration. For more information, see How to Back Up the Cluster Configuration in
Oracle Solaris Cluster System Administration Guide.
Note Do not perform this procedure after applications and data services have been configured
and have been started. Otherwise, an application or data service might continue to use the old
private hostname after the hostname is renamed, which would cause hostname conflicts. If any
applications or data services are running, stop them before you perform this procedure.
3 Type the option number for Private Hostnames and press the Return key.
The Private Hostname Menu is displayed.
4 Type the option number for Change a Node Private Hostname and press the Return key.
Next Steps Update the NTP configuration with the changed private hostnames. Go to How to Update
NTP After Changing a Private Hostname on page 123.
Note If you installed your own /etc/inet/ntp.conf file before you installed the Oracle Solaris
Cluster software, you do not need to perform this procedure. Proceed to How to Validate the
Cluster on page 123.
Next Steps Determine from the following list the next task to perform that applies to your cluster
configuration. If you need to perform more than one task from this list, go to the first of those
tasks in this list.
If you want to install a volume manager, go to Chapter 4, Configuring Solaris Volume
Manager Software.
If you want to create cluster file systems, go to How to Create Cluster File Systems on
page 143.
To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
When your cluster is fully configured, validate the configuration. Go to How to Validate
the Cluster on page 123.
Before you put the cluster into production, make a baseline recording of the cluster
configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the
Cluster Configuration on page 127.
2 Copy the /etc/inet/ntp.conf and /etc/inet/ntp.conf.sc files from the added node to the
original cluster node.
These files were created on the added node when it was configured with the cluster.
3 On the original cluster node, create a symbolic link named /etc/inet/ntp.conf.include that
points to the /etc/inet/ntp.conf.sc file.
phys-schost# ln -s /etc/inet/ntp.conf.sc /etc/inet/ntp.conf.include
Next Steps Determine from the following list the next task to perform that applies to your cluster
configuration. If you need to perform more than one task from this list, go to the first of those
tasks in this list.
If you want to install a volume manager, go to Chapter 4, Configuring Solaris Volume
Manager Software.
If you want to create cluster file systems, go to How to Create Cluster File Systems on
page 143.
To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
When your cluster is fully configured, validate the configuration. Go to How to Validate
the Cluster on page 123.
Before you put the cluster into production, make a baseline recording of the cluster
configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the
Cluster Configuration on page 127.
2 On each node of the cluster, update the /etc/inet/ntp.conf.sc file with the changed private
hostname.
Next Steps Determine from the following list the next task to perform that applies to your cluster
configuration. If you need to perform more than one task from this list, go to the first of those
tasks in this list.
If you want to install a volume manager, go to Chapter 4, Configuring Solaris Volume
Manager Software.
If you want to create cluster file systems, go to How to Create Cluster File Systems on
page 143.
To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
When your cluster is fully configured, validate the configuration. Go to How to Validate
the Cluster on page 123.
Before you put the cluster into production, make a baseline recording of the cluster
configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the
Cluster Configuration on page 127.
Tip For ease of future reference or troubleshooting, for each validation that you run, use the -o
outputdir option to specify a subdirectory for log files. Reuse of an existing subdirectory name
will remove all existing files in the subdirectory. Therefore, to ensure that log files are available
for future reference, specify a unique subdirectory name for each cluster check that you run.
Before You Begin Ensure that you have completed the installation and configuration of all hardware and software
components in the cluster, including firmware and software updates.
b. In the Advanced Search, select Solaris Cluster as the Product and type check in the
Description field.
The search locates Oracle Solaris Cluster software updates that contain checks.
c. Apply any software updates that are not already installed on your cluster.
b. Determine which functional checks perform actions that would interfere with cluster
availability or services in a production environment.
For example, a functional check might trigger a node panic or a failover to another node.
# cluster list-checks -v -C check-ID
-C check-ID Specifies a specific check.
c. If the functional check that you want to perform might interrupt cluster functioning, ensure
that the cluster is not in production.
e. Repeat Step c and Step d for each remaining functional check to run.
Note For record-keeping purposes, specify a unique outputdir subdirectory name for each
check you run. If you reuse an outputdir name, output for the new check overwrites the
existing contents of the reused outputdir subdirectory.
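A functional check is typically run with a command of the following form. This is a sketch in which check-ID and outputdir are placeholders for the check to run and the log-file directory:
# cluster check -v -k functional -C check-ID -o outputdir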
============================================================
If the node running this check is brought down during execution the check
must be rerun from this same node after it is rebooted into the cluster in
order for the check to be completed.
1) continue
2) exit
choice: 1
============================================================
...
Follow onscreen directions
Next Steps Before you put the cluster into production, make a baseline recording of the cluster
configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the
Cluster Configuration on page 127.
1 Become superuser.
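Configuration data is gathered on each node with the Oracle Explorer utility. The following invocation is a minimal sketch; additional options might apply to your environment:
phys-schost# explorer -i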
4 Save the files to a location that you can access if the entire cluster is down.
5 Send all explorer files to the Oracle Explorer database for your geographic location.
Follow the procedures in Oracle Explorer Data Collector User's Guide to use FTP or HTTPS to
submit Oracle Explorer files.
The Oracle Explorer database makes your explorer output available to Oracle technical
support if the data is needed to help diagnose a technical problem with your cluster.
CHAPTER 4
Configuring Solaris Volume Manager Software
Configure your local and multihost disks for Solaris Volume Manager software by using the
procedures in this chapter, along with the planning information in Planning Volume
Management on page 37. See your Solaris Volume Manager documentation for additional
details.
Task Instructions
Plan the layout of your Solaris Volume Manager configuration.   Planning Volume Management on page 37
Create state database replicas on the local disks. How to Create State Database Replicas on page 129
1 Become superuser.
2 Create state database replicas on one or more local devices for each cluster node.
Use the physical name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.
phys-schost# metadb -af slice-1 slice-2 slice-3
Tip To provide protection of state data, which is necessary to run Solaris Volume Manager
software, create at least three replicas for each node. Also, you can place replicas on more than
one device to provide protection if one of the devices fails.
See the metadb(1M) man page and your Solaris Volume Manager documentation for details.
Next Steps Go to Creating Disk Sets in a Cluster on page 130 to create Solaris Volume Manager disk sets.
The following table lists the tasks that you perform to create disk sets. Complete the procedures
in the order that is indicated.
Task Instructions
Create disk sets by using the metaset command. How to Create a Disk Set on page 131
TABLE 4–2 Task Map: Configuring Solaris Volume Manager Disk Sets (Continued)
Task Instructions
Add drives to the disk sets. How to Add Drives to a Disk Set on page 133
(Optional) Repartition drives in a disk set to allocate space to different slices.   How to Repartition Drives in a Disk Set on page 135
List DID pseudo-driver mappings and define volumes in the /etc/lvm/md.tab files.   How to Create an md.tab File on page 135
3 On each node, verify that the command has completed processing before you attempt to create
any disk sets.
The command executes remotely on all nodes, even though the command is run from just one
node. To determine whether the command has completed processing, run the following
command on each node of the cluster:
phys-schost# ps -ef | grep scgdevs
5 Become superuser on the cluster node that will master the disk set.
Note When you run the metaset command to configure a Solaris Volume Manager device
group on a cluster, the command designates one secondary node by default. You can change the
desired number of secondary nodes in the device group by using the clsetup utility after the
device group is created. Refer to Administering Device Groups in Oracle Solaris Cluster
System Administration Guide for more information about how to change the numsecondaries
property.
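The disk set itself is created with the metaset command. The following is a minimal sketch in which the disk-set and node names are placeholders:
phys-schost# metaset -s setname -a -h node1 node2
-s setname Specifies the disk set name.
-a Adds (creates) the disk set.
-h node1 node2 Specifies the nodes that can master the disk set.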
7 If you are configuring a replicated Solaris Volume Manager device group, set the replication
property for the device group.
phys-schost# cldevicegroup sync device-group-name
For more information about data replication, see Chapter 4, Data Replication Approaches, in
Oracle Solaris Cluster System Administration Guide.
device-group
Specifies the name of the device group. The device-group name is the same as the disk-set
name.
See the cldevicegroup(1CL) man page for information about device-group properties.
Next Steps Add drives to the disk set. Go to Adding Drives to a Disk Set on page 133.
1 Become superuser.
In the following example, the entries for DID device /dev/did/rdsk/d3 indicate that the drive
is shared by phys-schost-1 and phys-schost-2.
Note Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set.
Because the lower-level device name is a local name and not unique throughout the cluster,
using this name might prevent the metaset from being able to switch over.
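A drive is typically added by its full DID name. The following is a minimal sketch in which the disk-set name and DID device are placeholders:
phys-schost# metaset -s setname -a /dev/did/rdsk/dN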
Next Steps If you want to repartition drives for use in volumes, go to How to Repartition Drives in a Disk
Set on page 135.
Otherwise, go to How to Create an md.tab File on page 135 to find out how to define
metadevices or volumes by using an md.tab file.
1 Become superuser.
2 Use the format command to change the disk partitioning for each drive in the disk set.
When you repartition a drive, take steps to prevent the metaset command from repartitioning
the drive.
a. Create slice 6 for EFI starting at cylinder 0, large enough to hold a state database replica.
Do not allow the target slice to overlap any other slice on the drive.
See your Solaris Volume Manager administration guide to determine the size of a state
database replica for your version of the volume-manager software.
Next Steps Define volumes by using an md.tab file. Go to How to Create an md.tab File on page 135.
Note If you are using local volumes, ensure that local volume names are distinct from the
device IDs that are used to form disk sets. For example, if the device ID /dev/did/dsk/d3 is
used in a disk set, do not use the name /dev/md/dsk/d3 for a local volume. This requirement
does not apply to shared volumes, which use the naming convention
/dev/md/setname/{r}dsk/d#.
1 Become superuser.
2 List the DID mappings for reference when you create your md.tab file.
Use the full DID device names in the md.tab file in place of the lower-level device names (cN
tXdY). The DID device name takes the form /dev/did/rdsk/dN.
phys-schost# cldevice show | grep Device
3 Create an /etc/lvm/md.tab file that contains the volume definitions for the disk sets you
created.
See Example 4–4 for a sample md.tab file.
Note If you have existing data on the drives that will be used for the submirrors, you must back
up the data before volume setup. Then restore the data onto the mirror.
To avoid possible confusion between local volumes on different nodes in a cluster environment,
use a naming scheme that makes each local volume name unique throughout the cluster. For
example, for node 1 choose names from d100 to d199. For node 2 use d200 to d299.
See your Solaris Volume Manager documentation and the md.tab(4) man page for details about
how to create an md.tab file.
1. The first line defines the device d0 as a mirror of volumes d10 and d20. The -m signifies that
this device is a mirror device.
dg-schost-1/d0 -m dg-schost-1/d10 dg-schost-1/d20
2. The second line defines volume d10, the first submirror of d0, as a one-way stripe.
dg-schost-1/d10 1 1 /dev/did/rdsk/d1s0
3. The third line defines volume d20, the second submirror of d0, as a one-way stripe.
dg-schost-1/d20 1 1 /dev/did/rdsk/d2s0
Next Steps Activate the volumes that are defined in the md.tab files. Go to How to Activate Volumes on
page 137.
1 Become superuser.
3 Ensure that you have ownership of the disk set on the node where the command will be
executed.
5 Activate the disk set's volumes, which are defined in the md.tab file.
phys-schost# metainit -s setname -a
-s setname
Specifies the disk set name.
-a
Activates all volumes in the md.tab file.
6 Repeat Step 3 through Step 5 for each disk set in the cluster.
If necessary, run the metainit(1M) command from another node that has connectivity to the
drives. This step is required for cluster-pair topologies where the drives are not accessible by all
nodes.
Next Steps If your cluster contains disk sets that are configured with exactly two disk enclosures and two
nodes, add dual-string mediators. Go to Configuring Dual-String Mediators on page 138.
Otherwise, go to How to Create Cluster File Systems on page 143 to find out how to create a
cluster file system.
A single disk string consists of a disk enclosure, its physical drives, cables from the enclosure to
the node or nodes, and the interface adapter cards. A dual-string disk set includes disks in two
disk strings, and is attached to exactly two nodes. If a single disk string in a dual-string disk set
fails such that exactly half the Solaris Volume Manager replicas remain available, the disk set
will stop functioning. Dual-string mediators are therefore required for all Solaris Volume
Manager dual-string disk sets. The use of mediators enables the Oracle Solaris Cluster software
to ensure that the most current data is presented in the event of a single-string failure in a
dual-string configuration.
A dual-string mediator, or mediator host, is a cluster node that stores mediator data. Mediator
data provides information about the location of other mediators and contains a commit count
that is identical to the commit count that is stored in the database replicas. This commit count is
used to confirm that the mediator data is in sync with the data in the database replicas.
The following table lists the tasks that you perform to configure dual-string mediator hosts.
Complete the procedures in the order that is indicated.
Task Instructions
Check the status of mediator data and, if necessary, fix bad mediator data.   How to Check For and Fix Bad Mediator Data on page 141
These rules do not require that the entire cluster consist of only two nodes. An N+1 cluster and
many other topologies are permitted under these rules.
1 If you will use a third mediator host for a dual-string disk set and that host does not already have
disk sets configured, modify the /etc/group file and create a dummy disk set.
2 Become superuser on the node that currently masters the disk set to which you intend to add
mediator hosts.
3 Add each node with connectivity to the disk set as a mediator host for that disk set.
phys-schost# metaset -s setname -a -m mediator-host-list
-s setname
Specifies the disk set name.
-m mediator-host-list
Specifies the name of the node to add as a mediator host for the disk set.
See the mediator(7D) man page for details about mediator-specific options to the metaset
command.
Next Steps Check the status of mediator data. Go to How to Check For and Fix Bad Mediator Data on
page 141.
Before You Begin Ensure that you have added mediator hosts as described in How to Add Mediator Hosts on
page 139.
2 Check the Status field of the medstat output for each mediator host.
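The mediator status is displayed with the medstat command. The following is a minimal sketch in which setname is a placeholder for the disk-set name:
phys-schost# medstat -s setname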
3 Become superuser on the node that owns the affected disk set.
4 Remove all mediator hosts with bad mediator data from all affected disk sets.
phys-schost# metaset -s setname -d -m mediator-host-list
-s setname
Specifies the disk set name.
-d
Deletes from the disk set.
-m mediator-host-list
Specifies the name of the node to remove as a mediator host for the disk set.
Next Steps Determine from the following list the next task to perform that applies to your cluster
configuration.
If you want to create cluster file systems, go to How to Create Cluster File Systems on
page 143.
To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
CHAPTER 5
This chapter describes how to create a cluster file system to support data services.
Alternatively, you can use a highly available local file system to support a data service. For
information about choosing between creating a cluster file system or a highly available local file
system to support a particular data service, see the manual for that data service. For general
information about creating a highly available local file system, see Enabling Highly Available
Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Tip For faster file system creation, become superuser on the current primary of the global
device for which you create a file system.
Caution Any data on the disks is destroyed when you create a file system. Be sure that you
specify the correct disk device name. If you specify the wrong device name, you might erase data
that you did not intend to delete.
For example, with Solaris Volume Manager, the raw disk device name /dev/md/nfs/rdsk/d1
refers to raw disk device d1 within the nfs disk set.
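A sketch of the file-system creation command that uses this device name, assuming that a UFS
file system is created on the raw device shown above (per the earlier Tip, running the command
from the current primary of the global device is fastest):
phys-schost# newfs /dev/md/nfs/rdsk/d1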
3 On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system is not accessed on that
node.
Tip For ease of administration, create the mount point in the /global/device-group/ directory.
This location enables you to easily distinguish cluster file systems, which are globally available,
from local file systems.
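For example, assuming a device group named nfs (a hypothetical name), the following
command, run on each node, creates a mount point that follows this convention:
phys-schost# mkdir -p /global/nfs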
4 On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
See the vfstab(4) man page for details.
a. In each entry, specify the required mount options for the type of file system that you use.
b. To automatically mount the cluster file system, set the mount at boot field to yes.
c. For each cluster file system, ensure that the information in its /etc/vfstab entry is identical
on each node.
d. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
5 On any node in the cluster, run the cluster check utility to verify that the mount points exist
and that the /etc/vfstab file entries are correct on all nodes of the cluster.
phys-schost# cluster check -k vfstab
6 Mount the cluster file system from any node in the cluster.
phys-schost# mount /global/device-group/mountpoint/
7 On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df command or mount command to list mounted file systems. For more
information, see the df(1M) man page or mount(1M) man page.
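For example, the following df command checks the hypothetical /global/nfs mount point used
above:
phys-schost# df -k /global/nfs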
Next Steps To find out how to install third-party applications, register resource types, set up resource
groups, and configure data services, see the documentation that is supplied with the
application software and the Oracle Solaris Cluster Data Services Planning and
Administration Guide.
C H A P T E R 6
Creating Zone Clusters
The clzonecluster utility operates in the following levels of scope, similar to the zonecfg utility:
The cluster scope affects the entire zone cluster.
The node scope affects only the one zone cluster node that is specified.
The resource scope affects either a specific node or the entire zone cluster, depending on
which scope you enter the resource scope from. Most resources can only be entered from the
node scope. The scope is identified by the following prompts:
clzc:zone-cluster-name:resource> cluster-wide setting
clzc:zone-cluster-name:node:resource> node-specific setting
Configuring a Zone Cluster
You can use the clzonecluster utility to specify any Oracle Solaris zones resource
parameter as well as the parameters that are specific to zone clusters. For information about
parameters that you can set in a zone cluster, see the clzonecluster(1CL) man page.
Additional information about Oracle Solaris zones resource parameters is in the
zonecfg(1M) man page.
Note If you do not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone
cluster. The cluster uses the IP address of the zone cluster node when communicating
with the NAS device, so not having an IP address prevents cluster support for fencing
NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
Add authorization for the public-network addresses that the zone cluster is allowed to use.
clzc:zone-cluster-name> add net
clzc:zone-cluster-name:net> set address=IP-address1
clzc:zone-cluster-name:net> end
The -c config-profile.xml option provides a configuration profile for all non-global zones of
the zone cluster. Using this option changes only the hostname of the zone, which is unique
for each zone in the zone cluster. All profiles must have a .xml extension.
If the base global-cluster nodes for the zone-cluster are not all installed with the same Oracle
Solaris Cluster packages but you do not want to change which packages are on the base
nodes, add the following option:
-M manifest.xml
The -M manifest.xml option specifies a custom Automated Installer manifest that you
configure to install the necessary packages on all zone-cluster nodes. If the clzonecluster
install command is run without the -M option, zone-cluster installation fails on a base
node if it is missing a package that is installed on the issuing base node.
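A sketch of such an installation command, assuming the zone cluster is named sczone (as in the
example that follows) and the custom Automated Installer manifest is stored at the hypothetical
path /var/tmp/zc-manifest.xml:
phys-schost-1# clzonecluster install -M /var/tmp/zc-manifest.xml sczone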
8 If you did not set the profile during zone cluster installation, configure the profile manually.
On each zone-cluster node, issue the following command and progress through the interactive
screens.
phys-schost-1# zlogin -C zone-cluster-name
9 After all zone-cluster nodes are modified, reboot the global-cluster nodes to initialize the
changes to the zone-cluster /etc/inet/hosts files.
phys-schost# init -g0 -y -i6
In the following configuration, the zone cluster sczone is created on the global-cluster node
phys-schost-1. The zone cluster uses /zones/sczone as the zone path and the public IP
address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and
uses the network address 172.16.0.1 and the net0 adapter. The second node of the zone
cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is
assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the net1
adapter.
create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=net0
end
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=net1
end
end
commit
exit
Next Steps If you want to add the use of a file system to the zone cluster, go to Adding File Systems to a
Zone Cluster on page 153.
If you want to add the use of global storage devices to the zone cluster, go to Adding Storage
Devices to a Zone Cluster on page 157.
See Also If you want to update a zone cluster, follow procedures in Chapter 11, Updating Your
Software, in Oracle Solaris Cluster System Administration Guide. These procedures include
special instructions for zone clusters, where needed.
This section provides the following procedures to add file systems for use by the zone cluster:
How to Add a Local File System to a Zone Cluster on page 153
How to Add a ZFS Storage Pool to a Zone Cluster on page 155
How to Add a Cluster File System to a Zone Cluster on page 156
In addition, if you want to configure a ZFS storage pool to be highly available in a zone cluster,
see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System
Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Note To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS
Storage Pool to a Zone Cluster on page 155.
Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How
to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly
Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
1 Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of the procedure from a node of the global cluster.
2 On the global cluster, create a file system that you want to use in the zone cluster.
Ensure that the file system is created on shared disks.
special=disk-device-name
Specifies the name of the disk device
raw=raw-disk-device-name
Specifies the name of the raw disk device
type=FS-type
Specifies the type of file system
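A minimal sketch of the corresponding clzonecluster sequence, assuming a UFS file system on
the hypothetical Solaris Volume Manager device /dev/md/oradg/dsk/d1 that is to be mounted in
the zone cluster at /oracle/data (see the clzonecluster(1CL) man page for the authoritative
syntax):
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=/oracle/data
clzc:zone-cluster-name:fs> set special=/dev/md/oradg/dsk/d1
clzc:zone-cluster-name:fs> set raw=/dev/md/oradg/rdsk/d1
clzc:zone-cluster-name:fs> set type=ufs
clzc:zone-cluster-name:fs> add options [logging]
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit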
Next Steps Configure the file system to be highly available by using an HAStoragePlus resource. The
HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that
currently hosts the applications that are configured to use the file system. See Enabling Highly
Available Local File Systems in Oracle Solaris Cluster Data Services Planning and
Administration Guide.
Note To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set
Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly
Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
1 Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
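A minimal sketch of the pool creation and zone-cluster configuration that this procedure covers,
assuming a pool named zpool1 built as a mirror of the hypothetical shared disks c1t0d0 and
c2t0d0 (see the zpool(1M) and clzonecluster(1CL) man pages for the authoritative syntax):
phys-schost# zpool create zpool1 mirror c1t0d0 c2t0d0
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add dataset
clzc:zone-cluster-name:dataset> set name=zpool1
clzc:zone-cluster-name:dataset> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit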
Next Steps Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The
HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster
node that currently hosts the applications that are configured to use the file system. See
Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning
and Administration Guide.
1 Become superuser on a voting node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a voting node of the global cluster.
2 On the global cluster, configure the cluster file system that you want to use in the zone cluster.
3 On each node of the global cluster that hosts a zone-cluster node, add an entry to the
/etc/vfstab file for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
...
/dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
4 Configure the cluster file system as a loopback file system for the zone cluster.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=zone-cluster-lofs-mountpoint
clzc:zone-cluster-name:fs> set special=global-cluster-mount-point
clzc:zone-cluster-name:fs> set type=lofs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
dir=zone-cluster-lofs-mount-point
Specifies the file system mount point for LOFS to make the cluster file system available to the
zone cluster.
special=global-cluster-mount-point
Specifies the file system mount point of the original cluster file system in the global cluster.
For more information about creating loopback file systems, see How to Create and Mount an
LOFS File System in Oracle Solaris Administration: Devices and File Systems.
phys-schost-1# vi /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/apache ufs 2 yes global,logging
phys-schost-1# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=/zone/apache
clzc:zone-cluster-name:fs> set special=/global/apache
clzc:zone-cluster-name:fs> set type=lofs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
phys-schost-1# clzonecluster show -v sczone
...
Resource Name: fs
dir: /zone/apache
special: /global/apache
raw:
type: lofs
options: []
cluster-control: true
...
Next Steps Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus
resource. The HAStoragePlus resource manages the mounting of the file systems in the global
cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the
applications that are configured to use the file system. For more information, see Configuring
an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services
Planning and Administration Guide.
Note To import raw-disk devices (cNtXdYsZ) into a zone cluster node, use the zonecfg
command as you normally would for other brands of non-global zones.
Such devices would not be under the control of the clzonecluster command, but would be
treated as local devices of the node. See Mounting File Systems in Running Non-Global Zones
in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource
Management for more information about importing raw-disk devices into a non-global zone.
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
1 Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
2 Identify the disk set that contains the metadevice to add to the zone cluster and determine
whether it is online.
phys-schost# cldevicegroup status
3 If the disk set that you are adding is not online, bring it online.
phys-schost# cldevicegroup online diskset
4 Determine the set number that corresponds to the disk set to add.
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/diskset -> shared/set-number
match=/dev/md/shared/N/*dsk/metadevice
Specifies the full physical device path, where N is the set number of the disk set that you
identified in Step 4 and metadevice is the name of the metadevice to add.
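For example, a sketch of the clzonecluster sequence that adds the hypothetical metadevice d100
from disk set number 3:
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/shared/3/*dsk/d100
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit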
1 Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
2 Identify the disk set to add to the zone cluster and determine whether it is online.
phys-schost# cldevicegroup status
3 If the disk set that you are adding is not online, bring it online.
phys-schost# cldevicegroup online diskset
4 Determine the set number that corresponds to the disk set to add.
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx 1 root root 8 Jul 22 23:11 /dev/md/diskset -> shared/set-number
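A minimal sketch of adding the full disk set to the zone cluster, assuming the set number
determined in Step 4 is 3 and the match pattern follows the /dev/md/shared/N/*dsk/* form:
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add device
clzc:zone-cluster-name:device> set match=/dev/md/shared/3/*dsk/*
clzc:zone-cluster-name:device> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit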
1 Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
C H A P T E R 7
Uninstalling Software From the Cluster
This chapter provides procedures for uninstalling or removing certain software from an Oracle
Solaris Cluster configuration. This chapter contains the following procedures:
How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on
page 163
How to Uninstall Oracle Solaris Cluster Quorum Server Software on page 166
How to Unconfigure a Zone Cluster on page 167
Note If you want to uninstall a node from an established cluster, see Removing a Node From a
Cluster in Oracle Solaris Cluster System Administration Guide.
How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems
Note If the node has already joined the cluster and is no longer in installation mode, as
described in Step 2 of How to Verify the Quorum Configuration and Installation Mode on
page 118, do not perform this procedure. Instead, go to How to Uninstall Oracle Solaris
Cluster Software From a Cluster Node in Oracle Solaris Cluster System Administration Guide.
Before You Begin Attempt to rerun cluster configuration of the node by using the scinstall utility. You can
correct certain cluster node configuration failures by repeating Oracle Solaris Cluster software
configuration on the node.
1 Add to the cluster's node-authentication list each node that you intend to unconfigure.
If you are unconfiguring a single-node cluster, skip to Step 2.
a. On an active cluster member other than the node that you are unconfiguring, become
superuser.
On SPARC based systems, boot the node that you are unconfiguring into noncluster mode:
ok boot -x
On x86 based systems, perform the following steps to boot the node into noncluster mode:
a. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and
type e to edit its commands.
For more information about GRUB based booting, see Booting and Shutting Down
Oracle Solaris on x86 Platforms.
b. In the boot parameters screen, use the arrow keys to select the kernel entry and type e
to edit the entry.
c. Add -x to the command to specify that the system boot into noncluster mode.
d. Press Enter to accept the change and return to the boot parameters screen.
The screen displays the edited command.
Note This change to the kernel boot parameter command does not persist over the
system boot. The next time you reboot the node, it will boot into cluster mode. To boot
into noncluster mode instead, perform these steps to again add the -x option to the
kernel boot parameter command.
5 Change to a directory, such as the root (/) directory, that does not contain any files that are
delivered by the Oracle Solaris Cluster packages.
phys-schost# cd /
To unconfigure the node but leave Oracle Solaris Cluster software installed, run the
following command:
phys-schost# /usr/cluster/bin/clnode remove
The node is removed from the cluster configuration but Oracle Solaris Cluster software is
not removed from the node.
See the clnode(1CL) man page for more information.
To unconfigure the node and also remove Oracle Solaris Cluster software, run the following
command:
phys-schost# /usr/cluster/bin/scinstall -r [-b BE-name]
-r Removes cluster configuration information and uninstalls Oracle Solaris
Cluster framework and data-service software from the cluster node. You
can then reinstall the node or remove the node from the cluster.
-b BE-name Specifies the name of a new boot environment, which is where you boot
into after the uninstall process completes. Specifying a name is optional. If
you do not specify a name for the boot environment, one is automatically
generated.
See the scinstall(1M) man page for more information.
Troubleshooting If the cluster node that you are removing is at least partially configured with the cluster, running
the clnode remove command might exit with errors such as Node is still enabled. If such
errors occur, add the -F option to the clnode remove command.
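For example, the forced removal would look like the following:
phys-schost# /usr/cluster/bin/clnode remove -F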
Next Steps Before you reinstall or reconfigure Oracle Solaris Cluster software on the node, refer to
Table 2-1. This table lists all installation tasks and the order in which to perform the tasks.
To physically remove the node from the cluster, see How to Remove an Interconnect
Component in Oracle Solaris Cluster 4.0 Hardware Administration Manual and the removal
procedure in the Oracle Solaris Cluster manual for your storage array.
How to Unconfigure a Zone Cluster
2 Take offline each resource group in the zone cluster and disable its resources.
Note The following steps are performed from a global-cluster node. To instead perform these
steps from a node of the zone cluster, log in to the zone-cluster node and omit -Z zone-cluster
from each command.
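A sketch of the commands behind this step, assuming the + operand is used to address all
resource groups and resources in the zone cluster at once; the listing that follows shows the
Enabled flag reported as False after the resources are disabled:
phys-schost# clresourcegroup offline -Z zone-cluster +
phys-schost# clresource disable -Z zone-cluster +
phys-schost# clresourcegroup unmanage -Z zone-cluster +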
Resource: resource
Enabled{nodename1}: False
Enabled{nodename2}: False
...
g. Verify that all resources on all nodes are Offline and that all resource groups are in the
Unmanaged state.
phys-schost# cluster status -Z zone-cluster -t resource,resourcegroup
h. Delete all resource groups and their resources from the zone cluster.
phys-schost# clresourcegroup delete -F -Z zone-cluster +