Quick Installation Guide: Oracle9i RAC on IBM eServer pSeries with AIX 4.3.3
Authors : Fabienne Lepetit - Oracle France, Michel Passet - EMEA Oracle/IBM JSC
Many thanks to : John McHugh - Oracle Corporate, François Pons - Oracle EMEA PTS, and the EMEA Oracle/IBM JSC team
There are two ways to set up a RAC environment, depending on the type of hardware that is used and the software that is installed. Your configuration will therefore direct you towards one of the following two methods :
- Virtual Shared Disks (VSD), when available
- High Availability Cluster Management Program (HACMP), always possible

Both methods are explained in this document. The information contained in this paper comes from :
- Oracle and IBM documentation
- Installation runs of Oracle9i RAC
- Workshop experience gained in the EMEA Oracle/IBM Joint Solutions Center
- Contributions from Oracle and IBM specialists

Please also refer to the Oracle documentation for more information (http://docs.us.oracle.com) :
- Oracle9i Quick Installation Procedure Release 9.0.1 for AIX-Based Systems
- Oracle9i Installation Guide Release 9.0.1 for UNIX Systems
- Oracle9i Administrator's Reference 9.0.1 for UNIX Systems
- Oracle9i Release Notes Release 9.0.1 for AIX-Based Systems
- Oracle9i Online Generic Documentation CD-ROM Installation and Usage Notes
- Oracle9i Real Application Clusters Installation and Configuration
- Oracle9i Installation Checklist for AIX-Based Systems
- Oracle Enterprise Manager Configuration Guide
Your comments are important to us. We want our technical papers to be as helpful as possible. Please send your comments about this document to the EMEA Oracle/IBM Joint Solutions Center.
[email protected]
or call our phone number :
+33 (0)4 67 34 67 49
SOFTWARE REQUIREMENTS

AIX 4.3.3, Maintenance Level 9
  To determine the current operating system version : oslevel
  To check the maintenance levels applied :           instfix -i | grep ML
PSSP 3.2, PTF set 12          (SP machine only)
VSD (Virtual Shared Disks)    (SP machine only)
HACMP/ES 4.4                  (clusters of pSeries servers only)
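On a correctly patched system, the checks return output similar to the following (sample output, shown only as an illustration) :

  # oslevel
  4.3.3.0
  # instfix -i | grep ML
      All filesets for 4.3.3.0_AIX_ML were found.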
AIX PATCHES REQUIREMENTS

APAR      Description
IY01050   (AIXV43 only) SUPPORT FOR NON-ROOT ACCESS TO KERNEL PERF STA
IY03478   PTCs rejected because instance number wraps
IY04109   PROBLEM WITH HA_GS_INIT() FOR 64-BIT CLIENTS
IY04149   PTCs rejected because instance number wraps
IY04767   css0 IP driver shouldn't return ENETDOWN during
IY07276   (AIXV43 only) AIO_SUSPEND RETURNS WITHOUT I/O COMPLETION
IY06749   HAES/ORACLE: add event management variables for
IY15138   (AIXV43 only) LIO_LISTIO RETURNS 0 INSTEAD OF -1 FOR EAGAIN E
IY20220   RSCT 1.2.1 Maintenance Level PTF. Mandatory to link Oracle binaries
IY22458   (AIXV43 only) GETPWUID IN 64BIT APPLICATION FAILS WHILE USI

To check that a fix is installed (for example IY15138) : instfix -ik IY15138
You can download new AIX maintenance levels and the specified patches from http://techsupport.services.ibm.com/server/nav?fetch=ffa4e
SSA EXTERNAL DISKS MICROCODE
It is particularly important for HACMP that all the SSA disks connected to the cluster are at the same microcode level.
The latest microcode levels (November 1st, 2001) are : 9911, 0023, 0012 and 0070 (depending on the model of the disk).
To list the SSA disks of the cluster : lscfg | grep pdisk
To check the microcode level :         lscfg -vl pdisknn (ROS level and ID line)
For more information on the procedure to download and install new microcode, see appendix A.

SSA ADAPTERS MICROCODE
The latest level of microcode is B300 (12/2001).

Check whether there are AIX default limitations (especially on the file size) :
File size limitation : ulimit -f
All limitations :      ulimit -a
See also the file /etc/security/limits, which shows the limits for each user. The default stanza applies to all new users to be created. This file can be modified by root with vi. The default limits should be set to unlimited, except for core (e.g. -1 in the file /etc/security/limits).
To set a user limitation to unlimited, use smit users.
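As an illustration, a default stanza in /etc/security/limits with the limits set to unlimited might look like the following sketch (-1 means unlimited; the core value shown is only an example, adjust it to your site policy) :

  default:
          fsize = -1
          cpu = -1
          data = -1
          stack = -1
          rss = -1
          nofiles = -1
          core = 2097151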
C. NETWORK CONFIGURATION
Set up user equivalence for the oracle account, to enable rsh, rcp, rlogin commands.
/etc/hosts file.
! When VSD are used :
/etc/hosts for VSD (on both nodes) :
10.10.10.1     int-rac1                 # Internal private net.; fast interconnect
10.10.10.2     int-rac2                 # Internal private net.; fast interconnect
192.128.194.1  rac1 rac1.mop.ibm.com    # External, fixed IP address
192.128.194.2  rac2 rac2.mop.ibm.com    # External, fixed IP address
(This file contains the private network addresses used by the private link (interconnect) between the two nodes.)
! When HACMP is used :
/etc/hosts for HACMP (on both nodes) :
9.9.9.1        rac1_boot
9.9.9.11       rac1_service
10.10.10.1     rac1_stdby
9.9.9.2        rac2_boot
9.9.9.22       rac2_service
10.10.10.2     rac2_stdby
192.128.194.1  rac1_fixed    # External, fixed IP address, out of HACMP management
192.128.194.2  rac2_fixed    # External, fixed IP address, out of HACMP management
/etc/hosts.equiv file.
Put the list of machines or nodes into hosts.equiv.
/etc/hosts.equiv
# for VSD configuration :
rac1
rac2
int-rac1
int-rac2
# for HACMP configuration :
rac1_boot
rac1_service
rac1_stdby
rac2_boot
rac2_service
rac2_stdby
rac1_fixed
rac2_fixed
.rhosts file.
In root's home directory, put the list of machines.
$HOME/.rhosts
# for VSD configuration :
rac1
rac2
int-rac1
int-rac2
# for HACMP configuration :
rac1_boot
rac1_service
rac1_stdby
rac2_boot
rac2_service
rac2_stdby
rac1_fixed
rac2_fixed
Note : It is possible, but not advised for security reasons, to put a "+" in the hosts.equiv and .rhosts files.
Test that user equivalence is correctly set up (rac2 is the secondary server name). You are logged on rac1 as oracle :
$ rlogin rac2                     # no password prompt expected
$ rcp /tmp/toto rac2:/tmp/toto
$ rsh rac2 pwd
Database files :

Size  Logical Volume name  Raw device name        Purpose
5M    Spfile_lv            /dev/rvsd_spfile       Server Parameter File (replacing init.ora)
12M   Tools_lv             /dev/rvsd_tools        TOOLS Tablespace
80M   Index_lv             /dev/rvsd_index        INDX Tablespace
90M   Drsys_lv             /dev/rvsd_drsys        DRSYS (interMedia & Ultrasearch) Tablespace
100M  Temp_lv              /dev/rvsd_temp         TEMP Tablespace
100M  cmwlite_lv           /dev/rvsd_cmwlite      CMWLITE (OLAP) Tablespace
10M   ctrl1_lv             /dev/rvsd_ctrl1        Control File #1
10M   ctrl2_lv             /dev/rvsd_ctrl2        Control File #2
120M  Users_lv             /dev/rvsd_users        USERS Tablespace
120M  redolog1_1_lv        /dev/rvsd_redolog1_1   Redo Log Thread #1, Group #1
120M  redolog1_2_lv        /dev/rvsd_redolog1_2   Redo Log Thread #1, Group #2
120M  redolog2_1_lv        /dev/rvsd_redolog2_1   Redo Log Thread #2, Group #1
120M  redolog2_2_lv        /dev/rvsd_redolog2_2   Redo Log Thread #2, Group #2
160M  example_lv           /dev/rvsd_example      EXAMPLE Tablespace
512M  Undo1_lv             /dev/rvsd_undo1        UNDO Tablespace (instance #1)
512M  Undo2_lv             /dev/rvsd_undo2        UNDO Tablespace (instance #2)
400M  System_lv            /dev/rvsd_system       SYSTEM Tablespace
160M  oemrepo_lv           /dev/rvsd_oemrepo      OEM repository
All these raw devices can be created for our database, but some of them (for example drsys, oemrepo, cmwlite) are not mandatory. Except for the redo logs and undo files, which have to be duplicated for each instance, the other datafiles, in a VSD configuration, are split across the two volume groups so that each holds roughly the same volume. A script is provided in appendix to create the volume groups and the logical volumes as specified above. The two volume groups, ora1vg for the primary node and ora2vg for the secondary, are built with a physical partition size of 16 MB. The size of a logical volume is expressed in number of PPs, not in kilobytes. Once a logical volume new_lv is created on a node, two new entries appear in the /dev directory :
/dev/new_lv, which is normally used by the LVM for file systems.
/dev/rnew_lv, where the r stands for raw device. This is the device name to use with VSD.
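As an illustration, a minimal sketch for one volume group and one logical volume (the disk and LV names are examples; the appendix script creates the full set) :

  mkvg -f -y'ora1vg' -s'16' ssa_disk_11        # volume group, 16 MB physical partitions
  mklv -y'system_lv' ora1vg 25 ssa_disk_11     # 25 PPs x 16 MB = 400 MB
  ls -l /dev/system_lv /dev/rsystem_lv         # block device and raw device entries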
syspar_ctrl -A    # add and start all subsystems
syspar_ctrl -D    # delete and stop all subsystems
Check that High Availability is up and running on the nodes. It is composed of the following services (or subsystems) :
hags (HA Group Services)
hagsglsm
hats (HA Topology Services)
rvsd
hc.hc

To list the status of a group of services : lssrc -g <group of services>
To start a group of services :              startsrc -g <group of services>
To stop a group of services :               stopsrc -g <group of services>
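For example, to check that group services are running (sample output; PIDs will differ) :

  # lssrc -g hags
  Subsystem         Group            PID     Status
   hags             hags             17204   active
   hagsglsm         hags             17466   active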
The hags daemon creates a socket (size 0) the first time it starts : /var/ha/soc/hagsdsocket.<CWS_name> should belong to the user root and the group hagsuser. The user and the group must both have read and write permissions on this socket.
To enable the VSDs of an SP, all the following commands have to be run on the Control WorkStation (CWS) of the SP.
Step 1 : define the nodes of the cluster : the primary node, the secondary node, and the link between them.
Step 2 : define the VSD volume groups. For each volume group, indicate the node which normally accesses it, and the backup node.
Step 3 : define the logical volumes (raw devices) that can be used by VSD. A new VSD name is set, which references the volume group and the logical volume.
Step 4 : start the VSDs.
A script is provided in appendix to perform these four steps for all the raw devices involved in the database; a condensed sketch follows.
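A condensed sketch of the four steps, using the wrapper procedures of the appendix script (node numbers, volume group and VSD names are examples; the remaining tuning arguments of step 1 are listed in the appendix) :

  # Step 1 : nodes and interconnect
  vsdnode_proc "1 2" 'en0' ...
  # Step 2 : global volume group (primary node, then backup node)
  vsdvg_proc -g vsd_ora1vg ora1vg rac1 rac2
  # Step 3 : one VSD per logical volume (raw device)
  defvsd_proc systemlv vsd_ora1vg vsd_system
  # Step 4 : start all the VSDs on both nodes
  vsdstart_proc -v 'All_VSDs' -n 1 2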
HACMP/ES filesets to install :
  HAES Web-based HTML
  HAES PDF Documentation - U.S.
  HAES Postscript Documentation
  HACMP Web-based HTML
  HACMP PDF Documentation - U.S.
  HACMP Postscript Documentation
  ES Client Libraries
  ES Client Runtime
  ES Client Utilities
  ES for AIX Concurrent Access
  ES CSPOC Commands
  ES CSPOC dsh
  ES CSPOC Runtime Commands
  ES HC Daemon
  ES Server Diags
  ES Server Events
  ES Base Server Runtime
  ES Server Utilities
  ES Man Pages - U.S. English
  HACMP CSPOC Messages - U.S.
  ES VSM Configuration Utility
The two instances of the same parallel database have concurrent access to the same external disks. It is a real concurrent access, not a shared one as in the VSD environment. Because several instances access the same files and data at the same time, locks have to be managed. These locks, at the CLVM layer (including the memory cache), are managed by HACMP. After all the filesets have been installed :
- add the following directories to the PATH environment variable :
  /usr/es/sbin/cluster
  /usr/es/sbin/cluster/utilities
  /usr/es/sbin/cluster/sbin
  /usr/es/sbin/cluster/diag
- check the existence of symbolic links from the files contained in /usr/sbin/cluster to /usr/es/sbin/cluster. Otherwise, create them with ln -s. (A sketch of both steps follows.)
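A sketch of these two steps (the loop is an assumption; adapt the list of directories and files to your installation) :

  PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/diag
  export PATH
  # recreate any missing links from /usr/sbin/cluster to /usr/es/sbin/cluster
  for f in /usr/es/sbin/cluster/utilities/*; do
      [ -e /usr/sbin/cluster/utilities/`basename $f` ] || \
          ln -s $f /usr/sbin/cluster/utilities/`basename $f`
  done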
Important note: your cluster must be synchronized after each new modification.
2 Create a concurrent volume group, myvg, at the AIX level on the first machine :
smit vg
Add a Volume Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
  VOLUME GROUP name                                [myvg]
  Physical partition SIZE in megabytes              32           +
* PHYSICAL VOLUME names                            []            +
  Activate volume group AUTOMATICALLY               no           +
    at system restart?
  Volume Group MAJOR NUMBER                        [64]          +#
  Create VG Concurrent Capable?                     yes          +
  Auto-varyon in Concurrent Mode?                   no           +
Never choose YES for activation at system restart, nor for auto-varyon : these tasks have to be managed by HACMP. The volume group just has to be created with the concurrent capability. You must choose the major number, to be sure the volume group has the same major number on all the nodes (attention : before choosing this number, you must be sure it is free on all the nodes). To check all defined major numbers, type :
ls -al /dev/*
crw-rw----    1 root     system    57,  0 Aug 02 13:39 /dev/myvg
The major number for myvg volume group is 57. On this volume group, create all the logical volumes and file systems you need for your database.
3 Import myvg volume group on the second machine
On the first machine, type : varyoffvg myvg
The physical volume name (hdisk) may not have the same number on both sides. Check the PVID of the disk, because it is the only reliable information common across the cluster. Be sure to use the same major number; this number has to be unused on all the nodes. The new volume group is now defined on all the machines of the cluster, with the concurrent capable feature set on.
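A minimal sketch of the import on the second machine (the hdisk name and major number are examples; match the PVID found with lspv and the major number chosen at creation time) :

  lspv                                # locate the hdisk carrying the same PVID
  importvg -V 57 -c -y myvg hdisk3    # -V major number, -c concurrent capable
  varyoffvg myvg                      # leave the activation to HACMP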
4 Define myvg volume group in an HACMP resource group
smit hacmp
  Change/Show Resources/Attributes for a Resource Group
    Service IP label     None
    Concurrent VG        myvg
    Fsck                 sequential
Synchronize the topology and the resources.
- check that the hagsuser group exists; create it if needed
- place "oracle" into the "hagsuser" group
- change the permissions on the "cldomain" executable :
# chmod a+x /usr/sbin/cluster/utilities/cldomain
For more information, see the note # 2064876.102 in Metalink (also presented in appendix H : Oracle Technical notes).
For more information, see the note # 115792.1 in Metalink (also presented in appendix H : Oracle Technical notes).
3 Concurrent Volume group
Check that the concurrent volume group is active on all the nodes of the cluster.
lsvg myvg
VOLUME GROUP:   myvg                     VG IDENTIFIER:   000915700ab7290d
VG STATE:       active                   PP SIZE:         32 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:       543 (17376 megabytes)
MAX LVs:        256                      FREE PPs:        40 (1280 megabytes)
LVs:            37                       USED PPs:        503 (16096 megabytes)
OPEN LVs:       30                       QUORUM:          1
TOTAL PVs:      1                        VG DESCRIPTORS:  2
STALE PVs:      0                        STALE PPs:       0
ACTIVE PVs:     1                        AUTO ON:         no
Concurrent:     Capable                  Auto-Concurrent: Disabled
VG Mode:        Concurrent
Node ID:        2                        Active Nodes:    1
MAX PPs per PV: 1016                     MAX PVs:         32
If customers wish to use the Concurrent Logical Volume Manager (CLVM) instead of VSDs, they must enable HACMP functionality by setting the environment variable PGSD_SUBSYS to grpsvcs. Oracle will not allow VSDs and Concurrent Logical Volumes (CLVs) to be used on the same database. If PSSP services are being used, Oracle will report an error if the customer attempts to use CLVs. If HACMP services are used (i.e., PGSD_SUBSYS is set to grpsvcs), Oracle will report an error if the customer attempts to use VSDs. The PGSD_SUBSYS environment variable should be set in all the environments where Oracle is used, including the listener.ora file []
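For example, in the oracle user's .profile (and in any environment that starts the listener) :

  PGSD_SUBSYS=grpsvcs
  export PGSD_SUBSYS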
G. INSTALLING ORACLE
SET UP THE ORACLE ENVIRONMENT

Use smit group (or smitty group) to create the following groups :
  dba       Primary group for the oracle user.
  hagsuser  For high availability (if not already created).
  oinstall  The ORAInventory group. This group is not mandatory; if it exists, it will be the group owner of the Oracle code files. It is a secondary group for the oracle user.

Use smit user to create the user :
  oracle    Owner of the database. The oracle user must have dba as primary group, and oinstall and hagsuser as secondary groups.

Also add the secondary group hagsuser to the root account.

Verification : check that the file /etc/group contains lines such as these (the numbers may differ) :
  hagsuser:!:203:oracle,root
  dba:!:204:oracle
  oinstall:!:205:oracle

Create the file system for the Oracle code. This 4 GB file system is generally located on an internal SCSI disk; the external SSA disks will store the datafiles.
To list the internal disks : lscfg | grep -i scsi | grep hdisk
Suppose we have hdisk1, an internal free SCSI disk of 18.2 GB.
Create a volume group called oraclevg :
  mkvg -f -y'oraclevg' -s'16' hdisk1
Create a 4 GB file system /oracle in the previous volume group (large file enabled) :
  crfs -v jfs -a bf=true -g'oraclevg' -a size='8388608' -m'/oracle' -A'yes' -p'rw' -t'no' -a nbpi='8192' -a ag='64'
  mount /oracle
  chown oracle:dba /oracle
Verify that the /etc/oraInst.loc and /etc/oratab files are writable by the oracle account. After installation, these two files (created by root.sh) will contain information that briefly describes the Oracle software installations and databases on the server. These commands verify that the oracle account has the appropriate permissions :
touch /etc/oraInst.loc /etc/oratab /etc/srvConfig.loc chown oracle:dba /etc/oraInst.loc /etc/oratab /etc/srvConfig.loc chmod 644 /etc/oraInst.loc /etc/oratab /etc/srvConfig.loc
Edit the /etc/srvConfig.loc file on each node, with a single line :
  srvconfig_loc=/dev/rvsd_srvconfig

In SQL*Plus, if "connect / as sysdba" fails with "insufficient privileges" (even when you are the oracle user with the dba group), execute the following command as root : touch /etc/passwd
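A minimal sketch, run as root on each node (the raw device name matches the table of raw devices given earlier; adapt it to your configuration) :

  echo "srvconfig_loc=/dev/rvsd_srvconfig" > /etc/srvConfig.loc
  chown oracle:dba /etc/srvConfig.loc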
CLUSTER MANAGER SOFTWARE If the cluster manager software is correctly set up, the Oracle RAC option should be automatically preselected in Oracle Universal Installer.
UNZIP THE DISTRIBUTION OR MOUNT THE CDROM
If you have downloaded the distribution, you should have 5 CDs. Note: if you copy the five CDs to the hard disk, use cp -a to maintain directories and links.
At the OUI Welcome screen, click Next. The installer will ask you to run /tmp/orainstRoot.sh in a separate window. This script creates the file /etc/oraInst.loc, which is used by OUI for the list of installed products. OUI also asks you to run rootpre.sh as root in another window. Remember that hagsuser should be a secondary group of the oracle user.
A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. Oracle user should have write permissions on this directory. The Oracle Inventory definition can be found in the file /etc/oraInst.loc. Click OK.
Verify the UNIX group name of the user which controls the installation of the Oracle9i software. If the pre-installation steps have not been completed successfully, you are asked to run /tmp/orainstRoot.sh; failing that, the Oracle Inventory files, among others, are written to the ORACLE_HOME directory. This screen only appears the first time Oracle9i products are installed on the system. Click Next.
The File Location window will appear. Do not change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next.
Select the Products to install. In this example, select the Oracle9i Server then click Next.
Select the installation type. Choose the Enterprise Edition option. The selection on this screen refers to the installation operation, not the database configuration. The next screen allows for a customized database configuration to be chosen. Click Next.
Select the configuration type. In this example, Customized configuration is selected, so a customized database will be created. Click Next.
Select the other nodes onto which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running. Click Next.
Identify the raw partition into which the Oracle9i Real Application Clusters (RAC) configuration information will be written. It is recommended that this raw partition be at least 100 MB. Enter the name of the raw device previously created; in our case, /dev/rvsd_srvconfig.
An option to Upgrade or Migrate an existing database is presented. Do NOT select the radio button: the Oracle Migration utility is not able to upgrade a RAC database, and will fail if asked to do so.

The Summary screen will be presented. Confirm that the RAC database software will be installed, then click Install. The OUI will install the Oracle9i software onto the local node, and then copy it to the other selected nodes.
During the installation, you will be prompted for the location of the second disk, and so on up to disk #5. When the installation progress bar reaches 100%, the installer continues to work: it copies the files to the other node, without informing the user of this copy.
From the root command prompt, execute /oracle/product/9.0.1/root.sh. This script must be executed on both nodes. The results of this are shown below :
gsd (Oracle Global Services Daemon) needs to be running on each node under oracle privileges; start gsd by executing gsd.sh. gsd is used by OEM and by global tools such as srvctl to execute commands on all the nodes at the same time. The configuration device should be pointed to by SRVM_SHARED_CONFIG (for example /var/opt/oracle/srvConfig.loc).
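A sketch of starting gsd under the oracle account (the SRVM_SHARED_CONFIG value follows the example above; the location of gsd.sh may differ in your installation) :

  su - oracle
  SRVM_SHARED_CONFIG=/var/opt/oracle/srvConfig.loc ; export SRVM_SHARED_CONFIG
  $ORACLE_HOME/bin/gsd.sh &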
MANUALLY
1) on one node : srvctl add db -p <db_name> -o <oracle_home>
2) for each instance of the database : srvctl add instance -p <db_name> -i <SID> -n <node_name> (it is advised to set the SID to db_name plus the instance number)
3) on each node, check the configuration : srvctl config -p <db_name>
4) on each node, create (or update) the oratab file (in the /etc directory) with the following line : <db_name>:<$ORACLE_HOME>:N
5) create the udump, cdump and bdump directories.
6) set the SID in the .profile of the oracle user.
7) create the init.ora file for each node. Parameters which are local to an instance can be prefixed with the instance name, for example thread, instance_name, rollback_segments. It is possible to create an spfile with create spfile='/dev/rawspfile' from pfile='//init.ora'.
8) create the database creation script. You can use the $ORACLE_HOME/srvm/clustdb.sql script as a sample (also presented in appendix F). The script must be adapted to your environment. And be careful, there are some errors in this file.
9) you can create a password file : under $ORACLE_HOME, execute orapwd file=orapw password=###.
10) as sysdba, execute the script.
A worked example of steps 1 to 3 follows.
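A worked example of steps 1 to 3, with hypothetical names (database RAC, instances RAC1 and RAC2 on nodes rac1 and rac2, following the SID naming advice of step 2) :

  srvctl add db -p RAC -o /oracle/product/9.0.1
  srvctl add instance -p RAC -i RAC1 -n rac1
  srvctl add instance -p RAC -i RAC2 -n rac2
  srvctl config -p RAC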
WITH THE DATABASE CONFIGURATION ASSISTANT (DBCA)
Create a parameter file $HOME/dbca_raw_config which will be used by DBCA to map the typical tablespaces to raw devices. This parameter file needs to be pointed to by the environment variable DBCA_RAW_CONFIG and makes it easier to create the database (see the example in appendix; a launch sketch follows below).
First use NETCA to create the listener configuration on each node.
Start DBCA. Do not use a pre-configured database (as the DDL statement used to create the TEMP tablespace does not work against a raw device); instead have DBCA create a customized database (create database). DBCA can also be used to cleanly remove instances, or to add a new instance to an existing multi-instance database.
Optionally back up the spfile :
SQL> create pfile='?/dbs/initXXX.ora' from spfile='/dev/rvsd_3'
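A launch sketch for DBCA with the raw device map (the file content is shown in appendix; a working X display is assumed) :

  DBCA_RAW_CONFIG=$HOME/dbca_raw_config ; export DBCA_RAW_CONFIG
  dbca &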
J. POST INSTALLATION
STARTUP/SHUTDOWN SERVICES
GSD must be running for srvctl to be able to run each command on all the nodes.
$ srvctl command -h    -> help

To start/stop/check all instances and listeners (as oracle, after su - oracle) :
  srvctl start|stop|status -p <db_name>
  srvctl start|stop|status -p <db_name> -i <instance_name>    (one instance only)
To start/stop the listeners only :   srvctl start|stop -p <db_name> -s lsnr
To start/stop the instances only :   srvctl start|stop -p <db_name> -s inst
To list the instances :              srvctl config -p <db_name>
To get environment information :     srvctl get env -p <db_name>
To set an env. variable globally :   srvctl set env -p <db_name> LANG=en

To start/stop the Oracle Intelligent Agent :  agentctl start|stop
To start/stop the Oracle Management Server :  oemctl start|stop oms
To check the Oracle Management Server :       oemctl status oms
To start the OEM console :                    oemapp console
To start DBA Studio :                         oemapp dbastudio
To start the Apache server :                  $ORACLE_HOME/Apache/Apache/bin/apachectl start
CONFIGURE LISTENER.ORA / SQLNET.ORA / TNSNAMES.ORA
Use netca and/or netmgr to check the configuration of the listener and to configure Oracle Net services. By default, the Net service name may be equal to the global database name (see the instance parameter service_names).
CONFIGURE ORACLE ENTERPRISE MANAGER Use the Java assistant emca to configure the Oracle Management Server, then start it : oemctl start oms Enter a username/password with DBA privileges to connect to the instance where the repository is to be set up. The first administrator for the domain will be sysman/oem_temp. For the console to be able to give a single system image of the cluster database, it is necessary to start the Intelligent Agent on each node and to discover the nodes.
Start the Oracle Intelligent Agent on each node.
Add the following line for the Oracle Intelligent Agent to /etc/snmpd.conf (this is specific to AIX) :
  smux 0.0 129.1.11.106    # OEM agent (IP address of the current node)
The SNMP master agent needs to be restarted :
  # stopsrc -s snmpd
  # startsrc -s snmpd
Then start the OEM agent :
  $ agentctl start
Check /etc/oratab. The file should contain a reference to the database name, not to the instance name. The last field should always be N in a RAC environment, to avoid two instances of the same name being started.
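For example, for a database named RAC (the same entry appears in the srvconfig export shown in appendix) :

  RAC:/oracle/product/9.0.1:N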
Register the database with srvctl (this should not be necessary if the database was created by DBCA) :
  srvctl add db -p <db_name> -o <ORACLE_HOME path>
  srvctl add instance -p <db_name> -i <SID1> -n <node1>
  srvctl add instance -p <db_name> -i <SID2> -n <node2>
Depending on the disk model, the latest level of microcode to download and install on all the SSA disks of the cluster is (release of new disk drive microcode for AIX 4.3.3, November 1st, 2001) :
  DCHC(CUSM) -> 9911, DFHC(CUSJ) -> 9911
  DGHC(CUSJ) -> 9911, DGHC(CUSM) -> 9911
  DRVC(CUSH) -> 0023
  DRHC(CUSS) -> 0012
  DMVC(CUSN) -> 0070
From the site http://www.storage.ibm.com/hardsoft/products/ssa, download the file ssacode433.tar onto your system, in your temporary directory.

How to apply a new disk drive microcode
A.
1. Login as root
2. cd to your temporary directory
3. Type tar -xvf ssacode433.tar
4. Run smit install
5. Select Install & Update Software
6. Select Install & Update from ALL Available Software
7. Use the directory into which you saved and unpacked the ssacode433.tar file as the install device
8. Select all filesets in this directory for install
9. Execute the command
B.
1. At the prompt, execute the command diag
2. Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
3. SSA Service Aids
4. Display/Download Disk Drive Microcode
5. Download Microcode to all SSA Physical Disk Drives
6. Continue with the microcode installation
7. No (because the software is in /etc/microcode)
8. Do you want to continue? Yes...
This will upgrade the microcode on all the disks with a microcode level lower than the one installed in /etc/microcode during phase A.
To install the new adapter microcode, download the file devices.pci.14109100.ucode (4.2.1.6), and install it as described above.
# To be executed on the secondary node, which owns ssa_disk_21, ssa_disk_22 and ora2vg
# Creation of the volume group, with two disks
mkvg -f -y $secondary_oravg -s'16' ssa_disk_21 ssa_disk_22
# Creation of the logical volumes (raw devices)
# The number is the size of the LV (number of 16MB physical partitions)
mklv -y'ctrl2lv'      $secondary_oravg  1 ssa_disk_21    # 16 MB
mklv -y'undo2lv'      $secondary_oravg 32 ssa_disk_21    # 512 MB
mklv -y'redolog2_1lv' $secondary_oravg  8 ssa_disk_21    # 128 MB
mklv -y'redolog2_2lv' $secondary_oravg  8 ssa_disk_21    # 128 MB
mklv -y'oemrepolv'    $secondary_oravg 10 ssa_disk_21    # 160 MB
mklv -y'indexlv'      $secondary_oravg  5 ssa_disk_21    # 80 MB
mklv -y'examplelv'    $secondary_oravg 10 ssa_disk_21    # 160 MB
mklv -y'spfilelv'     $secondary_oravg  1 ssa_disk_21    # 16 MB
mklv -y'srvconfiglv'  $secondary_oravg  7 ssa_disk_21    # 112 MB
mklv -y'cmwlitelv'    $secondary_oravg  7 ssa_disk_21    # 112 MB
# To be executed on the primary node, to recognize ora2vg on the primary node
varyoffvg $primary_oravg
redefinevg -d ssa_disk_21 $secondary_oravg
varyonvg $primary_oravg
chown oracle.dba /dev/*vsd*
chmod go+rw /dev/*vsd*

# To be executed on the secondary node, to recognize ora1vg on the secondary node
varyoffvg $secondary_oravg
redefinevg -d ssa_disk_11 $primary_oravg
varyonvg $secondary_oravg
chown oracle.dba /dev/*vsd*
chmod go+rw /dev/*vsd*
# VSD node database information vsdnode_proc "$primary_node_num $secondary_node_num" 'en0' '64' '256' '256' '48' '4096' '131072' '4' '61440' "RAC_${primary_node_num}_${secondary_node_num}"
# VSD global Volume Group information vsdvg_proc -g vsd_$primary_oravg $primary_oravg $primary_node_name $secondary_node_name vsdvg_proc -g vsd_$secondary_oravg $secondary_oravg $secondary_node_name $primary_node_name # Define a virtual shared disk (logical volume) defvsd_proc systemlv vsd_$primary_oravg vsd_system defvsd_proc templv vsd_$primary_oravg vsd_temp defvsd_proc undolv vsd_$primary_oravg vsd_undo defvsd_proc log1lv vsd_$primary_oravg vsd_log1 defvsd_proc ctrl1lv vsd_$primary_oravg vsd_ctrl1 defvsd_proc userslv vsd_$primary_oravg vsd_users defvsd_proc toolslv vsd_$primary_oravg vsd_tools defvsd_proc defvsd_proc defvsd_proc defvsd_proc defvsd_proc defvsd_proc defvsd_proc log2lv vsd_$secondary_oravg vsd_log2 ctrl2lv vsd_$secondary_oravg vsd_ctrl2 oemrepolv vsd_$secondary_oravg vsd_oemrepo indexlv vsd_$secondary_oravg vsd_index examplelv vsd_$secondary_oravg vsd_example spfilelv vsd_$secondary_oravg vsd_spfile srvmconfiglv vsd_$secondary_oravg vsd_srvmconfig
# Configure the VSDs (one call per VSD defined above)
vsdconfig_proc -v vsd_system     -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_temp       -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_undo       -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_log1       -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_ctrl1      -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_users      -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_tools      -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_log2       -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_ctrl2      -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_oemrepo    -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_index      -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_example    -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_spfile     -n $primary_node_num $secondary_node_num
vsdconfig_proc -v vsd_srvmconfig -n $primary_node_num $secondary_node_num
# Start a single VSD on the cluster #vsdstart_proc -v '<vsd_name>' -n $primary_node_num $secondary_node_num # Start all VSD on all nodes vsdstart_proc -v 'All_VSDs' -n $primary_node_num $secondary_node_num
Service Interface tty_palavas:
        IP address:        /dev/tty3
        Hardware Address:
        Network:           rs232_net
        Attribute:         serial
        Aliased Address?:  False
Service Interface tty_palavas has no boot interfaces.
Service Interface tty_palavas has no standby interfaces.

Breakdown of network connections:
Connections to network giga_net
        Node gard is connected to network giga_net by these interfaces:
                gard
        Node palavas is connected to network giga_net by these interfaces:
                palavas
Connections to network inter_net
        Node gard is connected to network inter_net by these interfaces:
                gard_stby
        Node palavas is connected to network inter_net by these interfaces:
                palavas_stby
Connections to network rs232_net
        Node gard is connected to network rs232_net by these interfaces:
                tty_gard
        Node palavas is connected to network rs232_net by these interfaces:
                tty_palavas
fsck sequential
opsvg
Fast Connect Services
Shared Tape Resources
Application Servers
Highly Available Communication Links
Miscellaneous Data
Automatically Import Volume Groups          false
Inactive Takeover                           false
Cascading Without Fallback                  false
9333 Disk Fencing                           false
SSA Disk Fencing                            false
Filesystems mounted before IP configured    false

Run Time Parameters:
Node Name                       palavas
Debug Level                     high
Host uses NIS or Name Server    false
Format for hacmp.out            Standard

Node Name                       gard
Debug Level                     high
Host uses NIS or Name Server    false
Format for hacmp.out            Standard
[OFA directory tree: $ORACLE_BASE containing /admin/<SID>/ and /product/9.0.1/ subdirectories; configuration files under /etc]
Rem $Header: clustdb.sql 08-may-2001.10:10:37 rajayar Exp $
Rem
Rem clustdb.sql
Rem
Rem Copyright (c) Oracle Corporation 1999, 2000. All Rights Reserved.
Rem
Rem NAME
Rem   clustdb.sql - Example database creation script
Rem DESCRIPTION
Rem   Creates a RAC database on Unix
Rem NOTES
Rem ******************************************************************
Rem ** UNIX clustdb.SQL Version
Rem **
Rem ** Please update this file to reflect the correct values for
Rem ** 1) The init.ora file in startup nomount pfile=
Rem ** 2) The sysdba account and password if not using connect / as sysdba
Rem **    (Note: The connect / as sysdba statement occurs multiple
Rem **    times in the sql script below)
Rem ** 3) The location of the sql scripts, to reflect your ORACLE_HOME.
Rem ** 4) The raw partition names (symbolic link name) for log and data files
Rem **    Note: This script will add two additional log files for the second
Rem **    node. If your cluster will contain more nodes (instances) you
Rem **    must create and enable the additional logfiles for those nodes.
Rem ** 5) The name of the database in the "CREATE DATABASE" statement.
Rem ** 6) The character and national character sets for the database, see
Rem **    the "CREATE DATABASE" statement.
Rem ** 7) The size of tablespaces, if you would like to increase or
Rem **    decrease the default size.
Rem ******************************************************************
spool createdb.log set echo on connect / as sysdba startup nomount pfile="%ORACLE_BASE%/admin/clustdb/pfile/init.ora" CREATE DATABASE clustdb CONTROLFILE REUSE MAXLOGMEMBERS 5 MAXLOGHISTORY 100 MAXDATAFILES 254 MAXINSTANCES 32 MAXLOGFILES 64 DATAFILE '/dev/vx/rdsk/oracle_dg/clustdb_raw_system_400m' SIZE 325M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED UNDO TABLESPACE "UNDOTBS1" DATAFILE '/dev/vx/rdsk/oracle_dg/clustdb_raw_undotbs1_290m' SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED LOGFILE GROUP 1 ('/dev/vx/rdsk/oracle_dg/clustdb_raw_log11_120m') REUSE, GROUP 2 ('/dev/vx/rdsk/oracle_dg/clustdb_raw_log12_120m') REUSE
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16;
spool off

spool createdb1.log
set echo on

REM *** WHEN USING UNDO_MANAGEMENT=MANUAL, DELETE THE UNDO TABLESPACE ...
REM LINE FROM THE CREATE DATABASE COMMAND AND UNCOMMENT THE FOLLOWING
REM SQL STATEMENT FOR RBS TABLESPACE.
REM CREATE TABLESPACE RBS DATAFILE '/dev/vx/rdsk/oracle_dg/clustdb_rbs1'
REM   SIZE 520M REUSE MINIMUM EXTENT 512K;

REM ********** TABLESPACE FOR USER **********
CREATE TABLESPACE USERS DATAFILE
  '/dev/vx/rdsk/oracle_dg/clustdb_raw_users_120m' SIZE 25M REUSE
  AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL;

REM ********** TABLESPACE FOR TEMPORARY **********
CREATE TABLESPACE TEMP DATAFILE
  '/dev/vx/rdsk/oracle_dg/clustdb_raw_temp_100m' SIZE 40M REUSE
  AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
  TEMPORARY EXTENT MANAGEMENT LOCAL;

REM ********** TABLESPACE FOR TOOLS **********
CREATE TABLESPACE TOOLS DATAFILE
  '/dev/vx/rdsk/oracle_dg/clustdb_raw_temp_12m' SIZE 10M REUSE
  AUTOEXTEND ON NEXT 320K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL;

REM ********** TABLESPACE FOR INDEX **********
CREATE TABLESPACE INDX DATAFILE
  '/dev/vx/rdsk/oracle_dg/clustdb_raw_indx_70m' SIZE 25M REUSE
  AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL;
REM ********** UNDO TABLESPACE FOR SECOND INSTANCE **********
CREATE TABLESPACE UNDOTBS2 DATAFILE
  '/dev/vx/rdsk/oracle_dg/clustdb_raw_undotbs2_290m' SIZE 200M REUSE
  AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL;

REM **** comment out rollback_segments for automatic undo management ****
REM **** Rollback segments for our 2 nodes ***************
REM spool psrbs.log;
REM connect / as sysdba
REM create rollback segment rbs1_1 storage(initial 200K next 200K) tablespace RBS;
REM alter rollback segment rbs1_1 online;
REM create rollback segment rbs1_2 storage(initial 200K next 200K) tablespace RBS;
REM alter rollback segment rbs1_2 online;
REM create rollback segment rbs2_1 storage(initial 200K next 200K) tablespace RBS;
REM alter rollback segment rbs2_1 online;
REM create rollback segment rbs2_2 storage(initial 200K next 200K) tablespace RBS;
REM alter rollback segment rbs2_2 online;
REM spool off
REM **** End Rollback segments for our 2 nodes ***************
alter user sys temporary tablespace TEMP;

REM **** Various SQL packages ***************
@%ORACLE_HOME%/rdbms/admin/catalog.sql;
@%ORACLE_HOME%/rdbms/admin/catexp7.sql
@%ORACLE_HOME%/rdbms/admin/catproc.sql
@%ORACLE_HOME%/rdbms/admin/caths.sql
connect system/manager
@%ORACLE_HOME%/dbs/pupbld.sql
REM **** End various SQL packages ***************

REM ***** Scott's tables ***************
connect / as sysdba
@%ORACLE_HOME%/rdbms/admin/scott.sql
REM ***** End Scott's tables ***************

REM **** Demo support ***************
connect / as sysdba
@%ORACLE_HOME%/rdbms/admin/demo.sql
connect / as sysdba
spool off
REM **** End Demo ***************

REM **** Redo logfiles for the second instance ***************
spool clustlog.log;
connect / as sysdba
alter database add logfile thread 2
  group 3 '/dev/vx/rdsk/oracle_dg/clustdb_raw_log21_120m' reuse,
  group 4 '/dev/vx/rdsk/oracle_dg/clustdb_raw_log22_120m' reuse;
REM **** Enable the new logfile for thread 2
alter database enable public thread 2;
spool off
REM **** End Logfiles for the second instance ***************

REM **** Cluster Database SQL support ***************
spool catclust.log;
connect / as sysdba
@%ORACLE_HOME%/rdbms/admin/catclust.sql
spool off
REM **** End Cluster Database SQL support ***************

connect / as sysdba
alter user system default tablespace TOOLS;
alter user system temporary tablespace TEMP;

REM **** Auto extend is turned off *******
REM For undo_management=MANUAL, uncomment the next line and comment the two
REM lines after that.
REM alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_rbs_580m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_undotbs1_290m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_undotbs2_290m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_system_400m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_temp_20m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_users_120m' autoextend OFF;
alter database datafile '/dev/vx/rdsk/oracle_dg/clustdb_raw_indx_70m' autoextend OFF;
exit;
init.ora : this file is provided with the Oracle9i software (in the same directory as clustdb.sql).
#
# $Header: init.ora 04-may-2001.17:38:43 rajayar Exp $
#
# Copyright (c) 1991, 2001, Oracle Corporation. All rights reserved.
#
##############################################################################
# Example INIT.ORA file
#
# This file is provided by Oracle Corporation to help you customize
# your RDBMS installation for your site. Important system parameters
# are discussed, and example settings given.
#
# Some parameter settings are generic to any size installation.
# For parameters that require different values in different size
# installations, three scenarios have been provided: SMALL, MEDIUM
# and LARGE. Any parameter that needs to be tuned according to
# installation size will have three settings, each one commented
# according to installation size.
#
# Use the following table to approximate the SGA size needed for the
# three scenarios provided in this file:
#
#                   -------Installation/Database Size-------
#                      SMALL        MEDIUM        LARGE
#  Block       2K      4500K        6800K        17000K
#  Size        4K      5500K        8800K        21000K
#
# To set up a database that multiple instances will be using, use the
# same file for all instances. Place all instance-specific parameters
# at the end of the file using the <sid>.<parameter_name> = <value> syntax.
# This way, when you change a public parameter, it will automatically
# change on all instances. This is necessary, since all instances must
# run with the same value for many parameters. For example, if you choose
# to use private rollback segments, these must be specified differently
# for each instance, but since all gc_* parameters must be the same on
# all instances, they should be in one file.
#
# INSTRUCTIONS: Edit this file and the other INIT files it calls for
# your site, either by using the values provided here or by providing
# your own.
###############################################################################

# replace "clustdb" with your database name
db_name = clustdb
compatible = 9.0.0
db_files = 1024                          # INITIAL
db_block_size = 8192                     # datawarehouse
# db_block_size = 4096                   # transaction processing
open_cursors = 300                       # INITIAL
# sort_area_size = 524288                # transaction processing
sort_area_size = 1048576                 # datawarehouse
large_pool_size = 1048576                # transaction processing, datawarehouse
db_cache_size = 50331648                 # datawarehouse, transaction processing
java_pool_size = 67108864                # datawarehouse, transaction processing
# db_block_buffers = 200                 # INITIAL

parallel_max_servers = 5
log_buffer = 8192                        # INITIAL
# audit_trail = true                     # if you want auditing
# timed_statistics = true                # if you want timed statistics
# max_dump_file_size = 10240             # limit trace file size to 10 K each
# Uncommenting the lines below will cause automatic archiving if archiving
# has been enabled using ALTER DATABASE ARCHIVELOG.
# log_archive_start = true
# log_archive_dest = %ORACLE_HOME%/admin/clustdb/arch
# log_archive_format = "%%ORACLE_SID%%T%TS%S.ARC"
# If using private rollback segments, place lines of the following
# form at the end of this file:
#   <sid>.rollback_segments = (name1, name2)

# If using public rollback segments, define how many rollback segments
# each instance will pick up, using the formula
#   # of rollback segments = transactions / transactions_per_rollback_segment
# In this example each instance will grab 40/10 = 4:
# transactions = 40
# transactions_per_rollback_segment = 10

# Global Naming -- enforce that a dblink has same name as the db it connects to
global_names = TRUE

# Edit and uncomment the following line to provide the suffix that will be
# appended to the db_name parameter (separated with a dot) and stored as the
# global database name when a database is created. If your site uses
# Internet Domain names for e-mail, then the part of your e-mail address after
# the '@' is a good candidate for this parameter value.
# global database name is db_name.db_domain

# db_domain = us.acme.com

# Uncomment the following line if you wish to enable the Oracle Trace product
# to trace server activity. This enables scheduling of server collections
# from the Oracle Enterprise Manager Console.
# Also, if the oracle_trace_collection_name parameter is non-null,
# every session will write to the named collection, as well as enabling you
# to schedule future collections from the console.
# oracle_trace_enable = TRUE

# define directories to store trace and alert files
background_dump_dest=%ORACLE_HOME%/admin/clustdb/bdump
user_dump_dest=%ORACLE_HOME%/admin/clustdb/udump
db_block_size = 4096
remote_login_passwordfile = exclusive
# text_enable = TRUE

# The following parameters are needed for the Advanced Replication Option
job_queue_processes = 2
# job_queue_processes = 4               # datawarehouse
# job_queue_interval = 10
# job_queue_keep_connections = false
distributed_transactions = 5
open_links = 4

# The following parameters are instance-specific parameters that are
# specified for two instances named clustdb1 and clustdb2
user_dump_dest=%ORACLE_HOME%/admin/clustdb/udump
undo_management=AUTO          # For automatic undo management
                              # = MANUAL for manual/RBS undo management
cluster_database = true
cluster_database_instances = 2
remote_listener = LISTENERS_CLUSTDB

# First instance specific parameters
clustdb1.thread = 1
clustdb1.instance_name = clustdb1
clustdb1.instance_number = 1
clustdb1.local_listener = listener_clustdb1
# Comment out clustdb1.undo_tablespace and uncomment clustdb1.rollback_segments
# when undo_management=MANUAL
clustdb1.undo_tablespace = UNDOTBS1
# clustdb1.rollback_segments = (rbs1_1,rbs1_2)

# Second instance specific parameters
clustdb2.thread = 2
clustdb2.instance_name = clustdb2
clustdb2.instance_number = 2
clustdb2.local_listener = listener_clustdb2
# Comment out clustdb2.undo_tablespace and uncomment clustdb2.rollback_segments
# when undo_management=MANUAL
clustdb2.undo_tablespace = UNDOTBS2
# clustdb2.rollback_segments = (rbs2_1,rbs2_2)
User equivalence (both files are identical; hosts.equiv is not sufficient for root)

/etc/hosts.equiv and $HOME/.rhosts :
rac1
rac2
int-rac1
int-rac2
Raw devices parameter file for the Database Configuration Assistant $HOME/dbca_raw_config
system=/dev/rvsd_system
temp=/dev/rvsd_temp
undo1=/dev/rvsd_undo1
redo1_1=/dev/rvsd_redolog1_1
redo1_2=/dev/rvsd_redolog1_2
control1=/dev/rvsd_ctrl1
users=/dev/rvsd_users
tools=/dev/rvsd_tools
drsys=/dev/rvsd_drsys
undo2=/dev/rvsd_undo2
redo2_1=/dev/rvsd_redolog2_1
redo2_2=/dev/rvsd_redolog2_2
control2=/dev/rvsd_ctrl2
oemrepo=/dev/rvsd_oemrepo
index=/dev/rvsd_index
example=/dev/rvsd_example
spfile=/dev/rvsd_spfile
srvconfig=/dev/rvsd_srvconfig
cmwlite=/dev/rvsd_cmwlite
      (SERVICE_NAME = RAC)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = PRECONNECT)
        (RETRIES = 20)
        (DELAY = 60)
      )
    )
  )

INST1_HTTP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = SHARED)
      (SERVICE_NAME = MODOSE)
      (PRESENTATION = http://HRService)
    )
  )
Example listener.ora implementing TAF

sqlnet.ora :
NAMES.DIRECTORY_PATH= (TNSNAMES)

Server configuration file exported by srvconfig (srvconfig.txt) :
RAC:/oracle/product/9.0.1:N
RAC.spfile = /dev/rvsd_3
RAC.node_list = "int-rac1,int-rac2"
RAC.inst_oracle_sid = (RAC1,RAC2)
Note #2064876.102 : How to setup High Availability Group Services (HAGS) on IBM AIX/RS6000.
Article-ID:     <Note:2064876.102>
Circulation:    PUBLISHED (EXTERNAL)
Folder:         server.OPS.Parallelserver
Topic:          IBM RS6000 and SP
Title:          How to setup High Availability Group Services (HAGS) on IBM AIX/RS6000
Document-Type:  BULLETIN
Impact:         LOW
Skill-Level:    CASUAL
Server-Version: 08.01.XX.XX.XX
Updated-Date:   04-FEB-2002 08:21:25
References:     -
Shared-Refs:    -
Authors:        BLEVE.US
Attachments:    NONE
Content-Type:   TEXT/PLAIN
Keywords:       LMON; OPS; PARALLEL; SERVER; SUBCOMP-OPS;
Products:       236/RDBMS (08.01.XX.XX.XX);
Platforms:      319 (4.3);

Purpose
=======
This article gives quick reference instructions on how to configure High Availability Group Services (HAGS) on IBM AIX RS6000 for Oracle 8.1.X.

Scope and Application
=====================
These instructions are helpful to any customer using Oracle on IBM AIX/RS6000 on which HACMP is installed.

How to Configure High Availability Group Services (HAGS)
=========================================================
In order to configure High Availability Group Services (HAGS), you need to be connected as root. Do the following on all nodes that form the cluster:

1. Create the "hagsuser" group and place "oracle" into the "hagsuser" group:

   Verify the group does not exist:
     # grep hagsuser /etc/group
   If this returns nothing, do the following:
     # smitty groups
   Select "Add a Group" and fill in the following:
You can take the defaults for the other settings. Also note that after the group is created, you will have to log out and log back in as "oracle" to be sure "oracle" is part of the "hagsuser" group.

2. Change the permissions on the "cldomain" executable:

   # chmod a+x /usr/sbin/cluster/utilities/cldomain

3. Change the group to "hagsuser" for the "grpsvcsdsocket.<domain>" socket:

   # chgrp hagsuser /var/ha/soc/grpsvcsdsocket.`/usr/sbin/cluster/utilities/cldomain`

4. Change the group permissions for the "grpsvcsdsocket.<domain>" socket:

   # chmod g+w /var/ha/soc/grpsvcsdsocket.`/usr/sbin/cluster/utilities/cldomain`

The HAGS socket needs to be writeable by "oracle", and the "cldomain" executable needs to be executable by "oracle". By configuring the group and permissions for the "grpsvcsdsocket.<domain>" file, the instance will be able to communicate with HAGS and the instance will mount.

References
==========
Oracle Installation Guide for AIX RS6000, release 8.1.5.

Search Words
============
OPS HAGS RS6000
************************************************************* This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article. ************************************************************* PURPOSE ------This article helps to resolve problems with Oracle Parallel Server startup related to HACMP configuration SCOPE & APPLICATION ------------------How to setup HACMP cluster interconnect adapter
----------------------------Oracle Parallel Server software is successfully installed. The first OPS instance starts without errors. Trying to start a second OPS instance on another cluster node fails with ORA-600 [KCCSBCK_FIRST]. $ORACLE_HOME/bin/lsnodes will list all cluster nodes. /usr/sbin/cluster/diag/clverify doesn't show any errors. Check HACMP interconnect network adapter configuration with /usr/sbin/cluster/utilities/cllsif Adapter Address pfpdb3 11.2.18.24 pfpdb4 11.2.18.3 Type service service Network pfpdb3 pfpdb4 Net Type ether ether Attribute private private Node pfpdb3 pfpdb4 IP
The network parameter doesn't match. It has to be identical for both adapters. cllsif on a working configuration should look like this: