Oracle RAC Cluster
Table of contents
Executive summary
High availability for Oracle applications
Configuration tested
Why the HP BladeSystem c-Class enclosure and ProLiant BL460c blades?
Why the HP 4400 Enterprise Virtual Array?
Why Oracle Clusterware and Oracle ASM Cluster File System software?
Oracle Clusterware
Steps for creating a highly available environment for Oracle applications
   Create virtual disks for server boot and root file systems
   Use host bus adapter BIOS to record HBA WWIDs
   Utilize HP Command View EVA to associate WWIDs with Vdisks
   Install operating system
   Create virtual disks to utilize for ASM disk storage
   Check Oracle prerequisites
   Modify multipath configuration to prevent blacklist of new virtual disks
   Create partitions and ASM disks
   Install Oracle Clusterware
   Create ASM cluster file system
   Install application in the cluster file system
Steps for placing an application under the protection of the Oracle Clusterware
   Create and register an application VIP
   Create an action program
   Create and register an application
   Commands to monitor, start, stop, service
Appendix A: Oracle HTTPD Server action program
For more information
Executive summary
This document describes a set of best practices for Oracle applications high availability (HA) on Linux
using HP BladeSystem, Oracle Clusterware, and the Oracle ASM Cluster File System (ACFS). These
best practices provide the basis for a highly available failover clustered server environment for Oracle
applications. The final section of the document details installing the Oracle Web Server in an Oracle
Clusterware environment and provides an example of using this environment for a real-world
application.
Target audience: This document is intended to assist Oracle application administrators, HP pre-sales,
and HP partners. Readers should already be familiar with their Oracle applications products, Oracle
Clusterware, and Oracle ASM, including basic administration and installation.
Configuration tested
The diagram shown in Figure 1 illustrates the configuration tested for the preparation of this
document. The table below maps systems, names, HBA information, and virtual disks for this
example.
[Table: system names, HBA WWIDs, and virtual disk mappings for blade bays 4 and 5]
Figure 1. Tested configuration
Why the HP BladeSystem c-Class enclosure and ProLiant BL460c blades?
Figure 2. BladeSystem c7000 enclosure
As the world's most popular blade server, the HP ProLiant BL460c server blade sets the standard for
data center computing. Packing two processors, two hot-plug hard drives, up to 384 GB of memory,
and a dual-port FlexFabric adapter into a half-height blade, the BL460c gives IT managers the
performance and expandability they need for demanding data center applications. More information
on the BL460c server blade can be found at: http://www.hp.com/servers/bl460c.
Why the HP 4400 Enterprise Virtual Array?
The HP 4400 Enterprise Virtual Array (EVA4400) offers an easily deployed enterprise class virtual
storage array for midsized customers at an affordable price. More information on the EVA4400 can
be found at: http://www.hp.com/go/eva4400.
Figure 4. HP EVA4400
Oracle Clusterware
Oracle Clusterware can be used to protect any application (restarting or failing over the application
in the event of a failure), free of charge, if one or more of the following conditions are met:
- The server OS is supported by a valid Oracle Unbreakable Linux support contract.
- The product to be protected is either an Oracle product (e.g. Oracle applications, Siebel, Hyperion, Oracle Database EE, Oracle Database XE) or any third-party product that directly or indirectly stores data in an Oracle database.
- At least one of the servers in the cluster is licensed for Oracle Database (SE or EE).
A cluster is defined to include all the machines that share the same Oracle Cluster Registry (OCR) and
voting disk.
Steps for creating a highly available environment for Oracle applications
Create virtual disks for server boot and root file systems
You must first create the virtual disks that you intend to utilize for your boot from SAN environment. In
this example we utilized the Command View EVA (web interface) to create the virtual disks on the
EVA4400 SAN. In this example, four 100 GB virtual disks (RAID 0+1) were created, which is two
virtual disks for each server blade. You could use one for the boot partition (in which case it can be
smaller), and use another one for the root file system. In this example, we will place the boot and root
file systems on a single 100 GB virtual disk and utilize the second drive for possible future expansion.
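As a scriptable alternative to the Command View GUI, HP's Storage System Scripting Utility (SSSU) can create the same virtual disks. The lines below are only a sketch: the system name, Vdisk names, and disk group path are assumptions, and the parameter spellings should be verified against your SSSU version (EVA Vraid1 corresponds to the mirrored/striped RAID level used here).
SELECT SYSTEM EVA4400
ADD VDISK "\Virtual Disks\blade5-boot" DISK_GROUP="\Disk Groups\Default Disk Group" SIZE=100 REDUNDANCY=VRAID1
ADD VDISK "\Virtual Disks\blade5-expansion" DISK_GROUP="\Disk Groups\Default Disk Group" SIZE=100 REDUNDANCY=VRAID1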
Use host bus adapter BIOS to record HBA WWIDs
Note
Press Ctrl-Q during the boot process to enter the QLogic HBA BIOS. This
process can be observed via the iLO console for a server blade.
In Figure 5 you see two QMH2462 HBAs, each with two ports. Select the first adapter in slot 1.
Figure 6. Adapter settings for HBA in slot 1
Make sure the Host Bus Adapter (HBA) BIOS is enabled. Record the adapter port name (WWID) in a
safe location. The WWID in Figure 6 is 50060B00006AE2A4; this is the HBA in mezzanine slot 1 of
the BL460c server blade in bay 5.
Repeat for the second mezzanine HBA in bay 5, which is located in slot 2. We are not using the
second ports of the HBAs in slot 1 or slot 2; the goal is to ensure that we multipath across two distinct
HBAs for the highest availability.
Figure 7. Adapter settings for HBA in slot 2
Make sure this HBA BIOS is enabled as well, and record the second adapter port name (WWID) in a
safe location. The WWID in Figure 7 is 500110A000863260; this is the HBA in mezzanine slot 2 of
the BL460c server blade in bay 5.
Utilize HP Command View EVA to associate WWIDs with Vdisks
Figure 8. Initial Command View EVA screen after login
Figure 9 below shows the screen for adding host information.
Figure 10. EVA controller 2 WWN
The Controller WWN in this example is 50014380 025B7B8C for Controller 2 Port FP1, and
50014380 025B7B89 for Controller 1 Port FP2. We use different controllers for high availability.
In this example, Vdisk003 will be used for both the boot and root file systems of the server blade in
bay 5. A second virtual disk, Vdisk004, was created for future expansion.
Now present the virtual disks to the associated host as shown in Figure 11. The names used in this
example are blade4-mezzslot1, blade4-mezzslot2, blade5-mezzslot1, and blade5-mezzslot2.
Figure 11. Present Vdisk to hosts
Specify the LUN numbers for these virtual disks. In Figure 12, Vdisk003 is presented as LUN3 to both
HBAs of the server blade in bay 5.
Figure 12. Change LUN number for first disk (Vdisk003)
Repeat the process for the second virtual disk. In this example, present Vdisk004 as LUN4.
Note
Under Vdisk properties, be sure to click "Save changes" after making
changes so that they are properly saved.
Now go back to the blade server iLO console (still in the HBA BIOS) and scan the Fibre Channel
devices to view the controllers on the EVA.
The screen in Figure 13 below shows four SAN targets because there are two controllers with two
ports each.
The server blade in bay 5 has LUN3 assigned to Vdisk003 and LUN4 to Vdisk004. Enter 3 (LUN3)
as a boot device as shown in Figure 14.
Repeat the steps for the second HBA as shown in Figure 15, but use a different SAN target WWN
(controller 1 - 50014380025B7B89). A different controller is utilized for high availability. The LUN
will still be LUN3.
Figure 15. Assign boot device for second controller
Now reboot the server and press F9 during the system BIOS startup (Figure 16) to access the system
BIOS setup.
Figure 16. System options
Note
Make sure the HBA storage is listed first in the boot controller order, as shown in Figure 17. Local
disk storage is not utilized in this solution.
Figure 17. Boot controller order
Install operating system
Configure name resolution in /etc/hosts for the public, private, virtual, and SCAN addresses. The
public entries shown below are inferred from the eth1 addresses used later in the installation.
# Public
192.168.10.91 nodea.localdomain nodea
192.168.10.92 nodeb.localdomain nodeb
# Private
192.168.20.91 nodea-priv.localdomain nodea-priv
192.168.20.92 nodeb-priv.localdomain nodeb-priv
# Virtual
192.168.10.93 nodea-vip.localdomain nodea-vip
192.168.10.94 nodeb-vip.localdomain nodeb-vip
# SCAN
192.168.10.95 node-scan.localdomain node-scan
Note
The SCAN address should be defined on the DNS to round-robin between
the two public IPs.
The mpath option is necessary to perform a multipath installation of the OS. The vnc option makes it
easy to access the installation console via a VNC client; refer to Figure 18. This option allows you to
view the installation process via a VNC client and to use the iLO console interface to switch to
alternate terminal windows and verify the multipath configuration during the install. Use the iLO
interface to define a hot key for switching to alternate consoles (e.g. Ctrl-Alt-F1) as shown in Figure 19.
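As a sketch, these options are appended at the installer boot prompt; the VNC password shown is a placeholder:
boot: linux mpath vnc vncpassword=oracle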
Figure 19 shows how to program a remote console hot key.
Once you press Ctrl-T in your iLO session, Ctrl-Alt-F1 is sent to the console window, which allows
you to switch between alternate terminal windows during the installation. This is useful for checking
the multipath configuration that the installation intends to use.
Figure 20. Select the drives for installation
Press Ctrl-T and use the multipath -ll command to verify that the LUNs you expected are being
presented as the mpath0 and mpath1 devices. See Figure 21.
This solution uses mpath0 for /boot and the root file system, so mpath1 is left unchecked.
Figure 21. Verify usage of the correct LUN
The image in Figure 22 shows the chosen drive (mpath0).
Next, check the grub boot loader options as shown in Figure 23.
Make sure grub is installed in the master boot record as shown in Figure 24.
Customize the software installation now to add the multipath options. Check the Customize now
option as shown in Figure 25.
Add the device mapper multipath software package from the Base group and ensure that the
prerequisite Oracle software is installed. The Oracle Clusterware Installation Guide for Linux lists the
software prerequisites. It can be found at:
http://download.oracle.com/docs/cd/B28359_01/install.111/b28263/prelinux.htm#BABIHHFG
System Tools
X Window System
The following information should be set during the installation:
hostname: nodeb.localdomain
IP Address eth0: 192.168.20.92 (private address)
Default Gateway eth0: none
IP Address eth1: 192.168.10.92 (public address)
Default Gateway eth1: 192.168.10.10 (public address)
Installation of the operating system with multipath support should now be complete. Reboot the
system and verify a successful boot. Repeat the operating system installation using similar steps for
nodea (the blade in bay 4). Nodea will use LUN1 as its mpath0 device for the boot and root file
systems.
Create virtual disks to utilize for ASM disk storage
Create the virtual disks for ASM storage and present them starting at LUN5. This is necessary because
LUN1 is used for OS multipath on blade 4, LUN2 is used for future expansion on blade 4, LUN3 is
used for OS multipath on blade 5, and LUN4 is used for future expansion on blade 5.
In the example below, the ASM disk storage was prefixed with the name ASM. In Figure 27,
ASMVdisk001 through ASMVdisk005 are shown.
Present these virtual disks to all hosts in this cluster. These are the host definitions you created earlier
in Command View EVA (i.e. blade4-mezzslot1, blade4-mezzslot2, blade5-mezzslot1, blade5-mezzslot2).
Figure 28 shows the presentation of the first ASM disk.
Figure 28. ASM Vdisks presented to both HBAs on both blades.
To conclude the step of creating virtual disks for ASM storage, reboot blade 4 and blade 5 so that
the new virtual disks are detected.
Check Oracle prerequisites
The /etc/yum.conf file:
[main]
cachedir=/var/cache/yum
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
distroverpkg=redhat-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
# Default.
# installonly_limit = 3
Once the basic installation is complete, install the following packages while logged in as the root
user, including both the 64-bit and 32-bit versions of the prerequisite packages from the Enterprise
Linux 5 DVD.
# cd /media/cdrom/Server
# rpm -Uvh binutils-2.*
# rpm -Uvh compat-libstdc++-33*
# rpm -Uvh elfutils-libelf-0.*
# rpm -Uvh elfutils-libelf-devel-*
# rpm -Uvh gcc-4.*
# rpm -Uvh gcc-c++-4.*
# rpm -Uvh glibc-2.*
# rpm -Uvh glibc-common-2.*
# rpm -Uvh glibc-devel-2.*
# rpm -Uvh glibc-headers-2.*
# rpm -Uvh ksh-2*
# rpm -Uvh libaio-0.*
# rpm -Uvh libaio-devel-0.*
# rpm -Uvh libgcc-4.*
# rpm -Uvh libstdc++-4.*
# rpm -Uvh libstdc++-devel-4.*
# rpm -Uvh make-3.*
# rpm -Uvh sysstat-7.*
# rpm -Uvh unixODBC-2.*
# rpm -Uvh unixODBC-devel-2.*
# cd /
Perform the following steps while logged into the nodea machine as the root user.
Ensure that the shared memory file system is big enough for Oracle Automatic Memory Management
to work.
# umount tmpfs
# mount -t tmpfs shmfs -o size=1500m /dev/shm
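To make the size persistent across reboots, the equivalent entry can also be placed in /etc/fstab; a minimal sketch, assuming the same 1500 MB size used above:
shmfs /dev/shm tmpfs size=1500m 0 0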
If you are using NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd"
file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
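Then restart the NTP daemon so the new option takes effect; assuming the standard init script:
# service ntpd restart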
Create the new groups and users.
# groupadd -g 1000 oinstall
# groupadd -g 1200 dba
# useradd -u 1100 -g oinstall -G dba oracle
# passwd oracle
Log in as the oracle user and add the following lines at the end of the .bash_profile file:
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
Modify multipath configuration to prevent blacklist of new virtual disks
Remove the WWIDs of the new virtual disks from the blacklist section of the /etc/multipath.conf file.
Then reload the multipath daemon.
service multipathd reload
Below is an example of the output from the multipath -ll command.
mpath2 (36001438002a5642d00011000004e0000) dm-6 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:9 sdai 66:32 [active][ready]
\_ 2:0:1:9 sdap 66:144 [active][ready]
\_ 0:0:0:9 sdg 8:96 [active][ready]
\_ 0:0:1:9 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 0:0:3:9 sdab 65:176 [active][ready]
\_ 2:0:2:9 sdaw 67:0 [active][ready]
\_ 2:0:3:9 sdbd 67:112 [active][ready]
\_ 0:0:2:9 sdu 65:64 [active][ready]
mpath1 (36001438002a5642d00011000004a0000) dm-5 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 0:0:3:8 sdaa 65:160 [active][ready]
\_ 2:0:2:8 sdav 66:240 [active][ready]
\_ 2:0:3:8 sdbc 67:96 [active][ready]
\_ 0:0:2:8 sdt 65:48 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:8 sdah 66:16 [active][ready]
\_ 2:0:1:8 sdao 66:128 [active][ready]
\_ 0:0:0:8 sdf 8:80 [active][ready]
\_ 0:0:1:8 sdm 8:192 [active][ready]
mpath0 (36001438002a5642d00011000000a0000) dm-0 HP,HSV300
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:1 sdac 65:192 [active][ready]
\_ 2:0:1:1 sdaj 66:48 [active][ready]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 0:0:1:1 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:1 sdaq 66:160 [active][ready]
\_ 2:0:3:1 sdax 67:16 [active][ready]
\_ 0:0:2:1 sdo 8:224 [active][ready]
\_ 0:0:3:1 sdv 65:80 [active][ready]
mpath6 (36001438002a5642d0001100000460000) dm-12 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:7 sdag 66:0 [active][ready]
\_ 2:0:1:7 sdan 66:112 [active][ready]
\_ 0:0:0:7 sde 8:64 [active][ready]
\_ 0:0:1:7 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:7 sdau 66:224 [active][ready]
\_ 2:0:3:7 sdbb 67:80 [active][ready]
\_ 0:0:2:7 sds 65:32 [active][ready]
\_ 0:0:3:7 sdz 65:144 [active][ready]
mpath5 (36001438002a5642d0001100000420000) dm-11 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:2:6 sdat 66:208 [active][ready]
\_ 2:0:3:6 sdba 67:64 [active][ready]
\_ 0:0:2:6 sdr 65:16 [active][ready]
\_ 0:0:3:6 sdy 65:128 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:6 sdaf 65:240 [active][ready]
\_ 2:0:1:6 sdam 66:96 [active][ready]
\_ 0:0:0:6 sdd 8:48 [active][ready]
\_ 0:0:1:6 sdk 8:160 [active][ready]
mpath4 (36001438002a5642d00011000003e0000) dm-10 HP,HSV300
[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:0:5 sdae 65:224 [active][ready]
\_ 2:0:1:5 sdal 66:80 [active][ready]
\_ 0:0:0:5 sdc 8:32 [active][ready]
\_ 0:0:1:5 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:2:5 sdas 66:192 [active][ready]
\_ 2:0:3:5 sdaz 67:48 [active][ready]
\_ 0:0:2:5 sdq 65:0 [active][ready]
\_ 0:0:3:5 sdx 65:112 [active][ready]
mpath3 (36001438002a5642d00011000000e0000) dm-8 HP,HSV300
[size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
\_ 2:0:2:2 sdar 66:176 [active][ready]
\_ 2:0:3:2 sday 67:32 [active][ready]
\_ 0:0:2:2 sdp 8:240 [active][ready]
\_ 0:0:3:2 sdw 65:96 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 2:0:0:2 sdad 65:208 [active][ready]
\_ 2:0:1:2 sdak 66:64 [active][ready]
\_ 0:0:0:2 sdb 8:16 [active][ready]
\_ 0:0:1:2 sdi 8:128 [active][ready]
You can verify the WWIDs shown in the multipath -ll output against the WWN information provided
by HP Command View EVA (see Figure 29). Verify the device names of the virtual disks you created
on the EVA for the ASM cluster file system, and record the WWID mappings and their respective
mpath device names. These WWIDs will be used later in a multipath aliases file. The purpose of the
aliases file is to ensure that the mpath devices are always mapped to the same Vdisk (as identified
by the WWID).
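As a sketch of such an aliases file, stanzas are added to the multipaths section of /etc/multipath.conf. The two WWIDs below are taken from the output above; the asmdisk alias names are hypothetical.
multipaths {
    multipath {
        wwid 36001438002a5642d00011000004a0000
        alias asmdisk1
    }
    multipath {
        wwid 36001438002a5642d00011000004e0000
        alias asmdisk2
    }
    # one stanza per ASM Vdisk; the alias names are hypothetical
}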
Figure 29. ASM Virtual disk properties
Create partitions and ASM disks
Install the Oracle ASMLib packages. The packages can be found at:
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
uname -rm
2.6.18-164.el5 x86_64
Download the appropriate ASMLib RPMs from OTN. In our example, we needed the following:
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
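The installation and driver configuration are sketched below; run them as root, answering the configure prompts with the oracle user and dba group (assumed to match the accounts created earlier):
# rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
      oracleasmlib-2.0.4-1.el5.x86_64.rpm \
      oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
# /etc/init.d/oracleasm configure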
Repeat the ASM software download, package installation, and ASM library configuration for
nodeb.
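The ASM devices used below are partitions (the p1 suffix). As a sketch, assuming one primary partition spanning each Vdisk:
# fdisk /dev/mpath/mpath1      # create a single primary partition: n, p, 1, accept defaults, w
# kpartx -a /dev/mpath/mpath1  # add the mpath1p1 partition mapping
Repeat for mpath2, mpath4, mpath5, and mpath6.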
While logged into nodea, create an ASM disk for each of the five devices. Do this only on nodea.
/usr/sbin/oracleasm createdisk DISK1 /dev/mpath/mpath1p1
/usr/sbin/oracleasm createdisk DISK2 /dev/mpath/mpath2p1
/usr/sbin/oracleasm createdisk DISK3 /dev/mpath/mpath4p1
/usr/sbin/oracleasm createdisk DISK4 /dev/mpath/mpath5p1
/usr/sbin/oracleasm createdisk DISK5 /dev/mpath/mpath6p1
Scan and list the ASM disks.
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
Next, on nodeb, make a copy of the /var/lib/multipath/bindings file from nodea. Save it in /tmp,
and edit the /var/lib/multipath/bindings file on nodeb so that mpath1, 2, 4, 5, and 6 map to the
same WWIDs. This ensures those devices have consistent naming across the cluster.
Note
DO NOT CHANGE the binding for mpath0, as it is unique to each root file
system. An alternative approach is to use aliases in the multipath.conf file
for these WWIDs.
Modify the ASMLib scan order and scan exclusion in /etc/sysconfig/oracleasm to ensure that the
mpath devices are scanned first:
ORACLEASM_SCANORDER="mapper"
ORACLEASM_SCANEXCLUDE="sd"
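Restart ASMLib so the new scan settings take effect; assuming the standard init script:
# /etc/init.d/oracleasm restart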
Verify on each node that the disks can be seen.
/usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
Verify that all disks show up properly after a reboot and that the multipath bindings (multipath -ll)
all map to the same WWIDs.
Verify name resolution and connectivity for the public and private networks:
ping -c 3 nodea
ping -c 3 nodeb
ping -c 3 nodea-priv
ping -c 3 nodeb-priv
Install Oracle Clusterware
The Oracle Clusterware software is part of the Oracle 11gR2 Grid Infrastructure kit. It can be found
under the Database 11g downloads for Linux, available at:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html
Download and save the Oracle 11gR2 Grid infrastructure kit for Linux. This can be done from either
nodea or nodeb. This example uses nodeb.
Download and run the cluster verification utility.
http://www.oracle.com/technetwork/database/clustering/downloads/index.html
Note that the cluster verification will fail because user equivalence has not been set up yet; in 11gR2
the install process will set up user equivalence for you.
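A sketch of a typical pre-installation check, assuming the utility was unpacked into the current directory:
./runcluvfy.sh stage -pre crsinst -n nodea,nodeb -verbose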
Unzip the Grid Infrastructure kit.
unzip linux.x64_11gR2.grid.zip
Perform the Grid installation.
./runInstaller
Figures 30 through 40 show the installation process. Select Install and Configure Grid Infrastructure
for a Cluster (see Figure 30).
Figure 30. Install and configure grid infrastructure for a cluster option
Figure 31. Typical Installation
Figure 32. Set scan name
Figure 33. Inventory directory
Add the additional nodes. In our example, nodea was added because the install was done from
nodeb.
Figure 34. Additional nodes
Click SSH connectivity, then Setup, and then Test (see Figure 35). You will need to supply the oracle
user's password ("oracle" was used in this example).
Figure 35. SSH connectivity
Set up the network interfaces for the private and public networks for nodea and nodeb (see Figure
36).
Figure 36. Private and public networks
Specify the Oracle base and software location, and use ASM for the cluster registry storage type. Set
your passwords and the Oracle ASM group (see Figure 37).
Figure 37. Install location
Select the disk group characteristics. The disk group name used was DATA; select all five disks.
Redundancy could be set to External, since hardware RAID (1+0) is used for the disks; this example
used the default of Normal (see Figure 38).
Figure 38. ASM Disk Group
Check the summary and finish the installation (see Figure 39).
Figure 39. Summary
Figure 40. Configuration scripts
Create ASM cluster file system
Set the grid environment for the oracle user; the ORACLE_SID value shown assumes the +ASM2
instance confirmed by oraenv below.
ORACLE_SID=+ASM2
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID
export ORACLE_HOME
In this example we used nodea. As the oracle user, set the correct environment to work with ASM:
. oraenv
ORACLE_SID = [RAC2] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
Run the ASM Configuration Assistant.
asmca
It will show the disk group shown in Figure 41.
Figure 41. ASM configuration assistant
Figure 42. ASM Instance
Click on the ASM Cluster File Systems tab (see Figure 43).
Figure 43. ASM Cluster File Systems tab
Click Create (see Figure 44) and specify the volume name and size (Figure 45). In this example,
siebelvol1 was used for the volume name and 20.0 GB for the size.
Figure 44. Create ASM cluster file system initial screen before changes
Figure 45. Create volume screen before entering a volume name and size
Figure 46. Mount the volume as a general purpose file system
When the system has finished creating the volume, select General Purpose File System (see Figure
46) and accept the default mount point, which your Oracle application will use for installation. Set
the register mount point value to "yes" (automatically mount and unmount at startup/shutdown).
Click OK.
Figure 47. Confirmation of creation of ASM cluster file system
Note
The default mount point will be /u01/app/oracle/acfsmounts/data_siebelvol1.
Directory names must begin with an alphabetic character and may contain underscores, but no
spaces or special characters.
Figure 48. ASM cluster file systems list
Open a terminal window to see the new ASM cluster file system. Create a file on one node and verify
that it is seen on the others. If you need to manually mount the ASM cluster file system, use:
/sbin/mount.acfs -o all
To unmount, use:
/bin/umount -t acfs -a
This ASM cluster file system can now be used for Oracle applications such as an Oracle Web Server,
E-Business Suite or Siebel. This shared file system can be used for application binaries, logs, shared
documents, reports, etc. You can also export the ACFS file system as an NFS file system and NFS
mount it for access by other middle-tier nodes.
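As a sketch of such an export, a minimal /etc/exports entry on the node sharing the file system might look like the following; the client subnet is a placeholder:
/u01/app/oracle/acfsmounts/data_siebelvol1 192.168.10.0/24(rw,sync)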
Install application in the cluster file system
For more information, refer to the documentation guide on installing Oracle Application Server on
64-bit Linux at:
http://download.oracle.com/docs/cd/B14099_19/linux.1012/install.1012/install/reqs.htm#CIHDEIFG
Unpack the companion CD. On nodea:
cpio -idmv < as_linux_x86_companion_cd_101300_disk1.cpio
Change the default group of the user performing the installation to oinstall. In this example we used
siebeluser, since this web server can be the base for a Siebel Web Server installation.
usermod -g oinstall siebeluser
The libXp.so libraries are deprecated and are no longer installed as part of RHEL 5.0. The missing
libraries are in the Server directory on the RHEL 5.0 installation DVD:
libXp-1.0.0-8.1.el5.i386.rpm
libXp-1.0.0-8.1.el5.x86_64.rpm
Install both versions using the following:
rpm -Uvh libXp-1.0.0-8.1.el5.i386.rpm
rpm -Uvh libXp-1.0.0-8.1.el5.x86_64.rpm
The installation of the Oracle HTTPD server will succeed, but the HTTPD process will not start
automatically until the libdb library issue is fixed. This process is detailed in Oracle document
415244.1.
To resolve the missing libdb-3.3.so library:
cd /usr/lib
ln -s libdb-4.3.so libdb-3.3.so
Figure 49. OHS installer
Figure 50. Web Server Services
You will see the summary screen shown in Figure 51.
After the installation is complete, verify that the httpd server is running. Start the daemon by going
to the installation directory and using the opmnctl application. In this case:
# cd /home/siebeluser/oracle/companionCDHome_1/opmn/bin
# linux32 bash
# ./opmnctl startall
Test that the web server is up and running. The default port is 7780; in this example,
http://nodea.zko.hp:7780.
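As a quick sketch, an HTTP header request from any host that can resolve the name confirms the listener:
curl -I http://nodea.zko.hp:7780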
Steps for placing an application under the protection of the Oracle Clusterware
Create and register an Application Profile. The profile describes the application process and the
limits covering how it is to be protected.
Once registered with the Oracle Cluster Registry (OCR), the application is managed by Oracle
Clusterware: it is started, stopped, and made highly available according to the information and rules
contained in the Application Profile. The OCR provides consistent application configuration
information to Oracle Clusterware.
On all cluster nodes, add the ORA_CRS_HOME environment variable to the root user's .bash_profile.
ORA_CRS_HOME=/u01/app/11.2.0/grid
export ORA_CRS_HOME
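As a sketch of the classic way to create such a VIP resource, using the usrvip action script shipped in the Grid home; the VIP address, netmask, and interface below are assumed examples, not values from this configuration:
$ORA_CRS_HOME/bin/crs_profile -create webvip -t application \
  -a $ORA_CRS_HOME/bin/usrvip \
  -o oi=eth1,ov=192.168.10.96,on=255.255.255.0
$ORA_CRS_HOME/bin/crs_register webvip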
The application VIP script has to run as root, so change the owner of the resource:
# $ORA_CRS_HOME/bin/crs_setperm webvip -o root
You can use ifconfig to verify that this IP has also been assigned to interface eth1.
In the example above, a profile, webapp.cap, has been created in the $ORA_CRS_HOME/crs/public
directory. It is an application-type profile with the description "Oracle HTTPD Server". It has a
dependency on another Clusterware-managed resource called webvip; the action program is
webapp.sh, and the check interval is 5 seconds.
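A sketch of a crs_profile invocation consistent with that description; the action-script path is an assumption:
$ORA_CRS_HOME/bin/crs_profile -create webapp -t application \
  -d "Oracle HTTPD Server" -r webvip \
  -a $ORA_CRS_HOME/crs/public/webapp.sh -o ci=5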
The application profile is now registered with the Oracle Clusterware service using the crs_register
command.
oracle@nodea$ $ORA_CRS_HOME/bin/crs_register webapp
Verify that the service is now online using the crs_stat command.
[oracle@nodea ~]$ $ORA_CRS_HOME/bin/crs_stat webapp
NAME=webapp
TYPE=application
TARGET=ONLINE
STATE=ONLINE on nodea
You can force the relocation of a service with the crs_relocate command.
$ORA_CRS_HOME/bin/crs_relocate -f webapp
Attempting to stop `webapp` on member `nodea`
Stop of `webapp` on member `nodea` succeeded.
Attempting to stop `webvip` on member `nodea`
Stop of `webvip` on member `nodea` succeeded.
Attempting to start `webvip` on member `nodeb`
Start of `webvip` on member `nodeb` succeeded.
Attempting to start `webapp` on member `nodeb`
Start of `webapp` on member `nodeb` succeeded.
Appendix A: Oracle HTTPD Server action program
#!/bin/bash
#
# prog is the resource name used in status messages and RETVAL is the
# default return value; the values below are assumed defaults.
prog="webapp"
RETVAL=0
start() {
echo -n $"Starting $prog: "
/u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl startall
if [ "$?" -eq 0 ];
then
RETVAL=0
else
RETVAL=1
fi
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
/u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl stopall
if [ "$?" -eq 0 ];
then
RETVAL=0
else
RETVAL=1
fi
return $RETVAL
}
check() {
echo -n $"check $prog: "
/u01/app/oracle/acfsmounts/data_siebelvol1/ohinst/opmn/bin/opmnctl status | grep "Alive"
if [ "$?" -eq 0 ];
then
RETVAL=0
else
RETVAL=1
fi
return $RETVAL
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
check)
check
;;
*)
echo $"Usage: $prog {start|stop|check}"
exit 1
esac
exit $RETVAL
For more information
HP ProLiant BL460c server: http://www.hp.com/servers/bl460c
HP BladeSystem c7000 Enclosure:
http://h18004.www1.hp.com/products/blades/components/enclosures/c-class/c7000/
HP 4400 Enterprise Virtual Array: www.hp.com/go/eva4400
Booting Linux x86 and x86_64 systems from a Storage Area Network with Device Mapper Multipath:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01513866/c01513866.pdf?jumpid=reg_R1002_USEN
HP Single Point of Connectivity Knowledge (SPOCK) web site: http://www.hp.com/storage/spock
Red Hat Enterprise Linux for ProLiant:
http://h18004.www1.hp.com/products/servers/linux/redhat/rhel/index.html?jumpid=reg_R1002_USEN
Oracle licensing information can be found at:
http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/editions.htm#CJAHFHBJ
The Oracle Clusterware software is part of the Oracle 11gR2 Grid Infrastructure kit. It can be found
under the Database 11g downloads for Linux:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html
© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to
change without notice. The only warranties for HP products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Oracle and Java are
registered trademarks of Oracle and/or its affiliates.