Linux Administration: Red Hat Cluster with Oracle Service Failover
Table of Contents
Install Red Hat Linux Operating System
Configure Network Bonding
Configuring Cluster
Setting up a High-Availability Cluster
Creating Your Cluster Using Conga
User Configuration
Setting up a Storage Cluster
Common Issues
Networking Issues
Troubleshooting Conga
Running luci on a Cluster Node
Configure Network Bonding
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=none
# cat /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
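The slave-interface and bonding-module settings belong with the files above; a minimal sketch of what they typically look like on RHEL 5, assuming eth0 and eth1 are the slaves of bond0 and active-backup bonding (mode=1) is wanted:

# cat /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=1 miimon=100

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

The same pattern applies to the other slave interfaces and to bond1; the bond's own IP address and netmask go into ifcfg-bond0 and ifcfg-bond1.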
Configuring Cluster
This section discusses how to install and configure Red Hat Cluster Suite and Global File System on your Dell and Red Hat HA Cluster system using Conga and CLI tools.
Conga is a configuration and management suite based on a server/agent model. You can access the management server, luci, using a standard web browser from anywhere on the network. Luci communicates with the client agent, ricci, on the nodes and installs all required packages, synchronizes the cluster configuration file, and manages the storage cluster.
Though there are other possible methods, such as system-config-cluster or creating an XML configuration file by hand, it is recommended that you use Conga to configure and manage your cluster.
This setup assumes two nodes with RHEL 5.6 x86_64 installed, which we want to cluster for high availability of Oracle services.
It also assumes that a Storage Area Network (SAN) accessible from both systems has free space on it.
First, install all the needed packages on both systems.
To do this, create a cluster.repo file in /etc/yum.repos.d with the following commands:
touch /etc/yum.repos.d/cluster.repo
echo [Server] >> /etc/yum.repos.d/cluster.repo
echo name=Server >> /etc/yum.repos.d/cluster.repo
echo baseurl=file:///misc/cd/Server >> /etc/yum.repos.d/cluster.repo
echo enabled=1 >> /etc/yum.repos.d/cluster.repo
echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
echo [Cluster] >> /etc/yum.repos.d/cluster.repo
echo name=Cluster >> /etc/yum.repos.d/cluster.repo
echo baseurl=file:///misc/cd/Cluster >> /etc/yum.repos.d/cluster.repo
echo enabled=1 >> /etc/yum.repos.d/cluster.repo
echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
echo [ClusterStorage] >> /etc/yum.repos.d/cluster.repo
echo name=ClusterStorage >> /etc/yum.repos.d/cluster.repo
echo baseurl=file:///misc/cd/ClusterStorage >> /etc/yum.repos.d/cluster.repo
echo enabled=1 >> /etc/yum.repos.d/cluster.repo
echo gpgcheck=0 >> /etc/yum.repos.d/cluster.repo
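The resulting /etc/yum.repos.d/cluster.repo should contain:

[Server]
name=Server
baseurl=file:///misc/cd/Server
enabled=1
gpgcheck=0

[Cluster]
name=Cluster
baseurl=file:///misc/cd/Cluster
enabled=1
gpgcheck=0

[ClusterStorage]
name=ClusterStorage
baseurl=file:///misc/cd/ClusterStorage
enabled=1
gpgcheck=0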
Insert the RHEL 5.6 x86_64 media into your CD/DVD reader and run the following command to update the yum database:
yum update
If yum cannot use the new repository, check whether the autofs service is up and running (or start it) with the following command:
service autofs restart
At this point you can install all the packages needed to create and administer a cluster:
yum groupinstall -y "Cluster Storage" "Clustering"
The two "rhel-cluster-nodeX" systems each have two NICs: one for production and one for the high-availability heartbeat check. The /etc/hosts entries are:
# node1
132.158.201.177 PBOADQ1A.intersil.corp PBOADQ1A
132.158.201.187 node1.intersil.corp node1
# node2
132.158.201.179 PBOADQ1B.intersil.corp PBOADQ1B
132.158.201.188 node2.intersil.corp node2
# Virtual IP
132.158.201.181 PBOADQC1.intersil.corp PBOADQC1
The virtual IP address (132.158.201.181) is used for the service shared between node1 (132.158.201.177) and node2 (132.158.201.179).
Note: The hosts file entries should be identical on both nodes.
Start the luci service:
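A typical sequence on RHEL 5 is sketched below: initialize and start luci on the management node, and start ricci on every cluster node (8084 is the default port for the luci web interface):

# On the management node
luci_admin init
chkconfig luci on
service luci start

# On each cluster node
chkconfig ricci on
service ricci start

# luci is then reachable at https://<management-node>:8084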
(2) Click on Cluster.
[root@PBOADQ1B ~]# ipmitool lan print 1
Set in Progress : Set Complete
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD2 MD5
: User : MD2 MD5
: Operator: MD2 MD5
: Admin : MD2 MD5
: OEM :
IP Address Source : Static Address
IP Address : 132.158.201.180
Subnet Mask : 255.255.252.0
MAC Address : 14:fe:b5:d8:7c:19
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : 132.158.202.254
Default Gateway MAC : 00:00:00:00:00:00
Backup Gateway IP : 0.0.0.0
Backup Gateway MAC : 00:00:00:00:00:00
802.1q VLAN ID : Disabled
802.1q VLAN Priority : 0
RMCP+ Cipher Suites : 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14
Cipher Suite Priv Max : aaaaaaaaaaaaaaa
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
Note: The IP address is already set, so we only need to enable the IPMI LAN interface on both nodes.
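One way to enable it with ipmitool, assuming channel 1 as shown in the lan print output above, is:

[root@PBOADQ1A ~]# ipmitool lan set 1 access on
[root@PBOADQ1B ~]# ipmitool lan set 1 access on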
User Configuration
First, we need to list the users and get the user ID of the administrative user for IPMI:
[root@PBOADQ1A ~]# ipmitool user list
ID Name Callin Link Auth IPMI Msg Channel Priv Limit
2 root true true true ADMINISTRATOR
[root@PBOADQ1B ~]# ipmitool user list
ID Name Callin Link Auth IPMI Msg Channel Priv Limit
2 root true true true ADMINISTRATOR
If you don't already know the password for the 'root' user, you can set it from the command line as well (this should NOT be the same as the root user password for the operating system). Note the 2 in the command line; it matches the administrator/root account ID listed above:
[root@PBOADQ1A ~]# ipmitool user set password 2
Password for user 2: calvin
Password for user 2: calvin
[root@PBOADQ1B ~]#ipmitool user set password 2
Password for user 2: calvin
Password for user 2: calvin
Preliminary Testing
The easiest thing to check is the power status. Note that you CANNOT issue IPMI-over-LAN commands from the same machine. That is, you must perform the following test from a different machine than the one you have just configured.
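A sketch of such a remote check, assuming the lanplus interface and the BMC address of node2 (132.158.201.180) from the output above; repeat against the other node's BMC address to get the second status line below:

[root@PBOADQ1A ~]# ipmitool -I lanplus -H 132.158.201.180 -U root -P calvin chassis power status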
Chassis Power is on
Chassis Power is on
(10) To configure the fence device from luci, click on node1.intersil.corp. Add a fence device to this level, and then select "IPMI Lan". Next, provide the IP address, username, and password.
Note: We have to follow the same process for each LUN. Each will have a separate volume group.
The volume groups created are vg1, vg2, vg3, vg4, vg5, vg6, vg7, and vg8.
Now create the directories /d01, /d02, /d03, /d04, /d05, /d06, /d07, /d08, and /d09 on both nodes.
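The LVM commands for carving the LUNs are summarized here as a minimal sketch for the first LUN only, assuming it appears as /dev/sdb and that clustered volume groups are wanted; repeat with the appropriate device and names (vg2/lv2, and so on) for the remaining LUNs:

pvcreate /dev/sdb
vgcreate -c y vg1 /dev/sdb
lvcreate -l 100%FREE -n lv1 vg1
mkdir -p /d01        # create the mount point on both nodes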
b) Configuring Global File System (GFS).
Create the clustered GFS file system on the device using the command below:
mkfs.gfs2 -p lock_dlm -t PBOADQC1:sanvol1 -j 4 /dev/vg1/lv1
This creates a GFS2 file system with the "lock_dlm" locking protocol for the cluster named "PBOADQC1", with the file system name "sanvol1" and four journals.
Note: When creating multiple clustered GFS file systems, the file system name must be unique each time: sanvol1 here, sanvol2 for the next one, and so on. Every GFS file system gets a different name.
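To sanity-check one of the new file systems by hand (the cluster service will normally handle mounting), a quick manual test on a node where cman and clvmd are already running might look like:

mount -t gfs2 /dev/vg1/lv1 /d01
df -h /d01
umount /d01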
(14) Install Oracle software on both nodes and create an Oracle database.
Install the Oracle binaries on both servers. Do not create a database at this time.
We will create the database after we have installed the binaries. Make sure that your Oracle home is set to a local, non-shared directory. It is not advisable to install the Oracle binaries on a shared partition at this time.
To create your database you will need to have the shared storage LUN mounted to both
nodes in the cluster. Choose the mount point for the shared storage as the location for
the Oracle files.
Note: The tnsnames.ora and listener.ora files on each server should be configured to use
the virtual IP address for client connections.
The sample TNS entry for Oracle is below. Please note that the "HOST" entry points to the hostname associated with the virtual IP.
APD2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1529))
      (ADDRESS = (PROTOCOL = TCP)(HOST = PBOADQC1.intersil.corp)(PORT = 1529))
    )
  )
(15) Click on Resources, then click on Add a Resource.
(16) Choose GFS file system. Enter the Name, Mount point, Device, and File system type, and check the Force unmount option. Repeat this step for each mount point.
(17) Click on Resources, then Add a Resource. Choose IP Address, enter the virtual IP, and click on Submit.
(18) Click on Resources, then Add a Resource. Choose Script and enter the Name and the full path to the script file.
Note: The script file should be placed on both nodes.
The script file will look like this:
#!/bin/bash
export ORACLE_HOME=/home/oracle/product/11.2.0
export ORACLE_SID=APD2
cd /home/oracle
export ORACLE_TRACE='T'
case $1 in
start)
  echo "Starting Oracle: `date`"
  su - oracle -c "/home/oracle/product/11.2.0/bin/dbstart $ORACLE_HOME $ORACLE_SID"
  ;;
stop)
  echo "Stopping Oracle: `date`"
  su - oracle -c "/home/oracle/product/11.2.0/bin/dbshut $ORACLE_HOME $ORACLE_SID"
  ;;
status)
  echo "DB must be running. I don't know how to check. Can you please check it manually? Don't mind!!"
  ;;
esac
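The status) branch above always succeeds; since rgmanager periodically calls a script resource with the status argument, a slightly better check, assuming the standard ora_pmon_<SID> background process name, could be:

status)
  # look for the PMON background process of this instance
  if pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null ; then
    echo "Oracle instance ${ORACLE_SID} is running"
    exit 0
  else
    echo "Oracle instance ${ORACLE_SID} is not running"
    exit 1
  fi
  ;;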
For the customized dbstart and dbshut scripts, please refer to Appendix A.
(19) Click on Services, then click on Add a Service.
(20) Enter the service name, check "Automatically start this service", select the failover domain, and choose Relocate as the recovery policy.
(21) Click on "Add a resource to this service". The first resource should be the GFS file system.
(22) Click on "Add a child". The next resource should be the virtual IP.
(23) Click on "Add a child". The next resource will be the script.
(24) The service composition therefore looks like this: first the file system, next the virtual IP, and then the script.
These changes are reflected in the /etc/cluster/cluster.conf file on each of the servers in the cluster. A sample cluster.conf is attached in Appendix B; it is just a sample file and should be used as a reference only.
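Before going further it is worth testing a manual relocation of the service from the command line; a sketch, assuming the service was named oracle-service in step (20) (substitute your actual service name):

clustat
clusvcadm -r oracle-service -m node2.intersil.corp
clustat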
Verification Checklist
Item                                                  Verified
Red Hat Cluster Suite installed and configured        Yes
Common Issues
Networking Issues
Red Hat Clustering nodes use Multicast to communicate. Your switches must be
configured to enable multicast addresses and support IGMP.
Troubleshooting Conga
The following sections describe issues you may encounter while initially creating the cluster, and their possible workarounds.
Running luci on a Cluster Node
If you are using a cluster node also as a management node and running luci, you have to
restart luci manually after the initial configuration. For example:
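A minimal sketch, assuming the standard init script:

service luci restart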
If cluster creation fails with errors such as:
Unable to add the key for node node1.intersil.corp to the trusted keys list.
Unable to add the key for node node2.intersil.corp to the trusted keys list.
the luci server cannot communicate with the ricci agent. Verify that ricci is installed and started on each node.
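A quick way to verify ricci on each node is sketched below:

rpm -q ricci
service ricci status
service ricci start
chkconfig ricci on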
Ensure all nodes have a consistent view of the shared storage by running the partprobe command or by clicking "reprobe storage" in Conga. As a last resort, reboot all nodes, or select "restart cluster" in Conga.
3. View the logs on node1 and the console of node2. Node1 should successfully fence node2.
4. Continue to watch the messages file for status changes. You can also use the clustat tool to see a node's view of the cluster. The parameter -i 2 refreshes the output every two seconds:
[root@node1]# clustat -i 2
Appendix – A
The customized dbstart and dbshut scripts are below:
dbstart
#!/bin/sh
#
# $Id: dbstart.sh 22may2008.05:05:45 arogers Exp $
# Copyright (c) 1991, 2008, Oracle. All rights reserved.
#
###################################
#
# usage: dbstart $ORACLE_HOME
#
# This script is used to start ORACLE from /etc/rc(.local).
# It should ONLY be executed as part of the system boot procedure.
#
# This script will start all databases listed in the oratab file
# whose third field is a "Y". If the third field is set to "Y" and
# there is no ORACLE_SID for an entry (the first field is a *),
# then this script will ignore that entry.
#
# This script requires that ASM ORACLE_SID's start with a +, and
# that non-ASM instance ORACLE_SID's do not start with a +.
#
# If ASM instances are to be started with this script, it cannot
# be used inside an rc*.d directory, and should be invoked from
# rc.local only. Otherwise, the CSS service may not be available
# yet, and this script will block init from completing the boot
# cycle.
#
# If you want dbstart to autostart a single-instance database that uses
# an ASM server that is autostarted by CRS (this is the default behavior
# for an ASM cluster), you must change the database's ORATAB entry to use
# a third field of "W" and the ASM's ORATAB entry to use a third field of "N".
# These values specify that dbstart autostarts the database only after
# the ASM instance is up and running.
#
# Note:
# Use ORACLE_TRACE=T for tracing this script.
#
# The progress log for each instance bringup plus Error and Warning message[s]
# are logged in file $ORACLE_HOME/startup.log. The error messages related to
# instance bringup are also logged to syslog (system log module).
# The Listener log is located at $ORACLE_HOME_LISTNER/listener.log
#
# On all UNIX platforms except SOLARIS
# ORATAB=/etc/oratab
#
# To configure, update ORATAB with Instances that need to be started up
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME::
# An example entry:
# main:/usr/lib/oracle/emagent_10g:Y
#
# Overall algorithm:
# 1) Bring up all ASM instances with 'Y' entry in status field in oratab entry
# 2) Bring up all Database instances with 'Y' entry in status field in
# oratab entry
# 3) If there are Database instances with 'W' entry in status field
# then
# iterate over all ASM instances (irrespective of 'Y' or 'N') AND
# wait for all of them to be started
# fi
# 4) Bring up all Database instances with 'W' entry in status field in
# oratab entry
#
#####################################
trap 'exit' 1 2 3
ORACLE_HOME_LISTNER=$1
if [ ! $ORACLE_HOME_LISTNER ] ; then
echo "ORACLE_HOME_LISTNER is not SET, unable to autostart Oracle Net Listener"
echo "Usage: $0 ORACLE_HOME"
else
LOG=$ORACLE_HOME_LISTNER/listener.log
# Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
# a different ORACLE_HOME for each entry in the oratab.
export ORACLE_HOME=$ORACLE_HOME_LISTNER
echo ""
echo "$0: Starting up database \"$ORACLE_SID\""
date
echo ""
checkversionmismatch
# See if it is a V6 or V7 database
VERSION=undef
if [ -f $ORACLE_HOME/bin/sqldba ] ; then
SQLDBA=sqldba
VERSION=`$ORACLE_HOME/bin/sqldba command=exit | awk '
/SQL\*DBA: (Release|Version)/ {split($3, V, ".") ;
print V[1]}'`
case $VERSION in
"6") ;;
*) VERSION="internal" ;;
esac
else
if [ -f $ORACLE_HOME/bin/svrmgrl ] ; then
SQLDBA=svrmgrl
VERSION="internal"
else
SQLDBA="sqlplus /nolog"
fi
fi
STATUS=1
if [ -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.dbf ] ; then
STATUS="-1"
fi
if [ -f $ORACLE_HOME/dbs/sgadef${ORACLE_SID}.ora ] ; then
STATUS="-1"
fi
pmon=`ps -ef | grep -w "ora_pmon_$ORACLE_SID" | grep -v grep`
if [ "$pmon" != "" ] ; then
STATUS="-1"
$LOGMSG "Warning: ${INST} \"${ORACLE_SID}\" already started."
fi
if [ $? -eq 0 ] ; then
STATUS=1
else
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
fi
quit
EOF
;;
esac
if [ $? -eq 0 ] ; then
echo ""
echo "$0: ${INST} \"${ORACLE_SID}\" warm started."
else
$LOGMSG ""
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
else
$LOGMSG ""
$LOGMSG "No init file found for ${INST} \"${ORACLE_SID}\"."
$LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started."
fi
fi
}
else
COUNT=0
$ORACLE_HOME/bin/crsctl check css
RC=$?
while [ "$RC" != "0" ];
do
COUNT=`expr $COUNT + 1`
if [ $COUNT = 15 ] ; then
# 15 tries with 20 sec interval => 5 minutes timeout
$LOGMSG "Timed out waiting to start ASM instance $ORACLE_SID"
$LOGMSG " CSS service is NOT available."
exit $COUNT
fi
$LOGMSG "Waiting for Oracle CSS service to be available before starting "
$LOGMSG " ASM instance $ORACLE_SID. Wait $COUNT."
sleep 20
$ORACLE_HOME/bin/crsctl check css
RC=$?
done
fi
startinst
}
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'Y'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "Y" ] ; then
# If ASM instances
if [ `echo $ORACLE_SID | cut -b 1` = '+' ]; then
INST="ASM instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# file for logging script's output
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
echo "Processing $INST \"$ORACLE_SID\": log file $ORACLE_HOME/startup.log"
startasminst >> $LOG 2>&1
fi
fi
;;
esac
done
#
# Following loop brings up 'Database instances'
#
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'Y'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "Y" ] ; then
# If non-ASM instances
if [ `echo $ORACLE_SID | cut -b 1` != '+' ]; then
INST="Database instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# file for logging script's output
LOG=$ORACLE_HOME/startup.log
touch $LOG
chmod a+r $LOG
echo "Processing $INST \"$ORACLE_SID\": log file $ORACLE_HOME/startup.log"
startinst >> $LOG 2>&1
fi
fi
;;
esac
done
#
# Following loop brings up 'Database instances' that have wait state 'W'
#
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'W'.
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "W" ] ; then
W_ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
# DB instances with 'W' (wait state) have a dependency on ASM instances via CRS.
# Wait here for 'all' ASM instances to become available.
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# same as NULL SID - ignore this entry
ORACLE_SID=""
continue
fi
if [ `echo $ORACLE_SID | cut -b 1` = '+' ]; then
INST="ASM instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
if [ -x $ORACLE_HOME/bin/srvctl ] ; then
COUNT=0
NODE=`olsnodes -l`
RNODE=`srvctl status asm -n $NODE | grep "$ORACLE_SID is running"`
RC=$?
while [ "$RC" != "0" ]; # wait until this comes up!
do
COUNT=$((COUNT+1))
if [ $COUNT = 5 ] ; then
# 5 tries with 60 sec interval => 5 minutes timeout
$LOGMSG "Error: Timed out waiting on CRS to start ASM instance $ORACLE_SID"
exit $COUNT
fi
$LOGMSG "Waiting for Oracle CRS service to start ASM instance $ORACLE_SID"
$LOGMSG "Wait $COUNT."
sleep 60
RNODE=`srvctl status asm -n $NODE | grep "$ORACLE_SID is running"`
RC=$?
done
else
$LOGMSG "Error: \"${W_ORACLE_SID}\" has dependency on ASM instance \"${ORACLE_SID}\""
$LOGMSG "Error: Need $ORACLE_HOME/bin/srvctl to check this dependency"
fi
fi # asm instance
;;
esac
done # inner while
fi
;;
esac
done # outer while
# by now all the ASM instances have come up and we can proceed to bring up
# DB instance with 'W' wait status
dbshut
#!/bin/sh
#
# $Id: dbshut.sh 22may2008.05:19:31 arogers Exp $
# Copyright (c) 1991, 2008, Oracle. All rights reserved.
#
###################################
#
# usage: dbshut $ORACLE_HOME
#
# This script is used to shutdown ORACLE from /etc/rc(.local).
# It should ONLY be executed as part of the system boot procedure.
#
# This script will shutdown all databases listed in the oratab file
# whose third field is a "Y" or "W". If the third field is set to "Y" and
# there is no ORACLE_SID for an entry (the first field is a *),
# then this script will ignore that entry.
#
# This script requires that ASM ORACLE_SID's start with a +, and
# that non-ASM instance ORACLE_SID's do not start with a +.
#
# Note:
# Use ORACLE_TRACE=T for tracing this script.
# Oracle Net Listener is also shutdown using this script.
#
# The progress log for each instance shutdown is logged in file
# $ORACLE_HOME/shutdown.log.
#
# On all UNIX platforms except SOLARIS
# ORATAB=/etc/oratab
#
# To configure, update ORATAB with Instances that need to be shutdown
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME::
# An example entry:
# main:/usr/lib/oracle/emagent_10g:Y
#
#####################################
trap 'exit' 1 2 3
case $ORACLE_TRACE in
T) set -x ;;
esac
# Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
# a different ORACLE_HOME for each entry in the oratab.
export ORACLE_HOME=$ORACLE_HOME_LISTNER
# Stops an instance
stopinst() {
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
ORACLE_SID=""
fi
# Called programs use same database ID
export ORACLE_SID
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
# Called scripts use same home directory
export ORACLE_HOME
# Put $ORACLE_HOME/bin into PATH and export.
PATH=$ORACLE_HOME/bin:${SAVE_PATH} ; export PATH
# add for bug 652997
LD_LIBRARY_PATH=${ORACLE_HOME}/lib:${SAVE_LLP} ; export LD_LIBRARY_PATH
PFILE=${ORACLE_HOME}/dbs/init${ORACLE_SID}.ora
# See if it is a V6 or V7 database
VERSION=undef
if [ -f $ORACLE_HOME/bin/sqldba ] ; then
SQLDBA=sqldba
VERSION=`$ORACLE_HOME/bin/sqldba command=exit | awk '
/SQL\*DBA: (Release|Version)/ {split($3, V, ".") ;
print V[1]}'`
case $VERSION in
"6") ;;
*) VERSION="internal" ;;
esac
else
if [ -f $ORACLE_HOME/bin/svrmgrl ] ; then
SQLDBA=svrmgrl
VERSION="internal"
else
SQLDBA="sqlplus /nolog"
fi
fi
case $VERSION in
"6") sqldba command=shutdown ;;
"internal") $SQLDBA <<EOF
connect internal
shutdown immediate
EOF
;;
*) $SQLDBA <<EOF
connect / as sysdba
shutdown immediate
quit
EOF
;;
esac
#
# Loop for every entry in oratab file and and try to shut down
# that ORACLE
#
# Following loop shuts down 'Database Instance[s]' with 'Y' entry
#
# Following loop shuts down 'Database Instance[s]' with 'W' entry
#
cat $ORATAB | while read LINE
do
case $LINE in
\#*) ;; #comment-line in oratab
*)
ORACLE_SID=`echo $LINE | awk -F: '{print $1}' -`
if [ "$ORACLE_SID" = '*' ] ; then
# NULL SID - ignore
ORACLE_SID=""
continue
fi
# Proceed only if last field is 'Y' or 'W'
if [ "`echo $LINE | awk -F: '{print $NF}' -`" = "W" ] ; then
if [ `echo $ORACLE_SID | cut -b 1` != '+' ]; then
INST="Database instance"
ORACLE_HOME=`echo $LINE | awk -F: '{print $2}' -`
LOG=$ORACLE_HOME/shutdown.log
echo "Processing $INST \"$ORACLE_SID\": log file $LOG"
stopinst >> $LOG 2>&1
fi
fi
;;
esac
done
#
# Following loop shuts down 'ASM Instance[s]'
#
Appendix – B
The sample /etc/cluster/cluster.conf file is attached below: