RAC Commands

This document provides instructions for configuring and managing an Oracle Real Application Clusters (RAC) database using the srvctl command-line tool. It describes how to add, configure, enable, start, and stop database instances, services, and listeners. It also covers checking status, removing components, and retrieving cluster information. The document concludes with steps for removing a node from the cluster, including deleting the node-specific configuration and deinstalling the Oracle Clusterware software.


SRVCTL :

===========
Addition :
srvctl add database -d <dbname> -o <homepath>
srvctl add instance -d <dbname> -i <instance_name> -n <node>
srvctl add service -d <dbname> -s <service_name> -r <preferred_instances> [-a <available_instances>]

Configuration :
srvctl config database -d <dbname> -a (optional)
srvctl config service -d <dbname>
srvctl config asm -n <node>
srvctl config listener -n <node>
Enable / Disable :
srvctl enable/disable database -d <dbname>
srvctl enable/disable instance -d <dbname> -i <instance>
srvctl enable/disable service -d <dbname> -s <service> -i <instance>
srvctl enable/disable asm -n <nodename> -i <instance>
Start / Stop :
srvctl start database -d <dbname> -o <open/mount>
srvctl start instance -d <dbname> -i <instance> -o <open/mount>
srvctl start service -d <dbname> -s <service> [-i <instance>]
srvctl start listener -n <nodename> [-l <listenername>]
srvctl start asm -n <nodename> [-i <instance>]
(srvctl stop takes the same options; for stop database/instance, -o accepts normal/transactional/immediate/abort)
Status:
srvctl status database -d <dbname>
srvctl status instance -d <dbname> -i <instance>
srvctl status service -d <dbname> -s <service>
listener : status is not applicable -- refer to the config listener option instead
srvctl status asm -n <nodename>
Remove :
srvctl remove database -d <dbname> -f    (-f = force)
srvctl remove instance -d <dbname> -i <instance>
srvctl remove service -d <dbname> -s <service>
srvctl remove asm -n <nodename> [-i <instance>]
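
Example (a minimal sketch; the database name ORCL, instances ORCL1/ORCL2, nodes rac1/rac2, service oltp_srv and the Oracle home path are hypothetical, not taken from this document):

srvctl add database -d ORCL -o /u01/app/oracle/product/10.2.0/db_1
srvctl add instance -d ORCL -i ORCL1 -n rac1
srvctl add instance -d ORCL -i ORCL2 -n rac2
srvctl add service -d ORCL -s oltp_srv -r ORCL1 -a ORCL2
srvctl start database -d ORCL -o open
srvctl status database -d ORCL
srvctl config database -d ORCL -a
srvctl stop database -d ORCL -o immediate
srvctl remove service -d ORCL -s oltp_srv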
===========================================================================
cluster name & version :
cemutlo -n -w
===========================================================================

OCR & Voting Disk:

>ocrcheck
OCR Backup :
ocrconfig -export <filename>        (logical export; ocrconfig -showbackup lists the automatic physical backups)

OCR Restore :
crsctl stop crs
ocrconfig -import <export filename>        (to restore a logical export taken with -export)
ocrconfig -restore <backup filename>       (to restore an automatic physical backup)
crsctl start crs
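
Example (a sketch of the export/import cycle; the backup path /u01/backup/ocr_exp.dmp is hypothetical):

ocrcheck
ocrconfig -export /u01/backup/ocr_exp.dmp
crsctl stop crs                            (as root, on every node)
ocrconfig -import /u01/backup/ocr_exp.dmp
crsctl start crs
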
Voting Disk Multiplex :
crsctl query css votedisk
crsctl add css votedisk /dev/raw/raw12 (multiplex location) -force     (use -force while Oracle Clusterware is stopped)

Voting disk backup :


Use the dd command to back up the voting disk:
dd if=<voting disk location> of=<voting disk backup location>
Voting disk restore :
dd if=<voting disk backup location> of=<voting disk location>
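
Example (hypothetical raw device and backup path):

dd if=/dev/raw/raw10 of=/u01/backup/votedisk_raw10.bak bs=4k
dd if=/u01/backup/votedisk_raw10.bak of=/dev/raw/raw10 bs=4k    (restore)
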
OCR Mirror / Multiplex:
./ocrconfig -replace ocrmirror /dev/raw/raw11 (multiplex location)

Services :
>Internal services
select service_name from v$session;
sys$background
sys$users
>Application Services:
active/spare : one node is active, the other serves as failover
active symmetric : all nodes are active; if any one fails, the others take over its load, distributed based on load
active asymmetric : all nodes have the service, but only one is active at a time; the others act as failover
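
Example (an active/spare style service defined with preferred and available instances; the names ORCL, ORCL1/ORCL2, oltp_srv and batch_srv are hypothetical):

srvctl add service -d ORCL -s oltp_srv -r ORCL1 -a ORCL2      (active/spare: runs on ORCL1, fails over to ORCL2)
srvctl add service -d ORCL -s batch_srv -r ORCL1,ORCL2        (active symmetric: preferred on all instances)
srvctl start service -d ORCL -s oltp_srv
select service_name from v$session where service_name not like 'SYS$%';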

Adding node to a Cluster :


================================

Remove Cluster Configuration :


----------------------------------
cd /home/oracle10g -- home directory of the oracle user
rm -rf oracle
rm -rf oraInventory
cd /etc
rm oratab oraInst.loc
mv inittab.orig inittab
rm inittab.crs inittab.no_crs
cd /etc/init.d
rm init.crs init.crsd init.cssd init.evmd
cd /etc/rc0.d ; rm K96init.crs
cd /etc/rc1.d ; rm K96init.crs
startup scripts start with S96*, shutdown scripts with K96* (see the sketch after this list)
cd /etc/rc2.d ; rm K96init.crs
cd /etc/rc3.d ; rm S96init.crs
cd /etc/rc4.d ; rm K96init.crs
cd /etc/rc5.d ; rm S96init.crs
cd /etc/rc6.d ; rm K96init.crs
cd /var/tmp
rm -r .oracle
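
A quick way to see which of the above init/run-level scripts actually exist before removing them (a sketch using standard Linux paths, not taken from this document):

ls -l /etc/init.d/init.* /etc/rc*.d/*96init.crs* 2>/dev/null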

Adding Node : log on to both of the existing cluster nodes


---------------------------------------------
edit the hosts file (/etc/hosts)
edit /etc/sysctl.conf
create user equivalence
run cluvfy (see the sketch after this list)
cd $CRS_HOME/oui/bin
./addNode.sh
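
A sketch of the pre-checks, assuming existing nodes rac1/rac2 and the new node rac3 (hypothetical names):

ssh rac3 date                                            (should not prompt for a password if user equivalence is set up)
cluvfy stage -post hwos -n rac3 -verbose                 (hardware/OS check on the new node)
cluvfy stage -pre crsinst -n rac1,rac2,rac3 -verbose     (readiness for extending Clusterware)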

log on to the new node


-----------------------
cd $CRS_HOME/bin
./racgons add_config rac3:6200
copy the oracle directory to the new node
cd $CRS_HOME/oui/bin
./addNode.sh
copy /etc/iscsi.conf from an existing cluster node to the new node
restart the iscsi service (see the sketch after this list)
copy /etc/sysconfig/rawdevices from an existing cluster node to the new node
restart the rawdevices service
netca
dbca
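
A sketch of the service restarts and a quick verification, assuming a RHEL-style system and the hypothetical new node rac3:

service iscsi restart            (as root, on the new node)
service rawdevices restart       (as root, on the new node)
olsnodes -n                      (from any node; the new node should now be listed)
crs_stat -t                      (cluster resources, including the new node's nodeapps)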

Removing a Node from a cluster : (primary node)


==================================
delete the instance
dbca
clean up ASM
srvctl stop asm -n <node>
srvctl remove asm -n <node>
remove the listener from the node
netca

remove node from database


cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={<nodename>}"
./runInstaller
deinstall products

Detaching node from cluster nodes


-----------------------------------
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={nodelist}"

racgons remove_config <cluster_node>:6200

delete the node (run these steps from the node to be deleted):


-----------------------------------------
login as root
cd $ORACLE_HOME/crs/install
./rootdelete.sh <node_to_be_deleted>
cd $CRS_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={<node_to_be_deleted>}" CRS=true -local
./runInstaller
deinstall

Log on to any one of the cluster nodes


--------------------------------------
cd $CRS_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES={clusternodelist}" CRS=true
olsnodes -n

Remove Node-Specific Interface Configuration

Run the following commands to remove node-specific interface configurations from the node to be deleted. For this example, run these commands from linux3 as the oracle user account:

$ $ORA_CRS_HOME/bin/racgons remove_config linux3:6200

$ $ORA_CRS_HOME/bin/oifcfg delif -node linux3


PROC-4: The cluster registry key to be operated on does not exist.
PRIF-11: cluster registry error
(These errors from oifcfg delif simply report that no node-specific interface configuration was stored for linux3; they can typically be ignored.)

Disable Oracle Clusterware Applications

From the node you are deleting from the cluster (linux3), run the
script $ORA_CRS_HOME/install/rootdelete.sh to disable
the Oracle Clusterware applications that are on the node. This script
should only be run once. Given the Clusterware software install is on
local disk (non-shared), make certain to use the nosharedhome
argument. The default for this script is sharedhome which prevents
you from updating the permissions of local files such that they can be
removed by the oracle user account.

Running this script will stop the CRS stack and delete the ocr.loc
file on the node to be removed. The nosharedvar option assumes
the ocr.loc file is not on a shared file system.

While logged into linux3 as the root user account, run the
following:

$ su
# cd $ORA_CRS_HOME/install
# ./rootdelete.sh local nosharedvar nosharedhome
CRS-0210: Could not find resource 'ora.linux3.LISTENER_LINUX3.lsnr'.
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'

Delete Node from Cluster and Update OCR

Upon successful completion of the rootdelete.sh script, run the rootdeletenode.sh script to delete the node (linux3) from the Oracle cluster and to update the Oracle Cluster Registry (OCR). This script should be run from a pre-existing / available node in the cluster (linux1) as the root user account:

Before executing rootdeletenode.sh, we need to know the node number associated with the node name to be deleted from the cluster. To determine the node number, run the following command as the oracle user account from linux1:

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1 1
linux2 2
linux3 3

From the listing above, the node number for linux3 is 3.

While logged into linux1 as the root user account, run the
following using the name linux3 and the node number 3:

$ su
# cd $ORA_CRS_HOME/install
# ./rootdeletenode.sh linux3,3
CRS-0210: Could not find resource 'ora.linux3.LISTENER_LINUX3.lsnr'.
CRS-0210: Could not find resource 'ora.linux3.ons'.
CRS-0210: Could not find resource 'ora.linux3.vip'.
CRS-0210: Could not find resource 'ora.linux3.gsd'.
CRS-0210: Could not find resource ora.linux3.vip.
CRS nodeapps are deleted successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 14 values from OCR.
Key SYSTEM.css.interfaces.nodelinux3 marked for deletion is not there. Ignoring.
Successfully deleted 5 keys from OCR.
Node deletion operation successful.
'linux3,3' deleted successfully

To verify that the node was successfully removed, use the following
as either the oracle or root user:

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1 1
linux2 2

Update Node List for Oracle Clusterware Software - (Remove linux3)

From the node to be deleted (linux3), run the OUI as the oracle
user account to update the inventory node list for the Oracle
Clusterware software:

$ DISPLAY=<your local workstation>:0.0; export DISPLAY

$ cd $ORA_CRS_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME
CLUSTER_NODES="" -local CRS=true
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

De-install Oracle Clusterware Software


Next, run the OUI from the node to be deleted (linux3) to de-install
the Oracle Clusterware software. Make certain that you choose the
home to be removed and not just the products under that home.

From linux3 as the oracle user account, run the following:

$ DISPLAY=<your local workstation>:0.0; export DISPLAY

$ cd $ORA_CRS_HOME/oui/bin
$ ./runInstaller

Screen Name -- Response

Welcome Screen -- Click the Installed Products button.
Inventory: Contents Tab -- Check the Oracle home to be deleted (OraCrs10g_home) and click the Remove button.
Confirmation -- Acknowledge the warning dialog by clicking Yes to remove the Oracle Clusterware software and to remove the /u01/app/crs directory.
Deinstallation Process -- A progress bar is displayed while the Oracle Clusterware software is being removed. Once this process has completed, you are returned to the "Inventory: Contents Tab" dialog. After confirming the Oracle Clusterware software (Clusterware home) was successfully removed, click Close to exit this dialog.
Welcome Screen -- Click Cancel to exit the OUI.

Update Node List for Remaining Nodes in the Cluster

Finally, from linux1 logged in as the oracle user account (and user equivalence enabled), update the Oracle Clusterware inventory node list for all nodes that will remain in the cluster:

$ DISPLAY=<your local workstation>:0.0; export DISPLAY

$ cd $ORA_CRS_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME
"CLUSTER_NODES={linux1,linux2}" CRS=true
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

Verify Node to be Deleted is no Longer a Member of the Cluster

Run the following commands to verify that the node to be deleted from the Oracle RAC cluster is no longer a member of the cluster and to verify that the Oracle Clusterware components have been successfully removed from that node.

Run the following commands from linux1 as the oracle user account:
$ srvctl status nodeapps -n linux3
PRKC-1056 : Failed to get the hostname for node linux3
PRKH-1001 : HASContext Internal Error
[OCR Error(Native: getHostName:[21])]

The error above indicates that linux3 is no longer a member of the cluster.

$ $ORA_CRS_HOME/bin/crs_stat | grep -i linux3

You should not see any output from the above command

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1 1
linux2 2

You should see the present node list without the deleted node (that is
linux1 and linux2 only).

Remove/Rename any Remaining Oracle Files from Node to be Deleted

From the node to be deleted (linux3), remove/rename any remaining Oracle files while logged in as the root user account:

# mv -f /etc/inittab.no_crs /etc/inittab
# rm -f /etc/inittab.orig
# rm -f /etc/inittab.crs

# rm -rf /etc/oracle
# rm -f /etc/oratab
# rm -f /etc/oraInst.loc
# rm -rf /etc/ORCLcluster
# rm -rf /u01/app/oracle
# rm -rf /u01/app/crs
# rm -f /usr/local/bin/coraenv
# rm -f /usr/local/bin/dbhome
# rm -f /usr/local/bin/oraenv

Finally, remove the Oracle user account and all associated UNIX
groups from linux3:

# userdel -r oracle
# groupdel oinstall
# groupdel dba
