

2 Node 10G Release 2 RAC Implementation

DOCUMENTATION FOR

<Customer Name>

[<Vendor> is submitting this documentation to <Customer Name> on the understanding that the contents would not be divulged to any third party without prior written consent from <Vendor>]

Prepared By:

Date: Oct 10th, 2009

Rev: 1.0


Table of Contents

1 REVISION HISTORY............................................................................................................... 3

2 APPROVALS.......................................................................................................................... 4

3 EXISTING INFRASTRUCTURE.............................................................................................. 5

4 TECHNICAL SOLUTION........................................................................................................ 6

5 ORACLE CLUSTERWARE INSTALLATION OF 10.2.0.1 & CREATION OF STRUCTURE DATABASE.................................................................................................................... 6
5.1 CONFIGURATION OF SHARED DISKS......................................................................................6
5.2 USER AND GROUP CREATION............................................................................................... 6
5.3 CONFIGURING SSH ON ALL CLUSTER NODES.........................................................................7
5.4 CREATION OF DISK DRIVES................................................................................................... 9
5.5 TABLESPACE MANAGEMENT PROPERTIES............................................................................10
5.6 SETTING KERNEL PARAMETERS.......................................................................................... 10
5.7 10G RELEASE 2 RAC PRE-INSTALLATION TASKS................................................................10
5.7.1 Establish Oracle environment variables:.................................................................12
5.8 USING ORACLE UNIVERSAL INSTALLER TO INSTALL ORACLE CLUSTERWARE.............................13
5.9 USING THE ORACLE UNIVERSAL INSTALLER TO INSTALL ORACLE 10.2.0.1 REAL APPLICATION
CLUSTERS BINARIES SOFTWARE................................................................................................... 30
5.10 DATABASE PATCHES APPLIED..........................................................37
5.11 STEPS FOR CONFIGURING DATABASE AND LISTENER CONFIGURATION..............................50
5.11.1 Configuration of Listener......................................................................................... 50
6 DATABASE CREATION - INDCECDS.................................................................................59
6.1 DATABASE CREATION WITH ASM........................................................................................ 60
7 CONFIGURING TAF FOR THE “INDCECDS” DATABASE................................................91
7.1 TAF-FAILOVER TESTING FOR ORACLE DATABASE AND APPLICATION IN RAC..........................92
TESTS PERFORMED AFTER MIGRATION & POST CONFIGURATION OF TAF........................................93
8 10G RELEASE 2 RAC PRODUCT DOCUMENTATION.......................................................93
8.1 WHAT IS ORACLE 10G RELEASE 2 REAL APPLICATIONS CLUSTERS?...................................93
8.2 ORACLE 10G REAL APPLICATION CLUSTERS – CACHE FUSION TECHNOLOGY.......................94
8.3 TRANSPARENT APPLICATION FAILOVER (TAF)......................................................................95
8.3.1 Failover Basics........................................................................................................ 95
8.3.2 Duration of Failover................................................................................................. 96
8.3.3 Client Failover......................................................................................................... 96
8.3.4 Transparent Application Failover............................................................................96
8.3.5 Elements Affected by Transparent Application Failover..........................................96
8.3.6 Uses of Transparent Application Failover...............................................................97
8.3.7 Database Client Processing During Failover...........................................................98
8.3.8 Transparent Application Fail over Processing During Shutdowns...........................99
8.3.9 Transparent Application Failover Restrictions.......................................................100


1 Revision History

Date Author Description Version


10th Oct 2008 Elvis Carlo Final Document 1.0


2 Approvals
Task Name Signature Date
Approved by M/s <Customer Name>


3 Existing Infrastructure
The existing infrastructure is listed below

S.No Configuration DB Servers


1 Server Model T5140
2 CPU 2 * 8 Core 1.4GHz UltraSPARC T2 Plus
3 Memory 32GB on each node
4 Storage Hitachi 6140
5 RAID concept RAID 6
6 Database version 10.2.0.3 – Enterprise Edition (Base product: - 10.2.0.1)
7 Cluster Oracle Clusterware (CRS) 10.2.0.3

8 Load balancing Through TAF


9 Redundancy availability Storage Level


4 Technical Solution
To improve database high availability and to balance the load, <Customer
Name> opted to implement a 2 node 10G Release 2 Oracle Real Application Clusters database on
Automatic Storage Management (ASM). Oracle CRS was installed on both the nodes, which is
the base layer for installing the Oracle 10G Release 2 Real Application Clusters binaries. All the
data volumes were placed on ASM to enable better I/O performance & easier database
maintenance.

IMPLEMENTATION PHASE
During the implementation phase, failover testing was done at the database level after
configuring Transparent Application Failover (TAF) at the server level. This feature enables an
effective failover to another node in case the connected node shuts down abruptly.

5 Oracle Clusterware Installation of 10.2.0.1 & Creation of Structure Database
5.1 Configuration of Shared Disks
Real Application Clusters requires that each instance be able to access a set of ASM
diskgroups on a shared disk subsystem. The Oracle instances in Real Application Clusters write
data to ASM to update the control file, server parameter file, each datafile, and each redo
log file. All instances in the RAC share these ASM diskgroups.

The Oracle instances in the RAC configuration write information to ASM defined for:

 The control file

 The spfile.ora

 Each datafile

 Each ONLINE redo log file

5.2 User and Group Creation

Create the OS groups and users as follows on both nodes.

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# useradd -g oinstall -G dba -d /oracle oracle


Set the password for the oracle user using:

# passwd oracle
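The user and group setup can be quickly confirmed on each node; a sketch with illustrative output (the numeric uid/gid values will differ on the actual systems):

# id oracle
uid=100(oracle) gid=100(oinstall) groups=101(dba)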

5.3 Configuring SSH on all cluster nodes

Before you install and use Oracle Real Application Clusters, you should configure secure shell
(SSH) for the oracle user on all cluster nodes. Oracle Universal Installer uses the ssh and scp
commands during installation to run remote commands on and copy files to the other cluster
nodes. You must configure SSH (or RSH) so that these commands do not prompt for a password.

Use the following steps to create the RSA key pair. Please note that these steps will need to be
completed on both Oracle RAC nodes in the cluster:

1. Log on as the oracle user account.


# su - oracle

2. If necessary, create the .ssh directory in the oracle user's home directory and set the
correct permissions on it:
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh

3. Enter the following command to generate an RSA key pair (public and private key) for
version 3 of the SSH protocol:
$ /usr/bin/ssh-keygen -t rsa
At the prompts:

o Accept the default location for the key files.

o Enter and confirm a pass phrase. This should be different from the oracle user
account password; however it is not a requirement i.e. you do not have to enter
any password.

This command will write the public key to the ˜/.ssh/id_rsa.pub file and the private key to
the ˜/.ssh/id_rsa file. Note that you should never distribute the private key to anyone.

4. Repeat the above steps for each Oracle RAC node in the cluster.
Now that both Oracle RAC nodes contain a public and private key pair for RSA, you will need to
create an authorized key file on one of the nodes. An authorized key file is nothing more than a
single file that contains a copy of everyone's (every node's) RSA public key. Once the authorized
key file contains all of the public keys, it is then distributed to all other nodes in the RAC cluster.
Complete the following steps on one of the nodes in the cluster to create and then distribute the
authorized key file. For the purposes of this document, indc1s209 is used.

1. First, determine if an authorized key file already exists on the node
(~/.ssh/authorized_keys). In most cases this will not exist since this document assumes you

are working with a new install. If the file doesn't exist, create it now:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh

2. In this step, use SSH to copy the content of the ~/.ssh/id_rsa.pub public key from each
Oracle RAC node in the cluster to the authorized key file just created
(~/.ssh/authorized_keys). Again, this will be done from indc1s209. You will be prompted
for the oracle user account password for both Oracle RAC nodes accessed. Notice that
when using SSH to access the node you are on (indc1s209), the first time it prompts for
the oracle user account password. The second attempt at accessing this node will prompt
for the pass phrase used to unlock the private key. For any of the remaining nodes, it will
always ask for the oracle user account password.

The following example is being run from indc1s209 and assumes a 2-node cluster, with
nodes indc1s209 and indc1s210:

$ ssh indc1s209 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys


The authenticity of host 'indc1s209 (10.200.1.55)' can't be established.
RSA key fingerprint is a5:de:ee:2a:d8:10:98:d7:ce:ec:d2:f9:2c:64:2e:e5
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'indc1s209,10.200.1.55' (RSA) to the list of known hosts.
oracle@indc1s209's password:
$ ssh indc1s210 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'indc1s210 (10.200.1.57)' can't be established.
RSA key fingerprint is d2:99:ed:a2:7b:10:6f:3e:e1:da:4a:45:d5:34:33:5b
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'indc1s210,10.200.1.57' (RSA) to the list of known hosts.
oracle@indc1s210's password:

Note: The first time you use SSH to connect to a node from a particular system, you may
see a message similar to the following:

The authenticity of host 'indc1s209 (10.200.1.55)' can't be established.


RSA key fingerprint is a5:de:ee:2a:d8:10:98:d7:ce:ec:d2:f9:2c:64:2e:e5
Are you sure you want to continue connecting (yes/no)? yes

Enter yes at the prompt to continue. You should not see this message again when you
connect from this system to the same node.

3. At this point, we have the content of the RSA public key from every node in the cluster in
the authorized key file (~/.ssh/authorized_keys) on indc1s209. We now need to copy it to
the remaining nodes in the cluster. In our two-node cluster example, the only remaining
node is indc1s210. Use the scp command to copy the authorized key file to all remaining
nodes in the cluster:

$ scp ~/.ssh/authorized_keys indc1s210:.ssh/authorized_keys


oracle@indc1s210's password:
authorized_keys 100% 1534 1.2KB/s 00:00


4. Change the permission of the authorized key file for both Oracle RAC nodes in the
cluster by logging into the node and running the following:

$ chmod 600 ~/.ssh/authorized_keys

5. At this point, if you use ssh to log in or run a command on another node, you are
prompted for the pass phrase that you specified when you created the RSA key. For
example, test the following from indc1s209:

$ ssh indc1s209 hostname


Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa':
indc1s209
$ ssh indc1s210 hostname
Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa':
indc1s210
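To avoid being prompted for the pass phrase on every remote command (for example while the OUI copies files to the partner node), ssh-agent can be used in the session from which the installer will be launched; a minimal sketch using the standard OpenSSH client tools:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

ssh-add prompts once for the pass phrase and caches the key for the lifetime of that shell session.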

5.4 Creation of Disk drives


 It is mandatory to create the disk partitions prior to Clusterware & RAC installation.
The hosts should see the shared volumes as having the same device identification. For
example, to a Solaris host, LUN 1 (Volume Name = Oracle Index) is seen as /dev/rdsk/c1t1d0s0
on host #1. The same volume-to-LUN mapping must be /dev/rdsk/c1t1d0s0 on host #2. Oracle
RAC is not set up correctly unless all shared volumes have the same device identification.

In this setup, we have 1 raw device of 256MB for OCR, 1 raw device of 256MB for the Voting Disk,
1 raw device of 256GB for database files and 1 raw device of 256GB for archive logs. The
ownership and permissions of the raw devices need to be set as follows.

# chown root:dba /dev/rdsk/c4t600A0B8000566B30000007B04ACD6E75d0s0


# chmod 660 /dev/rdsk/c4t600A0B8000566B30000007B04ACD6E75d0s0
# chown root:dba /dev/rdsk/c4t600A0B8000566B64000007F54ACD6EB0d0s0
# chmod 660 /dev/rdsk/c4t600A0B8000566B64000007F54ACD6EB0d0s0
# chown oracle:dba /dev/rdsk/c4t600A0B8000566B30000007AD4ACD6C12d0s0
# chmod 660 /dev/rdsk/c4t600A0B8000566B30000007AD4ACD6C12d0s0
# chown oracle:dba /dev/rdsk/c4t600A0B8000566B64000007F34ACD6C5Bd0s0
# chmod 660 /dev/rdsk/c4t600A0B8000566B64000007F34ACD6C5Bd0s0
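The resulting ownership and permissions can be checked with ls -lL, which follows the /dev/rdsk symbolic links to the underlying character devices; a sketch with illustrative output (device numbers and timestamp are examples only):

# ls -lL /dev/rdsk/c4t600A0B8000566B30000007B04ACD6E75d0s0
crw-rw----   1 root     dba      118,  0 Oct 10  2009 /dev/rdsk/c4t600A0B8000566B30000007B04ACD6E75d0s0

Repeat the check on both nodes for all four devices.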


5.5 Tablespace Management properties


The Oracle Database Configuration Assistant (DBCA) will create a structure database with
default tablespaces.

Note: Automatic Undo Management requires an undo tablespace per instance; therefore a
minimum of two undo tablespaces is required for this two-node cluster.

5.6 Setting Kernel parameters

Create a new resource project using the command :

# projadd oracle

Assign the oracle project to the oracle user by adding the following line to the /etc/user_attr file

oracle::::project=oracle

Use the command below to set the maximum shared memory for the project (the Solaris equivalent of SHMMAX) to 8GB

# projmod -s -K "project.max-shm-memory=(priv,8gb,deny)" oracle
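The project definition can then be verified from /etc/project; a sketch with illustrative output (the project id will differ):

# projects -l oracle
oracle
        projid : 100
        comment: ""
        users  : (none)
        groups : (none)
        attribs: project.max-shm-memory=(priv,8589934592,deny)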

5.7 10G Release 2 RAC Pre-Installation Tasks


After configuring the raw volumes, perform the following steps prior to installation as
administrator user.

 The 10G Release 2 Clusterware software must come from the Oracle media kit or be
downloaded from the Oracle “Metalink” site only.

 Create the “oracle” user & “dba” group on both nodes with equivalent privileges on the
partner node, so that the installer can update the inventory information for that node.

 On the node from which you will run the Oracle Universal Installer, set up user
equivalence by adding entries for all nodes in the cluster, including the local node.

 On both the nodes setup the hosts file with the following entries:
10.200.1.55 indc1s209
10.200.1.57 indc1s210

192.168.1.2 indc1s209-priv
192.168.1.3 indc1s210-priv


10.200.1.59 indc1s209-vip
10.200.1.60 indc1s210-vip

After setting up the “hosts” file, ensure that both the nodes are rebooted so that the new
hosts file takes effect.

 Ensure that the IP allotted to “indc1s209-vip” & “indc1s210-vip” is of the same class as
that of Public IP.

 If the IP entries are to be included in DNS, include both the “vip” entries as well as the
PUBLIC IPs. All the listeners on both the nodes will be configured on the “vip” entries.

 Execute the following commands to check whether the required prerequisites are met
prior to initiating the clusterware installation.
Go to /oracle/cluster/cluvfy & run the following command
$ runcluvfy.sh stage -pre crsinst -n indc1s209,indc1s210

Please note: an error stating that “vip” node detection failed can be ignored, as there is a
known issue in runcluvfy.sh for all IPs starting with the “192”, “10” or “172” (private) ranges.

 Determine the complete path for the raw devices or shared file systems, and set up the
voting disk and Oracle Cluster Registry partitions

o During installation, at the Cluster Configuration Storage page, you are asked to
provide paths for two files that must be shared across all nodes of the cluster,
either on a shared raw device, or a shared file system file:

o The Cluster Synchronization Services (CSS) voting disk is a partition that Oracle
Clusterware uses to verify cluster node membership and status. Provide at least
256 MB disk space for the voting disk.

o The Oracle Cluster Registry (OCR) contains cluster and database configuration
information for the RAC database and for Oracle Clusterware, including the node
list, and other information about cluster configuration and profiles. Provide at
least 256 MB disk space for the OCR.

o In addition, if you intend to use ASM, do not format the partitions that you want
to use for ASM.

o Ensure that you create at least the minimum required partitions for installation.

 Host names, private names, and virtual host names are not domain-qualified. If you
provide a domain in the address field during installation, then the OUI removes the
domain from the address.


 Private IP addresses should not be accessible as public interfaces. Using public interfaces
for Cache Fusion can cause performance problems.

 Determine your cluster name, public node names, private node names, and virtual node
names for each node in the cluster

If you install Oracle Clusterware and are not using third-party vendor clusterware, then you
are asked to provide a public node name and a private node name for each node. If you are
using third-party vendor clusterware, use the vendor documentation to complete setup of your
public and private domain addresses.

When you enter the public node name, use the primary host name of each node. In other
words, use the name displayed by the hostname command but without any portion of the
domain name that may be returned by the command.

In addition, ensure that the following are true:

o Determine a cluster name with the following characteristics:

 It must be globally unique throughout your host domain

 It must be at least one character long and less than 15 characters long

 It must consist of the same character set used for host names:
underscores (_), hyphens (-), and single-byte alphanumeric characters (a
to z, A to Z, and 0 to 9). If you use third-party vendor clusterware, then
Oracle recommends that you use the vendor cluster name

o Determine a private node name or private IP address for each node. The private
IP address is an address that is only accessible by the other nodes in this cluster.
Oracle uses private IP addresses for inter-node, or instance-to-instance Cache
Fusion traffic. Oracle recommends that you provide a name in the format
public_hostname-priv. Example: DB-priv.

o Determine a virtual host name for each node. A virtual host name is a public
node name that is used to reroute client requests sent to the node if the node is
down. Oracle uses virtual IP addresses (VIPs) for client to database connections,
so the VIP address must be publicly accessible. Oracle recommends that you
provide a name in the format public_hostname-vip. Example: DB-vip

5.7.1 Establish Oracle environment variables:


Set the following Oracle environment variables:

Environment Variable Variable values


ORACLE_HOME /oracle/product/10.2.0/db_1
ORA_CRS_HOME /oracle/product/10.2.0/crs_1
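A minimal sketch of how these could be exported in the oracle user's profile on each node; the ORACLE_SID values are assumptions based on the instance names ECDSNOD1/ECDSNOD2 used later in this document:

export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=/oracle/product/10.2.0/crs_1
export ORACLE_SID=ECDSNOD1     # ECDSNOD2 on indc1s210
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH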


5.8 Using Oracle Universal Installer to Install Oracle Clusterware
Perform the following procedures to complete phase one of the Oracle Database 10g Release 2 with Real
Application Clusters (RAC) installation, which is to install Oracle Clusterware with the Oracle Universal
Installer:
1. Log in as the “oracle” user & run the “runInstaller.sh” command from the Oracle Clusterware media
on one node only (the primary node, i.e. indc1s209); the same installation session will install the
software & configure the cluster information automatically on the partner node.
This will open the Oracle Universal Installer (OUI) Welcome page.

2. After you click Next on the Welcome page, you are prompted for the inventory location and the
dba group. On the next screen, the Specify File Locations page will allow you to accept the
displayed path name for the Oracle Clusterware products or select a different one. You may also
accept default directory and path name for the location of your Oracle Clusterware home or
browse for an alternate directory and destination. You must select a destination that exists on each
cluster node that is part of this installation. Click Next to confirm your choices.


3. Leave the source path unchanged. Modify the destination as required.


4. The installer verifies that your environment meets all of the minimum requirements for installing
and configuring the products that you have chosen to install. The results are displayed on the
Product-Specific Prerequisite Checks page. Verify and confirm the items that are flagged with
warnings and items that require manual checks. After you confirm your configuration, the OUI
proceeds to the Cluster Configuration page.

Note:
If the check identifies an existing, local CSS, you must shut down the
Oracle database and ASM instance from the Oracle home where
CSS is running. To accomplish this, run the following command, using
the existing Oracle home, in a separate window before you continue
with the installation:
<existing Oracle home>/bin/localconfig delete


5. The Cluster Configuration page contains predefined node information if the OUI detects that your
system has the Oracle9i Release 2 clusterware. Otherwise, the OUI displays the Cluster
Configuration page without predefined node information.

Provide your own cluster name if you do not wish to use the name provided by the OUI. Note that
the selected cluster name must be globally unique throughout the enterprise and its allowable
character set is the same as that for hostnames, that is, underscores (_), hyphens (-), and single-
byte alphanumeric characters (a to z, A to Z, and 0 to 9).


6. Enter a public, a virtual, and a private host name for each node. Do not include a domain qualifier
with the host names. When you enter the public host name, use the primary host name of each
node, that is, the name displayed by the hostname command. The virtual node name is the name to
be associated with the VIP for the node. The private node refers to an address that is only
accessible by the other nodes in this cluster, and which Oracle uses for Cache Fusion processing.
You should enter the private host name for each node.

Note:
You may provide the cluster configuration information in a text file
instead of entering it in the individual fields on the Cluster
Configuration page. The contents of your cluster configuration file
should be similar to the following example:
Cluster configuration file:
--------------------------------------------------
# Cluster Name
crs

# Node Information


# Public Node Name Private Node Name Virtual Host Name


indc1s209 indc1s209-priv indc1s209-vip
indc1s210 indc1s210-priv indc1s210-vip

Click Next after you have entered the cluster configuration information. This saves your entries
and opens the Specify Network Interface Usage page.

7. In the Specify Network Interface Usage page the OUI displays a list of cluster-wide interfaces.
Use the drop-down menus on this page to classify each interface as Public, Private, or Do Not
Use. The default setting for each interface is Do Not Use. You must classify at least one
interconnect as Public and one as Private. Click Next when you have made your selections to open
the Select Disk Formatting Options page.

8. On the Cluster Configuration Storage page, identify the disks that you want to use for the Oracle
Clusterware files. Enter the path of each of these disks one at a time


Notes:
The OUI page described in this step displays logical drives from which you must
make your selections.

If you are installing on a cluster with an existing cluster file system from an earlier
release of Oracle, then the OCR and voting disk will be stored in that file system. In
this case, you do not require new partitions for the OCR and voting disk, even if you
do not format a logical drive for data file storage.


9. After you click Next, the OUI checks whether the remote inventories are set. If they are not set,
then the OUI sets up the remote inventories by setting registry keys. The OUI also verifies the
permissions to enable writing to the inventory directories on the remote nodes. After completing
these actions, the OUI displays a Summary page that shows the cluster node information along
with the space requirements and availability. Verify the installation that the OUI is about to
perform and click Finish.


10. When you click Finish, the OUI installs the Oracle Clusterware software on the local node and
validates the installation again. After validating the installation, the OUI completes the Oracle
Clusterware software installation and configuration on the remote nodes.


11. Run the above scripts as mentioned on each node. After the clusterware installation, invoke
VIPCA from /oracle/product/10.2.0/crs_1/bin to configure the VIPs.
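The root scripts referred to are the standard Clusterware scripts printed by the OUI; a sketch of what is typically run as root on each node in turn (the inventory path is an assumption based on the /oracle base used in this document):

# /oracle/oraInventory/orainstRoot.sh
# /oracle/product/10.2.0/crs_1/root.sh

root.sh must be allowed to complete on one node before it is started on the next.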


5.9 Using the Oracle Universal Installer to install Oracle 10.2.0.1 Real Application
Clusters binaries software
Follow these procedures to use the Oracle Universal Installer to install the Oracle Enterprise
Edition Real Application Clusters software on top of the Clusterware installation. To install the
Oracle 10G Release 2 RAC binaries, perform the following:

 Prior to installation of 10G Release 2 RAC, check the installation of clusterware by
typing the following command:

# /oracle/product/10.2.0/crs_1/bin/crs_stat

Execute the command on both the nodes. Once the command completes successfully on
both nodes, perform the next steps.
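The same check in tabular form, with illustrative output (resource names and hosts will follow the node names of this cluster):

# /oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....209.gsd application    ONLINE    ONLINE    indc1s209
ora....209.ons application    ONLINE    ONLINE    indc1s209
ora....209.vip application    ONLINE    ONLINE    indc1s209
ora....210.gsd application    ONLINE    ONLINE    indc1s210
ora....210.ons application    ONLINE    ONLINE    indc1s210
ora....210.vip application    ONLINE    ONLINE    indc1s210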

 Login as the “oracle” user

 Go to /oracle/database & execute “runInstaller.sh” to initiate the RAC
installation.

 At the OUI Welcome screen, click Next.

 A prompt will appear for the Inventory Location (if this is the first time that OUI has
been run on this system). This is the base directory into which OUI will install files. The


Oracle Inventory definition can be found in the file /oracle/product/10.2.0/oraInst.loc.


Click OK.

 Select the installation type. Choose the Enterprise Edition option. The selection on this
screen refers to the installation operation, not the database configuration. The next screen
allows for a customized database configuration to be chosen. Click Next.

 The File Location window will appear. Do NOT change the Source field. The
Destination field defaults to the ORACLE_HOME environment variable. Click Next.


 Select the nodes to install the binaries.


 Select Software Only and click Next.


 The Summary screen will be presented. Confirm that the RAC database software will be
installed and then click Install. The OUI will install the Oracle 10.2.0.1 software on to
the local node, and then copy the software to the other nodes selected.


 Once Install is selected, the OUI will install the Oracle RAC software on to the local
node, and then copy software to the other nodes selected earlier. This will take some
time. During the installation process, the OUI does not display messages indicating that
components are being installed on other nodes - I/O activity may be the only indication
that the process is continuing.

 Run the scripts as root user as described above


5.10 Database Patches Applied


The base version of Oracle database installed is 10.2.0.1.0. The database patch-set applied on
both the servers is 10.2.0.4, an upgrade patch that takes care of critical issues, viz.
ORA-600 errors and other memory leak issues. Moreover, the required patches for Oracle RAC
were automatically installed on the “indc1s209” and “indc1s210” servers.

 Stop all the CRS services on both nodes
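For reference, the CRS stack is typically stopped as root on each node; a sketch using the clusterware home from this document (run on indc1s209 and indc1s210 and wait for the CRS daemons to stop before launching the patch installer):

# /oracle/product/10.2.0/crs_1/bin/crsctl stop crs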

 Navigate to the patch directory and invoke the runInstaller. Click Next on the
welcome screen.


 Select the Home to be patched and click Next


 Select the nodes to install the Patch.


 Click on Next once the pre-requisite check is complete


 Click Next on the summary screen to start the Patch installation for CRS


 Login as root user and run the below scripts


 Exit the installer and invoke it once again from the same location for patching the
database binaries


 Select the Home to be patched.


 Select the nodes to be patched.


 Click on Next once the pre-requisite check is complete


 Click install on the summary screen to start the patch installation.


 Post installation, run the scripts as root user as mentioned


5.11 Steps for configuring Database and Listener Configuration


5.11.1 Configuration of Listener

Prior to running the DBCA it is necessary to run the “netca” utility to configure the listener, or to
manually set up your network files. This will configure the necessary listener names and protocol
addresses, client naming methods, and net service names. The listeners are configured with the
names LISTENER_INDC1S209 and LISTENER_INDC1S210, listening on port 1521. It
should be ensured that any new listeners configured later run on a port other than the one already
configured. The configuration file is attached below:

LISTENER_INDC1S209 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = indc1s209-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.200.1.55)(PORT = 1521)(IP = FIRST))
    )
  )


LISTENER_INDC1S210 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = indc1s210-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.200.1.57)(PORT = 1521)(IP = FIRST))
    )
  )

To create the Listener, invoke the NETCA as the oracle user

 Select Cluster configuration.

 Select All Nodes


 Select Listener Configuration


 Select Add


 Select name as Listener


 Select TCP


 Select the default port 1521


 Select No
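Once NETCA completes, the new listeners can be verified on each node before moving on to database creation; a sketch:

$ lsnrctl status LISTENER_INDC1S209     (on indc1s209)
$ lsnrctl status LISTENER_INDC1S210     (on indc1s210)

At this stage the listeners will show no database services; the INDCECDS services appear through dynamic registration once the instances are created and started.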


6 Database Creation - INDCECDS


Invoke the database creation window through “dbca” & create the database with the
following initialization parameter file on both the nodes. ECDSNOD1 & ECDSNOD2
will be created on INDC1S209 & INDC1S210 respectively. The initialization
parameters are as follows:

ECDSNOD1.__db_cache_size=1191182336
ECDSNOD2.__db_cache_size=1191182336
ECDSNOD1.__java_pool_size=16777216
ECDSNOD2.__java_pool_size=16777216
ECDSNOD1.__large_pool_size=16777216
ECDSNOD2.__large_pool_size=16777216
ECDSNOD1.__shared_pool_size=352321536
ECDSNOD2.__shared_pool_size=352321536
ECDSNOD1.__streams_pool_size=0
ECDSNOD2.__streams_pool_size=0
*.audit_file_dest='/oracle/product/10.2.0/db_1/admin/INDCECDS/adump'
*.background_dump_dest='/oracle/product/10.2.0/db_1/admin/INDCECDS/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.3.0'
*.control_files='+DGDATA/INDCECDS/control01.ctl','+DGDATA/INDCECDS/control02.ctl','+DGDATA/INDCECDS/control03.ctl'
*.core_dump_dest='/oracle/product/10.2.0/db_1/admin/INDCECDS/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='INDCECDS'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ECDSNODXDB)'
ECDSNOD1.instance_number=1
ECDSNOD2.instance_number=2
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=+DGARCH/'
*.log_archive_format='ECDSNOD_%t_%s_%r.arc'
*.open_cursors=300
*.pga_aggregate_target=2548039680
*.processes=150
*.remote_listener='LISTENERS_INDCECDS'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
ECDSNOD2.thread=2
ECDSNOD1.thread=1
*.undo_management='AUTO'
ECDSNOD1.undo_tablespace='UNDOTBS1'
ECDSNOD2.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/oracle/product/10.2.0/db_1/admin/INDCECDS/udump'


6.1 Database creation with ASM

Invoke the DBCA as oracle user

Select the RAC option and click next.


Select configure Automatic Storage Management and click next


Select all nodes and click next


Enter the password for the SYS user of the ASM instance. Select create initialization parameter
file and click next.


Click on create new to create the diskgroups


Click on OK


Click on Create New to create the DGARCH disk group


Click on OK


After all the diskgroups are created, click on Finish


Click on Yes to return to the DBCA main screen


Select Oracle RAC database and click next


Select create a database and click next.


Select all and click next.


Select custom database and click next.


Enter the name for the RAC database. For each cluster database instance, the SID is
composed of a common prefix for the database and an automatically generated number for
each instance.


Leave the default options checked.


Specify the database user passwords. Here the password is given as indecds.


Select Automatic Storage Management.


Select diskgroup name as DGDATA.


Specify the diskgroup DGARCH for archive logs.


Select all required database components and click Next.


Click on Next.


Click on Exit. Oracle will now start the RAC instances on both nodes.


This completes the database creation process.
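The cluster database can then be checked from either node; a sketch with illustrative output:

$ srvctl status database -d INDCECDS
Instance ECDSNOD1 is running on node indc1s209
Instance ECDSNOD2 is running on node indc1s210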


7 Configuring TAF for the “INDCECDS” database


 The (load_balance=yes) parameter instructs Oracle Net to progress through the list of listener
addresses in a random sequence, balancing the load on the various listeners. When set to
OFF, it instructs Oracle Net to try the addresses sequentially until one succeeds. This parameter
must be correctly coded in your net service name (connect descriptor). By default, this parameter
is set to ON for DESCRIPTION_LISTs. Load balancing can be specified for an
ADDRESS_LIST, or associated with a set of ADDRESSes or a set of DESCRIPTIONs. If you use an
ADDRESS_LIST, (load_balance=yes) should be within the (ADDRESS_LIST=) portion. If
you do not use an ADDRESS_LIST, (load_balance=yes) should be within the (DESCRIPTION=)
portion.
 (failover=on) is the default for ADDRESS_LISTs, DESCRIPTION_LISTs, and sets of
DESCRIPTIONs; therefore, you do not have to specify it. This is connect-time failover;
please do not confuse it with Transparent Application Failover (TAF).
 (failover_mode=): The FAILOVER_MODE parameter must be included in the
CONNECT_DATA portion of a net service name.
 The tnsnames.ora file configured for TAF is shown below:

INDCECDS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = indc1s209-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = indc1s210-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = INDCECDS)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 5)
        (DELAY = 5)
      )
    )
  )
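Once sessions connect through this alias, the TAF settings they have picked up can be confirmed from the data dictionary; a sketch (the username filter is only an example):

SQL> select inst_id, username, failover_type, failover_method, failed_over
  2  from gv$session
  3  where username = 'SYSTEM';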


7.1 TAF-Failover testing for oracle database and application in RAC


Before proceeding with the testing, start the listeners. Check whether the “tnsnames.ora”
configuration is working by running “tnsping” against the net service name on both nodes, as below.

In “indc1s209”,

$ lsnrctl start LISTENER_INDC1S209


$ lsnrctl services LISTENER_INDC1S209
$ tnsping INDCECDS

In “indc1s210”,

$ lsnrctl start LISTENER_INDC1S210


$ lsnrctl services LISTENER_INDC1S210
$ tnsping INDCECDS

2. On “indc1s209”, connect to the instance:


$ sqlplus system/********@INDCECDS

3. From the session issue a select:

SQL> select host_name, thread# from v$instance;

HOST_NAME THREAD#

INDC1S209 1

4. On indc1s209 abort instance ECDSNOD1 as below

SQL> shutdown abort

ORACLE instance shut down.

5. Issue another select from the sqlplus session above.

SQL> select host_name, thread# from v$instance;

HOST_NAME THREAD#

INDC1S210 2

Note that the session failed over from instance 1 (ECDSNOD1) on indc1s209 to instance 2
(ECDSNOD2) on indc1s210.

6. The same test has been performed from both instances.


Tests performed after migration & post configuration of TAF

Sr.No. 1
Name of Test: The first instance failover (ECDSNOD1)
Description of Test: Connected a session to database “INDCECDS” and checked the instance it was connected to. Shut down the instance “ECDSNOD1” on indc1s209, then checked again; the session was now connected to instance “ECDSNOD2”.
Required Criteria: Users should be able to see & operate on data even when the first instance (ECDSNOD1 on the indc1s209 server) is down.
Result of Test: Test successful.

Sr.No. 2
Name of Test: The second instance failover (ECDSNOD2)
Description of Test: Connected a session to database INDCECDS and checked the instance it was connected to (ECDSNOD2). Shut down the instance ECDSNOD2 on the indc1s210 node, then checked again; the session was now connected to instance ECDSNOD1.
Required Criteria: Users should be able to see & operate on data even when one of the instances (ECDSNOD2 on the indc1s210 server) is down.
Result of Test: Test successful.

Sr.No. 3
Name of Test: Shutdown of the entire database and starting it up again in sequence
Required Criteria: The database should come up without any error after migration on both servers (indc1s209, indc1s210).
Result of Test: The database came up without any error. Test successful.

8 10G Release 2 RAC PRODUCT DOCUMENTATION


8.1 What is Oracle 10G Release 2 Real Applications Clusters?
Oracle 10G Release 2 Real Application Clusters is a computing environment that
harnesses the processing power of multiple, interconnected computers. Oracle 10G
Release 2 Real Application Clusters software and a collection of hardware known as a
"cluster" unite the processing power of each component into a single, robust
computing environment. A cluster generally comprises two or more computers, or
"nodes."
In Oracle 10G Release 2 Real Application Clusters (RAC) environments, all nodes concurrently
execute transactions against the same database. Oracle 10G Release 2 Real Application Clusters
coordinates each node's access to the shared data to provide consistency and integrity.


Oracle 10G Release 2 Real Application Clusters serves as an important component of robust high
availability solutions. A properly configured Oracle 10G Release 2 Real Application Clusters
environment can tolerate failures with minimal downtime.

Oracle 10G Release 2 Real Application Clusters is also applicable for many other system types.
For example, data warehousing applications accessing read-only data are prime candidates for
Oracle 10G Release 2 Real Application Clusters. In addition, Oracle 10G Release 2 Real
Application Clusters successfully manages increasing numbers of online transaction processing
systems as well as hybrid systems that combine the characteristics of both read-only and
read/write applications.

Harnessing the power of multiple nodes offers obvious advantages. If you divide a large task into
sub-tasks and distribute the sub-tasks among multiple nodes, you can complete the task faster
than if only one node did the work. This type of parallel processing is clearly more efficient than
sequential processing. It also provides increased performance for processing larger workloads and
for accommodating growing user populations. Oracle 10G Release 2 Real Application Clusters
can effectively scale your applications to meet increasing data processing demands. As you add
resources, Oracle 10G Release 2 Real Application Clusters can exploit them and extend their
processing powers beyond the limits of the individual components.

From a functional perspective RAC is equivalent to single-instance Oracle. What the RAC
environment does offer is significant improvements in terms of availability, scalability and
reliability.

In recent years, the requirement for highly available systems, able to scale on demand, has
fostered the development of more and more robust cluster solutions. Prior to Oracle 10G Release
2, HP and Oracle, with the combination of Oracle Parallel Server and HP Service Guard OPS
edition, provided cluster solutions that led the industry in functionality, high availability,
management and services. Now with the release of Oracle 10G Real Application Clusters (RAC)
with the new Cache Fusion architecture, based on an ultra-high bandwidth, low latency cluster
interconnect technology, RAC cluster solutions have become more scalable without the need for
data and application partitioning. The information contained in this document covers the
installation and configuration of Oracle Real Application Clusters in a typical environment: a two
node cluster running the Solaris operating system.

8.2 Oracle 10G Real Application Clusters – Cache Fusion technology


Oracle 10G cache fusion utilizes the collection of caches made available by all nodes in
the cluster to satisfy database requests. Requests for a data block are satisfied first by a
local cache, then by a remote cache before a disk read is needed. Similarly, update
operations are performed first via the local node and then the remote node caches in the
cluster, resulting in reduced disk I/O. Disk I/O operations are only done when the data
block is not available in the collective caches or when an update transaction performs a
commit operation.
Oracle 10G cache fusion thus provides Oracle users an expanded database cache for queries and
updates with reduced disk I/O synchronization which overall speeds up database operations.
However, the improved performance depends greatly on the efficiency of the inter-node message
passing mechanism, which handles the data block transfers between nodes.


The efficiency of inter-node messaging depends on three primary factors:

 The number of messages required for each synchronization sequence. Oracle


10G’s Distributed Lock Manager (DLM) coordinates the fast block transfer
between nodes with two inter-node messages and one intra-node message. If
the data is in a remote cache, an inter-node message is sent to the Lock
Manager Daemon (LMD) on the remote node. The DLM and Cache Fusion
processes then update the in-memory lock structure and send the block to the
requesting process.

 The frequency of synchronization (the less frequent the better). The cache fusion
architecture reduces the frequency of the inter-node communication by
dynamically migrating locks to a node that shows a frequent access pattern for a
particular data block. This dynamic lock allocation increases the likelihood of
local cache access thus reducing the need for inter-node communication. At a
node level, a cache fusion lock controls access to data blocks from other nodes
in the cluster.
 The latency of inter-node communication. This is a critical component in Oracle 10G
RAC as it determines the speed of data block transfer between nodes. An efficient
transfer method must utilize minimal CPU resources and support high availability as well as
highly scalable growth without bandwidth constraints.

8.3 Transparent Application Failover (TAF)

The Transparent Application Failover (TAF) feature is a runtime failover for high-availability
environments, such as Oracle 10G Release 2 Real Application Clusters and Oracle 10G Release 2
Real Application Clusters Guard. TAF fails over and reestablishes application-to-service
connections. It enables client applications to automatically reconnect to the database if the
connection fails and, optionally, resume a SELECT statement that was in progress. The
reconnection happens automatically from within the Oracle Call Interface (OCI) library. To
understand the concept and workflow of TAF, we first need to understand the failover basics
below.

8.3.1 Failover Basics


Failover requires that highly available systems have accurate instance monitoring or heartbeat
mechanisms. In addition to having this functionality for normal operations, the system must be
able to quickly and accurately synchronize resources during failover.

The process of synchronizing, or remastering, requires the graceful shutdown of the failing
system as well as an accurate assumption of control of the resources that were mastered on that
system. In Real Application Clusters, your system records resource information to remote nodes
as well as local. This makes the information needed for failover and recovery available to the
recovering instances.


8.3.2 Duration of Failover


The duration of failover includes the time a system requires to remaster system-wide resources
and the time to recover from failures. The duration of the failover process can be a relatively short
interval on certified platforms.

 For existing users, failover entails both server and client failover actions

 For new users, failover only entails the duration of server failover processing
8.3.3 Client Failover
It is important to hide system failures from database client connections. Such connections can
include application users in client server environments or middle-tier database clients in
multitiered application environments. Properly configured failover mechanisms transparently
reroute client sessions to an available node in the cluster. This capability in the Oracle database is
referred to as Transparent Application Failover
8.3.4 Transparent Application Failover
Transparent Application Failover (TAF) enables an application user to automatically
reconnect to a database if the connection fails. Active transactions roll back, but the new database
connection, which is achieved using a different node, is identical to the original. This is true
regardless of how the connection fails.
8.3.5 Elements Affected by Transparent Application Failover
There are several elements associated with active database connections. These include:

 Client/Server database connections

 Users' database sessions executing commands

 Open cursors used for fetching

 Active transactions

 Server-side program variables

Transparent Application Failover automatically restores some of these elements. For example,
during normal client/server database operations, a client maintains a connection to the database so
the client and server can communicate. If the server fails, then so does the connection. The next
time the client tries to use the connection the client issues an error. At this point, the user must log
in to the database again.

With Transparent Application Failover, however, Oracle automatically obtains a new connection
to the database. This enables users to continue working as if the original connection had never
failed. Therefore, with Transparent Application Failover, a client notices no connection loss as
long as one instance remains active to serve the application


8.3.6 Uses of Transparent Application Failover


While the ability to fail over client sessions is an important benefit of Transparent Application
Failover, there are other useful scenarios where Transparent Application Failover improves
system availability. These topics are discussed in the following subsections:

 Transactional Shutdowns

 Quiescing the Database

 Load Balancing

 Database Client Processing During Failover

 Transparent Application Failover Restrictions


8.3.6.1 Transactional Shutdowns
It is sometimes necessary to take nodes out of service for maintenance or repair; for example,
when you want to apply patch releases without interrupting service to application clients.
Transactional shutdowns facilitate shutting down selected nodes rather than an entire database.
Two transactional shutdown options are available:

 Use the TRANSACTIONAL clause of the SHUTDOWN statement to remove a node


from service so that the shutdown event is deferred until all existing transactions
are completed. In this way, client sessions can be migrated to another node of
the cluster at transaction boundaries.

 Use the TRANSACTIONAL LOCAL clause of the SHUTDOWN statement to perform


transactional shutdown on a specified local instance. You can use this statement
to prevent new transactions from starting locally, and to perform an immediate
shutdown after all local transactions have completed. With this option, you can
gracefully move all sessions from one instance to another by shutting down
selected instances transactionally.

After performing a transactional shutdown, Oracle routes newly submitted transactions to an
alternate node. An immediate shutdown is performed on the node when all existing transactions
complete.
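A minimal sketch of the two options from SQL*Plus; the LOCAL keyword restricts the shutdown to the instance the session is connected to:

SQL> shutdown transactional
SQL> shutdown transactional local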
8.3.6.2 Quiescing the Database
You may need to perform administrative tasks that require isolation from concurrent user
transactions or queries. To do this, you can use the quiesce database feature. This prevents you,
for example, from having to shut down the database and re-open it in restricted mode to perform
such tasks.

To do this, you can use the ALTER SYSTEM statement with the QUIESCE RESTRICTED
clause. The QUIESCE RESTRICTED clause enables you to perform administrative tasks in
isolation from concurrent user transactions or queries.
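For example, an administrative window could be bracketed as follows (the Database Resource Manager must be active for the quiesce to take effect):

SQL> alter system quiesce restricted;
-- perform the administrative tasks
SQL> alter system unquiesce;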


Note:
You cannot open the database on one instance if the database is being
quiesced on another node. In other words, if you issued the ALTER SYSTEM
QUIESCE RESTRICTED statement but it is not finished processing, you
cannot open the database. Nor can you open the database if it is already in a
quiesced state.

8.3.6.3 Load Balancing


A database is available when it processes transactions in a timely manner. When the load exceeds
a node's capacity, client transaction response times are adversely affected and the database
availability is compromised. It then becomes important to manually migrate client sessions to a
less heavily loaded node to maintain response times and application availability.

In Real Application Clusters, the Transparent Network Substrate (TNS) listener files provide
automated load balancing across nodes in both shared server and dedicated server configurations.
Because the parameters that control cross-instance registration are also dynamic, Real
Application Clusters' load balancing feature automatically adjusts for cluster configuration
changes. For example, if you add a node to your cluster database, then Oracle updates all the
listener files in the cluster with the new node's listener information.
8.3.7 Database Client Processing During Failover
Failover processing for query clients is different from the failover processing for Data
Manipulation Language (DML) clients. The important issue during failover operations in either case is
that the failure is masked from existing client connections as much as possible. The following
subsections describe both types of failover processing.
8.3.7.1 Query Clients
At failover, in-progress queries are reissued and processed from the beginning. This might extend
the duration of the next query if the original query required longer to complete. With Transparent
Application Failover (TAF), the failure is masked for query clients with an increased response
time being the only issue affecting the client. If the client query can be satisfied with data in the
buffer cache of the surviving node to which the client reconnected, then the increased response
time is minimal. Using TAF's PRECONNECT method eliminates the need to reconnect to a
surviving instance and thus further minimizes response time. However, PRECONNECT allocates
resources awaiting the failover event.

After failover, server-side recovery must complete before access to the datafiles is allowed. The
client transaction experiences a system pause until server-side recovery completes, if server-side
recovery has not already completed.

You can also use a callback function through an OCI call to notify clients of the failover so that
the clients do not misinterpret the delay for a failure. This prevents the clients from manually
attempting to reestablish connections.
8.3.7.2 Data Manipulation Language Clients
Data Manipulation Language (DML) database clients perform INSERT, UPDATE, and
DELETE operations. Oracle handles certain errors and performs a reconnect when those errors
occur, provided the application includes code to trap the errors and resubmit the statements.


Without this application code, INSERT, UPDATE, and DELETE operations on the failed
instance return an un-handled Oracle error code. Upon re-submission, Oracle routes the client
connections to a surviving instance. The client transaction then stops only momentarily until
server-side recovery completes.
8.3.8 Transparent Application Fail over Processing During Shutdowns
Queries that cross the network after shutdown processing completes will fail over. However,
Oracle returns an error for queries that are in progress during shutdowns. Therefore, TAF only
operates when the operating system returns a network error and the instance is completely down.

Applications that use TAF for transactional shutdown must be written to process the error
ORA-01033 "ORACLE initialization or shutdown in progress". In the event of a failure, an
instance will return error ORA-01033 once shutdown processing begins. Such applications need
to periodically retry the failed operation, even when Oracle reports multiple ORA-01033 errors.
When shutdown processing completes, TAF recognizes the failure of the network connection to the
instance and restores the connection to an available instance.

Connection load balancing improves connection performance by balancing the number of active
connections among multiple dispatchers. In single-instance Oracle environments, the listener
selects the least loaded dispatcher to manage incoming client requests. In Real Application
Clusters environments, connection load balancing also has the capability of balancing the number
of active connections among multiple instances.

Due to dynamic service registration, a listener is always aware of all of the instances and
dispatchers regardless of their locations. Depending on the load information, a listener determines
to which instance and to which dispatcher to send incoming client requests if you are using the
shared server configuration.

In shared server configurations, listeners select dispatchers using the following criteria in the
order shown:

1. Least loaded node

2. Least loaded instance

3. Least loaded dispatcher for that instance

In dedicated server configurations, listeners select instances in the following order:

1. Least loaded node

2. Least loaded instance

If a database service has multiple instances on multiple nodes, then the listener chooses the least
loaded instance on the least loaded node. If you have configured the shared server, then the least
loaded dispatcher of the selected instance is chosen.
8.3.9 Transparent Application Failover Restrictions
When a connection fails, you might experience the following:


 All PL/SQL package states on the server are lost at failover

 ALTER SESSION statements are lost

 If failover occurs when a transaction is in progress, then each subsequent call


causes an error message until the user issues an OCITransRollback call. Then
Oracle issues an Oracle Call Interface (OCI) success message. Be sure to check
this message to see if you must perform additional operations.

 Oracle fails over the database connection and if TYPE=SELECT in the


FAILOVER_MODE section of the service name description, Oracle also attempts
to fail over the query.
 Continuing work on failed-over cursors can result in an error message
