
Oracle9i Database: Real Application Clusters on Linux

Student Guide

D16335GC10
Edition 1.0
February 2004
D38296
Author
Jim Womack

Technical Contributors and Reviewers
Fawzi Alswaimil
Harald Van Breederode
Jack Cai
Michael Cebulla
Dairy Chan
Robert Gasz
Steven George
Joel Goodman
Scott Heisey
Tamas Kerepes
Kim Kirschenman
Michael Moeller
Jorgen Quaade
James Spiller
John Watson

Publisher
Joseph Fernandez

Copyright © 2004, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is provided under a license agreement containing restrictions on use and disclosure and is also protected by copyright law. Reverse engineering of the software is prohibited.

If this documentation is delivered to a U.S. Government Agency of the Department of Defense, then it is delivered with Restricted Rights and the following legend is applicable:

Restricted Rights Legend

Use, duplication or disclosure by the Government is subject to restrictions for commercial computer software and shall be deemed to be Restricted Rights software under Federal law, as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in Technical Data and Computer Software (October 1988).

This material or any portion of it may not be copied in any form or by any means without the express prior written permission of Oracle Corporation. Any other copying is a violation of copyright law and may result in civil and/or criminal penalties.

If this documentation is delivered to a U.S. Government Agency not within the Department of Defense, then it is delivered with "Restricted Rights," as defined in FAR 52.227-14, Rights in Data-General, including Alternate III (June 1987).

The information in this document is subject to change without notice. If you find any problems in the documentation, please report them in writing to Education Products, Oracle Corporation, 500 Oracle Parkway, Box SB-6, Redwood Shores, CA 94065. Oracle Corporation does not warrant that this document is error-free.

All references to Oracle and Oracle products are trademarks or registered trademarks of Oracle Corporation.

All other products or company names are used for identification purposes only, and may be trademarks of their respective owners.

Contents
Preface
1 Oracle Real Application Clusters on Linux: Overview
Objectives 1-2
What Is a Cluster? 1-3
Cluster Hardware Components 1-4
Oracle9i Real Application Clusters 1-5
Why Implement RAC? 1-6
Scalability Considerations 1-7
Linux RAC Architecture 1-9
RAC on Linux Storage 1-10
Oracle Cluster File System 1-11
OCFS Features 1-12
Cluster Management on Linux 1-13
RAC/Linux Hardware Compatibility Matrix 1-14
Oracle/Linux Compatibility Matrix 1-15
Summary 1-16

2 Preparing the Operating System


Objectives 2-2
Installing Oracle9i RAC on Linux 2-3
Verifying the Linux Environment 2-4
Viewing Resource Use 2-8
Oracle Preinstallation Tasks 2-9
Oracle Environment Variables 2-10

Asynchronous I/O 2-11
Enabling Asynchronous I/O 2-12
Downloading OCFS 2-13
Installing the RPM Packages 2-14
Starting ocfstool 2-15
Generating the ocfs.conf File 2-16
Loading OCFS at Startup 2-17
Preparing the Disks 2-18
Creating Extended Partitions 2-19
The OCFS Format Window 2-21
OCFS Command-Line Interface 2-22
Alternate OCFS Mounting Method 2-24
System Parameter Configuration for OCFS 2-25
Swap Space Configuration 2-26
Red Hat Network Adapter Configuration 2-27
UnitedLinux Network Adapter Configuration 2-28
Known Limitations and Requirements 2-29
Summary 2-30
3 Oracle Cluster Management System
Objectives 3-2
Linux Cluster Management Software 3-3
OCMS 3-4
The Hangcheck-Timer 3-5
The Node Monitor (NM) 3-6
The Cluster Monitor 3-7
Starting OCMS 3-8
The Quorum Disk 3-9
Configuring the User Environment 3-10
Starting the Installer 3-11
Specifying Inventory Location 3-12
File Locations 3-13
Available Products 3-14
Node Information 3-15
Interconnect Information 3-16
Watchdog Parameter 3-17
Quorum Disk 3-18
9.2.0.1.0 Summary Window 3-19
Installation Progress 3-20
End of Installation 3-21
The Hangcheck-Timer RPM 3-22
Hangcheck Settings 3-24
The Oracle 9.2.0.2 Patch Set 3-25
9.2.0.4.0 Cluster Manager Patch 3-26
Node Selection 3-27

Node Information 3-28
Interconnect Information 3-29
Watchdog Parameter 3-30
Quorum Disk 3-31
9.2.0.4.0 Summary Window 3-32
Starting Cluster Manager 3-33
Summary 3-34

4 Installing Oracle on Linux
Objectives 4-2
Starting the Installation 4-3
Choose the Target Node 4-4
File Locations 4-5
Product Selection 4-6
Installation Type 4-7
Product Components 4-8
Component Locations 4-9
Shared Configuration File 4-10
Operating System Groups 4-11
OMS Repository 4-12
Create Database Options 4-13
Installation Summary 4-14
Installation Progress 4-15
The root.sh Script 4-16
Net Configuration Assistant 4-17
Enterprise Manager Configuration Assistant (EMCA) 4-18
Installer Message 4-19
End of Installation 4-20
Updating Universal Installer 4-21
The Oracle 9.2.0.4 Patch Set 4-22
Installing the 9.2.0.4 Patch Set 4-23
Node Selection 4-24
Finishing Up 4-25
Summary 4-26

5 Building the Database


Objectives 5-2
Starting DBCA 5-3
Creating a Database 5-4
Node Selection 5-5
Database Templates 5-6
Database Identification 5-7
Database Features and Example Schemas 5-8
Standard Database Features 5-9

Database Features 5-10
Database Connections 5-11
Initialization Parameters 5-12
File Locations 5-13
Database Storage 5-14
Control File Specifications 5-15
Tablespaces 5-16
Redo Log Groups 5-17
DBCA Summary 5-18
Database Creation Progress 5-19
Database Passwords 5-20
Remote Password File 5-21
Summary 5-22

6 Managing RAC on Linux
Objectives 6-2
Group Services Management 6-3
Server Control Utility 6-4
SRVCTL Command Syntax 6-5
SRVCTL Cluster Database Configuration Tasks 6-6
Adding and Deleting Databases 6-7
Adding and Deleting Instances 6-8
SRVCTL Cluster Database Tasks 6-9
Starting Databases and Instances 6-10
Stopping Databases and Instances 6-12
Inspecting Status of Cluster Database 6-13
Inspecting Database Configuration Information 6-14
Parameter Files in Cluster Databases 6-15
Creating and Managing Server Parameter File 6-16
Parameter File Search Order 6-17
Enterprise Manager and Cluster Databases 6-18
Displaying Objects in the Navigator Pane 6-19
Starting a Cluster Database 6-20
Stopping a Cluster Database 6-21
Viewing Cluster Database Status 6-22
Instance Management 6-23
Management Menu 6-24
Storage Management 6-25
Performance Manager and RAC 6-28
Monitoring RAC 6-29
Summary 6-31

7 Advanced Deployment Topics


Objectives 7-2

Adding New Nodes 7-3
Adding Log Files, and Enabling and Disabling Threads 7-4
Allocating Rollback Segments 7-5
Adding an Instance with DBCA 7-6
Choosing a Cluster Database 7-8
Instance Name 7-9
Redo Log Groups 7-10
Confirming Instance Creation 7-11
Instance Creation Progress 7-12
Using Raw Devices 7-13
Transparent Application Failover 7-14
Failover Mode Options 7-15
Failover Types 7-16
Failover Methods 7-17
TAF Configuration: Example 7-18
Connection Load Balancing 7-20
Service and Instance Names 7-21
Adaptive Parallel Query 7-22
Monitoring Parallel Query 7-23
Summary 7-24

Appendix A
Appendix B

Preface

Profile

Before You Begin This Course


Before you begin this course, you should have the following qualifications:
• Thorough knowledge of Oracle9i Database administration
• Working experience with Linux or Unix
Prerequisites
• Oracle9i Real Application Clusters (D12837GC10)
• Oracle9i Database Release 2: Real Application Clusters New Features (D14342GC10)
How This Course Is Organized
Oracle9i Database: Real Application Clusters on Linux is an instructor-led course featuring lecture and
hands-on exercises. Online demonstrations and written practice sessions reinforce the concepts and skills
introduced.

Typographic Conventions
Typographic Conventions in Text

Convention: Bold italic
Element: Glossary term (if there is a glossary)
Example: The algorithm inserts the new key.

Convention: Caps and lowercase
Element: Buttons, check boxes, triggers, windows
Examples: Click the Executable button.
Select the Can’t Delete Card check box.
Assign a When-Validate-Item trigger to the ORD block.
Open the Master Schedule window.

Convention: Courier new, case sensitive (default is lowercase)
Element: Code output, directory names, filenames, passwords, pathnames, URLs, user input, usernames
Examples: Code output: debug.set ('I', 300);
Directory: bin (DOS), $FMHOME (UNIX)
Filename: Locate the init.ora file.
Password: Use tiger as your password.
Pathname: Open c:\my_docs\projects
URL: Go to http://www.oracle.com
User input: Enter 300
Username: Log on as scott

Convention: Initial cap
Element: Graphics labels (unless the term is a proper noun)
Example: Customer address (but Oracle Payables)

Convention: Italic
Element: Emphasized words and phrases, titles of books and courses, variables
Examples: Do not save changes to the database.
For further information, see Oracle7 Server SQL Language Reference Manual.
Enter [email protected], where user_id is the name of the user.

Convention: Quotation marks
Element: Interface elements with long names that have only initial caps; lesson and chapter titles in cross-references
Examples: Select “Include a reusable module component” and click Finish.
This subject is covered in Unit II, Lesson 3, “Working with Objects.”

Convention: Uppercase
Element: SQL column names, commands, functions, schemas, table names
Example: Use the SELECT command to view information stored in the LAST_NAME column of the EMP table.

Convention: Arrow
Element: Menu paths
Example: Select File > Save.

Convention: Brackets
Element: Key names
Example: Press [Enter].

Convention: Commas
Element: Key sequences
Example: Press and release keys one at a time: [Alternate], [F], [D]

Convention: Plus signs
Element: Key combinations
Example: Press and hold these keys simultaneously: [Ctrl]+[Alt]+[Del]

Typographic Conventions in Code

Convention: Caps and lowercase
Element: Oracle Forms triggers
Example: When-Validate-Item

Convention: Lowercase
Elements: Column names, table names; passwords; PL/SQL objects
Examples: SELECT last_name FROM s_emp;
DROP USER scott IDENTIFIED BY tiger;
OG_ACTIVATE_LAYER (OG_GET_LAYER ('prod_pie_layer'))

Convention: Lowercase italic
Element: Syntax variables
Example: CREATE ROLE role

Convention: Uppercase
Element: SQL commands and functions
Example: SELECT userid FROM emp;
Oracle Real Application Clusters
on Linux: Overview

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do the following:
• Discuss the necessary Real Application Clusters
(RAC) components on Linux
• Choose the proper Oracle version to use
• Identify the supported Linux vendors and
revisions
• List the supported Intel 32-bit hardware platforms

1-2 Copyright © 2004, Oracle. All rights reserved.



What Is a Cluster?

• Interconnected nodes
• Cluster software
– Hidden structure
• Shared disks

[Slide graphic: cluster nodes connected by an interconnect, sharing disks]

1-3 Copyright © 2004, Oracle. All rights reserved.

What Is a Cluster?
A cluster consists of two or more independent, but interconnected servers. Several hardware vendors have provided cluster capability over the years to meet a variety of needs. Some clusters were only intended to provide high availability by allowing work to be transferred to a secondary node if the active node failed. Others were designed to provide scalability by allowing user connections or work to be distributed across the nodes.
Another common feature of a cluster is that it should appear to an application as a single server. Similarly, management of the cluster should be as similar to the management of a single server as possible. Cluster management software helps provide this transparency.
In order for the nodes to act as if they were a single server, you must store files in such a way that they can be found by the specific node that needs them. There are several cluster topologies that address the data access issue, each dependent on the primary goals of the cluster designer.


Cluster Hardware Components

• Nodes
• Interconnect
• Shared disk subsystem

1-4 Copyright © 2004, Oracle. All rights reserved.



Oracle9i Real Application Clusters

• Database with instances on separate nodes


• Physical or logical access to each database file
• Software controlled data access

[Slide graphic: an instance on each node accessing the same database files]

1-5 Copyright © 2004, Oracle. All rights reserved.

What Is Oracle9i Real Application Clusters?
Real Application Clusters (RAC) is an Oracle9i database software option that you can use to take advantage of clustered hardware by running multiple instances against a database. The database files are stored on disks that are either physically or logically connected to each node so that every active instance can read from or write to them.
The RAC software manages data access so that the changes are coordinated between the instances and each instance uses a consistent image of the database. The cluster interconnect enables instances to pass coordination information and data images between each other.
Oracle9i RAC replaces clustered database options that were available in earlier releases. It offers transparent scalability, high availability with minimal downtime following an instance failure, and centralized management of the database and its instances.


Why Implement RAC?

Implementing RAC:
• Enables systems to scale up by increasing
throughput
• Increases performance by speeding up database
operations
• Provides higher availability
• Provides support for a greater number of users

1-6 Copyright © 2004, Oracle. All rights reserved.

Why Implement RAC?
Increased Throughput
Parallel processing breaks a large task into smaller subtasks that can be performed concurrently. With tasks that grow larger over time, a parallel system that also grows, or “scales up,” can maintain a constant time for completing the same task.
Increased Performance
For a given task, a parallel system that can scale up improves response time for completing the same task. For decision support systems (DSS) applications and parallel query, parallel processing decreases response time. For online transaction processing (OLTP) applications, speedup cannot be expected because of the overhead of synchronization.
Higher Availability
Because each node that runs in the parallel system is isolated from other nodes, a single node failure or crash should not cause other nodes to fail. This enables other instances in the parallel server environment to run normally. This also depends on the failover capabilities of the operating system and the fault tolerance of the distributed cluster software.
Support for a Greater Number of Users
Because each node has its own set of resources, such as memory, CPU, and so on, each node can support several users. As nodes are added to the system, more users can also be added, thereby enabling the system to continue to scale up.


Scalability Considerations

• Hardware: Disk I/O


• Internode communication: High bandwidth and
low latency
• Operating system: Number of CPUs (SMP)
• Locking: Concurrent lock requests
• Database: Design
• Application: Design

1-7 Copyright © 2004, Oracle. All rights reserved.

Scalability Considerations
It is important to remember that if any of the following areas are not scalable, no matter how scalable the other areas are, then parallel cluster processing may not be successful:
• System scalability: High bandwidth and low latency offer maximum scalability. A high amount of remote I/O may prevent system scalability, because remote I/O is much slower than local I/O. Bandwidth of the communication interface is the total size of messages that can be sent per second. Latency of the communication interface is the time it takes to place a message on the interconnect. It indicates the number of messages that can be put on the interconnect per unit of time.
• Operating system: Nodes with multiple CPUs and methods of synchronization in the operating system can determine how well the system scales. Symmetric multiprocessing (SMP) can process multiple requests to resources concurrently.
• Locking system: The scalability of the system that is used to handle locks of global resources across the nodes determines the number of concurrent requests that can be handled at one time and the number of local lock requests that can be handled concurrently.
• Database scalability: Database scalability depends on how well the database is designed, such as how the data files are arranged and how well objects are partitioned.


Scalability Considerations (continued)
• Scalability of the application: Application design is one of the keys to taking advantage of
the other elements of scalability. Regardless of how well the hardware and database scale,
if the application does not scale, then parallel processing will not work as desired.



Linux RAC Architecture

• Hardware
– Intel-based hardware
– External shared SCSI or Fiber Channel disks
– Interconnect by using NIC
• Operating system
– Red Hat 7.1, Red Hat 2.1 and 3.0 Advanced Server
– SuSE 7.2 and SuSE SLES7
– UnitedLinux 1.0
• Oracle software
– Oracle9i Enterprise database
– Oracle Cluster File System
– Oracle Cluster Management System

1-9 Copyright © 2004, Oracle. All rights reserved.

Linux RAC Architecture
To successfully configure and run Oracle RAC on Linux, you must observe the following requirements:
• At least two 32-bit Pentium III Intel servers (or nodes)
• A separate and dedicated intracluster network among the nodes with network interface cards (NIC). If the cluster has more than two nodes, then a switch or a high-speed hub in the intracluster network would be necessary.


RAC on Linux Storage

• Storage options for RAC on Linux:


– Oracle Cluster File System
– Raw devices named /dev/raw[1-255]
- Up to 255 raw devices can be addressed.
- The tool that is used to set up and query raw
devices is raw.
• Currently, Linux has no cluster file system.
– SuSE has a Logical Volume Manager (LVM).

1-10 Copyright © 2004, Oracle. All rights reserved.

RAC on Linux Storage
Regular UNIX file system I/O routines do not support simultaneous remote access, which is required by RAC instances. Raw devices have been the standard for RAC on the UNIX platform because they bypass the OS file handling function calls, such as iget(), fopen(), fclose(), and so on. However, the disadvantage of using raw devices is the difficulty in managing a very large number of raw disk devices. This has been addressed by the use of volume managers like Veritas Volume Manager. These volume managers work very well but they tend to be very expensive.
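For illustration only (the device and partition names are examples and vary by distribution; some distributions name the devices /dev/raw1 through /dev/raw255, others /dev/raw/rawN), a raw device binding can be created and then checked with the raw tool:
# raw /dev/raw/raw1 /dev/sdd1
# raw -qa
Bindings made this way do not survive a reboot, so they are normally re-created by a startup script.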


Oracle Cluster File System

• Is a shared file system that is designed specifically for Oracle RAC
• Eliminates the need for database files to be linked
to logical drives
• Volumes can span one shared disk or multiple
shared disks
• Guarantees consistency of metadata across nodes
in a cluster

1-11 Copyright © 2004, Oracle. All rights reserved.

Oracle Cluster File System
Oracle Cluster File System (OCFS) is a shared file system that is designed specifically for Oracle RAC. OCFS eliminates the requirement for Oracle database files to be linked to logical drives or raw devices. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.
The Oracle Cluster File System:
• Is extensible without interrupting availability. Oracle homes and data files that are stored on the OCFS can be extended dynamically.
• Takes full advantage of RAID volumes and storage area networks (SANs)
• Provides uniform accessibility to archive logs in the event of physical node failures
• Guarantees, when applying Oracle patches, that the updated Oracle home is visible to all nodes in the cluster
• Guarantees consistency of metadata across nodes in a cluster


OCFS Features

• Node-specific files and directories


• Unique clustername integrity
– Allows a hardware cluster to be segregated into
logical software clusters
– Simplifies storage area network management
• Automatic configuration of new nodes

1-12 Copyright © 2004, Oracle. All rights reserved.

OCFS Features
Node-Specific Files and Directories
OCFS supports node-specific files and directories, which are also known as Content Dependent Symbolic Links (CDSL). This allows nodes in a cluster to see different views of the same files and directories although they have the same pathname on OCFS. This feature supports products that are installed on the Oracle home (like Oracle Intelligent Agent) that need to have the same filename on different nodes but require a private copy on each node because node-specific information might be stored in these files.
Unique Clustername Integrity
OCFS associates a unique clustername with an OCFS volume. The clustername is automatically selected from the Cluster Manager registry and, if a valid nondefault cluster name is present, then any volume that is formatted from this node is available to nodes with the same clustername as this node. The ocfsutil command provides a way to change the clustername for a volume to another clustername or no clustername, which makes the volume visible to all nodes in the cluster. Clustername allows a hardware cluster to be segregated into logical software clusters from a storage viewpoint. This is important for supporting a storage area network (SAN).
When new nodes are added to an existing cluster, they automatically have access to the OCFS volume.


Cluster Management on Linux

• Oracle Cluster Management System (OCMS): oracm maintains both node status view and Oracle instance status view.
• The hangcheck thread driver monitors oracm and reconciles with the hangcheck-timer at defined intervals.
• The timer resets the node if a new thread is not started within a specified time.
[Slide diagram: the Oracle instance, oracm, and the hangcheck thread driver in user mode; the hangcheck-timer in kernel mode]

1-13 Copyright © 2004, Oracle. All rights reserved.

Cluster Management on Linux
In contrast to other UNIX platforms, RAC on Linux does not rely on a cluster software layer that is supplied by the system vendor. OCMS is included with Oracle9i for Linux.
OCMS consists of the following components:
• Hangcheck thread driver
• Cluster manager (oracm)
OCMS resides above the operating system and provides the clustering that is needed by RAC. OCMS also provides cluster membership services, global view of clusters, node monitoring, and cluster reconfiguration as needed. The binaries, logs, and configuration files can be found in $ORACLE_HOME/oracm/.
In Oracle Release 9.2.0.2, Watchdog and the Watchdog timer have been replaced by the hangcheck thread driver and the hangcheck-timer, respectively. The hangcheck thread driver starts a thread with a timeout value that is controlled by the hangcheck_margin parameter. If the thread is not scheduled within that timeout value, then the machine is restarted. The default value for the parameter is 60 seconds.
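As a sketch only (the module parameters and recommended settings are covered with the OCMS installation later in this course; the values here are illustrative, not recommendations), the hangcheck-timer kernel module is loaded with its tick and margin parameters, and its presence can be verified:
# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# lsmod | grep hangcheck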


RAC/Linux Hardware Compatibility Matrix

Technology Category: Server/Processor Architecture
Technology: 32-bit Intel Architecture; Pentium III or IV, including Xeon based systems
Exclusions/Limitations/Notes: (none)

Technology Category: Network Interconnect Technologies
Technology: 100 Mbps or Gigabit NICs and Switches
Exclusions/Limitations/Notes:
• Crossover cables only supported on two nodes
• Infiniband not currently supported

Technology Category: Storage Technologies
Technology:
• Fiber Channel Switched Fabric adhering to ANSI FC-FS specs
• Fiber Channel Arbitrated Loop adhering to ANSI FC-A specs
• Any SCSI product that is supported by the host and storage device
• All arrays of JBOD (Just a Bunch of Disks) devices
Exclusions/Limitations/Notes:
Fiber Channel
• Switch required for greater than two nodes
SCSI
• Only two nodes supported
• iSCSI not supported
• Infiniband not supported

1-14 Copyright © 2004, Oracle. All rights reserved.

RAC/Linux Hardware Compatibility Matrix
Oracle Corporation supports the Oracle software on clusters that comprise RAC-compatible technologies and certified software combinations. Consult your hardware and clusterware vendor because not all vendors may choose to support their hardware or clusterware in every possible cluster combination. Oracle Corporation does not provide hardware certification or compliance; this is still the responsibility of the hardware vendor.


Oracle/Linux Compatibility Matrix

Operating System                    Products   Certified With                 Status
Red Hat 2.1, 3.0 Advanced Server    9.2        Oracle OSD 9.2 Clusterware     Certified
SuSE SLES7                          9.2        Oracle OSD 9.2 Clusterware     Certified
Red Hat 2.1 Advanced Server         9.0.1      Oracle OSD 9.0.1 Clusterware   Certified
Red Hat 7.1                         9.0.1      Oracle OSD 9.0.1 Clusterware   Certified
SuSE 7.1 & 7.2                      9.0.1      Oracle OSD 9.0.1 Clusterware   Certified
SuSE SLES7                          9.0.1      Oracle OSD 9.0.1 Clusterware   Certified
UnitedLinux 1.0                     9.2        Oracle OSD 9.2 Clusterware     Certified

1-15 Copyright © 2004, Oracle. All rights reserved.

Linux Compatibility
Oracle Corporation supports Red Hat Linux Advanced Server on any platform that Red Hat certifies. It is a requirement that the operating system binaries have not been modified or relinked. As can be seen from the compatibility matrix, Oracle Corporation is also committed to the SuSE Linux platform, but note that there are no plans to certify RAC 9.2 on any versions of SuSE earlier than SLES7.
Oracle Corporation has also worked with UnitedLinux to confirm compatibility of Oracle9i products (including RAC) on UnitedLinux 1.0.


Summary

In this lesson, you should have learned how to:


• Discuss the necessary RAC components on Linux
• Choose the proper Oracle version to use
• Identify the supported Linux vendors and
revisions
• List the supported Intel 32-bit hardware platforms

1-16 Copyright © 2004, Oracle. All rights reserved.



Preparing the Operating System

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do the following:
• Set Linux kernel parameters as required by the
cluster database
• Install and configure Oracle Cluster File System
• Configure the network and interconnect interfaces

2-2 Copyright © 2004, Oracle. All rights reserved.



Installing Oracle9i RAC on Linux

• Set the kernel parameters correctly.


• Create the oracle user and the dba and oinstall groups.
• Determine the storage methodology:
– OCFS
– Raw devices
• Install and configure Oracle Cluster Management
System.
• Configure the network and interconnect
interfaces.

2-3 Copyright © 2004, Oracle. All rights reserved.



Verifying the Linux Environment

• Verify the Linux version in use:


$ uname -rv
2.4.9-e.3smp #1 SMP Fri May 3 16:48:54 EDT 2002

• Verify the host names and IP addresses:


$ cat /etc/hosts
# Do not remove the following line or some programs that
require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
138.1.162.61 git-raclin01.us.oracle.com raclin01
138.1.162.62 git-raclin02.us.oracle.com raclin02
# Interconnect addresses
192.168.1.2 raclin01_IC racic1
192.168.1.3 raclin02_IC racic2

2-4 Copyright © 2004, Oracle. All rights reserved.

Verifying the Linux Environment
Before loading any software, determine whether the system is ready. First, verify that the Linux version is compatible with the Oracle version that you intend to use. Use the uname command to get this information.
Verify that all the systems that comprise the cluster have entries in the /etc/hosts file. There should also be entries for the network cards that are used for the interconnects.
$ cat /etc/hosts
127.0.0.1      localhost.localdomain localhost
138.1.162.61   git-raclin01.us.oracle.com   # node 1
138.1.162.62   git-raclin02.us.oracle.com   # node 2
192.168.1.2    raclin01_IC racic1           # interconnect node 1
192.168.1.3    raclin02_IC racic2           # interconnect node 2


Verifying the Linux Environment

Recommended values for interprocess communication:


Parameter      Definition                                               Value
rmem_default   The setting in bytes of the socket receive buffer        65535
rmem_max       The maximum socket receive buffer size in bytes          65535
wmem_default   The default setting in bytes of the socket send buffer   65535
wmem_max       The maximum socket send buffer size in bytes             65535

2-5 Copyright © 2004, Oracle. All rights reserved.

Interprocess Communication Settings
Interprocess communication is an important issue for RAC because cache fusion transfers data between instances by using this mechanism. Thus, networking parameters are important for RAC databases. The values in the table, which is shown on the slide, are the default on most distributions and should be acceptable for most configurations. To see these values, run the following command:
$ cat /proc/sys/net/core/rmem_default
65535
Use vi or the echo command to change the value:
echo 65535 > /proc/sys/net/core/rmem_default
This method is not persistent, so this must be done each time the system starts. Some distributions such as Red Hat have a persistent method for setting these parameters during startup. You can edit the /etc/sysctl.conf file to make the settings more permanent.
vi /etc/sysctl.conf
net.core.rmem_default = 65535
net.core.rmem_max = 65535
net.core.wmem_default = 65535
net.core.wmem_max = 65535
...
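If the sysctl utility is available on your distribution, the entries placed in /etc/sysctl.conf can also be applied immediately, without waiting for a restart, for example:
# sysctl -p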


Verifying the Linux Environment

• Verify the shared memory kernel parameters:


# cat /etc/sysctl.conf
...
# Oracle shared memory parameters
kernel.shmmax = 2147483648
kernel.shmmni = 1024
...
– For UnitedLinux, view /etc/sysconfig/oracle
• Verify the semaphore kernel parameters:
# cat /proc/sys/kernel/sem
250 32000 100 128

2-6 Copyright © 2004, Oracle. All rights reserved.

Shared Memory and Semaphores
Several shared memory parameters must be set to enable the Oracle database to function properly. These parameters are best set in the /etc/sysctl.conf file.
• SHMMAX: The maximum size of a single shared memory segment. This should be slightly larger than the largest anticipated size of the SGA, if possible.
• SHMMNI: The number of shared memory identifiers
Semaphore parameters that can be manually set include:
• SEMMNS: The number of semaphores in the system
• SEMMNI: The number of semaphore set identifiers that controls the number of semaphore sets that can be created at any one time
• SEMMSL: Semaphores are “grouped” into semaphore sets, and SEMMSL controls the array size, or the number of semaphores that are contained per semaphore set. It should be about ten more than the maximum number of Oracle processes.
• SEMOPM: Maximum number of operations per semaphore op call
You can adjust these semaphore parameters manually by writing the contents of the /proc/sys/kernel/sem file:
# echo SEMMSL_value SEMMNS_value SEMOPM_value \
SEMMNI_value > /proc/sys/kernel/sem


Shared Memory and Semaphores (continued)
For example:
# echo 250 32000 100 128 > /proc/sys/kernel/sem
In this example, 250 is the SEMMSL parameter value, 32000 is the SEMMNS parameter value, 100 is the SEMOPM parameter value, and 128 is the SEMMNI parameter value. To make the semaphore parameters persistent, set the SEM parameter in the /etc/sysctl.conf file:
# vi /etc/sysctl.conf
...
kernel.sem = 250 32000 100 128
If you are using UnitedLinux 1.0, shared memory and semaphore parameters can be set in the /etc/sysconfig/oracle file.
$ cat /etc/sysconfig/oracle
# SHMMAX max. size of a shared memory segment in bytes
#
SHMMAX=3294967296
#
# SHMMNI (default: 4096): max. number of shared segments system wide
# No change is needed for running Oracle!
#
SHMMNI=4096
#
# SHMALL (default: 8G [2097152]): max. shm system wide (pages)
# No change is needed for running Oracle!
#
SHMALL=2097152
#
# Semaphore values
# Kernel sources header file: /usr/src/linux/include/linux/sem.h
#
# SEMMSL: max. number of semaphores per id. Set to 10 plus the largest
# PROCESSES parameter of any Oracle database on the system (see init.ora).
# Max. value possible is 8000.
#
SEMMSL=1250
#
# SEMMNS: max. number of semaphores system wide. Set to the sum of the
# PROCESSES parameter for each Oracle database, adding the largest one
# twice, then add an additional 10 for each database (see init.ora).
# Max. value possible is INT_MAX (largest INTEGER value on this
# architecture, on 32-bit systems: 2147483647).
#
SEMMNS=32000
#
# SEMOPM: max. number of operations per semop call. Oracle recommends
# a value of 100. Max. value possible is 1000.
#
SEMOPM=100
#
# SEMMNI: max. number of semaphore identifiers. Oracle recommends
# a value of (at least) 100. Max. value possible is 32768 (defined
# in include/linux/ipc.h: IPCMNI)
#
SEMMNI=256
...


Viewing Resource Use

You can view the shared memory and semaphore usage on your system:
# ipcs -m
# ipcs -s
# ipcs -a

2-8 Copyright © 2004, Oracle. All rights reserved.

Viewing Resource Use
When a database creation fails, or an instance does not start while displaying a memory error or a semaphores error, it is useful to be able to view the shared memory allocations and the semaphore allocations on the system.
• To display the shared memory segments, use: ipcs -m
• To display the semaphore sets, use: ipcs -s
• To display all resources that are allocated, use: ipcs -a
For example:
# ipcs -m
------ Shared Memory Segments --------
key        shmid   owner   perms   bytes       nattch  status
0x00000000 524288  oracle  640     4194304     12
0x00000000 557057  oracle  640     201326592   12
0x9808bbd8 589826  oracle  640     205520896   60
0x152464c8 622595  oracle  640     142606336   85
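If a failed instance leaves shared memory segments or semaphore sets behind, they can be removed with ipcrm once you are sure that no Oracle process is still attached. The identifier below is taken from the sample ipcs output above; the exact syntax depends on the ipcrm version:
# ipcrm shm 622595
(some ipcrm versions use the form ipcrm -m 622595 instead)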


Oracle Preinstallation Tasks

• Create an oracle user, a dba and oinstall group on each node:
# groupadd -g 500 dba
# groupadd -g 501 oinstall
# useradd -u 500 -d /usr/local/oracle -g "dba" \
-G oinstall -m -s /bin/bash oracle

• Create the ORACLE_HOME directory:


# mkdir /u01/oracle
# chown oracle:dba /u01/oracle

• Create a directory for srvConfig.loc:


# mkdir /var/opt/oracle
# chown oracle:dba /var/opt/oracle

2-9 Copyright © 2004, Oracle. All rights reserved.

Oracle Preinstallation Tasks
You must perform several tasks before any Oracle software can be installed. Verify that the UNIX user oracle and group dba exist on the system. To do this, view the /etc/passwd and /etc/group files, respectively. If they do not exist, then you must create them.
# groupadd -g 500 dba
# groupadd -g 501 oinstall
# useradd -u 500 -d /usr/local/oracle -g "dba" -G oinstall \
-m -s /bin/bash oracle
Note that the group is added first because it is not possible to create the user and add it to a nonexistent group. Note that the group oinstall is the secondary group that the user oracle belongs to. You must create the ORACLE_HOME directory if it is not already present. The oracle user must own the directory.
Check for the existence of the /var/opt/oracle directory. The cluster software expects the directory to exist before the installation begins, otherwise the installation will terminate. This is the directory where the installation writes the srvConfig.loc file, which contains the pointer to the shared file that is needed by the srvctl utility. Make sure that the directory is associated with the oracle user and the dba group. Note that all the operating system commands that are discussed here are best run as the superuser (root).
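To confirm on each node that the user and groups were created as intended, you can run, for example:
# id oracle
uid=500(oracle) gid=500(dba) groups=500(dba),501(oinstall)
The output should show dba as the primary group and oinstall as a secondary group of the oracle user.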


Oracle Environment Variables

Environment Variable   Suggested Value
ORACLE_BASE            /u01/oracle (or suitable)
ORACLE_HOME            /u01/oracle/product/9.2.0
NLS_LANG               AMERICAN_AMERICA.UTF8, for example
TNS_ADMIN              $ORACLE_HOME/network/admin
ORA_NLS33              $ORACLE_HOME/ocommon/nls/admin/data
PATH                   Should contain $ORACLE_HOME/bin and $ORACLE_HOME/oracm/bin
LD_LIBRARY_PATH        Should contain $ORACLE_HOME/lib
CLASSPATH              $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
THREADS_FLAG           native

2-10 Copyright © 2004, Oracle. All rights reserved.

Oracle Environment Variables
The Oracle environment variables that are listed in the slide should be set in the user login file. Generally, this is the .bash_profile file if the default bash shell is used, but it is shell dependent. Make sure that you unset LANG, JRE_HOME and JAVA_HOME in your profile. If these are set, then they may interfere with Oracle variables such as NLS_LANG and CLASSPATH.
If you are using UnitedLinux, please check the /etc/profile.d/oracle.sh file. You will find that many Oracle environment variables like ORACLE_HOME, ORACLE_BASE, and TNS_ADMIN are pre-set here. The values will most certainly be incorrect for your installation. Please remove or comment out the unneeded entries or you may encounter difficulties during the installation.
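As a minimal sketch that follows the suggested values in the table (adapt the paths to your own ORACLE_BASE and ORACLE_HOME), the oracle user's .bash_profile could contain:
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/product/9.2.0
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=AMERICAN_AMERICA.UTF8
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/oracm/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
unset LANG JAVA_HOME JRE_HOME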


Asynchronous I/O

• Sequential I/O causes the calling process to sleep until its I/O request is complete.
• Asynchronous I/O allows a process to submit an
I/O request without waiting for it to complete.
• Oracle processes can issue multiple I/O requests
to disk with a single system call, rather than a
large number of single I/O requests.
• Instead of sleeping, the process is able to perform
other tasks until the I/O is complete.
• Asynchronous I/O is NOT supported under OCFS.

2-11 Copyright © 2004, Oracle. All rights reserved.

Asynchronous I/O
One of the most important enhancements on Linux is asynchronous I/O (or nonblocking I/O) in the kernel. Before the introduction of asynchronous I/O in Advanced Server, the processes submitted disk I/O requests sequentially. Each I/O request would cause the calling process to sleep until the request was completed. Asynchronous I/O enables a process to submit an I/O request without waiting for it to complete. The implementation also enables Oracle processes to issue multiple I/O requests to disk with a single system call, rather than a large number of single I/O requests. This improves performance in two ways. First, because a process can queue multiple requests for the kernel to handle, the kernel can optimize disk activity by reordering requests or combining individual requests that are adjacent on disk into fewer, larger requests. Second, because the system does not put the process to sleep while the hardware processes the request, the process is able to perform other tasks until the I/O is complete.
Please note that at this time asynchronous I/O is not supported under OCFS.


Enabling Asynchronous I/O

• Oracle9i Release 2 is shipped with asynchronous I/O support disabled.
• This is done because other Linux distributions
may not support this feature.
• You must enable asynchronous I/O as
documented in the product documentation.

2-12 Copyright © 2004, Oracle. All rights reserved.

Enabling Asynchronous I/O
By default, Oracle9i Release 2 is shipped with asynchronous I/O support disabled. This is necessary to accommodate other Linux distributions that do not support this feature. To enable asynchronous I/O for Oracle9i Release 2 on Red Hat Linux Advanced Server 2.1, you must perform the following steps as outlined in the product documentation:
1. Change directory to $ORACLE_HOME/rdbms/lib.
# make -f ins_rdbms.mk async_on
2. If asynchronous I/O needs to be disabled for some reason, then change directory to $ORACLE_HOME/rdbms/lib.
# make -f ins_rdbms.mk async_off
3. Parameter settings in the parameter file for raw devices:
set 'disk_asynch_io=true' (default value is true)
4. Make sure that all Oracle data files reside on file systems that support asynchronous I/O. Parameter settings in the parameter file for file system files:
set 'disk_asynch_io=true' (default value is true)
set 'filesystemio_options=asynch'


Downloading OCFS

• You can get OCFS for Linux from:


http://oss.oracle.com
• Download the following Red Hat Package
Management (RPM) packages:
– ocfs-support-1.0-9.i686.rpm
– ocfs-tools-1.0-9.i686.rpm
• Download the following RPM kernel module:
ocfs-2.4.9-3typeversion.rpm,
where typeversion is the Linux version.

2-13 Copyright © 2004, Oracle. All rights reserved.

Downloading OCFS
Download OCFS for Linux in a compiled form from the following Web site:
http://oss.oracle.com
In addition, you must download the following RPM packages:
• ocfs-support-1.0-9.i686.rpm
• ocfs-tools-1.0-9.i686.rpm
Also, download the RPM kernel module ocfs-2.4.9-3typeversion.rpm, where the variable typeversion stands for the type and version of the kernel that is used. Use the following command to find out which kernel version is installed on your system:
$ uname -a
The alphanumeric identifier at the end of the kernel name indicates the kernel version that you are running. Download the kernel module that matches your kernel version. For example, if the kernel name that is returned with the uname command ends with -e.3smp, then you would download the kernel module ocfs-2.4.9-e.3-smp-1.0-1.i686.rpm.


Installing the RPM Packages

1. Install the support RPM file ocfs-support-1.0-n.i686.rpm:
# rpm -i ocfs-support-1.0-9.i686.rpm
2. Install the correct kernel module RPM file ocfs-2.4.9-3typeversion.rpm:
# rpm -i ocfs-2.4.9-e.3-enterprise-1.0-1.i686.rpm
3. Install the tools RPM file ocfs-tools-1.0-n.i686.rpm:
# rpm -i ocfs-tools-1.0-n.i686.rpm

2-14 Copyright © 2004, Oracle. All rights reserved.

Installing the OCFS RPM Packages
Complete the following procedure to prepare the environment to run OCFS. Note that you must perform all steps as the root user and that each step must be performed on all the nodes of the cluster.
First, install the support RPM file, ocfs-support-1.0-n.i686.rpm, and then the correct kernel module RPM file for your system. Note that the n represents the most current release of the support and tools RPM. For example:
# rpm -i ocfs-support-1.0-n.i686.rpm
To install the kernel module RPM file for an e.3 enterprise kernel, you must enter the following command:
# rpm -i ocfs-2.4.9-e.3-enterprise-1.0-1.i686.rpm
Next, install the tools RPM, ocfs-tools-1.0-n.i686.rpm. To install the files, enter the following command:
# rpm -i ocfs-tools-1.0-n.i686.rpm
where n is the latest release number of the RPM that you are installing.
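You can verify that the three packages are installed on a node, for example:
# rpm -qa | grep ocfs
The exact package names reported depend on the kernel type and on the release numbers that you downloaded.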


Starting ocfstool

2-15 Copyright © 2004, Oracle. All rights reserved.

Starting ocfstool
By using the ocfstool utility, generate the needed /etc/ocfs.conf file. Start up ocfstool from a graphical display (Xterm, SSH, VNC, etc.) as shown in the following example:
# /usr/bin/ocfstool&
The OCFS Tool window appears in a new X window. Click in the window to make it active and select the Generate Config option from the Tasks menu. The OCFS Generate Config window opens.


Generating the ocfs.conf File

• Confirm that the values are correct:

• View the /etc/ocfs.conf file:


$ cat /etc/ocfs.conf
# Ensure this file exists in /etc
node_name = racic01
node_number = 1
ip_address = 192.168.1.2
ip_port = 7000
guid = 98C704EBD14F6EBC68660060976E5460

2-16 Copyright © 2004, Oracle. All rights reserved.

The ocfs.conf File
When the OCFS Generate Config window opens, check the values that are displayed in the window to confirm that they are correct, and then click the OK button. Based on the information that is gathered from your installation, the ocfstool utility generates the necessary /etc/ocfs.conf file. After the generation is completed, open the /etc/ocfs.conf file in a text file tool and verify that the information is correct before continuing.
The guid value is generated from the Ethernet adapter hardware address and must not be edited manually. If the adapter is switched or replaced, then remove the ocfs.conf file and regenerate it or run the ocfs_uid_gen utility that is located in /usr/local/sbin or /usr/sbin, depending on the OCFS version used.


Loading OCFS at Startup

Edit /etc/rc.local to load OCFS at startup:


echo "Loading OCFS Module"
[ -f /usr/local/sbin/load_ocfs ] && /usr/local/sbin/load_ocfs
[ "$?" -eq "0" ] && echo "OCFS Module loaded"

echo "Mounting OCFS Filesystems"


/bin/mount -t ocfs /dev/device_name /mount_point
# Where device_name is the ocfs formatted device and
# mount_point is the directory the ocfs formatted device is
# mounted under
echo "OCFS Filesystems Mounted"
...

2-17 Copyright © 2004, Oracle. All rights reserved.

Loading OCFS at Startup
To start OCFS, the module ocfs.o must be loaded at system startup. To do this, add the lines that are shown in the slide to the /etc/rc.local file. Because the script is linked to S99local in the rc5.d directory, it is processed at startup as the system progresses through the UNIX run levels.
Note that there is an entry to mount an OCFS file system in this example. Alternatively, OCFS file systems can be mounted by adding appropriate entries in the /etc/fstab file. This will be shown later in this lesson.
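After the restart (or after running load_ocfs by hand), you can confirm that the ocfs module is loaded; note that the load_ocfs path may be /usr/sbin on some OCFS releases:
# /usr/local/sbin/load_ocfs
# lsmod | grep ocfs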


Preparing the Disks

1. Partition the disk for the OCFS file system.


2. Create the necessary mount points.
3. Start the ocfstool utility and format the partitions.

2-18 Copyright © 2004, Oracle. All rights reserved.

Preparing the Disks
By using the fdisk utility, partition the disk to allocate space for the OCFS file system according to your storage needs. You should partition your system in accordance with Oracle Optimal Flexible Architecture (OFA) standards. In Linux, SCSI disk devices are named by using the following convention:
• sd: SCSI disk
• a–z: Disks 1 through 26
• 1–4: Partitions one through four
Therefore, in the slide example, the OCFS file system that is mounted on /u01 is the first partition on the sixth SCSI drive (sdf1). After the partitions are created, use the following command to create the mount points for the OCFS file system:
# mkdir -p /u01 /u02 /u03 /u04 ... (more as needed)
Note these mount points, because you must provide them later.
As the root user, start the ocfstool utility:
# /sbin/ocfstool&


Creating Extended Partitions

# /sbin/fdisk /dev/sde
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1020, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-
1020, default 1020): 1020

Command (m for help): w


The partition table has been altered!

2-19 Copyright © 2004, Oracle. All rights reserved.

Creating a Primary Partition
Before starting, identify an unused disk. As the root user, execute the /sbin/fdisk command. At any command prompt, you can use the option m to print help information for fdisk.
# /sbin/fdisk /dev/sde
Command (m for help): m
a   toggle a bootable flag
b   edit bsd disklabel
c   toggle the dos compatibility flag
d   delete a partition
l   list known partition types
m   print this menu
n   add a new partition
o   create a new empty DOS partition table
p   print the partition table
q   quit without saving changes
s   create a new empty Sun disklabel
t   change a partition's system id
u   change display/entry units
v   verify the partition table


Creating Primary Partitions (continued)
w write table to disk and exit
x extra functionality (experts only)
Before the disk can be used, a primary partition must be created. After starting fdisk, choose
the option n to create a new partition. Then choose p for primary. Up to four primary partitions
can be created on each disk. If the device /dev/sde is used, devices sde1 through sde4 are
reserved for numbering primary partitions. Finally, enter w to write the information to the
partition table. After completing this task, it is necessary to restart both the nodes. Use the init
command as follows:
# init 6

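Once the nodes are back up, you can confirm that the new partition is visible from each node before formatting it, for example:
# /sbin/fdisk -l /dev/sde
# cat /proc/partitions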


The OCFS Format Window

• From the Tasks menu, choose the Format option.


• Change the values as needed:
– Default block size is 128 K.
– Change user and group to oracle and dba, respectively.

2-21 Copyright © 2004, Oracle. All rights reserved.

The OCFS Format Window
The OCFS Tool window appears as shown in the slide. Click in the window to make it active, and press [CTRL] + [F], or choose the Tasks menu and select Format. The OCFS Format window appears. Use the values in the text boxes to format the partitions and mount the file systems.
Fill the text field boxes according to the specifications for your system. The block size setting must be a multiple of the Oracle block size. It is recommended that you do not change the default block size, which is set to 128. Set the value for the User text field to oracle and the value for the Group text field to dba. Set the values for the Volume Label and Mountpoint text fields to the values that you had set earlier and then click the OK button. Formatting then begins. The amount of time it takes to format and mount partitions depends on the speed of your system disk drives and CPU.
Note: After the partition is properly formatted, you must initially mount the partitions individually. When you mount each node for the first time, no other node should attempt to mount the file systems.
OCFS requires this procedure for the initial mount to allow OCFS to initialize the file system properly. To perform an individual mount, use the following mount command syntax:
# mount -t ocfs /dev/device /mountpoint


OCFS Command-Line Interface

• Format OCFS by using the command-line interface:
# mkfs.ocfs -F -b 128 -L oracle -m /ocfs -u 500 \
-g 500 -p 0775 /dev/sdd1

• Mount the OCFS file system manually:


# mount -t ocfs /dev/device /mountpoint

2-22 Copyright © 2004, Oracle. All rights reserved.

OCFS Command-Line Interface
If you want to format the OCFS partitions manually, then you can use the mkfs.ocfs utility. This is the same utility that is called by the OCFS Format window. Given below is a summary of the usage and syntax of mkfs.ocfs:
mkfs.ocfs -b block-size [-C] [-F] [-g gid] -L volume-label \
-m mount-path [-n] [-p permissions] [-u uid] [-v] [-V] device
The following options for mkfs.ocfs are supported:
• -b: Block size in kilobytes
• -C: Clear all data blocks
• -F: Force format existing OCFS volume
• -g: Group ID (GID) for the root directory
• -L: Volume label
• -m: Path where this device will be mounted
• -n: Query only
• -p: Permissions for the root directory
• -q: Quiet execution
• -u: User ID (UID) for the root directory
• -V: Print version and exit


OCFS Command-Line Interface (continued)
The following is an example of the mkfs.ocfs command as it might actually be used:
# mkfs.ocfs -F -b 128 -L oracle -m /ocfs -u 500 -g 500 \
-p 0775 /dev/sdd1
After the OCFS file systems have been formatted, they can be mounted. Generally, the mounting process for file systems is automated by the /etc/fstab file, but it is also possible to mount the OCFS partitions manually. The installation of the OCFS RPM allows the UNIX mount command to accept the type option (-t) ocfs. Following is an example of manually mounting the OCFS device sdd1 to the mount point /ocfs:
# mount -t ocfs /dev/sdd1 /ocfs


Alternate OCFS Mounting Method

Edit /etc/fstab and add lines that are similar to the following:
/dev/sdf1   /ocfs1    ocfs   uid=500,gid=500
/dev/sdg1   /ocfs2    ocfs   uid=500,gid=500
/dev/sdh1   /quorum   ocfs   uid=500,gid=500
• The uid is the user ID of the oracle user as defined in /etc/passwd.
# grep oracle /etc/passwd
ora901:x:500:500::/home/oracle:/bin/bash
• The gid is the group ID of the dba group as defined in /etc/group.
# grep dba /etc/group
dba:x:500:

2-24 Copyright © 2004, Oracle. All rights reserved.

Alternate OCFS Mounting Method
To mount the file systems automatically on startup, add lines that are similar to the following to the /etc/fstab file for each OCFS file system:
/dev/sdf1 /ocfs1 ocfs uid=500,gid=500
This is the more traditional method for mounting file systems on Linux systems.
Ensure that the OCFS file systems are mounted in sequence, node after node, and wait for each mount to complete before starting the mount on the next node. The OCFS file systems must be mounted after the standard file systems as indicated below:
# cat /etc/fstab
LABEL=/      /        ext3  defaults  1 1
LABEL=/tmp   /tmp     ext3  defaults  1 2
LABEL=/usr   /usr     ext3  defaults  1 2
LABEL=/var   /var     ext3  defaults  1 2
/dev/sdb2    swap     swap  defaults  0 0
...
/dev/sdf1    /ocfs1   ocfs  uid=500,gid=500
/dev/sdg1    /ocfs2   ocfs  uid=500,gid=500
/dev/sdh1    /quorum  ocfs  uid=500,gid=500
Note: The load_ocfs command must be executed in the startup scripts before the OCFS file systems can be mounted.
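The following is only a sketch of startup-script lines that respect this ordering; it assumes that load_ocfs was installed in /sbin by the OCFS support RPM and that the OCFS entries shown above already exist in /etc/fstab:
/sbin/load_ocfs            (load the ocfs kernel module before any OCFS mounts are attempted)
mount -a -t ocfs           (mount every file system of type ocfs listed in /etc/fstab)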


System Parameter Configuration for OCFS

• You must verify some system parameters in


support of OCFS.
• Create the script
/etc/init.d/rhas_ossetup.sh.
#!/bin/sh
# /etc/init.d/rhas_ossetup.sh
# This script sets parameters for Oracle9i RAC and OCFS
echo "65536 " > /proc/sys/fs/file-max
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
echo "1276 2552 3828 " > /proc/sys/vm/freepages
ulimit -u 16384
echo "100 32000 100 100" > /proc/sys/kernel/sem
ulimit -n 65536

2-25 Copyright © 2004, Oracle. All rights reserved.

OCFS System Parameter Configuration
You must verify some of the system parameters to accommodate Oracle9i RAC and OCFS. Use the script /etc/init.d/rhas_ossetup.sh on Red Hat Linux to perform this configuration. As the root user, enter:
# /etc/init.d/rhas_ossetup.sh
Using this script ensures that your system is correctly configured, and helps avoid problems. Note that the settings are valid only for the current boot cycle, which means that they are automatically reset to their original values upon restart. To make the process automatic during the startup of the system, enter the following commands as the root user:
# ln -s /etc/init.d/rhas_ossetup.sh /etc/rc5.d/S77rhas_ossetup
# ln -s /etc/init.d/rhas_ossetup.sh /etc/rc3.d/S77rhas_ossetup
Alternatively, the lines configuring kernel parameters can be included in the /etc/sysctl.conf file.
If your platform is UnitedLinux, you may add the lines individually to the /etc/rc.local file.
# vi /etc/rc.local
...
echo "65536 " > /proc/sys/fs/file-max
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
...
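As a sketch only, the corresponding /etc/sysctl.conf entries would be as follows. The key names are assumed from the /proc/sys paths used in the script; the ulimit lines have no sysctl.conf equivalent and must remain in a shell script:
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
vm.freepages = 1276 2552 3828
kernel.sem = 100 32000 100 100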


Swap Space Configuration

• At least 1 GB must be allocated to the swap


partition.
• As the root user, use the following command to
display swap information:
# /sbin/swapon -s
Filename Type Size Used Priority
/dev/sda5 partition 2096440 0 -1
/dev/sda6 partition 2096440 0 -2

2-26 Copyright © 2004, Oracle. All rights reserved.

OCFS Swap Space Requirements
You must allocate at least 1 GB to the local swap partition. As the root user, use the command swapon -s to verify that you have enough swap space allocated. If additional swap partitions are defined in /etc/fstab but not yet active, then activate them with the command swapon -a. Note that you can create a swap partition with a maximum size of 2 GB. To have the swap automatically set on startup, add lines that are similar to the following to the /etc/fstab file:
/dev/sdb2 swap swap defaults 0 0
Swap entries in /etc/fstab should occur after the standard file system entries and before the OCFS file system entries.
LABEL=/tmp   /tmp    ext3  defaults  1 2
LABEL=/usr   /usr    ext3  defaults  1 2
LABEL=/var   /var    ext3  defaults  1 2
/dev/sdb2    swap    swap  defaults  0 0
/dev/sdb3    swap    swap  defaults  0 0
/dev/sdc1    swap    swap  defaults  0 0
/dev/sdf1    /ocfs1  ocfs  uid=500,gid=500
/dev/sdg1    /ocfs2  ocfs  uid=500,gid=500


Red Hat Network Adapter Configuration

# /usr/sbin/redhat-config-network

• Select the “Activate device


when computer starts” option.
• Click the Hardware Device tab.

2-27 Copyright © 2004, Oracle. All rights reserved.

Network Adapter Configuration
You must have the network consistently available during system startup. To ensure that all network adapters are automatically enabled and in the correct order, perform the following tasks:
1. Ensure that you have the DISPLAY variable properly set, and launch the /usr/sbin/redhat-config-network program. The Ethernet Device window opens.
2. Select the “Activate device when computer starts” check box, and click the OK button.
3. Click the Hardware Devices tab. Select the Use Hardware Address check box, and click the Probe for Address button to populate the Hardware Address field. Click the OK button to save the changes.
4. Ensure that the public and private node names of all member nodes in the RAC are listed in the /etc/hosts file.
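For illustration only, these choices are recorded in an interface file such as /etc/sysconfig/network-scripts/ifcfg-eth1. The file name, addresses, and exact keys below are assumptions rather than output captured from the course systems:
DEVICE=eth1
ONBOOT=yes                  ("Activate device when computer starts")
HWADDR=00:0C:29:3A:5B:7C    (populated by "Probe for Address"; example value)
BOOTPROTO=static
IPADDR=192.168.1.2
NETMASK=255.255.255.0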


UnitedLinux Network Adapter
Configuration
# yast2

• Select "Network/Basic" option


• Then select "Network Device Configuration"

2-28 Copyright © 2004, Oracle. All rights reserved.

UnitedLinux Network Adapter Configuration
If you are running UnitedLinux and need to configure your network adapters, perform the following tasks:
1. Ensure that you have the DISPLAY variable properly set, and launch the /sbin/yast2 program. The YaST Control Center window opens.
2. Select the “Activate device when computer starts” check box, and click the OK button.
3. Select Network/Basic, and click on Network Card Configuration.
4. Select the adapter from the Network Device pull down menu and configure the IP address and host name as needed.
5. Ensure that the public and private node names of all member nodes in the RAC are listed in the /etc/hosts file.


Known Limitations and Requirements

• The Oracle9i Release 2 (9.2.0.2 or better) patch


must be installed.
• OCFS supports only the files that are used by the
Oracle server.
• Asynchronous I/O is not supported under OCFS.
• RMAN must be used to perform online backups.

2-29 Copyright © 2004, Oracle. All rights reserved.



Summary

In this lesson, you should have learned how to:


• Set Linux kernel parameters as required by the
cluster database
• Install and configure Oracle Cluster File System
• Configure the network and interconnect interfaces

2-30 Copyright © 2004, Oracle. All rights reserved.



Oracle Cluster Management System

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do


the following:
• Prepare Linux for Oracle Cluster Management
System (OCMS)
• Install OCMS by using Oracle Universal Installer
• Apply the necessary patches
• Configure and start the cluster

3-2 Copyright © 2004, Oracle. All rights reserved.



Linux Cluster Management Software

• OCMS is the Oracle-supplied cluster manager.


• Install Oracle Cluster Manager 9.2.0.1.
• Install the hangcheck-timer RPM.
– Not necessary under UnitedLinux 1.0
• Install the Oracle Cluster Manager 9.2.0.4 patch.

3-3 Copyright © 2004, Oracle. All rights reserved.

Oracle Cluster Management System
Unlike the Oracle RAC versions on UNIX platforms, it is no longer necessary to rely on the system vendor to provide the clusterware layer (the operating system–dependent modules or the equivalents). OCMS is now included with Oracle9i for Linux.
It is necessary to first load the cluster manager from the 9.2.0.1 distribution. In this version, the watchdog daemon is an integral part of the cluster manager. With the introduction of Oracle 9.2.0.2, the architecture of the cluster manager has been changed. The hangcheck-timer is a kernel module, whereas the watchdog daemon is essentially a user process. The kernel module approach is a faster, more efficient solution to node monitoring. The kernel module is contained in the hangcheck-timer RPM, which can be found on http://metalink.oracle.com.
Oracle 9.2.0.2 (and higher) is installed as a patch from the 9.2.0.1 installer. You must repeat these installation tasks on each node in your cluster.


OCMS

• OCMS consists of:
  – Hangcheck-timer module
  – Node monitor (NM)
  – Cluster monitor (CM)
• Binaries are located in
  $ORACLE_HOME/oracm/bin.
• OCMS is configured by:
  – $ORACLE_HOME/oracm/admin/cmcfg.ora
  – $ORACLE_HOME/oracm/admin/ocmargs.ora
[Diagram: Node 1 and Node 2 each run a cluster monitor, node monitor, hangcheck timer, and Oracle CFS, and both nodes connect to the shared disks.]

3-4 Copyright © 2004, Oracle. All rights reserved.

OCMS
OCMS is included as part of the Oracle9i distribution for Linux. OCMS resides above the operating system and provides all the clustering services that Oracle RAC needs to function as a high-availability and a highly scalable solution. It provides cluster membership services, global view of clusters, node monitoring, and cluster reconfiguration.
The cluster monitor (CM) maintains the process-level cluster status. It also accepts the registration of Oracle instances to the cluster and provides a consistent view of Oracle instances.
The node monitor provides the interface to other modules for determining cluster resources' status, that is, node membership. It obtains the status of the cluster resources from the cluster manager for remote nodes and provides the status of the cluster resources of the local node to the cluster manager.
The hangcheck-timer module monitors the Linux kernel for any long operating system hangs that might adversely affect the cluster or damage the database.
The parameters that control the behavior of the cluster manager are set in two files that are located in $ORACLE_HOME/oracm/admin: cmcfg.ora and ocmargs.ora, respectively.
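As an illustration only, a two-node cmcfg.ora might contain entries similar to the following. The node names match the examples used later in this course, while the quorum file path and service port are assumptions:
ClusterName=Oracle Cluster Manager, version 9i
PrivateNodeNames=racic1 racic2
PublicNodeNames=git-raclin01 git-raclin02
ServicePort=9998
CmDiskFile=/ocfs/quorum.dbf
MissCount=215
KernelModuleName=hangcheck-timer
HostName=racic1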


The Hangcheck-Timer

• Replaces the watchdog daemon in Oracle 9.2.0.2.


• Is loaded as a kernel module
• Is specified by the KernelModuleName parameter
in the CMCFG.ORA file

$ cd $ORACLE_HOME/oracm/admin
$ grep KernelModuleName cmcfg.ora
KernelModuleName=hangcheck-timer

3-5 Copyright © 2004, Oracle. All rights reserved.

The Hangcheck-Timer
In place of the watchdog daemon, the 9.2.0.2 version of the cluster manager for Linux now includes the use of a Linux kernel module called hangcheck-timer. This module is not required for cluster manager operation but its use is highly recommended. This module monitors the Linux kernel for long operating system hangs that could affect the reliability of an RAC node and damage an RAC database. When such a hang occurs, this module sends a signal to reset the node. This approach offers three advantages over the watchdog approach:
• Node resets are triggered from within the Linux kernel, making them much less affected by the system load.
• The cluster manager on an RAC node can easily be stopped and reconfigured because its operation is completely independent of the kernel module.
• The features that are provided by the hangcheck-timer module closely resemble those found in the implementation of the cluster manager for RAC on the Windows platform, on which the cluster manager on Linux was based.


The Node Monitor (NM)

• Maintains a consistent view of the cluster, and


reports the node status to the cluster manager
• Uses a heartbeat mechanism
• Works with hangcheck-timer and acts depending
on the type of failure
• Is integrated into the cluster monitor process,
oracm, in Oracle 9.2.0.2 and higher

3-6 Copyright © 2004, Oracle. All rights reserved.

The Node Monitor
The node monitors on all nodes send heartbeat messages to each other. Each node maintains a database containing status information about the other nodes. The node monitors in a cluster mark a node inactive if the hangcheck-timer determines that the kernel is inactive for too long a period.
The hangcheck-timer sends a node reset signal for the following reasons:
• Termination of the NM on the remote server
• Node failure
• Heavy load on the remote server
The node monitor reconfigures the cluster to terminate the isolated nodes, ensuring that the remaining nodes in the reconfigured cluster continue to function properly.


The Cluster Monitor

• The cluster monitor (CM) maintains the process-


level cluster status.
• It accepts registration of Oracle instances to the
cluster and provides a consistent view of Oracle
instances.
• When an Oracle process that writes to the shared
disk quits abnormally, the CM on the node detects
it and takes appropriate action.

3-7 Copyright © 2004, Oracle. All rights reserved.

The Cluster Monitor
If an Oracle background process terminates abnormally, then the CM daemon on the node detects it and requests that the node stop completely. This prevents the node from issuing physical I/O to the shared disk before CM daemons on the other nodes report the cluster reconfiguration to instances on the nodes. This action prevents database damage.


Starting OCMS

• To start OCMS on Linux, perform the following


steps:
1. Load the hangcheck-timer kernel module.
2. Start cluster monitor.
• Use the ocmstart.sh script that is located in
$ORACLE_HOME/oracm/bin to start OCMS.
• Unlike other platforms, you must start OCMS on
all nodes in the cluster.

3-8 Copyright © 2004, Oracle. All rights reserved.

Starting OCMS
Before starting OCMS, make sure the hangcheck-timer module is loaded. Use the lsmod command to confirm this:
# lsmod
Module                  Size  Used by    Not tainted
hangcheck-timer         1208   0  (unused)
ocfs                  402980   5
...
aic7xxx               179076  11
When OCMS is patched to 9.2.0.2 or higher, the ocmstart.sh script located in $ORACLE_HOME/oracm/bin must be edited to comment out or remove all Watchdog related entries since it is no longer needed:
# watchdogd's default log file
# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log
...
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
#   echo 'ocmstart.sh: Error: watchdogd is already running'
#   exit 1
# fi
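Putting the pieces together, a condensed sketch of starting OCMS on one node (using the hangcheck-timer options recommended later in this lesson) looks like this:
# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# $ORACLE_HOME/oracm/bin/ocmstart.sh
# ps -ef | grep oracm        (verify that the oracm processes are running)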


The Quorum Disk

• OCMS requires the use of a shared disk resource


called the quorum disk.
• This resource can be a raw slice or an OCFS file.
• Use the fdisk command to create raw slices on
an unused disk.
– Slicing disk /dev/sdd with fdisk:
# /sbin/fdisk /dev/sdd

– If an OCFS file is used, then the file must exist


before starting the cluster manager for the first
time.
# touch /ocfs/quorum.dbf
# chown oracle:dba /ocfs/quorum.dbf

3-9 Copyright © 2004, Oracle. All rights reserved.

The Quorum Disk
RAC uses a quorum disk to improve cluster availability. Oracle stores cluster status information on the partition that is reserved for the quorum disk. The node monitor uses the quorum disk configuration information to manage the cluster configuration.
The Oracle configuration and administrative tools also require access to cluster configuration data that is stored on shared disks. You must configure a shared disk resource to use the Database Configuration Assistant, Oracle Enterprise Manager, and the Server Control command-line administrative utility.
Note: On some platforms such as Windows NT, the quorum disk is sometimes called the voting disk.
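If a raw slice is used instead of an OCFS file, the partition must also be bound to a raw device before the cluster manager can use it. The following is a sketch only; the raw utility path and the device names are assumptions:
# /usr/bin/raw /dev/raw/raw1 /dev/sdd1
# chown oracle:dba /dev/raw/raw1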


Configuring the User Environment

• Edit the user .bash_profile to create a


persistent environment:
export ORACLE_HOME=/oracle/9.2.0
export ORACLE_BASE=/oracle/9.2.0
export ORACLE_SID=U1N1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib

• Make the .bash_profile executable for the


owner:
$ chmod u+x .bash_profile

3-10 Copyright © 2004, Oracle. All rights reserved.

User Environment
There are some environment variables that have to be set for the oracle user with the export command. Rather than setting them every time after logging on to the system, put them into the .bash_profile script within the oracle user's home directory. Therefore, log in as the oracle user and, in the home directory, modify the .bash_profile login file and ensure that it looks similar to the example below:
$ cat .bash_profile
export ORACLE_HOME=/oracle/9.2.0
export ORACLE_BASE=/oracle/9.2.0
export ORACLE_SID=U1N1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
After modifying the .bash_profile login file, make sure that the newly defined environment variables become active in the current session by sourcing .bash_profile in the current shell:
$ . .bash_profile


Starting the Installer

3-11 Copyright © 2004, Oracle. All rights reserved.

Starting the Installer
You install OCMS by using the Oracle Universal Installer. You must perform the installation on each node of your cluster. First, log in as oracle user from an Xterm window or a VNC session. If installing from a CD-ROM, then start the Oracle Universal Installer by using the following command:
$ /mnt/cdrom/runInstaller


Specifying Inventory Location

3-12 Copyright © 2004, Oracle. All rights reserved.

Inventory Location
The Inventory Location window is displayed next. If you have not installed any Oracle products on the node, then you have the option of specifying a location. If the node has previously installed Oracle software, then the installer should detect the existing inventory and display that location, which you can accept.


File Locations

3-13 Copyright © 2004, Oracle. All rights reserved.

File Locations
In the File Locations window, specify the source and destination file locations for the installation. If there are existing Oracle homes on this node, then they appear in a drop-down menu in the Name field. Otherwise, indicate the path where you would like the Oracle files to be written.


Available Products

3-14 Copyright © 2004, Oracle. All rights reserved.

Available Products
In order to install Oracle9i Database together with the Real Application Clusters option, the Oracle Cluster Manager must be installed first. Choose Oracle Cluster Manager 9.2.0.1.0 from the list of products in the Available Products window.


Node Information

3-15 Copyright © 2004, Oracle. All rights reserved.

Node Information
Next, specify the public names of the nodes of your cluster. These are the node names that are used from the outside network (that is, the network excluding the node interconnects). You can find these names in the /etc/hosts file.
$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
#
138.1.162.61    git-raclin01.us.oracle.com git-raclin01
138.1.162.62    git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2     racic1
192.168.1.3     racic2


Interconnect Information

3-16 Copyright © 2004, Oracle. All rights reserved.

Interconnect Information
In the next window, specify the private node names. These are the names that are used to identify the interconnects between the nodes in the cluster. You can find these names also in the /etc/hosts file.
$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
#
138.1.162.61    git-raclin01.us.oracle.com git-raclin01
138.1.162.62    git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2     racic1
192.168.1.3     racic2


Watchdog Parameter

3-17 Copyright © 2004, Oracle. All rights reserved.

Watchdog Parameter
Leave the Watchdog parameter at the default value of 60000. This parameter is deprecated in Oracle Cluster Manager 9.2.0.2. It is removed later in the installation process when the hangcheck-timer module and the Oracle 9.2.0.4 patch are installed.


Quorum Disk

3-18 Copyright © 2004, Oracle. All rights reserved.

Quorum Disk
Specify the name of a raw device or file for the quorum disk. This can be either a raw device or an OCFS file. If using an OCFS file, make sure the file already exists and can be written by the oracle user and members of the dba group.


9.2.0.1.0 Summary Window

3-19 Copyright © 2004, Oracle. All rights reserved.

9.2.0.1.0 Summary Window
Check the information that is displayed in the Summary window. If you are satisfied that your choices are accurately reflected on it, then click the Install button to continue with the installation.


Installation Progress

3-20 Copyright © 2004, Oracle. All rights reserved.

Installation Progress
When the Oracle Universal Installer starts installing the Oracle Cluster Management software for Linux, the installation progress is displayed in the Install window. It should only take a few minutes for this product to load.


End of Installation

3-21 Copyright © 2004, Oracle. All rights reserved.

End of Installation
After the Oracle Cluster Manager is loaded, the End of Installation window appears. Click the Exit button to quit the installer. Do not start the cluster manager yet. You must first install the hangcheck-timer module and the Oracle 9.2.0.4 patch set.


The Hangcheck-Timer RPM

• Install before the 9.2.0.4 patch.


• Needed for Red Hat AS 2.1
• Included with UnitedLinux 1.0
• Download from http://metalink.oracle.com.

3-22 Copyright © 2004, Oracle. All rights reserved.

The Hangcheck-Timer RPM
If Red Hat AS 2.1 is used, download, install, and configure the hangcheck-timer kernel module. If UnitedLinux 1.0 is used, it is included in the SP1 update disk. The steps that are listed below must be performed only once unless otherwise indicated:
1. Download the kernel modules from Metalink. For this, the user must have a Metalink login account.
2. Enter the username and password, and then click OK.
3. Click Patches on the left of the window.
4. Enter 2594820 in the Patch Number field, and then click Submit.
5. Click Download, and save the p2594820_20_LINUX.zip file to the local disk.
6. Unzip and identify the RPM that is needed for your kernel by running the uname command.
   # unzip p2594820_20_LINUX.zip
   # uname -a
   Linux git-raclin01 2.4.9-e.3smp #1 SMP
7. From the directory where the RPM is unzipped, run the RPM command:
   # rpm -ivh <RPM-matching-your-kernel>


The hangcheck-timer RPM (continued)
8. Disable the mechanism that is used to start the Oracle watchdogd daemon at system
startup. This action is imperative for the success of later steps in the installation process.
Move the /etc/rc3.d/S99cluster file to /etc/rc3.d/s99cluster. Because
the filename now starts with a lowercase “s,” it is not processed at startup. It is possible
that this file appears in /etc/rc5.d as well. If S99cluster appears there as well,
then it must be moved as well.
9. As the root user, load the hangcheck-timer kernel module by using the following
command:
# /sbin/insmod hangcheck-timer hangcheck_tick=30 \
hangcheck_margin=180
10. Append the following line to the /etc/rc.local file:
/sbin/insmod hangcheck-timer hangcheck_tick=30 \
hangcheck_margin=180
This ensures that the module loads automatically at system initialization.
Note: If UnitedLinux 1.0 is used, only step 10 need be performed.



Hangcheck Settings

Parameter          Service           Value
hangcheck_tick     Hangcheck-timer   30 seconds
hangcheck_margin   Hangcheck-timer   180 seconds
KernelModuleName   oracm             hangcheck-timer
MissCount          oracm             Greater than hangcheck_tick +
                                     hangcheck_margin (greater than 210)

3-24 Copyright © 2004, Oracle. All rights reserved.

Hangcheck Settings
It is recommended that the hangcheck-timer module be loaded and the cluster manager be started with the parameter values that are shown above (in addition to recommendations that are made elsewhere in the Oracle RAC documentation). The inclusion of the hangcheck-timer kernel module also introduces two new configuration parameters to be used when the module is loaded:
• hangcheck_tick: This is an interval that indicates how often the hangcheck-timer checks the condition of the system.
• hangcheck_margin: Certain kernel activities may randomly introduce delays in the operation of the hangcheck-timer. The hangcheck_margin parameter provides a margin of error to prevent unnecessary system resets because of these delays.
Taken together, these two parameters indicate how long an RAC node must stop responding before the hangcheck-timer module resets the system. A node reset occurs when the following condition is true:
(system hang time) > (hangcheck_tick + hangcheck_margin)
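With the recommended values, a node must therefore hang for more than 30 + 180 = 210 seconds before it is reset, which is why the table on the slide requires MissCount to be set to a value greater than 210.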


The Oracle 9.2.0.2 Patch Set

The Oracle 9.2.0.2 (and higher) patchset:


• Is not a complete distribution
• Requires that OCMS 9.2.0.1 be installed previously
• Is installed by using 9.2.0.1 Universal Installer
• Changes the fundamental architecture of OCMS
– Replaces the user process watchdogd with the
kernel-based hangcheck-timer for improved
performance

3-25 Copyright © 2004, Oracle. All rights reserved.

The Oracle 9.2.0.2 Patch Set
The Oracle 9.2.0.2 (and higher) patch set includes upgrades for the RDBMS, PL/SQL, Precompilers, Networking, Oracle Text (formerly interMedia Text), JDBC, JavaVM, XML Developers Kit, Oracle9i Globalization, Oracle Core, Ultrasearch, Spatial, SQL*Plus, SQLJ, JPublisher, Intermedia, OLAP, and Oracle Internet Directory products. This is not a complete software distribution and you must install it over an existing Oracle9i Release 2 Oracle Server installation.
The Oracle 9.2.0.2 (and higher) patch set also includes upgrades to the Oracle Cluster Manager on Linux. Again, the Oracle Cluster Manager Software patch set is not a complete software distribution and must be installed over an existing Oracle9i Release 2 Oracle Cluster Manager Software installation.


9.2.0.4.0 Cluster Manager Patch

3-26 Copyright © 2004, Oracle. All rights reserved.

Installing the 9.2.0.4 Patch
To install the 9.2.0.4 patch, you must first start the installer in your Oracle 9.2.0.1 ORACLE_HOME. After you reach the File Locations window, change the directory that is specified in the Source... field to point to the patch location. When you click the Next button, the Available Products window appears with the products that may be installed from the location specified. Choose Oracle9iR2 Cluster Manager 9.2.0.4.0 and continue.


Node Selection

3-27 Copyright © 2004, Oracle. All rights reserved.

Node Selection
You can use the Oracle 9.2.0.4 patch set to install the included patches onto multiple nodes in a cluster when the base release (9.2.0.1.0) is already installed on those nodes. The Oracle Universal Installer detects whether the machine on which you are running the installer is part of the cluster. If it is, then you are prompted to select the nodes from the cluster on which you would like the patch set installed. For this to work properly, user equivalence must be in effect for the oracle user on each node of the cluster. To enable user equivalence, make sure that the /etc/hosts.equiv file exists on each node with an entry for each trusted host. For example, if the cluster has two nodes, git-raclin01 and git-raclin02, then the hosts.equiv files will look like this:
[root@git-raclin01]# cat /etc/hosts.equiv
git-raclin02
[root@git-raclin02]# cat /etc/hosts.equiv
git-raclin01
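A quick way to confirm that user equivalence is in effect (assuming the rsh service is enabled, which is what the installer relies on for remote operations) is to run a remote command from one node as the oracle user and verify that no password prompt appears:
[oracle@git-raclin01]$ rsh git-raclin02 hostname
git-raclin02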


Node Information

3-28 Copyright © 2004, Oracle. All rights reserved.

Node Information
You must provide the host names of your nodes again. These names can be verified in the /etc/hosts file.
$ cat /etc/hosts
...
138.1.162.61    git-raclin01.us.oracle.com git-raclin01
138.1.162.62    git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2     racic1
192.168.1.3     racic2


Interconnect Information

3-29 Copyright © 2004, Oracle. All rights reserved.

Interconnect Information
Provide the interconnect names again. These names can also be verified in the /etc/hosts file.
$ cat /etc/hosts
...
138.1.162.61    git-raclin01.us.oracle.com git-raclin01
138.1.162.62    git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2     racic1
192.168.1.3     racic2


Watchdog Parameter

3-30 Copyright © 2004, Oracle. All rights reserved.

Watchdog Parameter
Leave the Watchdog parameter at the default value of 60000 as was done earlier. The Watchdog daemon is not used.


Quorum Disk

3-31 Copyright © 2004, Oracle. All rights reserved.

Quorum Disk
Specify the name of a device or file to use for the quorum disk. This can be either a raw device or an OCFS file.


9.2.0.4.0 Summary Window

3-32 Copyright © 2004, Oracle. All rights reserved.

9.2.0.4.0 Summary Window
The slide shows a summary of the installation actions. Click the Install button to apply the 9.2.0.4.0 Cluster Manager patch.


Starting Cluster Manager

• Cluster manager startup is controlled by
  $ORACLE_HOME/oracm/bin/ocmstart.sh.
• Arguments are passed by
  $ORACLE_HOME/oracm/admin/ocmargs.ora.
• Edit /etc/rc.local to start the ocmstart.sh
  script as the root user on all nodes in the cluster.
# vi /etc/rc.local
...
echo "Starting Oracle Cluster Manager"
[ -f $ORACLE_HOME/oracm/log/ocmstart.ts ] && rm \
  $ORACLE_HOME/oracm/log/ocmstart.ts
su - root -c "$ORACLE_HOME/oracm/bin/ocmstart.sh"

3-33 Copyright © 2004, Oracle. All rights reserved.

Starting Cluster Manager
Use the OCMS startup scripts ocmstart.sh and ocmargs.ora that are provided to start the cluster manager. You must modify the scripts and remove any watchdog-related information. In the $ORACLE_HOME/oracm/admin/ocmargs.ora script, remove the first line that contains watchdogd.
Changes that must be made to $ORACLE_HOME/oracm/bin/ocmstart.sh include:
• Remove the words "watchdogd and" from the line which says "Sample startup script for watchdogd and oracm".
• Remove or comment out all the lines that contain watchdogd (both uppercase and lowercase) from the rest of the script.
If the word watchdog is used within an if/then/fi block, then delete or comment out the lines containing if/then/fi also. You must perform these modifications on all nodes in the cluster before continuing.
Start the modified script from /etc/rc.local. You must run the ocmstart.sh startup command as the root user because the oracm processes have their priorities (nice values) adjusted at startup. Remove the ocmstart.ts timestamp file before starting or the script will fail.


Summary

In this lesson, you should have learned how to:


• Prepare Linux for Oracle Cluster Management
System (OCMS)
• Install OCMS by using the Oracle Universal
Installer
• Apply the necessary patches
• Configure and start the cluster

3-34 Copyright © 2004, Oracle. All rights reserved.



Installing Oracle on Linux

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do


the following:
• Install the Oracle database by using Oracle
Universal Installer
• Configure RAC options
• Identify and install the necessary patches

4-2 Copyright © 2004, Oracle. All rights reserved.



Starting the Installation

4-3 Copyright © 2004, Oracle. All rights reserved.

Starting the Installation
Before starting, ensure that both nodes in the cluster are functional. Also, ensure that you are logged in as the oracle user. Assuming that you use the CD-ROM for the installation, start the Oracle Universal Installer with the following command:
$ /cdrom/runInstaller
The Welcome window is displayed. Click the Next button to continue.


Choose the Target Node

4-4 Copyright © 2004, Oracle. All rights reserved.

Choose the Target Node
The Oracle Universal Installer is cluster aware. This makes installing software across your cluster more manageable. Because you have already installed the cluster file system and cluster manager, the installer can see all the nodes in the cluster during the installation. Choose both nodes of the cluster where you want the software to be copied during this installation. If the Cluster Node Selection window is not displayed, then check that the cluster manager is started properly. You can do this by checking for active oracm processes on both nodes:
[oracle@git-raclin01 /]# ps -ef | grep oracm
root      1621     1  0 May14 ?   00:00:00 oracm
root      1624  1621  0 May14 ?   00:00:00 oracm
root      1625  1624  0 May14 ?   00:00:00 oracm
root      1626  1624  0 May14 ?   00:00:00 oracm
...
[oracle@git-raclin02 /]# ps -ef | grep oracm
root      1627     1  0 May14 ?   00:00:00 oracm
root      1628  1627  0 May14 ?   00:00:00 oracm
root      1629  1628  0 May14 ?   00:00:00 oracm
root      1631  1629  0 May14 ?   00:00:00 oracm
...


File Locations

4-5 Copyright © 2004, Oracle. All rights reserved.

File Locations
Because you had previously loaded Oracle software (Oracle Cluster Manager), the File Locations window displays an existing ORACLE_HOME. Accept the default Source and Destination file locations.


Product Selection

4-6 Copyright © 2004, Oracle. All rights reserved.

Product Selection
When the Available Products window appears, select Oracle9i Database 9.2.0.1 as the product to install. This must be done before the database files can be upgraded to release 9.2.0.2 (or higher).


Installation Type

4-7 Copyright © 2004, Oracle. All rights reserved.

Installation Type
In the Installation Type window, specify Custom as the installation method. Do not choose either of the other two methods because they do not satisfy all the installation requirements for the RAC option.


Product Components

4-8 Copyright © 2004, Oracle. All rights reserved.

Product Components
In the Available Product Components window, choose the Oracle9i Real Application Clusters 9.2.0.1.0 option. In addition, make sure that Oracle Partitioning 9.2.0.1.0 and Oracle Net Services 9.2.0.1.0 are also selected.


Component Locations

4-9 Copyright © 2004, Oracle. All rights reserved.

Component Locations
Unless you have a specific need to change the destination of non-ORACLE_HOME components that are listed in the Component Locations window, accept the default location that is displayed in the window.


Shared Configuration File

4-10 Copyright © 2004, Oracle. All rights reserved.

Shared Configuration File
The shared configuration file you are prompted for is used by the srvctl utility and Group Services. This is not the same file as used by the quorum disk. The file specified here will be written to the srvConfig.loc file located in /var/opt/oracle or $ORACLE_HOME/srvm/config. The file must exist before the group services daemon (GSD) is started. Create the file by using the Unix touch command. Make sure the file is associated with the oracle user and the dba group. Make sure it is readable and writable by both. The following steps need to be performed once only.
[oracle@git-raclin01]# touch /quorum/srvm.dbf
[oracle@git-raclin01]# chown oracle:dba /quorum/srvm.dbf
[oracle@git-raclin01]# chmod 666 /quorum/srvm.dbf


Operating System Groups

4-11 Copyright © 2004, Oracle. All rights reserved.

Privileged Groups
In the Privileged Operating System Groups window, specify the UNIX dba group for both the OSDBA and OSOPER group name fields.


OMS Repository

4-12 Copyright © 2004, Oracle. All rights reserved.

OMS Repository
In the Oracle Management Server Repository window, specify that the Oracle Management Server will use an existing repository.


Create Database Options

4-13 Copyright © 2004, Oracle. All rights reserved.

Create Database Options
When the Create Database window appears, decline by clicking the No option button. As stated earlier in this lesson, you create the database later by using the Database Configuration Assistant.


Installation Summary

4-14 Copyright © 2004, Oracle. All rights reserved.

Installation Summary
Check the information that is displayed in the Summary window. If you are satisfied that your choices are accurately reflected on it, then click the Install button to continue with the installation.


Installation Progress

4-15 Copyright © 2004, Oracle. All rights reserved.

Installation Progress
When the Oracle Universal Installer starts installing the Oracle database distribution for Linux, the installation progress is displayed in the Install window. The distribution is large and takes time to completely install.


The root.sh Script

4-16 Copyright © 2004, Oracle. All rights reserved.

The root.sh Script
When the software installation is about to be completed, you are prompted to run the root.sh script. To do this, open another terminal and run the script from the directory that is specified. As indicated by the Setup Privileges window, you must run the root.sh script as the root user. Remember, because you install the software on a cluster, you must run the root.sh script on all the nodes to which the files are copied.


Net Configuration Assistant

4-17 Copyright © 2004, Oracle. All rights reserved.

Net Configuration Assistant
When the Oracle Net Configuration Assistant starts, defer directory configuration. The Assistant will then take you through several screens to configure the listener. Accept the default name of LISTENER, the default protocol TCP, and the default port of 1521. When asked if you prefer another naming method (other than tnsnames.ora), answer no. Click on the Finish button on the last page to continue.


Enterprise Manager Configuration
Assistant (EMCA)

4-18 Copyright © 2004, Oracle. All rights reserved.

EMCA
When the Enterprise Manager Configuration Assistant starts, choose Cancel, and then confirm your choice by clicking the No button. Enterprise Manager is configured after the database is created.


Installer Message

4-19 Copyright © 2004, Oracle. All rights reserved.

Installer Message
At this point in the installation, the Oracle Universal Installer generates an error. The error is generated by canceling one or more configuration tools. Click the OK button to proceed with the installation.


End of Installation

4-20 Copyright © 2004, Oracle. All rights reserved.

End of Installation
When the End of Installation window appears, quit the installer by clicking the Exit button. Because the installation is complete, the Oracle Database 9.2.0.4.0 patch set must now be applied.


Updating Universal Installer

4-21 Copyright © 2004, Oracle. All rights reserved.

Updating Universal Installer
Before the 9.2.0.4 database patch can be applied, you must update the installer. To do this, go to the $ORACLE_HOME/bin directory and execute the runInstaller command:
$ cd $ORACLE_HOME/bin
$ runInstaller
You should point the installer to the location of the 9.2.0.4 patch in the Files Location screen, then choose both nodes on the Cluster Node Selection screen. Accept the default destination for the product and install the product.


The Oracle 9.2.0.4 Patch Set

Oracle 9.2.0.4 patch set:


• Is not a complete distribution
• Requires that Oracle9i Database 9.2.0.1 be
installed previously
• Is installed by using 9.2.0.1 Universal Installer

4-22 Copyright © 2004, Oracle. All rights reserved.

The Oracle 9.2.0.4 Patch Set
The Oracle 9.2.0.4 patch set includes upgrades for the RDBMS, PL/SQL, Precompilers, Networking, Oracle Text (formerly interMedia Text), JDBC, JavaVM, XML Developers Kit, Oracle9i Globalization, Oracle Core, UltraSearch, Spatial, SQL*Plus, SQLJ, JPublisher, Intermedia, OLAP and Oracle Internet Directory products. This is not a complete software distribution and you must install it on an existing Oracle9i 9.2.0.1.0 Oracle Server installation.


Installing the 9.2.0.4 Patch Set

4-23 Copyright © 2004, Oracle. All rights reserved.

Installing the 9.2.0.4 Patch Set
To install the 9.2.0.4 patch set, you must first start the installer in your Oracle 9.2.0.1 ORACLE_HOME. In the File Locations window, change the directory that is specified in the Source... field to point to the patch location. When you click the Next button, the Available Products window appears with the products that may be installed from the location that is specified. Choose Oracle9iR2 Patch Set 9.2.0.4.0 and continue.


Node Selection

4-24 Copyright © 2004, Oracle. All rights reserved.

Node Selection
You can use the 9.2.0.4 patch set to install the included patches on multiple nodes in a cluster when the base release (9.2.0.1.0) is already installed on those nodes. The Oracle Universal Installer detects whether the machine on which you are installing is part of the cluster. If it is, then you are prompted to select the nodes from the cluster on which you would like the patch set installed.


Finishing Up

4-25 Copyright © 2004, Oracle. All rights reserved.

Finishing Up
At the end of the upgrade process you will be prompted to run the root.sh script. Please note that you are required to run the script on both nodes in your cluster. When this is finished, click the OK button to dismiss the notification. The upgrade is successfully completed so you can click the Exit button to quit the installer. Before continuing with the database creation, you must start Group Services on each node. Use the gsdctl command to do this:
$ gsdctl start
Successfully started GSD on local node
Repeat this on the second node. If Group Services is not running on both nodes, database creation with DBCA will not be possible.
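To confirm that the daemon is running on a node, you can use the stat option of the same utility:
$ gsdctl stat        (reports whether the GSD is running on the local node)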


Summary

In this lesson, you should have learned how to:


• Install the Oracle database by using Oracle
Universal Installer
• Configure RAC options
• Identify and install the necessary patches

4-26 Copyright © 2004, Oracle. All rights reserved.



Building the Database

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do


the following:
• Identify the database requirements
• Identify the partitions that are to be used
– OCFS
– Raw
• Create the database by using Database
Configuration Assistant (DBCA)

5-2 Copyright © 2004, Oracle. All rights reserved.



Starting DBCA

5-3 Copyright © 2004, Oracle. All rights reserved.

Starting DBCA
The Database Configuration Assistant is capable of installing single instance or cluster databases. From an Xterm, log on to one of the nodes of your cluster and launch DBCA as shown below:
$ cd $ORACLE_HOME/bin
$ ./dbca
The Welcome window appears as shown in the slide. You have the option to create a single instance database or a cluster database. Click the “Oracle cluster database” option button. Click the Next button to continue.


Creating a Database

5-4 Copyright © 2004, Oracle. All rights reserved.

Creating a Database
The Operations window is displayed next. Click the “Create a database” option button, and then click the Next button to continue.


Node Selection

5-5 Copyright © 2004, Oracle. All rights reserved.

Node Selection
The Node Selection window is displayed next. Because you are creating a cluster database, choose both the nodes. Click the Select All button to choose both the nodes of the cluster. Each node must be highlighted before continuing. Click the Next button to proceed.


Database Templates

5-6 Copyright © 2004, Oracle. All rights reserved.

Database Templates
In the Database Templates window, you must choose a template for the creation of the database. Click the New Database option button and then click the Next button to continue.


Database Identification

5-7 Copyright © 2004, Oracle. All rights reserved.

Database Identification
In the Database Identification window, you must enter the database name in the Global Database Name field. A system identifier (SID) prefix is required and DBCA will suggest a name. This prefix is used to generate unique SID names for the two instances that comprise the cluster database. If you do not want to use the system-supplied prefix, then enter a prefix of your choice. Click the Next button to continue.


Database Features and Example Schemas

5-8 Copyright © 2004, Oracle. All rights reserved.

Database Features and Example Schemas
The Database Features window is displayed next. Click the Database Features tab. You can select special database features and various sample schema components on this window. You should clear all database features and example schemas unless you know that they are needed. Some of the features have related tablespaces. If you deselect them, you will also be asked to confirm deletion of the associated tablespace. Click the Next button to continue.


Standard Database Features

5-9 Copyright © 2004, Oracle. All rights reserved.

Standard Database Features
After returning to the Database Features window, click the Standard Database Features button to display more available database features.


Database Features

5-10 Copyright © 2004, Oracle. All rights reserved.

Database Features
Standard database features include Oracle JVM, Intermedia, Oracle Text, and Oracle XML. You should clear these additional features unless you know that they are needed. Click the OK button to return to the Database Features window. Click the Next button to continue. Confirm the deletion of any related tablespaces.


Database Connections

5-11 Copyright © 2004, Oracle. All rights reserved.

Database Connections
Next, in the Database Connection Options window, you can choose how users will connect to the database. The default is dedicated server mode. Click the Next button to accept the default value.


Initialization Parameters

5-12 Copyright © 2004, Oracle. All rights reserved.

Initialization Parameters
The Initialization Parameters window is displayed next. The Memory tab is displayed. Accept the default parameters on the Memory tab. Click the File Locations tab to review or specify various Oracle file locations.


File Locations

5-13 Copyright © 2004, Oracle. All rights reserved.

File Locations
After clicking the File Locations tab, specify the location of the server parameter file. Enter an OCFS file if the cluster file system is used. If raw devices are used, then enter a raw file. Click the Next button to continue. If you have properly set the environment variables ORACLE_HOME and ORACLE_BASE, this will be a review only.


Database Storage

5-14 Copyright © 2004, Oracle. All rights reserved.

Database Storage
By using the Database Storage window, determine the location of control files, data files, redo logs, and so on. To begin, expand the Controlfile folder that is located in the navigation pane on the left.


Control File Specifications

5-15 Copyright © 2004, Oracle. All rights reserved.

Control File Specifications
Use three control files for your cluster database. Enter the OCFS file and file directory in the worksheet for each of the control files. If raw devices are used, then they may be specified in this worksheet as well.
Next, expand the Datafiles folder and specify the data file locations and storage behavior.


Tablespaces

5-16 Copyright © 2004, Oracle. All rights reserved.

Tablespaces
The pane on the left of the window lists all the tablespaces that are used when the cluster database is created. Choose a tablespace and click the Storage tab on the details pane. In the example above, the SYSTEM tablespace details are listed. You may edit them to suit your needs. By clicking the general folder tab, you can adjust the tablespace file size if the default is too small (or too large). Review each tablespace and verify that the size and storage settings are suitable for your purposes.


Redo Log Groups

5-17 Copyright © 2004, Oracle. All rights reserved.

Redo Log Groups
To configure the redo log groups, click the first log group in the navigation pane on the left. Specify the OCFS file and the file directory for the first redo log group in the worksheet. Repeat the steps for each log group that is listed. You will require at least two log members for each thread. For a two node cluster, you will require a minimum of four redo logs. Review all entries carefully and click the Next button to continue.


DBCA Summary

5-18 Copyright © 2004, Oracle. All rights reserved.

DBCA Summary
Review the information in the Database Configuration Assistant Summary window that is displayed next. Click the OK button to continue.


Database Creation Progress

5-19 Copyright © 2004, Oracle. All rights reserved.

Database Creation Progress
If the information that is entered in the previous steps is correct, then the Database Creation Progress screen is displayed.


Database Passwords

5-20 Copyright © 2004, Oracle. All rights reserved.

Database Passwords
After the database is created, DBCA prompts you to set passwords for the users SYS and SYSTEM. Click the Exit button to exit DBCA.


Remote Password File

Update the password file on all the remote nodes:


$ id
oracle
$ cd $ORACLE_HOME/dbs
$ rcp orapwRACDB1 node2:$ORACLE_HOME/dbs/orapwRACDB2

5-21 Copyright © 2004, Oracle. All rights reserved.

Remote Password File


After the database creation is complete and DBCA is exited, the database should be functional on both nodes. The only step that must be completed is to make sure a password file exists on the second node. If it does not, you can do it manually using the Linux rcp command. You can use the example below as a guide:
$ cd $ORACLE_HOME/dbs
$ rcp orapwDbname1 node2_name:$ORACLE_HOME/dbs/orapwDbname2
(where Dbname is the database name)

Oracle9i Database: Real Application Clusters on Linux 5-21


Summary

In this lesson, you should have learned how to:


• Identify the database requirements
• Identify the partitions that are to be used
– OCFS
– Raw
• Create the database by using Database
Configuration Assistant (DBCA)

5-22 Copyright © 2004, Oracle. All rights reserved.


Oracle9i Database: Real Application Clusters on Linux 5-22


Managing RAC on Linux

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do


the following:
• Effectively use the cluster database server
manager
• Manage SPFILE
• Manage tablespaces, segments, and extents
• Use Enterprise Manager to monitor the Real
Application Clusters (RAC) environment
• Monitor RAC statistics

6-2 Copyright © 2004, Oracle. All rights reserved.


Oracle9i Database: Real Application Clusters on Linux 6-2


Group Services Management

• Start a group services daemon (GSD) on all the


nodes in the RAC database for use by the
management tools including:
– Server Control Utility (SRVCTL)
– Database Configuration Assistant (DBCA)
– Enterprise Manager
• Run only one GSD on each node.
• Use the gsdctl command to start the GSD.
$ gsdctl start

Successfully started the daemon on the local node.

6-3 Copyright © 2004, Oracle. All rights reserved.

GSD Management
Clients of GSD, such as SRVCTL, DBCA, and Enterprise Manager, interact with the daemon to perform various manageability operations on the nodes in your cluster. You must start the GSD on all the nodes in your Real Application Clusters database before you use SRVCTL commands or attempt to employ the other tools across the cluster. However, you need only one GSD on each node no matter how many cluster databases you create.
The name of the daemon is gsd and the daemon is located in the $ORACLE_HOME/bin directory. Start the daemon with the gsdctl command as shown in the example. Logging information is written to the $ORACLE_HOME/srvm/log/gsdaemon.log file.
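The gsdctl utility also accepts stat and stop options, which can be used to verify that the daemon is running on a node and to shut it down when required; treat the exact output text as release dependent:
$ gsdctl stat
$ gsdctl stop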


Oracle9i Database: Real Application Clusters on Linux 6-3


Server Control Utility

The Server Control Utility (SRVCTL):
• Is the preferred tool for administering your RAC database environment
• Manages cluster database configuration information that is used by several Oracle tools
• Provides cluster database management commands
• Requires the GSD to be running
(Slide diagram: an agent and SRVCTL run on each node, Node 1 and Node 2, and communicate with the local GSD.)

6-4 Copyright © 2004, Oracle. All rights reserved.

Server Control Utility (SRVCTL)


It is recommended that you use the Server Control Utility, SRVCTL, to administer your RAC database environment. SRVCTL manages the configuration information that is required for a clusterwide perspective of your database and its instances. This information is used by SRVCTL database management commands as well as by several other tools. For example, node and instance mappings, which are needed for discovery and monitoring operations that are performed by Enterprise Manager and its intelligent agents, are generated by SRVCTL. Many of these tools run SRVCTL commands to complete the operations that are requested through their graphical user interface (GUI).
SRVCTL works with the GSD to manage and retrieve cluster and database configuration information that is stored in the shared disk location.


Oracle9i Database: Real Application Clusters on Linux 6-4


SRVCTL Command Syntax

$ srvctl

Usage: srvctl verb noun [options]


verbs:
start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
nouns: database|instance

$ srvctl config -h

Usage: srvctl config


Usage: srvctl config database -d <dbname> -n <node>
Usage: srvctl config [-V] [-h] -n <node>

-h Print usage
-V Show version

6-5 Copyright © 2004, Oracle. All rights reserved.

SRVCTL Command Syntax


To see a list of available command options, enter:
srvctl
To see the online command syntax and options for each SRVCTL command, enter:
srvctl command option -h
where command option is one of the valid options (verbs) such as start, stop, or status.
Note: To use SRVCTL, you must already have created the configuration information for the database that you want to administer by using either DBCA or the srvctl add command.


Oracle9i Database: Real Application Clusters on Linux 6-5


SRVCTL Cluster Database
Configuration Tasks

To modify database configuration information:


• Add and delete cluster databases
• Add instances to and delete instances from a
cluster database
• Rename instances
• Move instances
• Set and unset the environment for an entire
cluster database
• Set and unset instance environments

6-6 Copyright © 2004, Oracle. All rights reserved.

SRVCTL Cluster Database Configuration Tasks


Use SRVCTL to update the cluster database configuration information repository that is stored on the shared file and is used by GSD to execute commands that are appropriate for the cluster database or for specific instances. The following types of information can be created or modified with SRVCTL:
• Define a new cluster database configuration or remove obsolete database configuration information.
• Add information about a new instance to a cluster database configuration or remove instance information from a cluster database.
• Rename an instance within a cluster database configuration.
• Change the node where an instance will run in a cluster database configuration.
• Set and unset the definitions that are used to assign environment variables for an entire cluster database.
• Set and unset the definitions that are used to assign environment variables for an instance in a cluster database configuration.

Oracle9i Database: Real Application Clusters on Linux 6-6


Adding and Deleting Databases

• Add the database configuration information:


$ srvctl add db -d U1 -o $ORACLE_HOME

• Remove the database configuration information:

$ srvctl remove db -d U2

6-7 Copyright © 2004, Oracle. All rights reserved.

Adding and Deleting Databases


The srvctl add db command creates the configuration information for the RAC database. The following syntax adds the configuration information for an RAC database to the configuration repository:
$ srvctl add db -d db_name -o ORACLE_HOME
This database is identified by the name that you provide for db_name and the ORACLE_HOME value must be the location where you installed Oracle9i RAC.
The example in the slide shows the creation of the database U1 on a UNIX system. Use the srvctl remove db command to delete the static configuration for an RAC database. The following syntax deletes the RAC database that is identified by the name that you provide:
$ srvctl remove db -d db_name
The second example in the slide shows the removal of repository information for the database called U2.

Oracle9i Database: Real Application Clusters on Linux 6-7


Adding and Deleting Instances

• Add the instance configuration information:


$ srvctl add instance -d U2 -i U2N3 -n raclin3

• Remove the instance configuration information:


$ srvctl remove instance -d U2 -i U2N1

6-8 Copyright © 2004, Oracle. All rights reserved.

Adding and Deleting Instances


Use the srvctl add instance command to add static configuration information for an instance. This command only updates the configuration information in the repository; it does not create the database or the instance. The following syntax adds an instance, which is named instance_name, to the specified database on the node that you identify with node_name:
$ srvctl add instance -d db_name -i instance_name -n node_name
The example in the slide adds the instance U2N3 on node RACLIN3 to the configuration information for database U2.
The srvctl remove instance command deletes static configuration information for an RAC instance. Use the following syntax to delete the configuration for the instance that is identified by the database name that you provide:
$ srvctl remove instance -d db_name -i instance_name
The second example in the slide removes the instance U2N1 from the configuration information for database U2.
Note: It is recommended that you use the Instance Management feature of DBCA to add and
delete cluster databases and instances.

Oracle9i Database: Real Application Clusters on Linux 6-8


SRVCTL Cluster Database Tasks

To manage cluster database components:


• Start cluster databases and instances
• Stop cluster databases and instances
• Obtain the status of a cluster database instance

6-9 Copyright © 2004, Oracle. All rights reserved.

SRVCTL Cluster Database Tasks


You can use the same SRVCTL cluster database commands to start and stop components that are invoked by the various tools that use SRVCTL. Most of the database commands in SRVCTL can be used to work across the cluster or on individual nodes. You can use these commands to:
• Start and stop cluster databases
• Start and stop cluster database instances
• Obtain the status of a cluster database instance
The specific commands to accomplish these tasks are covered in the following pages.
Note: Your database and instance information must be available in the configuration repository before you use SRVCTL to perform these operations.


Oracle9i Database: Real Application Clusters on Linux 6-9


Starting Databases and Instances

• Start all instances and listeners:


$ srvctl start database -d U2

• Start two named instances:


$ srvctl start database -d U2 -i U2N1,U2N2

6-10 Copyright © 2004, Oracle. All rights reserved.

Starting Databases and Instances


The key elements of the syntax of the SRVCTL command to start one, a subset of, or all instances in an RAC database include:
srvctl start -d db_name [-i inst,...] [-n node,...]
   [-s stage,...] [-x stage,...]
where:
-d db_name identifies the database against which the command is executed;
-i inst,... is the name of the instance, or a comma-separated list of instances, that are started (the default is all instances that are defined for db_name);
-n node,... is the name of the node, or a comma-separated list of nodes, on which the instances are started (the default is all nodes with instances that are defined for db_name).


Oracle9i Database: Real Application Clusters on Linux 6-10


Starting Databases and Instances

• Start all instances with a nondefault connect


string:
$ srvctl start database -d U2 -c 'admin/adm1n as
sysdba'

• Start listeners and open instances with special


parameter file on two nodes:
$ srvctl start database -d U2 -n raclust1,raclin2 \
 -o pfile=/ora/admin/initADMIN.ora

6-11 Copyright © 2004, Oracle. All rights reserved.

Starting Databases and Instances (continued)


The additional syntax options for the srvctl start command are:
srvctl start ... [-c 'connstr'] [-o options] [-h]
where:
-c 'connstr' defines the connect string for the startup operation (the default is: / as sysdba);
-o options lists the startup command options, such as force, nomount, pfile= (with an appropriate path and parameter file name), and so on;
-h displays the help information for the command or option.


Oracle9i Database: Real Application Clusters on Linux 6-11


Stopping Databases and Instances

• Stop an instance only


$ srvctl stop instance -d U2 -i U2N3

• Shut down instances on three nodes using a


nondefault connect string:
$ srvctl stop instance -d U2 \
 -n raclin2,raclin3,raclin4 \
 -c 'admin/adm1n as sysdba'

6-12 Copyright © 2004, Oracle. All rights reserved.

Stopping Databases and Instances


You can shut down your database instance components with SRVCTL. The syntax and options for the srvctl stop command are similar to those for the srvctl start command, with options such as TRANSACTIONAL in place of MOUNT, and so on. The slide shows some typical examples of this command.

Oracle9i Database: Real Application Clusters on Linux 6-12


Inspecting Status of Cluster Database

• List the active instances:


$ srvctl status database -d U2

• Show the status of two instances:


$ srvctl status instance -d U2 -i U2N1,U2N4

6-13 Copyright © 2004, Oracle. All rights reserved.

Inspecting Status of Cluster Database


You can use the srvctl status command to determine which components, such as instances and listeners, of your cluster database are running. The command uses the same syntax options as the srvctl start and srvctl stop commands.
The examples in the slide show how SRVCTL commands can be used to show which instances are active in a cluster database.


Oracle9i Database: Real Application Clusters on Linux 6-13


Inspecting Database Configuration
Information

• List the instances that are defined for a database:


$ srvctl config database -d U2

• List the environment information for a database:


$ srvctl getenv database -d U2

• List the environment information for an instance:


$ srvctl getenv instance -d U2 -i U2N4

6-14 Copyright © 2004, Oracle. All rights reserved.

Inspecting Database Configuration Information


You can use SRVCTL to examine the information in the cluster database configuration repository. By using the srvctl config command, you can identify the existing RAC databases. There are two formats for this command. The first includes no subcommands or options and lists all the cluster databases in your environment:
$ srvctl config
The second format, which includes the -d db_name syntax, lists the instances for the named database. The slide shows an example of this format.
Use the srvctl getenv command to obtain environment information for either an entire RAC database or a specific instance. The output from a command that uses the following syntax contains environment information for the entire RAC database that is identified by the db_name value that you provide:
$ srvctl getenv database -d db_name
A command with the following syntax displays environment information for a specific instance:
$ srvctl getenv instance -d db_name -i instance_name

Oracle9i Database: Real Application Clusters on Linux 6-14


Parameter Files in Cluster Databases

• You can continue to use client-side initialization


parameter files for your RAC database.
• You can use a single server parameter file for all
of your instances.
– This is the preferred approach.
– The file must be available to all cluster database
nodes on a shared disk.
– This approach allows parameter value changes to
persist across instance shutdowns.
– You can change parameter values for all instances
with a single ALTER SYSTEM command.

6-15 Copyright © 2004, Oracle. All rights reserved.

Parameter Files in Cluster Databases


A client-side initialization parameter file for an instance must be located on all the systems from which the instance is started. In order to provide the parameter values that are unique to each instance, you need a client-side parameter file for each instance. You may also want to use a common parameter file, for values that are identical on all instances, and include it in the instance-specific files with the IFILE parameter.
It is recommended that you use a server parameter file for your RAC database. This binary file is maintained on a shared disk and contains generic entries for values that are common to all instances and a separate parameter entry for each instance that requires a unique value.
If you build your RAC database with DBCA, then you have the option to create a server parameter file concurrently with the database. Select the “Create server parameter file (spfile)” box under the File Locations tab on the Initialization Parameters page and provide the shared disk pathname in the Persistent Parameters Filename field.
You can also create a server parameter file manually if you have built or migrated your RAC database without DBCA, or if you did not select the “Create server parameter file (spfile)” option.
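As a sketch of the client-side approach described above, each instance-specific file holds only the unique values and pulls the common values in with IFILE. The file names, paths, and parameter values below are illustrative only:
# initU1N1.ora (client-side, instance-specific parameter file)
instance_name   = U1N1
thread          = 1
undo_tablespace = UNDOTBS1
ifile           = /u01/app/oracle/admin/U1/pfile/initU1_common.ora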

Oracle9i Database: Real Application Clusters on Linux 6-15


Creating and Managing Server Parameter
File

CREATE
SPFILE='/dev/vx/rdsk/oracle/U1_raw_spfile_5m'
FROM PFILE='$ORACLE_HOME/dbs/initU1.ora'

ALTER SYSTEM
SET sort_area_retained_size = 131072
SCOPE = SPFILE
SID = 'U1N1'

ALTER SYSTEM SET sort_area_size = 131072


COMMENT = 'Reduce sort space'
SCOPE = SPFILE
SID = '*'

6-16 Copyright © 2004, Oracle. All rights reserved.

Creating and Managing Server Parameter File


To create a server parameter file, you first need a text-based, client-side initialization parameter file and a shared disk. Connect to a SQL*Plus session and execute the CREATE SPFILE command as shown in the first example. If you are using a shared file system, then name the default SPFILE location in the command, otherwise name the raw device or a link to it that is defined with the default filename. You can do this regardless of whether you have a running instance or an open database.
When initially created, all parameters in a server parameter file have identical values regardless of which instance uses it to start up. To add instance-specific values, you must use an ALTER SYSTEM command with a SCOPE clause set to SPFILE (or BOTH) and the SID clause set to the required instance name. You can also set a databasewide value in your server parameter file by setting the SID value to the wild card value ('*') as shown in the third example, which also includes a comment. You can remove parameters from the SPFILE with the ALTER SYSTEM RESET command.
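For example, a minimal sketch of removing an instance-specific entry from the server parameter file (the parameter name and instance name are illustrative):
ALTER SYSTEM RESET sort_area_retained_size
  SCOPE = SPFILE
  SID = 'U1N1'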

Oracle9i Database: Real Application Clusters on Linux 6-16


Parameter File Search Order

• The default search order for a parameter file


during instance startup is:
– $ORACLE_HOME/dbs/spfilesid.ora
– $ORACLE_HOME/dbs/spfile.ora
– $ORACLE_HOME/dbs/initsid.ora
• To simplify instance startups, create a generic
pfile.ora file with a single SPFILE entry.

6-17 Copyright © 2004, Oracle. All rights reserved.

Parameter File Search Order


When you use a STARTUP command without identifying a parameter file with the PFILE option, the Oracle server searches for an appropriate file in the following order:
• An instance-specific server parameter file, spfilesid.ora
• A generic server parameter file, spfile.ora
• An instance-specific, client-side parameter file, initsid.ora
Even though server parameter files are in the search list, your RAC server parameter file will be on a shared disk and, therefore, not likely to be in the default location with the default name. In order to take advantage of the default behavior to locate your server parameter file, create a text file containing just one line: the SPFILE parameter. The value for this SPFILE parameter is the full name of the shared disk partition where you created the file. By locating and naming this text file as if it were a generic server parameter file, all instances that are started on the server will locate and use it during a default startup.
You may also be able to use the default behavior by creating a link (to the shared partition where the server parameter file is stored) and giving it the same name and location as the default generic server parameter file.
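As a sketch, the one-line text file described above contains nothing but the SPFILE parameter. The value shown reuses the raw device from the earlier CREATE SPFILE example; substitute the shared location where you created your server parameter file:
SPFILE='/dev/vx/rdsk/oracle/U1_raw_spfile_5m'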

Oracle9i Database: Real Application Clusters on Linux 6-17


Enterprise Manager and Cluster Databases

Cluster
databases

6-18 Copyright © 2004, Oracle. All rights reserved.

Enterprise Manager and Cluster Databases


The Enterprise Manager Console provides a central point to manage the Oracle environment through an intuitive graphical user interface (GUI). The Console can initiate a variety of cluster database management tasks with the Management Server component. From the Navigator pane, you can view and manage both single- and multiple-instance databases by using essentially the same operations. Just as in single instance databases, cluster databases and all of their related elements can be administered by using master/detail views and Navigator menus.
After the nodes are discovered, by using the repository information that is added by DBCA or the SRVCTL utility, the Navigator tree displays cluster databases, their instances, and other related services, such as a listener. You can then use the Console to start, stop, and monitor services as well as to schedule jobs or register events, simultaneously performing these tasks on multiple nodes if you want. You can also use the Console to manage schemas, security, and the storage features of cluster databases.
Before using the Enterprise Manager Console, start the following components:
• An Oracle Intelligent Agent on each of the nodes
• Management Server
• Console

Oracle9i Database: Real Application Clusters on Linux 6-18


Displaying Objects in the Navigator Pane

Cluster
instances

6-19 Copyright © 2004, Oracle. All rights reserved.

Displaying Objects in the Navigator Pane


In the Navigator tree, cluster databases are listed under the same Databases folder as single-instance databases. Just as in single instance databases, each cluster database folder contains the instances and subfolders for Instance, Schema, Security, and Storage. By selecting objects within a Cluster Database subfolder, you can access property sheets to inspect and modify properties of these objects, just like single-instance databases. All discovered instances are displayed under the Cluster Database Instances folder.
With cluster databases, only the subfolders of the Instance folder are different from those of single instance databases. In the Instance folder, the instance database subfolders are split into two functional parts: Database-Specific File and Instance-Specific File Structures. The available database-specific functionality includes in-doubt transactions and resource consumer groups. All instance-specific functionality appears beneath the individual instance icons within the Cluster Database Instances subfolder and includes:
• Configuration and stored configuration information management
• Session management
• Lock information
• Resource plan and resource plan schedule management

Oracle9i Database: Real Application Clusters on Linux 6-19


Starting a Cluster Database

6-20 Copyright © 2004, Oracle. All rights reserved.

Starting a Cluster Database


You can use the Console to start an entire cluster database or selected instances within the cluster database. You can also select the required startup options, for example, MOUNT.
The Cluster Database Startup/Shutdown Results dialog box is automatically displayed during a startup (or shutdown) operation. You can also initiate it by performing the following steps:
1. In the Navigator pane, expand Databases.
2. Right-click a cluster database.
3. Select Results from the Options menu that appears.
The display is updated dynamically as the operation progresses and graphically reflects the following states: if the component is functional (green flag) or if the component is stopped (red flag).
If the instances are started successfully, then the Cluster Database Started message box appears with a successful message.

Oracle9i Database: Real Application Clusters on Linux 6-20


Stopping a Cluster Database

6-21 Copyright © 2004, Oracle. All rights reserved.

Stopping a Cluster Database


Similar to the startup option that is available on the cluster database menu, you can choose to stop the entire cluster database or single instances. You can also select shutdown-specific options, such as IMMEDIATE. Only when all instances are shut down is the database considered closed.
The Cluster Database Shutdown Progress dialog box displays the progress of the shutdown operation. After the instances are shut down successfully, as shown in the slide, the Cluster Database Stopped message box also appears with a successful message. If the shutdown fails, then the Cluster Database Stopped message box appears with a failure message.

Oracle9i Database: Real Application Clusters on Linux 6-21


Viewing Cluster Database Status

6-22 Copyright © 2004, Oracle. All rights reserved.

Viewing Cluster Database Status (continued)


The Status Details tab displays an overall view of the state of the cluster and related components, as shown in the slide. This tab displays the status of the various components, such as listeners and instances, for all nodes. The states of the components are indicated with the following graphical elements:
• Green flag: The component is functional.
• Red flag: The component is stopped.
• Timer: An operation is in progress and the Enterprise Manager cannot determine the state of the component. This state occurs typically when the component startup or shutdown operation has not completed.
• Blank background: The component does not exist on this node or is not configured on the node.


Oracle9i Database: Real Application Clusters on Linux 6-22


Instance Management

6-23 Copyright © 2004, Oracle. All rights reserved.

Instance Management
You can perform much of the required instance management by using Enterprise Manager. These areas of management include:
• Single database management
• Storage, schema, and other database components
• Multiple instance management
• Configuration, sessions, and other instance-specific components
• Cluster-aware jobs and events
• Performance reports
From the Enterprise Manager Console, click the plus (+) in front of Instances. Next, click the plus (+) in front of Cluster Database Instances and then click the plus (+) in front of the desired database instance. Finally, log in as the sys user and save this as a preferred credential. Repeat these steps for each RAC database instance. After completing these tasks, you can manage each individual instance from Enterprise Manager.

Oracle9i Database: Real Application Clusters on Linux 6-23


Management Menu

6-24 Copyright © 2004, Oracle. All rights reserved.

The Management Menu


Right-clicking an RAC database provides a management menu as shown in the slide. You can manage the RAC database as a single database from this menu.


Oracle9i Database: Real Application Clusters on Linux 6-24


Storage Management
Tablespaces:

6-25 Copyright © 2004, Oracle. All rights reserved.

Tablespaces
You can use Enterprise Manager to manage database storage. To control tablespaces, click plus (+) next to the Databases folder to expand the contents. Next, click plus (+) next to the RAC database that you want to manage to expand the management areas and log in as the sys user. Expand Storage Management and then select Tablespaces.

Oracle9i Database: Real Application Clusters on Linux 6-25


Storage Management
Tablespace map:

6-26 Copyright © 2004, Oracle. All rights reserved.

Tablespace Map
To view the usage map for a specific tablespace, right-click the desired tablespace in the tablespace map and choose Show Tablespace Map.


Oracle9i Database: Real Application Clusters on Linux 6-26


Storage Management
Create a new tablespace:

6-27 Copyright © 2004, Oracle. All rights reserved.

Creating New Tablespace


To create a new tablespace, right-click Tablespaces in the Enterprise Manager’s Navigator pane on the left of the window. Choose Create and specify an unused OCFS file or a raw file to use. Next, specify the directory where the file is located and the file size. Click the Storage tab and specify extent and segment space management. You can click the Show SQL button if you want to view the SQL command that will be issued when the Create button is clicked.
After the tablespace has been created, you can create a new table in it. Expand Schema from the Navigator pane. You can create the table and insert rows in this tablespace.
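The statement displayed by Show SQL is an ordinary CREATE TABLESPACE command. A minimal sketch for an OCFS data file, with an illustrative tablespace name, path, and sizes (not necessarily the exact text that Enterprise Manager generates):
CREATE TABLESPACE app_data
  DATAFILE '/ocfs/oradata/RACDB/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO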


Oracle9i Database: Real Application Clusters on Linux 6-27


Performance Manager and RAC

6-28 Copyright © 2004, Oracle. All rights reserved.

Performance Manager and RAC


Oracle Performance Manager has a Cluster Database Instances tab that the administrator can use
to monitor performance statistics of the RAC environment. Performance charts are available for
both cluster database-wide and instance-specific parameters.

Oracle9i Database: Real Application Clusters on Linux 6-28


Monitoring RAC
Node statistics:

6-29 Copyright © 2004, Oracle. All rights reserved.

Node Statistics
You can use Enterprise Manager to display node performance data. To do this, launch the Performance Monitor and click the cluster database name. Click the Diagnostic Pack in the toolbar (the medicine bag) on the left and click the Performance Monitor (graphs). Expand Cluster Databases and then expand Nodes. Performance charts that can be displayed include:
• CPU utilization
• Memory/swap data
• I/O data
• File system information
• Process data
• Network data
• IPC data


Oracle9i Database: Real Application Clusters on Linux 6-29


Monitoring RAC
Database statistics:

6-30 Copyright © 2004, Oracle. All rights reserved.

Cluster Database Statistics


To view cluster database statistics, launch Enterprise Manager and expand Cluster Database Instances. Expand the cluster database that you want to view statistics for. Available performance charts include:
• Performance overview
• Top sessions
• Locks
• Memory
• Top segments
• Response time
• Parallel query

Oracle9i Database: Real Application Clusters on Linux 6-30


Summary

In this lesson, you should have learned how to:


• Effectively use the cluster database server
manager
• Manage SPFILE
• Manage tablespaces, segments, and extents
• Use Enterprise Manager to monitor the RAC
environment
• Monitor RAC statistics

6-31 Copyright © 2004, Oracle. All rights reserved.


Oracle9i Database: Real Application Clusters on Linux 6-31


Advanced Deployment Topics

Copyright © 2004, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do


the following:
• Add extra nodes to the cluster
• Create and configure raw devices
• Configure failover
• Configure load balancing
• Configure adaptive parallel query

7-2 Copyright © 2004, Oracle. All rights reserved.


Oracle9i Database: Real Application Clusters on Linux 7-2


Adding New Nodes

1. Create the oracle user and the dba and


oinstall groups on a new node:
# groupadd -g 500 dba
# groupadd -g 501 oinstall
# useradd -u 500 -d /usr/local/oracle -g "dba" -m \
-s /bin/bash oracle
2. Install Oracle Cluster File System (OCFS).
3. Identify OCFS files for redo logs and undo
tablespaces.
– Ensure that there is at least one available rollback
segment or a new undo tablespace.
4. Make the required server parameter file changes.
5. Install Oracle Cluster Management System
(OCMS).

7-3 Copyright © 2004, Oracle. All rights reserved.

Adding New Nodes


You can manually add nodes to the cluster. After the node recognizes the other nodes in the cluster over the network, the configuration can proceed. You must add the user oracle and the group dba so that Oracle Cluster File System may be properly installed. You must identify OCFS files for the redo logs and undo tablespaces. Make sure that there is at least one available rollback segment or a new undo tablespace.
You must edit the server parameter file and make the appropriate changes for the instance on the new node. This includes changes to the following:
• instance_name
• undo_tablespace
• Threads
• Redo logs
• Rollback segments
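A minimal sketch of the instance-specific server parameter file entries for a hypothetical third instance, U2N3 (adjust the instance name, thread number, and undo tablespace to your configuration):
ALTER SYSTEM SET instance_name = 'U2N3' SCOPE = SPFILE SID = 'U2N3'
ALTER SYSTEM SET thread = 3 SCOPE = SPFILE SID = 'U2N3'
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS3' SCOPE = SPFILE SID = 'U2N3'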


Oracle9i Database: Real Application Clusters on Linux 7-3


Adding Log Files, and Enabling and
Disabling Threads
Adding Log Files

ALTER DATABASE ADD LOGFILE [THREAD integer] [GROUP integer]
  filespec [, [GROUP integer] filespec]...

Enabling and Disabling Threads

ALTER DATABASE {ENABLE [PUBLIC] | DISABLE} THREAD integer

7-4 Copyright © 2004, Oracle. All rights reserved.

Adding Log Files, and Enabling and Disabling Threads


You can specify redo log threads for use by instances by using the THREAD option of the ALTER DATABASE ADD LOGFILE command. You can enable or disable threads by using the ALTER DATABASE ENABLE/DISABLE THREAD command.
The following is a description of the arguments that are shown in the slide:
THREAD integer: Specifies the thread that is assigned to an instance
GROUP integer: Specifies the group number of the redo log file group
filespec: Specifies the name of an operating system file, plus size and reuse options
PUBLIC: Specifies that the thread belongs to the public pool
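A minimal sketch that adds a redo log group to a new thread and then enables the thread; the group number, file name, and size are illustrative:
ALTER DATABASE ADD LOGFILE THREAD 3
  GROUP 7 ('/ocfs/oradata/U2/redo3_01.log') SIZE 50M
ALTER DATABASE ENABLE PUBLIC THREAD 3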


Oracle9i Database: Real Application Clusters on Linux 7-4


Allocating Rollback Segments

CREATE [PUBLIC] ROLLBACK SEGMENT rollback_segment
  [TABLESPACE tablespace]
  [STORAGE storage_clause]

ALTER ROLLBACK SEGMENT rollback_segment
  {ONLINE | OFFLINE | STORAGE storage_clause}

7-5 Copyright © 2004, Oracle. All rights reserved.

Allocating Rollback Segments


Allocate rollback segments with the CREATE ROLLBACK SEGMENT command. Bring a rollback segment online or offline with the ALTER ROLLBACK SEGMENT command.
• Create at least one rollback segment for each instance of a parallel server.
• Ensure that the rollback segments are created in a tablespace other than the SYSTEM tablespace to avoid contention.
• Create private rollback segments with a single instance operating in Exclusive mode before starting up multiple instances of a parallel server.
• Specify the rollback segment in the parameter file of the instance to be started.
• By using an instance that is already started, create the rollback segment with the CREATE ROLLBACK SEGMENT command. Omit the PUBLIC option.
• Start up the instance to bring the segment online or use the ALTER ROLLBACK SEGMENT command to bring the rollback segment online.
• If a private rollback segment is specified in more than one parameter file, then only the first instance that acquires the rollback segment can be started.
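A minimal sketch that creates a private rollback segment for an additional instance and brings it online (the segment name, tablespace, and storage values are illustrative):
CREATE ROLLBACK SEGMENT rbs_n3_01
  TABLESPACE rbs_data
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20)
ALTER ROLLBACK SEGMENT rbs_n3_01 ONLINE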

Oracle9i Database: Real Application Clusters on Linux 7-5


Adding an Instance with DBCA

7-6 Copyright © 2004, Oracle. All rights reserved.

Instance Management
To add a new instance, launch the Database Configuration Assistant. Click the Instance
Management option button to add (or delete) an instance. Click the Next button to continue.


Oracle9i Database: Real Application Clusters on Linux 7-6


Adding an Instance with DBCA

7-7 Copyright © 2004, Oracle. All rights reserved.

Adding an Instance
Next, you are prompted to add or delete an instance. Click the Add Instance option button. Click the Next button to continue.


Oracle9i Database: Real Application Clusters on Linux 7-7


Choosing a Cluster Database

7-8 Copyright © 2004, Oracle. All rights reserved.

Choosing a Cluster Database


All reachable cluster databases in the network are displayed in this window. Click the option button next to the cluster that you want to add the instance to. You must provide a username/password for a user with SYSDBA privileges. Click the Next button to continue.


Oracle9i Database: Real Application Clusters on Linux 7-8


Instance Name

7-9 Copyright © 2004, Oracle. All rights reserved.

Instance Name
A default instance name and node are displayed on this window. You can change the default
name if you want. If you see that the instance name and node are correct, then continue by
clicking the Next button.

Oracle9i Database: Real Application Clusters on Linux 7-9


Redo Log Groups

7-10 Copyright © 2004, Oracle. All rights reserved.

Redo Log Groups


The Database Storage window is displayed next. You must add at least two redo log groups, so expand the Redo Log Groups folder and click the Add button. Add an undo tablespace under the Tablespaces folder or a rollback segment under the Rollback Segments folder also. Click the Finish button to proceed.

Oracle9i Database: Real Application Clusters on Linux 7-10


Confirming Instance Creation

7-11 Copyright © 2004, Oracle. All rights reserved.

Confirming Instance Creation


If you want to proceed with the creation of the instance, then click the OK button.


Oracle9i Database: Real Application Clusters on Linux 7-11


Instance Creation Progress

7-12 Copyright © 2004, Oracle. All rights reserved.

Instance Creation Progress


The Progress screen is displayed for the duration of the instance creation. The status bar shows
the progress of the addition of the instance and network components.


Oracle9i Database: Real Application Clusters on Linux 7-12


Using Raw Devices

• Create the necessary device files with mknod:


# mknod /dev/ocms-quorum c 162 1
# mknod /dev/RACThrd1Grp1Mem1.rdo c 162 2

• Bind raw devices to block devices with raw:


# raw /dev/ocms-quorum /dev/opsvg/quorum

• Give ownership of raw files to oracle with chown:


# chown oracle:dba /dev/ocms-quorum

7-13 Copyright © 2004, Oracle. All rights reserved.

Using Raw Devices


Linux systems typically ship with preconfigured character and block device special files. For raw I/O, these files are from /dev/raw1 through /dev/raw254 (usually only a small number of files are present). Because it is not meaningful to use /dev/raw1 as a database filename, use database filenames that are as meaningful as with databases on file systems.
Create raw device special files with the mknod command. The first argument to mknod is the filename, the second is either the letter b or c, indicating a block or character device, the third is the major device number (always 162 for raw devices), and the fourth is the minor device number. The minor device number ranges between 0 and 254 and must be unique among all files with the same major device number. Minor device number 0 is used for /dev/raw, which must not be modified. Make sure that the minor device number is incremented for each file that is created.
You must bind the raw devices that are created above to block devices as part of each boot sequence. This is done by using the raw command. This requires the rawio package.
# raw /dev/<raw device> /dev/<block device>
The raw devices must be readable and writable by the oracle user and dba group. Use the chown command as shown in the slide to do this.
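Because the bindings do not survive a reboot, the raw and chown commands can simply be repeated from a startup script. A minimal sketch that appends them to /etc/rc.local, reusing the device names from the slide (the second block device path is illustrative):
raw /dev/ocms-quorum /dev/opsvg/quorum
raw /dev/RACThrd1Grp1Mem1.rdo /dev/opsvg/redo1
chown oracle:dba /dev/ocms-quorum /dev/RACThrd1Grp1Mem1.rdo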

Oracle9i Database: Real Application Clusters on Linux 7-13


Transparent Application Failover

• Can be used with spare nodes or with primary or


secondary instance configurations
• Is designed for RAC but can be used for:
– Real Application Clusters Guard
– Replicated systems
– Data Guard

7-14 Copyright © 2004, Oracle. All rights reserved.

Transparent Application Failover


The transparent application failover (TAF) feature automatically reconnects applications to the database if the connection fails. Because the reconnection happens automatically within the Oracle Call Interface (OCI) library, you need not change the client application to use TAF.
Because most TAF functionality is implemented in the client-side network libraries (OCI), the client must use the Oracle Net OCI libraries to take advantage of TAF functionality. Therefore, to implement TAF in RAC, make sure that you use JDBC OCI instead of PL/SQL packages.
Because TAF was designed for RAC, it is much easier to configure TAF for that environment. However, TAF is not restricted for use with RAC environments. You can also use TAF for single instance Oracle databases. In addition, you can also use TAF for Oracle Real Application Clusters Guard, Replicated systems, and Data Guard.


Oracle9i Database: Real Application Clusters on Linux 7-14


Failover Mode Options

• You must add failover options manually to TNS


configuration files.
• These options are part of the CONNECT_DATA
section of a connect descriptor.
• Failover options include:
– TYPE: Identifies the nature of TAF, if any
– METHOD: Configures how quickly failover can occur
– BACKUP: Identifies an alternative net service name
– RETRIES: Limits the number of times that a
reconnection is attempted
– DELAY: Specifies how long to wait between
reconnection attempts

7-15 Copyright © 2004, Oracle. All rights reserved.

Failover Mode Options


To implement TAF, you must include a FAILOVER_MODE parameter in the CONNECT_DATA section of a connect descriptor. If your connect descriptors are defined in TNS configuration files, then you must add the TAF parameters manually. This is because Oracle Net Manager does not provide support for TAF configurations.
The FAILOVER_MODE parameter has a set of subparameters that control how a failover will occur if a client is disconnected from the original connection that was made with the connect descriptor. The subparameters, which are covered in detail on the following pages, include:
• TYPE: (Required) Identifies one of the three types of Oracle Net failover available by default to OCI applications
• METHOD: Determines how fast failover occurs from the primary node to the backup node
• BACKUP: Identifies a different net service name for backup connections
• RETRIES: Limits the number of times to attempt to connect after a failover
• DELAY: Specifies the amount of time in seconds to wait between connect attempts

Oracle9i Database: Real Application Clusters on Linux 7-15


Failover Types

• Failover types identify the nature of TAF, if any.


• The options are:
– SESSION: Failover to an alternate session only
– SELECT: Failover and continue with any ongoing
query
– NONE: Prevent failover
. . .
(CONNECT_DATA =
(SERVICE_NAME = rac.us.aaacme.com)
(FAILOVER_MODE =
(TYPE=SELECT)
. . .

7-16 Copyright © 2004, Oracle. All rights reserved.

Failover Types
Three types of Oracle Net failover functionality are available by default to OCI applications:
• SESSION: Causes a failed session to fail over to a new session. If a user’s connection is lost, then a new session is automatically created for the user. This type of failover does not attempt to perform any actions after connecting the user to the new session. This option is your best choice for applications that primarily perform DML transactions and short queries.
• SELECT: Causes a failed session to fail over to a new session and continue any interrupted queries. After automatically connecting the user to a backup session, this option enables users with open cursors to continue fetching on them after failure. However, this mode involves overhead on the client side in normal select operations. You should use this option when an instance failure could result in having to re-create output that is already generated by a long-running query.
• NONE: No failover functionality is used. Although this is the default, you can specify this type explicitly to prevent failover from happening. This option is typically useful for testing purposes rather than for implementing failover in a production environment.

Oracle9i Database: Real Application Clusters on Linux 7-16


Failover Methods

• Failover methods determine how quickly


connections become available following a failover.
– BASIC: Establishes no contact with the failover
instance before failure
– PRECONNECT: Creates mirror connections on the
standby instance for the connections on the
primary instance
. . .
(CONNECT_DATA =
(SERVICE_NAME = rac.us.aaacme.com)
(FAILOVER_MODE =(METHOD=PRECONNECT)
. . .

7-17 Copyright © 2004, Oracle. All rights reserved.

Failover Methods
The METHOD subparameter takes one of two values: BASIC and PRECONNECT. The latter is only of use with Real Application Clusters (unlike the TYPE options, which can be used for other failover situations, such as standby databases or reconnections to the same instance).
The BASIC option requires a session to make a new connection when it fails over from its original instance connection. This option causes no overhead on the backup instance until a failover occurs. This allows you to use the backup instance for nonapplication work, such as database maintenance, without impacting the failover status. However, the failover processing can be slow because all of the disconnected sessions will attempt to reconnect to the failover instance concurrently, overburdening the listener on that instance.
The PRECONNECT option provides faster failover by creating a failover connection on the standby instance concurrently with each connection to the primary instance. When the primary instance fails, the connections are switched to one of the existing connections on the standby instance. This requires minimal work by the listener for that instance and avoids the overhead of creating new session connections. Unlike the BASIC option, the PRECONNECT option imposes a load on the standby instance, which must be able to support all connections from every supported instance.

Oracle9i Database: Real Application Clusters on Linux 7-17


TAF Configuration: Example

RAC1 =
(DESCRIPTION=
(LOAD_BALANCE=OFF)(FAILOVER=ON)
(ADDRESS=
(PROTOCOL=TCP)(HOST=aaacme1)(PORT=1521))
(ADDRESS=
(PROTOCOL=TCP)(HOST=aaacme2)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=rac.us.acme.com)
(SERVER=DEDICATED)
(FAILOVER_MODE=
(BACKUP=RAC2)
(TYPE=SESSION)(METHOD=PRECONNECT)
(RETRIES=180)(DELAY =5))))

7-18 Copyright © 2004, Oracle. All rights reserved.

TAF Configuration: Example


The slide contains the definition of the TNS alias (RAC1) for a connect descriptor that could be used for TAF with a primary/secondary instance configuration. The connections that are made through the RAC1 alias are to dedicated servers because of the SERVER binding value.
The RAC1 alias is for the primary instance, as indicated by the INSTANCE_ROLE subparameter value. The primary instance runs on the aaacme1 node because this is the first address that is listed in the DESCRIPTION clause and client load balancing, which could select either address, is disabled.
Failover is enabled with the FAILOVER parameter and the secondary instance is identified with the RAC2 alias in the BACKUP clause (the connect descriptor for RAC2 is shown on the next page). A failed-over session would be directed to preestablished failover connections because of the METHOD subparameter setting and would make up to 180 attempts to complete the reconnection, with a 5-second pause between each attempt.
Note: You could use other options, such as shared instead of dedicated servers, or the BASIC
rather than the PRECONNECT method, without interfering with the TAF operations.

Oracle9i Database: Real Application Clusters on Linux 7-18


TAF Configuration: Example

RAC2 =
(DESCRIPTION=
(LOAD_BALANCE=OFF)(FAILOVER=ON)
(ADDRESS=
(PROTOCOL=TCP)(HOST=aaacme2)(PORT=1521))
(ADDRESS=
(PROTOCOL=TCP)(HOST=aaacme1)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=rac.us.acme.com)
(INSTANCE_ROLE=SECONDARY)
(SERVER=DEDICATED)
(FAILOVER_MODE=
(BACKUP=RAC1)
(TYPE=SESSION)(METHOD=PRECONNECT)
(RETRIES=180)(DELAY =5))))

7-19 Copyright © 2004, Oracle. All rights reserved.

TAF Configuration: Example (continued)


This slide contains the definition for the connect descriptor that is associated with the TNS alias of the secondary instance, RAC2. The connect descriptor values are similar to those for the RAC1 descriptor with the following key differences:
• The first address that is listed is for the node where the secondary instance runs (aaacme2). This should prevent any connections that are made directly through this alias because there is no load balancing to redirect the request to the second address.
• The INSTANCE_ROLE value is defined as SECONDARY. This prevents connections through the alias unless the primary instance has failed and the instance on aaacme2 has assumed the primary role.
• The BACKUP value is the alias RAC1 so that connections to the instance on aaacme2 can fail back to the instance on aaacme1, if necessary.
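To verify the failover settings that sessions have negotiated, and whether a session has already failed over, you can query V$SESSION. A minimal sketch:
SELECT username, failover_type, failover_method, failed_over
FROM v$session
WHERE username IS NOT NULL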


Oracle9i Database: Real Application Clusters on Linux 7-19


Connection Load Balancing

sales1.us.acme.com=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (LOAD_BALANCE=on)
   (ADDRESS= . . . )
   (ADDRESS= . . . )
   (ADDRESS= . . . ))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (SERVER=shared)))

(Slide graphic: three nodes, each running shared server dispatchers)

7-20 Copyright © 2004, Oracle. All rights reserved.

Connection Load Balancing


With connection load balancing, connections to the handlers on each node are based on the number of active connections; a new connection is assigned to the node with the lightest processing load and the fewest active connections. If you have configured shared servers, then the connection is made to the dispatcher with the fewest current users on the selected node. The example in the slide shows the configuration for connection load balancing across a three-node cluster with shared servers enabled.


Oracle9i Database: Real Application Clusters on Linux 7-20


Service and Instance Names

(DESCRIPTION=
(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=host3)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))

(DESCRIPTION=
(ADDRESS= (PROTOCOL=tcp)(HOST=host1)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME= sales.us.acme.com)
(INSTANCE_NAME=S1))
)

7-21 Copyright © 2004, Oracle. All rights reserved.

Service and Instance Names


The DESCRIPTION clause in the first example on the slide enables load balancing for connections to the sales.us.acme.com service name. Note that if you omit the LOAD_BALANCE clause, or set LOAD_BALANCE to OFF, NO, or FALSE, then the addresses will be tried in the order that is listed until a successful connection is made.
The second example’s DESCRIPTION clause causes a connection to be made specifically to the instance with its INSTANCE_NAME initialization parameter set to the value S1. This option enables connections to a specific instance based on the work that is being performed while connected. This usage supports functionally partitioned databases.

Oracle9i Database: Real Application Clusters on Linux 7-21


Adaptive Parallel Query

Slide diagram (Node 1, Node 2, Node 3): query processes have node affinity for the query coordinator, but will use other nodes if needed; the callouts identify the query coordinator and the parallel query execution.

7-22 Copyright © 2004, Oracle. All rights reserved.

Adaptive Parallel Query


As well as load balancing that is provided by Oracle Net Services, you can employ the adaptive parallel query mechanism to execute statements in parallel across the instances of a Real Application Clusters database. This method allows the optimizer to determine whether it will spread the work across query processes that are associated with one instance or with multiple instances. Therefore, depending on the workload, queries, data manipulation language (DML), and data definition language (DDL) statements may execute in parallel on a single node, across multiple nodes, or across all nodes in the cluster database.
In some cases, the parallel optimizer may choose to use only one node to satisfy a request. Generally, the optimizer will try to limit the work to the node where the query coordinator process executes (node affinity) to reduce cross-instance message traffic. However, if multiple nodes are employed, then they all continue to work until the entire operation is completed.

Oracle9i Database: Real Application Clusters on Linux 7-22


Monitoring Parallel Query

Views for monitoring parallel query performance:


• V$PQ_SYSSTAT
• V$PQ_SESSTAT
• V$PQ_SLAVE
• V$PQ_TQSTAT
• V$PX_SESSION

7-23 Copyright © 2004, Oracle. All rights reserved.

Monitoring Parallel Query


There are several views that are useful for monitoring parallel query. The database administrator can use these views to gauge query performance. These include:
• V$PQ_SYSSTAT: Overall parallel query system statistics
• V$PQ_SESSTAT: Information about all parallel execution sessions
• V$PQ_SLAVE: Active parallel query slave statistics
• V$PQ_TQSTAT: Contains the rows that are processed by each slave, by stage of the SQL statement. Statistics are compiled after a query finishes and are only available for the current session.
• V$PX_SESSION: Information about all parallel execution sessions. It includes query coordinator information.
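For example, a quick check of parallel execution server activity can be made against V$PQ_SYSSTAT; the filter shown is just one possible choice:
SELECT statistic, value
FROM v$pq_sysstat
WHERE statistic LIKE 'Servers%'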


Oracle9i Database: Real Application Clusters on Linux 7-23


Summary

In this lesson, you should have learned how to:


• Add extra nodes to the cluster
• Create and configure raw devices
• Configure failover
• Configure load balancing
• Configure adaptive parallel query

7-24 Copyright © 2004, Oracle. All rights reserved.


Oracle9i Database: Real Application Clusters on Linux 7-24


________
Appendix A:
IEEE1394
Shared Disks
________

IEEE 1394 Shared Disks
IEEE 1394 is a standard that defines a high-speed serial bus. This bus is more commonly known
as FireWire, a name that was coined by Apple Computer, Inc. FireWire is similar in principle to
Universal Serial Bus (USB), but runs at speeds of up to 400 megabits per second and the
transmission mode provides much greater bandwidth than USB. The original intent of FireWire
was to provide an interface for devices, such as digital video cameras, that transfer a large
amount of data. External IDE drive enclosures are available and include FireWire ports. It is now
possible to share inexpensive IDE drives between systems supporting FireWire devices. Linux
supports FireWire devices that are Open Host Controller Interface (OHCI)–compatible.
Used in conjunction with Oracle Cluster File System (OCFS), FireWire-connected IDE drives
provide an economical method of sharing disks for RAC on Linux. Currently, FireWire devices
allow a maximum of four concurrent system logins (connections), so that the maximum number
of nodes in the cluster is limited to four. This restriction, in addition to the current transfer speed
of 400 Mbits, would preclude implementation in large production environments, but is ideal for
building low-cost development or test systems. To prepare FireWire IDE disks for RAC, perform
the following steps:

1. Make sure that your configuration is certified.

The setup that is tested for this class is Red Hat Application Server 2.1 with Oracle 9.2.0.2. If
you are interested in using another distribution of Linux or Oracle9i, then check the certified
configurations.

a. Go to http://metalink.oracle.com and log in.

b. Click the Certify and Availability button.

c. Click the View Certifications by Product link.

d. Select Real Application Clusters from the Product Group list.

e. Select RAC on Linux from the operating system list.

f. Choose your processor type (x86 or Itanium) from the Platform list.

g. Select the proper Oracle version link (9.2 or 9.0.1).

Oracle9i Database: Real Application Clusters on Linux A-2


2. Purchase proper FireWire chipset and adapters.

For RAC on Linux to work properly with FireWire, all the nodes in the cluster must be
logged in to the external FireWire hard drive concurrently. Be aware that not all FireWire
adapters or FireWire drive enclosures work properly with RAC. The adapter must be OHCI
and IEEE 1394 compliant. The FireWire disk enclosure must contain a chipset that supports
multiple simultaneous logins. The best drive enclosures for this purpose contain the Oxford
OXFW911 chipset. This is the predominant chipset that is found in FireWire drive enclosures
but there are others, so you must be careful. Install the adapters in your systems and cable the
drive as directed by the hardware documentation.

3. Obtain and install Linux kernel with FireWire support.

If you are using Red Hat Advanced Server 2.1, then you must install a kernel that supports
FireWire disk devices. The first kernel that incorporated this support was the 2.4.19 test
kernel. At the time of writing this course, the 2.4.20 kernel is available. This kernel is
preferable because it is a production kernel. To get and install the 2.4.20 kernel, perform the
following steps:

a. Go to http://otn.oracle.com/tech/linux/open_source.html and choose the proper
gzipped tar file. Choose either linux-2.4.20rc2-orafw-smp.tar.gz or
linux-2.4.20rc2-orafw-up.tar.gz. Which file you download depends on the
processor configuration of your hardware, symmetric multiprocessor (SMP) or
uniprocessor (UP).

b. Transfer or copy the archive to the root directory of each node, then gunzip and untar
the archive as the root user.
# pwd
/
# gunzip linux-2.4.20rc2-orafw-up.tar.gz
# tar -xvf linux-2.4.20rc2-orafw-up.tar
c. Edit the /boot/grub/grub.conf file to allow the new kernel to be included in
the Grub boot menu. Add an entry under the splashimage identifier and above the
original kernel entry as indicated below. Make sure that the root device matches the
one that was used in the original configuration.

# vi /boot/grub/grub.conf

default=0
timeout=10
splashimage=(hd0,1)/boot/grub/splash.xpm.gz

title Firewire Kernel 2.4.20
root (hd0,1)
kernel /boot/vmlinuz-2.4.20-orafw ro root=/dev/hda2

title Red Hat Linux Advanced Server (2.4.9-e.3) # Original Grub entry
root (hd0,1)
kernel /boot/vmlinuz-2.4.9-e.3 ro root=/dev/hda2 hdc=ide-scsi
initrd /boot/initrd-2.4.9-e.3.img
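After rebooting each node and selecting the new entry from the Grub menu, a quick check confirms that the FireWire-enabled kernel is the one running (a minimal sketch; the exact version string reported depends on the kernel you installed, here assumed to match the vmlinuz-2.4.20-orafw image above):

# uname -r
2.4.20-orafw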



4. Configure FireWire modules to load at startup time.

Several kernel modules must be loaded in order for the shared disk to be recognized. The
modules that must be loaded are ohci1394 and sbp2 (serial bus protocol). The sbp2
module is a low-level SCSI driver for IDE buses. In addition, the proper high-level SCSI
device module must be loaded. Your choices include sd_mod (disk), st (tape), sr_mod
(CD-ROM), and sg (generic disc burner/scanner). Use the sd_mod module. Edit the
/etc/rc.local file and add the following three lines in the order specified:
# vi /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
modprobe ohci1394
modprobe sbp2
modprobe sd_mod

In addition, the sbp2 module must be configured to support multiple logins. Edit the
/etc/modules.conf file and set the sbp2_exclusive_login parameter equal to 0
(allow multiple logins) as shown in the example. The default value is 1 (single login only).
# vi /etc/modules.conf
options sbp2 sbp2_exclusive_login=0
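As a quick sanity check (in addition to the dmesg output shown below), lsmod should list all three modules once they have been loaded:

# lsmod | grep -E 'ohci1394|sbp2|sd_mod'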

After restarting the system, run the dmesg command and look for 1394 and sbp2-related
entries to verify that the FireWire adapter and shared disk are recognized and the node is
logged in.
# dmesg
...
ohci1394: $Rev: 758 $ Ben Collins <[email protected]>
PCI: Found IRQ 12 for device 00:09.0
ohci1394_0: OHCI-1394 1.0 (PCI): IRQ=[12] MMIO=[e8000000-e80007ff] Max Packet=[2048]
...
ieee1394: sbp2: Logged into SBP-2 device
ieee1394: sbp2: Node[01:1023]: Max speed [S400] - Max payload [2048]
scsi0 : IEEE-1394 SBP-2 protocol driver (host: ohci1394)
$Rev: 792 $ James Goodwin <[email protected]>
SBP-2 module load options:
- Max speed supported: S400
- Max sectors per I/O supported: 255
- Max outstanding commands supported: 64
- Max outstanding commands per lun supported: 1
- Serialized I/O (debug): no
- Exclusive login: no
Vendor: QUANTUM   Model: Bigfoot TX6.0AT   Rev:
Type: Direct-Access   ANSI SCSI revision: 06
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
SCSI device sda: 11773755 512-byte hdwr sectors (6028 MB)

After the disk is recognized, install OCFS, Cluster Manager, and Oracle 9.2.0.4 as detailed in
the lessons and workshop.
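Before moving on, it is worth confirming that the kernel presents the FireWire drive as a SCSI disk (a minimal check; the device name /dev/sda matches the dmesg output above but may differ on your hardware):

# cat /proc/partitions      # an sda entry should be listed
# fdisk -l /dev/sda         # should report the geometry of the shared FireWire disk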



________
Appendix B:
Workshop
________

Exercise 2: Preparing the Operating System

1. Verify host names and IP addresses on both the nodes. There should be an entry for each
node and an entry for each interconnect. Ping the other host and interconnect to test the
network.

First node
[root@stc-raclin01]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 stc-raclin01 localhost.localdomain localhost
148.2.65.101 stc-raclin01 stc-raclin01 rac1
148.2.65.102 stc-raclin02 stc-raclin02 rac2
192.168.1.12 racic02 ic2
192.168.1.11 racic01 ic1

[root@stc-raclin01]# ping stc-raclin02


PING stc-raclin02 from 148.2.65.101 : 56(84) bytes of data.
64 bytes from stc-raclin02 : icmp_seq=0 ttl=64 time=436 usec
...
[root@stc-raclin01]# ping racic02
PING racic02 (138.2.65.12) from 138.2.65.11 : 56(84) bytes of data.
64 bytes from racic01 (138.2.65.12): icmp_seq=0 ttl=64 time=635 usec
...

Second node
[root@stc-raclin02 root]# cat /etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       stc-raclin02 localhost.localdomain localhost
148.2.65.102    stc-raclin02 rac2
148.2.65.101    stc-raclin01 rac1
192.168.1.11    racic01 ic1
192.168.1.12    racic02 ic2

[root@stc-raclin02 root]# ping stc-raclin01
PING stc-raclin01 from 148.2.65.102 : 56(84) bytes of data.
64 bytes from stc-raclin01 : icmp_seq=0 ttl=64 time=436 usec
...
[root@stc-raclin02 root]# ping racic01
PING racic01 (138.2.65.11) from 138.2.65.12 : 56(84) bytes of data.
64 bytes from racic01 (138.2.65.11): icmp_seq=0 ttl=64 time=635 usec
...


2. As the root user, review shared memory settings in the /etc/sysconfig/oracle file
on both nodes. If changes need to be made, restart for the new settings to take effect. The
settings shown below should be considered minimum values.

First Node
[root@stc-raclin01]# vi /etc/sysconfig/oracle
# Shared memory and semaphore settings
SHMMAX=47483648
SHMMNI=4096
SHMALL=2097152
SEMMSL=1250
SEMMNS=32000
SEMOPM=100
SEMMNI=256

Second Node
[root@stc-raclin02]# vi /etc/sysconfig/oracle
# Shared memory and semaphore settings
SHMMAX=47483648
SHMMNI=4096
SHMALL=2097152
SEMMSL=1250
SEMMNS=32000
SEMOPM=100
SEMMNI=256
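After a restart, you can confirm the values that are actually in effect through the proc filesystem (a quick check, assuming the settings file above has been applied at boot; the semaphore values are reported in the order SEMMSL, SEMMNS, SEMOPM, SEMMNI):

# cat /proc/sys/kernel/shmmax
47483648
# cat /proc/sys/kernel/sem
1250    32000   100     256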

3. Create the UNIX dba and oinstall groups and the oracle user on both nodes. In
addition, create the /home/ora920 (ORACLE_HOME) and /var/opt/oracle
directories if they don’t already exist. Note that the cluster software expects
/var/opt/oracle to exist before the installation begins. Perform these tasks as the
root user.

First Node
[root@stc-raclin01]# groupadd -g 500 dba
[root@stc-raclin01]# groupadd -g 501 oinstall
[root@stc-raclin01]# useradd -u 500 -d /home/oracle -g "dba" -G \
"oinstall" -m -s /bin/bash oracle
[root@stc-raclin01]# passwd oracle
[root@stc-raclin01]# mkdir /home/ora920;chmod 775 /home/ora920
[root@stc-raclin01]# chown oracle:dba /home/ora920
[root@stc-raclin01]# mkdir /var/opt/oracle
[root@stc-raclin01]# chown oracle:dba /var/opt/oracle
[root@stc-raclin01]# chmod 775 /var/opt/oracle


Second Node
[root@stc-raclin02]# groupadd -g 500 dba
[root@stc-raclin02]# groupadd -g 501 oinstall
[root@stc-raclin02]# useradd -u 500 -d /home/oracle -g "dba" -G \
"oinstall" -m -s /bin/bash oracle
[root@stc-raclin02]# passwd oracle
[root@stc-raclin02]# mkdir /home/ora920;chmod 775 /home/ora920
[root@stc-raclin02]# chown oracle:dba /home/ora920
[root@stc-raclin02]# mkdir /var/opt/oracle
[root@stc-raclin02]# chown oracle:dba /var/opt/oracle
[root@stc-raclin02]# chmod 775 /var/opt/oracle
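A quick way to verify the account on each node (the IDs shown assume the group and user numbers used above):

# id oracle
uid=500(oracle) gid=500(dba) groups=500(dba),501(oinstall)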

4. Verify the Linux version in use on both the nodes.

First node
[root@stc-raclin01]# uname -r
2.4.19-64GB-SMP

Second Node
[root@stc-raclin02]# uname -r
2.4.19-64GB-SMP

5. List the contents of the /archives/ocfs-1.0.9 directory and ensure that the OCFS
kernel RPM matches the kernel version that is given by the uname -r command in Step 4.
Install the ocfs-support RPM with the rpm command. This must be done on both the nodes.

First node

[root@stc-raclin01]# cd /archives/ocfs-1.0.9
[root@stc-raclin01]# ls -al
-rw-rw-r--  1 root  root     734 Jul  1 16:44 FIXES
-rw-r--r--  1 root  root     773 Jul  1 17:30 README.TXT
-rw-r--r--  1 root  root     252 Jul  1 17:31 README.TXT.2
-rw-r--r--  1 root  root  173029 Jul  1 17:23 ocfs-2.4.19-4GB-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  173578 Jul  1 17:23 ocfs-2.4.19-4GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  173498 Jul  1 17:23 ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root    4861 Jul  1 16:35 ocfs-best-practices.txt
-rw-r--r--  1 root  root   38373 Jul  1 17:23 ocfs-support-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  136722 Jul  1 17:23 ocfs-tools-1.0.9-4.i586.rpm

[root@stc-raclin01]# rpm -i ocfs-support-1.0.9-4.i586.rpm

Second node
[root@stc-raclin02 root]# cd /archives/ocfs-1.0.9
[root@stc-raclin02 /archives]# ls -al
-rw-rw-r--  1 root  root     734 Jul  1 16:44 FIXES
-rw-r--r--  1 root  root     773 Jul  1 17:30 README.TXT
-rw-r--r--  1 root  root     252 Jul  1 17:31 README.TXT.2
-rw-r--r--  1 root  root  173029 Jul  1 17:23 ocfs-2.4.19-4GB-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  173578 Jul  1 17:23 ocfs-2.4.19-4GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  173498 Jul  1 17:23 ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root    4861 Jul  1 16:35 ocfs-best-practices.txt
-rw-r--r--  1 root  root   38373 Jul  1 17:23 ocfs-support-1.0.9-4.i586.rpm
-rw-r--r--  1 root  root  136722 Jul  1 17:23 ocfs-tools-1.0.9-4.i586.rpm

[root@stc-raclin02]# rpm -i ocfs-support-1.0.9-4.i586.rpm

6. Next, install the OCFS kernel RPM and the ocfs-tools RPM with the rpm command. Again,
perform this operation on both the nodes. After completing this step, restart both the nodes.

First node
[root@stc-raclin01]# rpm -i ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
[root@stc-raclin01]# rpm -i ocfs-tools-1.0.9-4.i586.rpm
[root@stc-raclin01 /]# init 6 (to restart)

Second node
[root@stc-raclin02]# rpm -i ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
[root@stc-raclin02]# rpm -i ocfs-tools-1.0.9-4.i586.rpm
[root@stc-raclin02]# init 6 (to restart)
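When the nodes come back up, a quick check confirms that all three OCFS packages are registered with RPM (the package names shown are inferred from the RPM file names above):

# rpm -qa | grep ocfs
ocfs-support-1.0.9-4
ocfs-2.4.19-64GB-SMP-1.0.9-4
ocfs-tools-1.0.9-4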

7. From a root VNC or Vncviewer session, start the OCFS tool, ocfstool and create the
/etc/ocfs.conf configuration file.

[root@stc-raclin01]# vncviewer node1_name (for a vncviewer session)

or
Type your_node_name:5801 in the URL field in your web browser (for a VNC session),
where your_node_name is one of the node names in your cluster.

7.1. From the menu bar, select Tasks, and then Generate Config.

7.2. Choose the second Ethernet interface by using the drop-down menu and selecting eth1.
Accept the default port, 7000, and enter the interconnect node name that corresponds to
the entry in the /etc/hosts file. Refer to Step 1. Click the OK button to continue.
You must perform the configuration file generation on both the nodes.

First node

Second node
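On each node the tool writes the /etc/ocfs.conf file. The exact contents vary (the guid, in particular, is machine-generated), but the file should contain entries along these lines; the values shown are a sketch for the first node and are taken from the load_ocfs output later in this exercise:

[root@stc-raclin01]# cat /etc/ocfs.conf
node_name = racic01
ip_address = 192.168.1.11
ip_port = 7000
guid = 98C704EBD14F6EBC68660060976E5460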

8. Create two directories called /ocfs and /quorum respectively on both nodes. These
directories will be used to mount the shared disks. The directory must be owned by oracle
and the group must be dba. Set the permissions for the directory to 775.

First node
[root@stc-raclin01 /]# mkdir /ocfs /quorum
[root@stc-raclin01 /]# chown oracle:dba /ocfs /quorum
[root@stc-raclin01 /]# chmod 775 /ocfs /quorum

Second node
[root@stc-raclin02 /]# mkdir /ocfs /quorum
[root@stc-raclin02 /]# chown oracle:dba /ocfs /quorum
[root@stc-raclin02 /]# chmod 775 /ocfs /quorum

9. Use fdisk to partition the shared disk. Do this once on one node only. The disk should be
represented by the disk device /dev/sdd. Create two partitions, a large one to be used for
the Oracle data files (/ocfs) and another smaller one to be used for the quorum and server
manager/group services shared files (/quorum). After starting fdisk, enter p to print the
partition table of the shared disk. There should be no partitions. If by chance there are
existing partitions, use the d option to delete them before proceeding. Enter n to create a new
partition and then enter p for primary. Make this partition 1. Use the majority of the cylinders
for the data disk because you need only a few cylinders for the quorum disk.

Enter n to create another partition and enter p to create a primary partition. Make this
partition 2 and use the remaining cylinders that are available for this disk, which will become
the quorum file system. Type w, when finished, to write the new partition table.

[root@stc-raclin01 root]# fdisk /dev/sdd


Command (m for help): p

Disk /dev/sdd: 255 heads, 63 sectors, 732 cylinders


Units = cylinders of 16065 * 512 bytes

Device Boot Start End Blocks Id System

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4427, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-4427, default 4427): 700

Command (m for help): n


Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (701-4427, default 701): 701
Last cylinder or +size or +sizeM or +sizeK (701-4427, default 4427): 4427

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
[root@stc-raclin01 root]#



10. The shared disks must be OCFS-formatted next. You must perform this from only one node.
In this exercise, it will be done from the first node.

10.1. From a VNC session on the first node, start ocfstool and select Tasks from the
menu bar. From the Tasks menu, choose Format. Choose the SCSI device sdd1. If
you do not see sdd1 in the list, it may be necessary to reboot the node.

10.2. Accept the volume name of Oracle. Change the mountpoint to /quorum. Change the
user to oracle and group to dba. Finally, set the protection to 0777 and click OK.
Click Yes when the tool prompts you of your intent to proceed.

Alternatively, you can perform this from the command line:

[root@stc-raclin01 /]# mkfs.ocfs -F -b 128 -L oracle -m /ocfs -u oracle \
-g dba -p 0775 /dev/sdd1

10.3. Repeat the steps above to create a second OCFS volume using the device
/dev/sdd2. Specify the mount point as /ocfs, with a volume name of ocfs, with
the user and group set to oracle and dba, respectively. Set the protection field to
0777. Click Yes when the tool prompts you of your intent to proceed.


11. Test the new OCFS volume by attempting to mount the volume. First, load the OCFS
module, then start ocfstool, highlight the OCFS volume and click the Mount button. The
mount point should appear to the right of the volume when mounted successfully. Perform
this test on both the nodes. These steps will be automated later.

First node
[root@stc-raclin01]# load_ocfs
/sbin/insmod ocfs node_name=racic01 ip_address=192.168.1.11 ip_port=7000
cs=1823 guid=98C704EBD14F6EBC68660060976E5460
[root@stc-raclin01 root]# ocfstool

Second node
[root@stc-raclin02 /]# load_ocfs
/sbin/insmod ocfs node_name=racic02 node_number=0 ip_address=192.168.1.12
ip_port=7000 cs=1840 guid=E09B019CBFEB8579C8540050FC969760
[root@stc-raclin02 /]# ocfstool

If you can mount the OCFS volume from both the nodes, then the tasks have been
successfully completed.
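If you prefer to verify from the command line rather than from ocfstool, the volumes can also be mounted manually (these are the same mount commands that are automated in /etc/rc.local later in this workshop):

[root@stc-raclin01]# mount -t ocfs /dev/sdd1 /ocfs
[root@stc-raclin01]# mount -t ocfs /dev/sdd2 /quorum
[root@stc-raclin01]# mount | grep ocfs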

Exercise 3: Oracle Cluster Management System

1. Oracle Cluster Manager (OCMS) 9.2.0.1 will now be installed. From a VNC session, log in
to the Linux system as the oracle user and change directory to the /archives/Disk1 directory.
Execute runInstaller and install OCMS in the /home/ora920 directory. This must
be done on both nodes.
[oracle@stc-raclin01]$ cd /archives/Disk1
[oracle@stc-raclin01]$ ./runInstaller

1.1. Use /home/ora920/oraInventory as the inventory base directory.

1.2. Specify dba as the UNIX group to use for the installation.

1.3. From a terminal window, execute the /tmp/orainstRoot.sh as the root user.

[root@stc-raclin01]# /tmp/orainstRoot.sh
Creating Oracle Inventory pointer file (/etc/oraInst.loc)
Changing groupname of /home/ora920/oraInventory to dba

1.4. Specify /home/ora920 as the location for ORACLE_HOME.

1.5. From the list of available products, choose Oracle Cluster Manager 9.2.0.1.0.
1.6. Enter the names of the two nodes that are in your cluster. Check the /etc/hosts
file to ensure accuracy.
[root@stc-raclin01]# cat /etc/hosts
# Node names
127.0.0.1 stc-raclin01 localhost.localdomain localhost
148.2.65.101 stc-raclin01.us.oracle.com stc-raclin01
148.2.65.102 stc-raclin02.us.oracle.com stc-raclin02
# Interconnect names
192.168.1.12 racic02 ic2
192.168.1.11 racic01 ic1

1.7. Enter the names of the interconnects for each node. Again, refer to the /etc/hosts
file to ensure accuracy.

1.8. Accept the default Watchdog parameter value. It will be disabled later in favor of the
hangcheck-timer.

1.9. Specify the quorum.dbf file on the shared OCFS partition /quorum as the
quorum disk device.

1.10. Review the summary information, and then install.

1.11. The Install window displays the progress of the installation.

1.12. When the installation is complete, exit the installer.

Now, repeat steps 1.1 through 1.12 to install Oracle Cluster Manager 9.2.0.1.0 on
the second node. Do not attempt to start Cluster Manager yet.



2. To enable the hangcheck-timer module to load automatically at system startup, make sure the
following line appears in the /etc/rc.local file:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

First node
[root@stc-raclin01 root]# vi /etc/rc.local

#!/bin/sh
...
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Second node
[root@stc-raclin02 root]# vi /etc/rc.local

#!/bin/sh
...
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
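After the next reboot, or after loading the module manually with the insmod line above, a quick check confirms that the module is resident:

# lsmod | grep hangcheck        # the hangcheck-timer module should be listed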

3. Disable the mechanism that is used to start the Oracle watchdog daemon at system startup.
As the root user, change directory to the $ORACLE_HOME/oracm/bin directory and
find the ocmstart.sh file. This file starts the watchdog daemon and timer. Edit the
ocmstart.sh file and eliminate watchdog-related commands. Use the # symbol to
comment out these lines. Perform these activities on both the nodes. There is a sample
ocmstart.sh file in the /archives directory.

First node
[root@stc-raclin01]# cd $ORACLE_HOME/oracm/bin

[root@stc-raclin01]# vi ocmstart.sh

# watchdogd's default log file
# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log

# watchdogd's default backup file
# WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak

# Get arguments
# watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |\
# sed -e 's+^watchdogd *++'`
...
# Check watchdogd's existence
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
# echo 'ocmstart.sh: Error: watchdogd is already running'
# exit 1
# fi
...
# Backup the old watchdogd log
# if test -r $WATCHDOGD_LOG_FILE
# then
# mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE


# fi

# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args
...

Second node
[root@stc-raclin02]# cd $ORACLE_HOME/oracm/bin
[root@stc-raclin02]# vi ocmstart.sh

# watchdogd's default log file


# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log

# watchdogd's default backup file


# WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak
# Get arguments
# watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |\
# sed -e 's+^watchdogd *++'`
...
# Check watchdogd's existence
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
# echo 'ocmstart.sh: Error: watchdogd is already running'
# exit 1
# fi
...
# Backup the old watchdogd log
# if test -r $WATCHDOGD_LOG_FILE

# then
# mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
# fi
U
# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args
AI
...

& O ...

al
4. It is now time to update OCMS from 9.2.0.1 to 9.2.0.4. To install the 9.2.0.4 patch set, you
must first start the installer as the oracle user as shown below.

[oracle@stc-raclin01]$ /archives/Disk1/runInstaller

4.1. Upon reaching the File Locations window, change the directory that is specified in the
Source... field to point to the patch location, /archives/Patch_Linux_9204/stage.
Choose the products.jar file in the Browse window. When you click the Next button,
the Available Products window appears with the products that may be installed from
the location that is specified. Choose Oracle9iR2 Cluster Manager 9.2.0.4.0 and
continue.
4.2. Select the Oracle9iR2 Cluster Manager 9.2.0.4.0 option button and continue.

4.3. Provide the node (host) names from the /etc/hosts file in the Public Node
Information window.

4.4. Provide the interconnect names from the /etc/hosts file in the Private Node
Information window.

4.5. If prompted, specify the /quorum/quorum.dbf file on the shared OCFS partition
/quorum as the quorum disk device.

4.6. Review the Summary window to make sure that everything is correct and continue.

The End of Installation window will advise when the Cluster Manager has been upgraded.
Before continuing to the next step, perform the Cluster Manager upgrade (steps 4.1
through 4.6) on the second node.

5. Check the /home/ora920/oracm/admin/cmcfg.ora file that was written by the


installation. If there is no cmcfg.ora file, copy it from the cmcfg.ora.tmp file and
make sure the entries below are present. Comment out any Watchdog related entries. Perform
this task on both nodes. Edit the ocmargs.ora file in the same directory and remove the
watchdog line. Again, do this on both nodes.
First node
[oracle@stc-raclin01]# cat /home/ora920/oracm/admin/cmcfg.ora
CmDiskFile=/quorum/quorum.dbf
ClusterName=rac9202
PollInterval=1000
MissCount=210
PrivateNodeNames=racic01 racic02
PublicNodeNames=stc-raclin01 stc-raclin02
ServicePort=9998
HostName=stc-raclin01
[oracle@stc-raclin01]# vi ocmargs.ora

Second node
[oracle@stc-raclin02]# cat /home/ora920/oracm/admin/cmcfg.ora
CmDiskFile=/quorum/quorum.dbf
ClusterName=rac9202
PollInterval=1000
MissCount=210
PrivateNodeNames=racic02 racic01

PublicNodeNames=stc-raclin02 stc-raclin01
ServicePort=9998
HostName=stc-raclin02
[oracle@stc-raclin02]# vi ocmargs.ora

6. Create and initialize the quorum file, /quorum/quorum.dbf using the dd command.
Make sure it is created once only and is owned by the root user. Adjust permissions as
shown.

First node only!
# dd if=/dev/zero of=/quorum/quorum.dbf bs=4096 count=65
# chown root /quorum/quorum.dbf
# chmod 666 /quorum/quorum.dbf
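An optional quick check: the file should be 65 blocks of 4 KB, that is 266,240 bytes, and readable and writable by everyone:

# ls -l /quorum/quorum.dbf     # expect a 266240-byte file with mode -rw-rw-rw-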

7. It is now time to automate the OCFS and Cluster Manager commands so that they are run at
system startup. Verify that the /etc/rc.local file contains the lines shown below.
First node
[root@stc-raclin01]# vi /etc/rc.local
#!/bin/sh



...
#*********Add the lines below to your /etc/rc.local file************

# Set Oracle environment variables now as OCFS and CM are loaded as the root user.
ORACLE_HOME=/home/ora920
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/bin
export PATH

echo "Loading OCFS Module"
su - root -c "/sbin/load_ocfs"
[ "$?" -eq "0" ] && echo "OCFS Module loaded"

echo "Mounting OCFS Filesystems"
su - root -c "/bin/mount -t ocfs /dev/sdd1 /ocfs"
su - root -c "/bin/mount -t ocfs /dev/sdd2 /quorum"
echo "OCFS Filesystems Mounted"

echo "Loading Hangcheck Timer"
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[ "$?" -eq "0" ] && echo "Loaded Hangcheck Timer"

echo "Starting Oracle Cluster Manager"
[ -f $ORACLE_HOME/oracm/log/ocmstart.ts ] && rm $ORACLE_HOME/oracm/log/ocmstart.ts
su - root -c "$ORACLE_HOME/oracm/bin/ocmstart.sh"
[ "$?" -eq "0" ] && echo "Oracle Cluster Manager Loaded"

# echo "Starting GSD"
[ -f $ORACLE_HOME/bin/gsdctl ] && su - oracle -c "$ORACLE_HOME/bin/gsdctl start"
[ "$?" -eq "0" ] && echo "Group Services Started Successfully"

Second node
[root@stc-raclin02]# vi /etc/rc.local
#!/bin/sh
...
#*********Add the lines below to your /etc/rc.local file************

# Set Oracle environment variables now as OCFS and CM are loaded as the root user.
ORACLE_HOME=/home/ora920
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/bin
export PATH

echo "Loading OCFS Module"
su - root -c "/sbin/load_ocfs"
[ "$?" -eq "0" ] && echo "OCFS Module loaded"

echo "Mounting OCFS Filesystems"
su - root -c "/bin/mount -t ocfs /dev/sdd1 /ocfs"
su - root -c "/bin/mount -t ocfs /dev/sdd2 /quorum"
echo "OCFS Filesystems Mounted"

echo "Loading Hangcheck Timer"
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[ "$?" -eq "0" ] && echo "Loaded Hangcheck Timer"

echo "Starting Oracle Cluster Manager"
[ -f $ORACLE_HOME/oracm/log/ocmstart.ts ] && rm $ORACLE_HOME/oracm/log/ocmstart.ts
su - root -c "$ORACLE_HOME/oracm/bin/ocmstart.sh"
[ "$?" -eq "0" ] && echo "Oracle Cluster Manager Loaded"

# echo "Starting GSD"
[ -f $ORACLE_HOME/bin/gsdctl ] && su - oracle -c "$ORACLE_HOME/bin/gsdctl start"
[ "$?" -eq "0" ] && echo "Group Services Started Successfully"

8. When finished, reboot both nodes.

First node
[root@stc-raclin01 /]# init 6 (to restart)

Second node
[root@stc-raclin02 /]# init 6 (to restart)

9. Check that the cluster filesystems have been mounted during boot up with the mount
command. Make sure that Cluster Manager is running also. Use the ps and grep commands
to look for oracm processes.

First Node
# mount
/dev/sda6 on / type ext3 (rw)
...
/dev/sdd2 on /ocfs type ocfs (rw)
/dev/sdd1 on /quorum type ocfs (rw)
# ps -ef|grep oracm
root     21028  1296  0 09:35 ?        00:00:00 oracm
...
root     12168  1296  0 18:51 ?        00:00:00 oracm

Second Node
# mount
/dev/sda6 on / type ext3 (rw)
...
/dev/sdd2 on /ocfs type ocfs (rw)
/dev/sdd1 on /quorum type ocfs (rw)
# ps -ef|grep oracm
root     21028  1296  0 09:35 ?        00:00:00 oracm
...
root     12168  1296  0 18:51 ?        00:00:00 oracm
Exercise 4: Installing Oracle on Linux

1. The Oracle installer, runInstaller, is node aware. This means that Oracle software can
be loaded on multiple nodes from one installer at the same time. For this to work properly,
Oracle Cluster Manager must be working on both nodes. This was done in the Lesson 3
exercise. In addition, user equivalence must be in effect for the user performing the
installation, which is oracle in this exercise.

1.1. Edit the /etc/inetd.conf file as the root user. Find the entry for shell and
make sure it is uncommented. To make inetd reread inetd.conf, find the PID for
the inetd process and kill it with the HUP option. In addition, create or edit the
/etc/hosts.equiv file and enter the host name for the other node.

First node
[root@stc-raclin01]# vi /etc/inetd.conf
...
# nntp stream tcp nowait news /usr/sbin/tcpd /usr/sbin/leafnode
# smtp stream tcp nowait root /usr/sbin/sendmail sendmail -L sendmail -Am
#
# Shell, login, exec and talk are BSD protocols.
# The option "-h" permits ``.rhosts'' files for superuser. Please look at
# man-page of rlogind and rshd to see more configuration possibilities about
# .rhosts files.
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -L

[root@stc-raclin01]# ps -ef|grep inetd


root 1049 1 0 Nov17 ? 00:00:00 /usr/sbin/inetd

[root@stc-raclin01]# kill -HUP 1049

[root@stc-raclin01]$ vi /etc/hosts.equiv
stc-raclin02

Second node
[root@stc-raclin02]$ vi /etc/inetd.conf
...
# nntp stream tcp nowait news /usr/sbin/tcpd /usr/sbin/leafnode
# smtp stream tcp nowait root /usr/sbin/sendmail sendmail -L sendmail -Am
#
# Shell, login, exec and talk are BSD protocols.
# The option "-h" permits ``.rhosts'' files for superuser. Please look at
# man-page of rlogind and rshd to see more configuration possibilities about
# .rhosts files.
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -L

[root@stc-raclin02]# ps -ef|grep inetd
root      1066     1  0 Nov17 ?        00:00:00 /usr/sbin/inetd

[root@stc-raclin02]# kill -HUP 1066


[root@stc-raclin02 xinetd.d]$ vi /etc/hosts.equiv
stc-raclin01

1.2. Restart both the nodes and test the user equivalency. Perform an rlogin as oracle
or use rsh to run a remote command. If you are not prompted for a password, then
the configuration is correct.
[root@stc-raclin01]# su – oracle
[oracle@stc-raclin01]$ rlogin stc-raclin02
[oracle@stc-raclin02]$
[oracle@stc-raclin02]$ exit
[oracle@stc-raclin01]$
[oracle@stc-raclin01]$ rsh stc-raclin02 uname -a
Linux stc-raclin02 2.4.19-64GB-SMP Fri Feb 21 13:07:49 PST 2003 i686

2. The Oracle database installation will be done by the oracle user on the first and second
nodes. Prepare the users’ environment by creating the .bash_profile file for Oracle
database–related environment variables. Set ORACLE_HOME to /home/ora920 on both
nodes and ORACLE_SID to RACDB1 on the first node and RACDB2 on the second node.

On the first node as oracle


[oracle@stc-raclin01 oracle]$ vi .bash_profile
export ORACLE_HOME=/home/ora920
export ORACLE_BASE=/home/ora920
export ORACLE_SID=RACDB1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin

On the second node as oracle


[oracle@stc-raclin02 oracle]$ vi .bash_profile
export ORACLE_HOME=/home/ora920
export ORACLE_BASE=/home/ora920
export ORACLE_SID=RACDB2
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
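A simple way to confirm that the environment is picked up on each node (the output shown is for the second node; the first node should report RACDB1):

$ source ~/.bash_profile
$ echo $ORACLE_HOME $ORACLE_SID
/home/ora920 RACDB2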

3. Create the shared server manager/group services file using the touch command. Make sure it
is owned by the oracle user and the group is dba. This should only be done from one
node. Change the file permissions to 666 with the chmod command.

First node only
[root@stc-raclin01]# touch /quorum/srvm.dbf
[root@stc-raclin01]# chown oracle:dba /quorum/srvm.dbf
[root@stc-raclin01]# chmod 666 /quorum/srvm.dbf



4. The installation files are located in the /archives directory. On the first node, go to the
/archives/Disk1 directory and start runInstaller.
[oracle@stc-raclin01]$ cd /archives/Disk1
[oracle@stc-raclin01]$ ./runInstaller

4.1. In the Cluster Node Selection window, select the local node in your cluster.

4.2. Make sure that the installer is using the products.jar file that is found in the
Disk1 directory.

4.3. In the Available Products window, click the Oracle9i Database 9.2.0.1.0 option
button.

4.4. In the Installation Types window, click the Custom option button.

4.5. Make sure that Oracle9i Real Application Clusters 9.2.0.1 is selected.

4.6. Although it is possible to install the listed components in some place other than
ORACLE_HOME, there is no need to do so. Accept the default destination for the
Oracle Universal Installer and JRE components.

nly
e O
Us
AI
& O
al
ern
Int
cle
ra
O
Oracle9i Database: Real Application Clusters on Linux B-30
4.7. Enter the configuration file name that will be used by both the nodes. Use the
/quorum/srvm.dbf file that you created in step three of this lesson exercise.

4.8. Enter the group name dba for both the database administrator and database operator
groups.

4.9. In the Oracle Management Server Repository window, indicate that you will use an
existing repository.

4.10. In the Create Database window, select the No option button to defer database
creation. It will be performed by using DBCA after the 9.2.0.4.0 database patch is
applied.

4.11. Wait for the installation to complete. Monitor the progress from the Install window.

4.12. Just before the installation completes, you are prompted to execute the root.sh
script. Open a terminal window as the root user and execute the root.sh script
from $ORACLE_HOME.

[root@stc-raclin01] # cd /home/ora920
[root@stc-raclin01] # ./root.sh

4.13. After the binaries are installed, you are prompted to configure network services with
NETCA. Click on Yes.

4.13.1. On the Welcome screen, click on the Next button.

4.13.2. Defer Directory Services configuration by clicking on the No… radio button.

4.13.3. On the next screen, accept the default listener name, LISTENER and click Next to
continue.

4.13.4. On the Select Protocols screen, TCP will already be selected. Click on the Next
button to continue.

4.13.5. Accept the default port number of 1521. Click on the Next button to continue.

4.13.6. The next screen asks you if you would like to configure another listener. Click on
the No radio button to continue. The next slide informs you that listener
configuration is complete. Click the Next button to continue. On the Naming
Methods Configuration screen, click on the No radio button. This preserves
tnsnames.ora as the preferred naming method. Click Finish on the next slide to
exit Network Configuration Assistant.

4.14. After the binaries are installed, you are prompted to configure Enterprise Manager
with EMCA. Cancel this operation.

4.15. The next window advises that the installation is completed but some configuration
tools did not complete. This is normal; exit the installer. If the Enterprise Manager
console appears upon exit, cancel the operation.

4.16. View the /var/opt/oracle/srvConfig.loc file on both the nodes and make
sure that the server manager/group services shared file is properly specified.
[root@stc-raclin01]# cat /var/opt/oracle/srvConfig.loc
srvconfig_loc=/quorum/srvm.dbf

4.17. As the oracle user, stop the listener before applying the 9.2.0.4 patch.

[oracle@stc-raclin01]$ lsnrctl stop

5. The Universal Installer must be upgraded before the 9.2.0.4 database patch can be applied.
To do this, start the installer from $ORACLE_HOME/bin.

[oracle@stc-raclin01]$ cd $ORACLE_HOME/bin
[oracle@stc-raclin01]$ ./runInstaller
5.1. Click Next on the Welcome screen and choose both nodes from the Cluster Node
Selection screen. Click on the Next button to continue.

5.2. On the Available Products window, click the Oracle Universal Installer radio
button and click Next to continue.

5.3. Make sure the 9.2.0.4 patch location appears in the Source field.

5.4. Review the Destination on the Components Locations page. It should be


/home/ora920/oui. Click on the Next button to continue.

5.5. Review the information in the Summary page and click the Install button to apply the
installer upgrade.

5.6. After the upgrade is complete, exit the installer. Before the new installer can be used,
you must run the following command as the oracle user on both nodes:

First Node
$ cd $ORACLE_BASE/oui/bin/linux
$ ln -s libclntsh.so.9.0 libclntsh.so

Second Node
$ cd $ORACLE_BASE/oui/bin/linux
$ ln -s libclntsh.so.9.0 libclntsh.so
6. The next step is to apply the Oracle9iR2 9.2.0.4.0 patch on both nodes. Start
runInstaller from the Disk1 directory.

6.1. In the Cluster Node Selection window, choose both nodes.

6.2. The 9.2.0.4.0 patch is located in the Patch_Linux_9204 directory under


/archives. Make sure that the installer points to the products.jar file that is
located there.

6.3. In the Available products window, choose the Oracle9iR2 Patch Set 9.2.0.4.0 option
button.

6.4. Review the Summary window and continue.

6.5. Just before the upgrade is finished, you will be prompted to run the root.sh script
as the root user. This needs to be done on both nodes.

First Node as root


# $ORACLE_HOME/root.sh

Second Node as root


# $ORACLE_HOME/root.sh

6.6. You must check for the existence of several directories on the second node.
Sometimes, these directories may not be properly copied during the install. Problems
will arise during database creation if they are not there. Check for them and create
them if necessary.

Second Node only
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/rdbms/audit
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/rdbms/log
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/network/log
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/Apache/Apache/logs
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/Apache/Jserv/logs
Exercise 5: Building the Database

1. The database is now ready to be created. Use the Database Configuration Assistant (DBCA)
to do this. When using DBCA to create a cluster database, the dbca executable becomes a
client of GSD. You will need to initialize the shared file and start GSD. Use the ps and
grep commands to make sure GSD is successfully started on both nodes.

First Node only!


$ srvconfig -init -f
$ gsdctl start

On both nodes
$ ps –ef|grep –i gsd
oracle 1296 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...
oracle 1297 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...
oracle 1298 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...

If DBCA is started without GSD running, an error will result.

2. As the oracle user, change directory to $ORACLE_HOME/bin and start DBCA. When the
opening screen appears, choose the Oracle cluster database radio button. Use the
-datafileDestination option to let dbca know where the datafiles should be created.

[oracle@stc-raclin01]$ cd $ORACLE_HOME/bin
[oracle@stc-raclin01]$ dbca -datafileDestination /ocfs
2.1. On the Welcome screen, click the Oracle cluster database option and click Next to
continue. On the next screen, make sure that both the nodes in your cluster are
highlighted.

2.2. On the Operations screen, click the “Create a database” button

2.3. Select the New Database radio button from the Database Templates screen.

2.4. You are prompted for a global database name and SID prefix. Enter RACDB in both
fields.

2.5. The window Database Configuration Assistant: Step 6 of 10: Database Features
opens. Clear all check boxes, and confirm deletion of tablespaces. Choose Human
Resources and Sales History under Example Schemas.

2.6. Select Standard database features and uncheck all options, confirm deletion of
tablespaces. Close Standard database features window and click Next.

2.7. Choose the Dedicated Server Mode radio button on the Database Connection Options
screen.

2.8. Click the Memory folder on the Initialization Parameters screen. Click the Custom
radio button and accept the default values.

nly
e O
Us
AI
& O
al
ern
Int
cle
ra
O
Oracle9i Database: Real Application Clusters on Linux B-50
2.9. Click the File Locations tab next. Review the file locations by clicking the File
Locations Variables button. Click the Next button to continue.

2.10. Click on Controlfile in the Storage tree on the left. Remove control03.ctl and
control04.ctl by highlighting each one and pressing the delete key.

2.11. On the Options tab, change the maximum number of instances to 4 and the maximum
number of log history to 100.

2.12. Expand Tablespaces on the left side, and select the SYSTEM tablespace. Click on the
General tab and change the size to 110 MB.

2.13. Click the Storage folder tab and click on the Managed in the Dictionary radio button.
Set Initial to 32 KB, set Next to 128 KB, and set Increment Size By to 0.

2.14. Select TEMP in the Storage tree and change the size to 10 MB.

2.15. Select UNDOTBS1 in the Storage tree, and change the size to 50 MB. Select
UNDOTBS2 and set its size to 50 MB also. Click OK to accept the new file size and
return to the Database Storage window.

2.16. Click the Next button on the Database Storage window and a review window will
appear. You can browse the file locations, tablespaces, parameters, and so on that will be used
in the database creation. When you are finished, click the OK button and the database
creation will begin.

2.17. Click the Finish button in the Creation Options window. When the Summary window
opens, review the summary information and click the OK button. The Progress
window will appear.

2.18. When the cluster database has been created, you are prompted for passwords for the
SYS and SYSTEM accounts. For classroom purposes, make both passwords
oracle. Click the Exit button to close the window. Congratulations, you are
finished.

3. Your cluster database should now be up and running. Enter the following query at the SQL
prompt:
SQL> SELECT instance_number inst_no, instance_name inst_name,
parallel, status, database_status db_status, active_state state,
host_name host FROM gv$instance;

INST_NO INST_NAME    PAR STATUS DB_STATUS STATE  HOST
------- ------------ --- ------ --------- ------ --------------
      1 RACDB1       YES OPEN   ACTIVE    NORMAL stc-raclin01
      2 RACDB2       YES OPEN   ACTIVE    NORMAL stc-raclin02

DB_STATUS indicates the database state, STATUS indicates the startup condition of the
database, and PAR (parallel) indicates whether the database is operating in cluster mode.

4. Verify SRVCTL configuration by running the following command:


$ srvctl status database -d RACDB
Instance RACDB1 is running on node stc-raclin01
Instance RACDB2 is running on node stc-raclin02

If your output matches the output in the example, your cluster database is running normally and
SRVCTL is configured properly.
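Once SRVCTL is working, it can also be used to stop and start all of the instances of the cluster database together; a brief sketch (run as the oracle user on either node):

$ srvctl stop database -d RACDB      # shuts down RACDB1 and RACDB2 on both nodes
$ srvctl start database -d RACDB     # starts both instances again
$ srvctl status database -d RACDB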

