Student Guide
D16335GC10
Edition 1.0
February 2004
D38296
Author

Publisher
Joseph Fernandez

Copyright © 2004, Oracle. All rights reserved.

All other products or company names are used for identification purposes only, and may be trademarks of their respective owners.
Contents
Preface
1 Oracle Real Application Clusters on Linux: Overview
  Objectives
  What Is a Cluster?
  Cluster Hardware Components
  Oracle9i Real Application Clusters
  Why Implement RAC?
  Scalability Considerations
  Linux RAC Architecture
  RAC on Linux Storage
  Oracle Cluster File System
  OCFS Features
  Cluster Management on Linux
  RAC/Linux Hardware Compatibility Matrix
  Oracle/Linux Compatibility Matrix
  Summary
  Asynchronous I/O
  Enabling Asynchronous I/O
  Downloading OCFS
  Installing the RPM Packages
  Starting ocfstool
  Generating the ocfs.conf File
  Loading OCFS at Startup
  Preparing the Disks
  Creating Extended Partitions
  The OCFS Format Window
  OCFS Command-Line Interface
  Alternate OCFS Mounting Method
  System Parameter Configuration for OCFS
  Swap Space Configuration
  Red Hat Network Adapter Configuration
  UnitedLinux Network Adapter Configuration
  Known Limitations and Requirements
  Summary
3 Oracle Cluster Management System
  Objectives
  Linux Cluster Management Software
  OCMS
  The Hangcheck-Timer
  The Node Monitor (NM)
  The Cluster Monitor
  Starting OCMS
  The Quorum Disk
  Configuring the User Environment
  Starting the Installer
  Specifying Inventory Location
  File Locations
  Available Products
  Node Information
  Interconnect Information
  Watchdog Parameter
  Quorum Disk
  9.2.0.1.0 Summary Window
  Installation Progress
  End of Installation
  The Hangcheck-Timer RPM
  Hangcheck Settings
  The Oracle 9.2.0.2 Patch Set
  9.2.0.4.0 Cluster Manager Patch
  Node Selection
  Node Information
  Interconnect Information
  Watchdog Parameter
  Quorum Disk
  9.2.0.4.0 Summary Window
  Starting Cluster Manager
  Summary
4 Installing Oracle on Linux
  Objectives
  Starting the Installation
  Choose the Target Node
  File Locations
  Product Selection
  Installation Type
  Product Components
  Component Locations
  Shared Configuration File
  Operating System Groups
  OMS Repository
  Create Database Options
  Installation Summary
  Installation Progress
  The root.sh Script
  Net Configuration Assistant
  Enterprise Manager Configuration Assistant (EMCA)
  Installer Message
  End of Installation
  Updating Universal Installer
  The Oracle 9.2.0.4 Patch Set
  Installing the 9.2.0.4 Patch Set
  Node Selection
  Finishing Up
  Summary
  Database Features
  Database Connections
  Initialization Parameters
  File Locations
  Database Storage
  Control File Specifications
  Tablespaces
  Redo Log Groups
  DBCA Summary
  Database Creation Progress
  Database Passwords
  Remote Password File
  Summary
6 Managing RAC on Linux
  Objectives
  Group Services Management
  Server Control Utility
  SRVCTL Command Syntax
  SRVCTL Cluster Database Configuration Tasks
  Adding and Deleting Databases
  Adding and Deleting Instances
  SRVCTL Cluster Database Tasks
  Starting Databases and Instances
  Stopping Databases and Instances
  Inspecting Status of Cluster Database
  Inspecting Database Configuration Information
  Parameter Files in Cluster Databases
  Creating and Managing Server Parameter File
  Parameter File Search Order
  Enterprise Manager and Cluster Databases
  Displaying Objects in the Navigator Pane
  Starting a Cluster Database
  Stopping a Cluster Database
  Viewing Cluster Database Status
  Instance Management
  Management Menu
  Storage Management
  Performance Manager and RAC
  Monitoring RAC
  Summary
  Adding New Nodes
  Adding Log Files, and Enabling and Disabling Threads
  Allocating Rollback Segments
  Adding an Instance with DBCA
  Choosing a Cluster Database
  Instance Name
  Redo Log Groups
  Confirming Instance Creation
  Instance Creation Progress
  Using Raw Devices
  Transparent Application Failover
  Failover Mode Options
  Failover Types
  Failover Methods
  TAF Configuration: Example
  Connection Load Balancing
  Service and Instance Names
  Adaptive Parallel Query
  Monitoring Parallel Query
  Summary
Appendix A
Appendix B
Preface
Profile
Typographic Conventions
Typographic Conventions in Text
Convention       Element                             Example

Italic           phrases, titles of books and        For further information, see Oracle7 Server
                 courses, variables                  SQL Language Reference Manual.
                                                     Enter [email protected], where
                                                     user_id is the name of the user.

Quotation marks  Interface elements with long        Select "Include a reusable module component"
                 names that have only initial        and click Finish.
                 caps; lesson and chapter titles     This subject is covered in Unit II, Lesson 3,
                 in cross-references                 "Working with Objects."

Uppercase        SQL column names, commands,         Use the SELECT command to view information
                 functions, schemas, table names     stored in the LAST_NAME column of the EMP table.

Convention       Element                             Example

Arrow            Menu paths                          Select File > Save.
Oracle Real Application Clusters
on Linux: Overview
Objectives
What Is a Cluster?

• Interconnected nodes
• Cluster software
  – Hidden structure
• Shared disks

[Figure: nodes connected by an interconnect to shared disks]
A cluster consists of two or more independent, but interconnected servers. Several hardware vendors have provided cluster capability over the years to meet a variety of needs. Some clusters were only intended to provide high availability by allowing work to be transferred to a secondary node if the active node failed. Others were designed to provide scalability by allowing user connections or work to be distributed across the nodes.

Another common feature of a cluster is that it should appear to an application as a single server. Similarly, management of the cluster should be as similar to the management of a single server as possible. Cluster management software helps provide this transparency.

In order for the nodes to act as if they were a single server, you must store files in such a way that they can be found by the specific node that needs them. There are several cluster topologies that address the data access issue, each dependent on the primary goals of the cluster designer.
Cluster Hardware Components

• Nodes
• Interconnect
• Shared disk subsystem

What Is Oracle9i Real Application Clusters?

[Figure: database files on shared disks accessed by every instance]
Real Application Clusters (RAC) is an Oracle9i database software option that you can use to take advantage of clustered hardware by running multiple instances against a database. The database files are stored on disks that are either physically or logically connected to each node so that every active instance can read from or write to them.

The RAC software manages data access so that changes are coordinated between the instances and each instance uses a consistent image of the database. The cluster interconnect enables instances to pass coordination information and data images between each other.

Oracle9i RAC replaces the clustered database options that were available in earlier releases. It offers transparent scalability, high availability with minimal downtime following an instance failure, and centralized management of the database and its instances.
Why Implement RAC?

Implementing RAC:
• Enables systems to scale up by increasing throughput
• Increases performance by speeding up database operations
• Provides higher availability
• Provides support for a greater number of users

Increased Throughput
Parallel processing breaks a large task into smaller subtasks that can be performed concurrently. With tasks that grow larger over time, a parallel system that also grows, or "scales up," can maintain a constant time for completing the same task.

Increased Performance
For a given task, a parallel system that can scale up improves response time for completing the same task. For decision support systems (DSS) applications and parallel query, parallel processing decreases response time. For online transaction processing (OLTP) applications, speedup cannot be expected because of the overhead of synchronization.

Higher Availability
Because each node that runs in the parallel system is isolated from other nodes, a single node failure or crash should not cause other nodes to fail. This enables other instances in the parallel server environment to run normally. This also depends on the failover capabilities of the operating system and the fault tolerance of the distributed cluster software.

Support for a Greater Number of Users
Because each node has its own set of resources, such as memory, CPU, and so on, each node can support several users. As nodes are added to the system, more users can also be added, thereby enabling the system to continue to scale up.
Scalability Considerations

It is important to remember that if any of the following areas are not scalable, no matter how scalable the other areas are, then parallel cluster processing may not be successful:
• System scalability: High bandwidth and low latency offer maximum scalability. A high amount of remote I/O may prevent system scalability, because remote I/O is much slower than local I/O. Bandwidth of the communication interface is the total size of messages that can be sent per second. Latency of the communication interface is the time it takes to place a message on the interconnect. It indicates the number of messages that can be put on the interconnect per unit of time.
• Operating system: Nodes with multiple CPUs and methods of synchronization in the operating system can determine how well the system scales. Symmetric multiprocessing (SMP) can process multiple requests to resources concurrently.
• Locking system: The scalability of the system that is used to handle locks of global resources across the nodes determines the number of concurrent requests that can be handled at one time and the number of local lock requests that can be handled concurrently.
• Database scalability: Database scalability depends on how well the database is designed, such as how the data files are arranged and how well objects are partitioned.
Linux RAC Architecture

To successfully configure and run Oracle RAC on Linux, you must observe the following requirements:
• Hardware
  – Intel-based hardware
  – External shared SCSI or Fibre Channel disks
  – Interconnect by using NIC
• Operating system
  – Red Hat 7.1, Red Hat Advanced Server 2.1 and 3.0
  – SuSE 7.2 and SuSE SLES7
  – UnitedLinux 1.0
• Oracle software
  – Oracle9i Enterprise database
  – Oracle Cluster File System
  – Oracle Cluster Management System
RAC on Linux Storage

Regular UNIX file system I/O routines do not support simultaneous remote access, which is required by RAC instances. Raw devices have been the standard for RAC on the UNIX platform because they bypass the OS file handling function calls, such as iget(), fopen(), fclose(), and so on. However, the disadvantage of using raw devices is the difficulty in managing a very large number of raw disk devices. This has been addressed by the use of volume managers such as Veritas Volume Manager. These volume managers work very well, but they tend to be very expensive.
Oracle Cluster File System

Oracle Cluster File System (OCFS) is a shared file system that is designed specifically for Oracle RAC. OCFS eliminates the requirement for Oracle database files to be linked to logical drives or raw devices. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.

The Oracle Cluster File System:
• Is extensible without interrupting availability. Oracle homes and data files that are stored on the OCFS can be extended dynamically.
• Takes full advantage of RAID volumes and storage area networks (SANs)
• Provides uniform accessibility to archive logs in the event of physical node failures
• Guarantees, when applying Oracle patches, that the updated Oracle home is visible to all nodes in the cluster
• Guarantees consistency of metadata across nodes in a cluster
OCFS Features

Node-Specific Files and Directories
OCFS supports node-specific files and directories, which are also known as Context Dependent Symbolic Links (CDSL). This allows nodes in a cluster to see different views of the same files and directories although they have the same pathname on OCFS. This feature supports products that are installed on the Oracle home (like Oracle Intelligent Agent) that need to have the same filename on different nodes but require a private copy on each node because node-specific information might be stored in these files.

Unique Clustername Integrity
OCFS associates a unique clustername with an OCFS volume. The clustername is automatically selected from the Cluster Manager registry and, if a valid nondefault cluster name is present, then any volume that is formatted from this node is available to nodes with the same clustername as this node. The ocfsutil command provides a way to change the clustername for a volume to another clustername or no clustername, which makes the volume visible to all nodes in the cluster. Clustername allows a hardware cluster to be segregated into logical software clusters from a storage viewpoint. This is important for supporting a storage area network (SAN).

When new nodes are added to an existing cluster, they automatically have access to the OCFS volume.
Cluster Management on Linux

In contrast to other UNIX platforms, RAC on Linux does not rely on a cluster software layer that is supplied by the system vendor. OCMS is included with Oracle9i for Linux.

OCMS consists of the following components:
• Hangcheck thread driver
• Cluster manager (oracm)

OCMS resides above the operating system and provides the clustering that is needed by RAC. OCMS also provides cluster membership services, global view of clusters, node monitoring, and cluster reconfiguration as needed. The binaries, logs, and configuration files can be found in $ORACLE_HOME/oracm/.

In Oracle Release 9.2.0.2, Watchdog and the Watchdog timer have been replaced by the hangcheck thread driver and the hangcheck-timer, respectively. The hangcheck thread driver starts a thread with a timeout value that is controlled by the hangcheck_margin parameter. If the thread is not scheduled within that timeout value, then the machine is restarted. The default value for the parameter is 60 seconds.
RAC/Linux Hardware Compatibility Matrix

Oracle Corporation supports the Oracle software on clusters that comprise RAC-compatible technologies and certified software combinations. Consult your hardware and clusterware vendor, because not all vendors may choose to support their hardware or clusterware in every possible cluster combination. Oracle Corporation does not provide hardware certification or compliance; this is still the responsibility of the hardware vendor.
Linux Compatibility

Oracle Corporation supports Red Hat Linux Advanced Server on any platform that Red Hat certifies. It is a requirement that the operating system binaries have not been modified or relinked. As can be seen from the compatibility matrix, Oracle Corporation is also committed to the SuSE Linux platform, but note that there are no plans to certify RAC 9.2 on any versions of SuSE earlier than SLES7.

Oracle Corporation has also worked with UnitedLinux to confirm compatibility of Oracle9i products (including RAC) on UnitedLinux 1.0.
Objectives
Verifying the Linux Environment

Before loading any software, determine whether the system is ready. First, verify that the Linux version is compatible with the Oracle version that you intend to use. Use the uname command to get this information.

Verify that all the systems that comprise the cluster have entries in the /etc/hosts file. There should also be entries for the network cards that are used for the interconnects.

$ cat /etc/hosts
127.0.0.1      localhost.localdomain localhost
138.1.162.61   git-raclin01.us.oracle.com   # node 1
138.1.162.62   git-raclin02.us.oracle.com   # node 2
192.168.1.2    raclin01_IC racic1           # interconnect node 1
192.168.1.3    raclin02_IC racic2           # interconnect node 2
Interprocess Communication Settings

Interprocess communication is an important issue for RAC because cache fusion transfers data between instances by using this mechanism. Thus, networking parameters are important for RAC databases. The values in the table, which is shown on the slide, are the default on most distributions and should be acceptable for most configurations. To see these values, run the following command:

$ cat /proc/sys/net/core/rmem_default
65535

Use vi or the echo command to change the value:

echo 65535 > /proc/sys/net/core/rmem_default

This method is not persistent, so this must be done each time the system starts. Some distributions such as Red Hat have a persistent method for setting these parameters during startup. You can edit the /etc/sysctl.conf file to make the settings more permanent.

vi /etc/sysctl.conf
net.core.rmem_default = 65535
net.core.rmem_max = 65535
net.core.wmem_default = 65535
net.core.wmem_max = 65535
...
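On systems that provide the sysctl utility, the entries in /etc/sysctl.conf can be applied immediately and checked individually; a minimal sketch (the values simply repeat those shown above):

# sysctl -p
# sysctl net.core.rmem_default
net.core.rmem_default = 65535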
Shared Memory and Semaphores

Several shared memory parameters must be set to enable the Oracle database to function properly. These parameters are best set in the /etc/sysctl.conf file.
• SHMMAX: The maximum size of a single shared memory segment. This should be slightly larger than the largest anticipated size of the SGA, if possible.
• SHMMNI: The number of shared memory identifiers
• SEMMSL: Semaphores are "grouped" into semaphore sets, and SEMMSL controls the array size, or the number of semaphores that are contained per semaphore set. It should be about ten more than the maximum number of Oracle processes.
• SEMOPM: Maximum number of operations per semaphore op call

You can adjust these semaphore parameters manually by writing the contents of the /proc/sys/kernel/sem file:

# echo SEMMSL_value SEMMNS_value SEMOPM_value \
  SEMMNI_value > /proc/sys/kernel/sem
#
SEMMSL=1250
#
# SEMMNS: max. number of semaphores system wide. Set to the sum of the
# PROCESSES parameter for each Oracle database, adding the largest one
# twice, then add an additional 10 for each database (see init.ora).
# Max. value possible is INT_MAX (largest INTEGER value on this
# architecture, on 32-bit systems: 2147483647).
#
SEMMNS=32000
#
# SEMOPM: max. number of operations per semop call. Oracle recommends
# a value of 100. Max. value possible is 1000.
#
SEMOPM=100
#
# SEMMNI: max. number of semaphore identifiers. Oracle recommends
# a value of (at least) 100. Max. value possible is 32768 (defined
# in include/linux/ipc.h: IPCMNI)
#
SEMMNI=256
...
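The shared memory parameters can be adjusted the same way. The sketch below is illustrative only: the SHMMAX value must be sized slightly larger than your own SGA, and the SHMMNI value shown is a common choice rather than an Oracle-mandated setting.

# echo 2147483648 > /proc/sys/kernel/shmmax

To make the settings persistent, equivalent /etc/sysctl.conf entries are:

kernel.shmmax = 2147483648
kernel.shmmni = 4096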
Viewing Resource Use

When a database creation fails, or an instance does not start while displaying a memory error or a semaphores error, it is useful to be able to view the shared memory allocations and the semaphore allocations on the system.
• To display the shared memory segments, use: ipcs -m
• To display the semaphore sets, use: ipcs -s
• To display all resources that are allocated, use: ipcs -a

For example:

# ipcs -m

------ Shared Memory Segments --------
key        shmid   owner   perms  bytes      nattch
0x00000000 524288  oracle  640    4194304    12
0x00000000 557057  oracle  640    201326592  12
0x9808bbd8 589826  oracle  640    205520896  60
0x152464c8 622595  oracle  640    142606336  85
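If a failed instance leaves shared memory segments or semaphore sets behind, they can be removed with the ipcrm utility. A minimal sketch, using identifiers as reported by ipcs (the semid value here is hypothetical); remove only resources that you are certain belong to the failed instance:

# ipcrm -m 557057
# ipcrm -s 98304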
Oracle Preinstallation Tasks

You must perform several tasks before any Oracle software can be installed. Verify that the UNIX user oracle and group dba exist on the system. To do this, view the /etc/passwd and /etc/group files, respectively. If they do not exist, then you must create them.

# groupadd -g 500 dba
# groupadd -g 501 oinstall
# useradd -u 500 -d /usr/local/oracle -g "dba" -G oinstall \
  -m -s /bin/bash oracle

Note that the groups are added first because it is not possible to create the user and add it to a nonexistent group. Note that the group oinstall is the secondary group that the user oracle belongs to. You must create the ORACLE_HOME directory if it is not already present. The oracle user must own the directory.

Check for the existence of the /var/opt/oracle directory. The cluster software expects the directory to exist before the installation begins; otherwise, the installation will terminate. This is the directory where the installation writes the srvConfig.loc file, which contains the pointer to the shared file that is needed by the srvctl utility. Make sure that the directory is associated with the oracle user and the dba group. Note that all the operating system commands that are discussed here are best run as the superuser (root).
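A short sketch of the directory preparation described above, run as root; the ORACLE_HOME path /oracle/9.2.0 matches the environment used later in this course and should be adjusted to your own layout:

# mkdir -p /oracle/9.2.0
# chown -R oracle:dba /oracle/9.2.0
# mkdir -p /var/opt/oracle
# chown oracle:dba /var/opt/oracle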
Oracle Environment Variables

The Oracle environment variables that are listed in the slide should be set in the user login file. Generally, this is the .bash_profile file if the default bash shell is used, but it is shell dependent. Make sure that you unset LANG, JRE_HOME, and JAVA_HOME in your profile. If these are set, then they may interfere with Oracle variables such as NLS_LANG and CLASSPATH.

If you are using UnitedLinux, check the /etc/profile.d/oracle.sh file. You will find that many Oracle environment variables, such as ORACLE_HOME, ORACLE_BASE, and TNS_ADMIN, are preset here. The values will most certainly be incorrect for your installation. Remove or comment out the unneeded entries, or you may encounter difficulties during the installation.
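For example, the following lines could be added to the oracle user's .bash_profile to clear the variables named above (a sketch; adjust to your shell):

unset LANG
unset JRE_HOME
unset JAVA_HOME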
Asynchronous I/O

One of the most important enhancements on Linux is asynchronous I/O (or nonblocking I/O) in the kernel. Before the introduction of asynchronous I/O in Advanced Server, the processes submitted disk I/O requests sequentially. Each I/O request would cause the calling process to sleep until the request was completed. Asynchronous I/O enables a process to submit an I/O request without waiting for it to complete. The implementation also enables Oracle processes to issue multiple I/O requests to disk with a single system call, rather than a large number of single I/O requests. This improves performance in two ways. First, because a process can queue multiple requests for the kernel to handle, the kernel can optimize disk activity by reordering requests or combining individual requests that are adjacent on disk into fewer, larger requests. Second, because the system does not put the process to sleep while the hardware processes the request, the process is able to perform other tasks until the I/O is complete.

Note that at this time asynchronous I/O is not supported under OCFS.
Enabling Asynchronous I/O

By default, Oracle9i Release 2 is shipped with asynchronous I/O support disabled. This is necessary to accommodate other Linux distributions that do not support this feature. To enable asynchronous I/O for Oracle9i Release 2 on Red Hat Linux Advanced Server 2.1, perform the following steps as outlined in the product documentation:
1. Change directory to $ORACLE_HOME/rdbms/lib and run:
   # make -f ins_rdbms.mk async_on
2. If asynchronous I/O needs to be disabled for some reason, then change directory to $ORACLE_HOME/rdbms/lib and run:
   # make -f ins_rdbms.mk async_off
3. Parameter settings in the parameter file for raw devices:
   set 'disk_asynch_io=true' (default value is true)
4. Make sure that all Oracle data files reside on file systems that support asynchronous I/O. Parameter settings in the parameter file for file system files:
   set 'disk_asynch_io=true' (default value is true)
   set 'filesystemio_options=asynch'
Downloading OCFS

Download OCFS for Linux in a compiled form from the following Web site:

http://oss.oracle.com

In addition, you must download the following RPM packages:
• ocfs-support-1.0-9.i686.rpm
• ocfs-tools-1.0-9.i686.rpm

Also, download the RPM kernel module ocfs-2.4.9-typeversion.rpm, where the variable typeversion stands for the type and version of the kernel that is used. Use the following command to find out which kernel version is installed on your system:

$ uname -a

The alphanumeric identifier at the end of the kernel name indicates the kernel version that you are running. Download the kernel module that matches your kernel version. For example, if the kernel name that is returned with the uname command ends with -e.3smp, then you would download the kernel module ocfs-2.4.9-e.3-smp-1.0-1.i686.rpm.
Installing the OCFS RPM Packages

Complete the following procedure to prepare the environment to run OCFS. Note that you must perform all steps as the root user and that each step must be performed on all the nodes of the cluster.

First, install the support RPM file, ocfs-support-1.0-n.i686.rpm:

# rpm -i ocfs-support-1.0-n.i686.rpm

To install the kernel module RPM file for an e.3 enterprise kernel, enter the following command:

# rpm -i ocfs-2.4.9-e.3-enterprise-1.0-1.i686.rpm

Next, install the tools RPM, ocfs-tools-1.0-n.i686.rpm. To install the files, enter the following command:

# rpm -i ocfs-tools-1.0-n.i686.rpm

In these commands, n is the latest release number of the RPM that you are installing.
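To confirm that all three packages are installed on a node, you can query the RPM database; a sketch (the version strings shown are examples and will match whatever you downloaded):

# rpm -qa | grep ocfs
ocfs-support-1.0-9
ocfs-tools-1.0-9
ocfs-2.4.9-e.3-enterprise-1.0-1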
Starting ocfstool

By using the ocfstool utility, generate the needed /etc/ocfs.conf file. Start ocfstool from a graphical display (Xterm, SSH, VNC, and so on) as shown in the following example:

# /usr/bin/ocfstool &

The OCFS Tool window appears in a new X window. Click in the window to make it active and select the Generate Config option from the Tasks menu. The OCFS Generate Config window opens.
The ocfs.conf File

When the OCFS Generate Config window opens, check the values that are displayed in the window to confirm that they are correct, and then click the OK button. Based on the information that is gathered from your installation, the ocfstool utility generates the necessary /etc/ocfs.conf file. After the generation is completed, open the /etc/ocfs.conf file in a text editor and verify that the information is correct before continuing.

The guid value is generated from the Ethernet adapter hardware address and must not be edited manually. If the adapter is switched or replaced, then remove the ocfs.conf file and regenerate it, or run the ocfs_uid_gen utility that is located in /usr/local/sbin or /usr/sbin, depending on the OCFS version used.
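For reference, a generated /etc/ocfs.conf typically contains only a few entries; the sketch below is illustrative, not authoritative (field names can vary slightly between OCFS releases, and the node name, IP address, and guid shown are examples only):

# cat /etc/ocfs.conf
node_name = git-raclin01.us.oracle.com
ip_address = 192.168.1.2
ip_port = 7000
guid = 1A2B3C4D5E6F708192A3B4C5D6E7F801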
Loading OCFS at Startup

To start OCFS, the module ocfs.o must be loaded at system startup. To do this, add the lines that are shown in the slide to the /etc/rc.local file. Because the script is linked to S99local in the rc5.d directory, it is processed at startup as the system progresses through the UNIX run levels.

Note that there is an entry to mount an OCFS file system in this example. Alternatively, OCFS file systems can be mounted by adding appropriate entries in the /etc/fstab file. This will be shown later in this lesson.
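The slide contents are not reproduced here; a minimal sketch of equivalent /etc/rc.local entries, based on the load_ocfs command and the mount syntax used later in this lesson (the device name, mount point, and load_ocfs path are examples that must match your system):

/sbin/load_ocfs
mount -t ocfs /dev/sdf1 /u01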
Preparing the Disks

By using the fdisk utility, partition the disk to allocate space for the OCFS file system according to your storage needs. You should partition your system in accordance with Oracle Optimal Flexible Architecture (OFA) standards. In Linux, SCSI disk devices are named by using the following convention:
• sd: SCSI disk
• a–z: Disks 1 through 26
• 1–4: Partitions one through four

Therefore, in the slide example, the OCFS file system that is mounted on /u01 is the first partition on the sixth SCSI drive (sdf1). After the partitions are created, use the following command to create the mount points for the OCFS file system:

# mkdir -p /u01 /u02 /u03 /u04 ... (more as needed)

Note these mount points, because you must provide them later.

As the root user, start the ocfstool utility:

# /sbin/ocfstool &
# /sbin/fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1020, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1020, default 1020): 1020
Creating a Primary Partition

Before starting, identify an unused disk. As the root user, execute the /sbin/fdisk command. At any command prompt, you can use the option m to print help information for fdisk.

# /sbin/fdisk /dev/sde
Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
The OCFS Format Window

The OCFS Tool window appears as shown in the slide. Click in the window to make it active, and press [CTRL] + [F], or choose the Tasks menu and select Format. The OCFS Format window appears. Use the values in the text boxes to format the partitions and mount the file systems.

Fill the text field boxes according to the specifications for your system. The block size setting must be a multiple of the Oracle block size. It is recommended that you do not change the default block size, which is set to 128. Set the value for the User text field to oracle and the value for the Group text field to dba. Set the values for the Volume Label and Mountpoint text fields to the values that you had set earlier, and then click the OK button. Formatting then begins. The amount of time it takes to format and mount partitions depends on the speed of your system disk drives and CPU.

Note: After the partition is properly formatted, you must initially mount the partitions individually. When you mount each node for the first time, no other node should attempt to mount the file systems. OCFS requires this procedure for the initial mount to allow OCFS to initialize the file system properly. To perform an individual mount, use the following mount command syntax:

# mount -t ocfs /dev/device /mountpoint
OCFS Command-Line Interface

If you want to format the OCFS partitions manually, then you can use the mkfs.ocfs utility. This is the same utility that is called by the OCFS Format window. Given below is a summary of the usage and syntax of mkfs.ocfs:

mkfs.ocfs -b block-size [-C] [-F] [-g gid] -L volume-label \
  -m mount-path [-n] [-p permissions] [-q] -u uid [-V] device

• -b: Block size in kilobytes
• -C: Clear all data blocks
• -F: Force format existing OCFS volume
• -g: Group ID (GID) for the root directory
• -L: Volume label
• -m: Path where this device will be mounted
• -n: Query only
• -p: Permissions for the root directory
• -q: Quiet execution
• -u: User ID (UID) for the root directory
• -V: Print version and exit
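Putting the options together, a hedged example invocation that reuses values from elsewhere in this lesson (oracle UID 500, dba GID 500, the default 128 KB block size); the device name, label, and permissions are illustrative:

# mkfs.ocfs -b 128 -L /u01 -m /u01 -u 500 -g 500 -p 0775 /dev/sdf1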
Alternate OCFS Mounting Method

OCFS file systems can also be mounted through /etc/fstab. Mount the file systems on one node at a time, waiting for each mount to complete before starting the mount on the next node. The OCFS file systems must be mounted after the standard file systems, as indicated below:

# cat /etc/fstab
LABEL=/      /         ext3   defaults          1 1
LABEL=/tmp   /tmp      ext3   defaults          1 2
LABEL=/usr   /usr      ext3   defaults          1 2
LABEL=/var   /var      ext3   defaults          1 2
/dev/sdb2    swap      swap   defaults          0 0
...
/dev/sdf1    /ocfs1    ocfs   uid=500,gid=500
/dev/sdg1    /ocfs2    ocfs   uid=500,gid=500
/dev/sdh1    /quorum   ocfs   uid=500,gid=500

Note: The load_ocfs command must be executed in the startup scripts before the OCFS file systems can be mounted.
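Once the fstab entries are in place and load_ocfs has run, the volumes can be mounted and checked with standard commands; a short sketch (the output format is illustrative):

# mount -a -t ocfs
# mount -t ocfs
/dev/sdf1 on /ocfs1 type ocfs (uid=500,gid=500)
/dev/sdg1 on /ocfs2 type ocfs (uid=500,gid=500)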
OCFS System Parameter Configuration

You must verify some of the system parameters to accommodate Oracle9i RAC and OCFS. Use the script /etc/init.d/rhas_ossetup.sh on Red Hat Linux to perform this configuration. As the root user, enter:

# /etc/init.d/rhas_ossetup.sh

Using this script ensures that your system is correctly configured and helps avoid problems. Note that the settings are valid for one cycle only, which means that they are automatically reset to their original values upon restarting. To make the process automatic during the startup of the system, enter the following commands as the root user:

# ln -s /etc/init.d/rhas_ossetup.sh /etc/rc5.d/S77rhas_ossetup
# ln -s /etc/init.d/rhas_ossetup.sh /etc/rc3.d/S77rhas_ossetup

Alternatively, the lines configuring kernel parameters can be included in the /etc/sysctl.conf file. If your platform is UnitedLinux, you may add the lines individually to the /etc/rc.local file.

# vi /etc/rc.local
...
echo "65536" > /proc/sys/fs/file-max
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
...
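For reference, the same two settings expressed as /etc/sysctl.conf entries (a sketch of the alternative mentioned above; the values repeat those in the rc.local example):

fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000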
OCFS Swap Space Requirements

You must allocate at least 1 GB to the local swap partition. As the root user, use the command swapon -s to verify that you have enough swap space allocated. If you require more swap space, then use the command swapon -a. Note that you can create a swap partition with a maximum size of 2 GB. To have the swap automatically set on startup, add lines that are similar to the following to the /etc/fstab file:

/dev/sdb2    swap      swap   defaults          0 0

Swap entries in /etc/fstab should occur after the standard file system entries and before the OCFS file system entries:

LABEL=/tmp   /tmp      ext3   defaults          1 2
LABEL=/usr   /usr      ext3   defaults          1 2
LABEL=/var   /var      ext3   defaults          1 2
/dev/sdb2    swap      swap   defaults          0 0
/dev/sdb3    swap      swap   defaults          0 0
/dev/sdc1    swap      swap   defaults          0 0
/dev/sdf1    /ocfs1    ocfs   uid=500,gid=500
/dev/sdg1    /ocfs2    ocfs   uid=500,gid=500
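If an additional swap partition must be prepared before it can be listed in /etc/fstab, the usual sequence is shown below; a sketch that reuses /dev/sdb3 from the example above:

# mkswap /dev/sdb3
# swapon /dev/sdb3
# swapon -s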
Red Hat Network Adapter Configuration

# /usr/sbin/redhat-config-network

You must have the network consistently available during system startup. To ensure that all network adapters are automatically enabled and in the correct order, perform the following tasks:
1. Ensure that you have the DISPLAY variable properly set, and launch the /usr/sbin/redhat-config-network program. The Ethernet Device window opens.
2. Select the "Activate device when computer starts" check box, and click the OK button.
3. Click the Hardware Devices tab. Select the Use Hardware Address check box, and click the Probe for Address button to populate the Hardware Address field. Click the OK button to save the changes.
4. Ensure that the public and private node names of all member nodes in the RAC are listed in the /etc/hosts file.
UnitedLinux Network Adapter Configuration

If you are running UnitedLinux and need to configure your network adapters, perform the following tasks:
1. Ensure that you have the DISPLAY variable properly set, and launch the /sbin/yast2 program. The YaST Control Center window opens.
2. Select the "Activate device when computer starts" check box, and click the OK button.
3. Select Network/Basic, and click on Network Card Configuration.
4. Select the adapter from the Network Device pull down menu and configure the IP address and host name as needed.
5. Ensure that the public and private node names of all member nodes in the RAC are listed in the /etc/hosts file.
Objectives
Linux Cluster Management Software

In releases before Oracle 9.2.0.2, the watchdog daemon is an integral part of the cluster manager. With the introduction of Oracle 9.2.0.2, the architecture of the cluster manager has been changed. The hangcheck-timer is a kernel module, whereas the watchdog daemon is essentially a user process. The kernel module approach is a faster, more efficient solution to node monitoring. The kernel module is contained in the hangcheck-timer RPM, which can be found on http://metalink.oracle.com.

Oracle 9.2.0.2 (and higher) is installed as a patch from the 9.2.0.1 installer. You must repeat these installation tasks on each node in your cluster.
OCMS

OCMS is included as part of the Oracle9i distribution for Linux. OCMS resides above the operating system and provides all the clustering services that Oracle RAC needs to function as a high-availability and a highly scalable solution. It provides cluster membership services, global view of clusters, node monitoring, and cluster reconfiguration.

The cluster monitor (CM) maintains the process-level cluster status. It also accepts the registration of Oracle instances to the cluster and provides a consistent view of Oracle instances. The node monitor provides the interface to other modules for determining cluster resources' status, that is, node membership. It obtains the status of the cluster resources from the cluster manager for remote nodes and provides the status of the cluster resources of the local node to the cluster manager.

The hangcheck-timer module monitors the Linux kernel for any long operating system hangs that might adversely affect the cluster or damage the database.

The parameters that control the behavior of the cluster manager are set in two files that are located in $ORACLE_HOME/oracm/admin: cmcfg.ora and ocmargs.ora, respectively.

$ cd $ORACLE_HOME/oracm/admin
$ grep KernelModuleName cmcfg.ora
KernelModuleName=hangcheck-timer
The Hangcheck-Timer

In place of the watchdog daemon, the 9.2.0.2 version of the cluster manager for Linux now includes the use of a Linux kernel module called hangcheck-timer. This module is not required for cluster manager operation, but its use is highly recommended. This module monitors the Linux kernel for long operating system hangs that could affect the reliability of an RAC node and damage an RAC database. When such a hang occurs, this module sends a signal to reset the node. This approach offers three advantages over the watchdog approach:
• Node resets are triggered from within the Linux kernel, making them much less affected by the system load.
• The cluster manager on an RAC node can easily be stopped and reconfigured because its operation is completely independent of the kernel module.
• The features that are provided by the hangcheck-timer module closely resemble those found in the implementation of the cluster manager for RAC on the Windows platform, on which the cluster manager on Linux was based.
The Node Monitor (NM)

A node is marked inactive if the hangcheck-timer determines that the kernel is inactive for too long a period. The hangcheck-timer sends a node reset signal for the following reasons:
• Termination of the NM on the remote server
• Node failure
• Heavy load on the remote server

The node monitor reconfigures the cluster to terminate the isolated nodes, ensuring that the remaining nodes in the reconfigured cluster continue to function properly.
The Cluster Monitor

A failed node is prevented from performing physical I/O to the shared disk before CM daemons on the other nodes report the cluster reconfiguration to instances on the nodes. This action prevents database damage.
Starting OCMS

Before starting OCMS, make sure that the hangcheck-timer module is loaded. Use the lsmod command to confirm this:

# lsmod
Module            Size     Used by    Not tainted
hangcheck-timer   1208     0 (unused)
ocfs              402980   5
...
aic7xxx           179076   11

When OCMS is patched to 9.2.0.2 or higher, the ocmstart.sh script located in $ORACLE_HOME/oracm/bin must be edited to comment out or remove all Watchdog-related entries, since it is no longer needed:

# watchdogd's default log file
# WATCHDOGD_LOG_FILE=$ORACLE_HOME/oracm/log/wdd.log
...
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
#   echo 'ocmstart.sh: Error: watchdogd is already running'
#   exit 1
# fi
The Quorum Disk

The quorum disk stores configuration information that is used to manage the cluster configuration. The Oracle configuration and administrative tools also require access to cluster configuration data that is stored on shared disks. You must configure a shared disk resource to use the Database Configuration Assistant, Oracle Enterprise Manager, and the Server Control command-line administrative utility.

Note: On some platforms, such as Windows NT, the quorum disk is sometimes called the voting disk.
User Environment

There are some environment variables that have to be set for the oracle user with the export command. Rather than setting them every time after logging on to the system, put them into the .bash_profile script within the oracle user's home directory. Therefore, log in as the oracle user and, in the home directory, modify the .bash_profile login file and ensure that it looks similar to the example below:

$ cat .bash_profile
export ORACLE_HOME=/oracle/9.2.0
export ORACLE_BASE=/oracle/9.2.0
export ORACLE_SID=U1N1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib

After modifying the .bash_profile login file, make sure that the newly defined environment variables become active in the current session by running .bash_profile in the current shell:

$ . .bash_profile
Starting the Installer

If installing from a CD-ROM, then start the Oracle Universal Installer by using the following command:

$ /mnt/cdrom/runInstaller
Inventory Location

The Inventory Location window is displayed next. If you have not installed any Oracle products on the node, then you have the option of specifying a location. If Oracle software has previously been installed on the node, then the installer should detect the existing inventory and display that location, which you can accept.
File Locations

In the File Locations window, specify the source and destination file locations for the installation. If there are existing Oracle homes on this node, then they appear in a drop-down menu in the Name field. Otherwise, indicate the path where you would like the Oracle files to be written.
Available Products

To install Oracle9i Database together with the Real Application Clusters option, the Oracle Cluster Manager must be installed first. Choose Oracle Cluster Manager 9.2.0.1.0 from the list of products in the Available Products window.
Node Information

Next, specify the public names of the nodes of your cluster. These are the node names that are used from the outside network (that is, the network excluding the node interconnects). You can find these names in the /etc/hosts file.

$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1      localhost.localdomain localhost
#
138.1.162.61   git-raclin01.us.oracle.com git-raclin01
138.1.162.62   git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2    racic1
192.168.1.3    racic2
Interconnect Information

In the next window, specify the private node names. These are the names that are used to identify the interconnects between the nodes in the cluster. You can find these names also in the /etc/hosts file.

$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1      localhost.localdomain localhost
#
138.1.162.61   git-raclin01.us.oracle.com git-raclin01
138.1.162.62   git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2    racic1
192.168.1.3    racic2
Watchdog Parameter

Leave the Watchdog parameter at the default value of 60000. This parameter is deprecated in Oracle Cluster Manager 9.2.0.2. It is removed later in the installation process, when the hangcheck-timer module and the Oracle 9.2.0.4 patch are installed.
Quorum Disk

Specify the name of a raw device or file for the quorum disk. This can be either a raw device or an OCFS file. If using an OCFS file, make sure the file already exists and can be written by the oracle user and members of the dba group.
Installation Progress

When the Oracle Universal Installer starts installing the Oracle Cluster Management software for Linux, the installation progress is displayed in the Install window. It should only take a few minutes for this product to load.
End of Installation

After the Oracle Cluster Manager is loaded, the End of Installation window appears. Click the Exit button to quit the installer. Do not start the cluster manager yet. You must first install the hangcheck-timer module and the Oracle 9.2.0.4 patch set.
The Hangcheck-Timer RPM

Download the hangcheck-timer RPM from MetaLink as follows:
1. Log in to MetaLink (http://metalink.oracle.com).
2. Enter the username and password, and then click OK.
3. Click Patches on the left of the window.
4. Enter 2594820 in the Patch Number field, and then click Submit.
5. Click Download, and save the p2594820_20_LINUX.zip file to the local disk.
6. Unzip and identify the RPM that is needed for your kernel by running the uname command:
   # unzip p2594820_20_LINUX.zip
   # uname -a
   Linux git-raclin01 2.4.9-e.3smp #1 SMP
7. From the directory where the RPM is unzipped, run the RPM command:
   # rpm -ivh <RPM-matching-your-kernel>
Hangcheck Settings

It is recommended that the hangcheck-timer module be loaded and the cluster manager be started with the parameter values that are shown above (in addition to recommendations that are made elsewhere in the Oracle RAC documentation). The inclusion of the hangcheck-timer kernel module also introduces two new configuration parameters to be used when the module is loaded:
• hangcheck_tick: This is an interval that indicates how often the hangcheck-timer checks the condition of the system.
• hangcheck_margin: Certain kernel activities may randomly introduce delays in the operation of the hangcheck-timer. The hangcheck_margin parameter provides a margin of error to prevent unnecessary system resets because of these delays.

Taken together, these two parameters indicate how long an RAC node must stop responding before the hangcheck-timer module resets the system. A node reset occurs when the following condition is true:

(system hang time) > (hangcheck_tick + hangcheck_margin)
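Because the slide values are not reproduced here, the load command below is only a sketch; the hangcheck_tick and hangcheck_margin values are placeholders and should be replaced with the values recommended in the Oracle RAC documentation for your release:

# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# lsmod | grep hangcheck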
The Oracle 9.2.0.2 Patch Set

The Oracle 9.2.0.2 patch set includes patches for the Developers Kit, Oracle9i Globalization, Oracle Core, UltraSearch, Spatial, SQL*Plus, SQLJ, JPublisher, interMedia, OLAP, and Oracle Internet Directory products. This is not a complete software distribution, and you must install it over an existing Oracle9i Release 2 Oracle Server installation.

The Oracle 9.2.0.2 (and higher) patch set also includes upgrades to the Oracle Cluster Manager on Linux. Again, the Oracle Cluster Manager Software patch set is not a complete software distribution and must be installed over an existing Oracle9i Release 2 Oracle Cluster Manager Software installation.
9.2.0.4.0 Cluster Manager Patch

Change the path that is specified in the Source... field to point to the patch location. When you click the Next button, the Available Products window appears with the products that may be installed from the location specified. Choose Oracle9iR2 Cluster Manager 9.2.0.4.0 and continue.
Node Selection

You can use the Oracle 9.2.0.4 patch set to install the included patches onto multiple nodes in a cluster when the base release (9.2.0.1.0) is already installed on those nodes. The Oracle Universal Installer detects whether the machine on which you are running the installer is part of the cluster. If it is, then you are prompted to select the nodes from the cluster on which you would like the patch set installed. For this to work properly, user equivalence must be in effect for the oracle user on each node of the cluster. To enable user equivalence, make sure that the /etc/hosts.equiv file exists on each node with an entry for each trusted host. For example, if the cluster has two nodes, git-raclin01 and git-raclin02, then the hosts.equiv files will look like this:

[root@git-raclin01]# cat /etc/hosts.equiv
git-raclin02

[root@git-raclin02]# cat /etc/hosts.equiv
git-raclin01
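Before running the installer, you can confirm that user equivalence works by executing a remote command as the oracle user from each node; with the rsh-based equivalence described above, no password prompt should appear (a sketch):

$ rsh git-raclin02 hostname
git-raclin02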
Node Information

You must provide the host names of your nodes again. These names can be verified in the /etc/hosts file.

$ cat /etc/hosts
...
138.1.162.61   git-raclin01.us.oracle.com git-raclin01
138.1.162.62   git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2    racic1
192.168.1.3    racic2
Interconnect Information

Provide the interconnect names again. These names can also be verified in the /etc/hosts file.

$ cat /etc/hosts
...
138.1.162.61   git-raclin01.us.oracle.com git-raclin01
138.1.162.62   git-raclin02.us.oracle.com git-raclin02
# Addresses for the interconnects
192.168.1.2    racic1
192.168.1.3    racic2
Watchdog Parameter

Leave the Watchdog parameter at the default value of 60000, as was done earlier. The Watchdog daemon is not used.
Quorum Disk

Specify the name of a device or file to use for the quorum disk. This can be either a raw device or an OCFS file.
Starting Cluster Manager

In the $ORACLE_HOME/oracm/admin/ocmargs.ora script, remove the first line that contains watchdogd.

Changes that must be made to $ORACLE_HOME/oracm/bin/ocmstart.sh include:
• Remove the words "watchdogd and" from the line that says "Sample startup script for watchdogd and oracm".
• Remove or comment out all the lines that contain watchdogd (both uppercase and lowercase) from the rest of the script.

If the word watchdog is used within an if/then/fi block, then delete or comment out the lines containing if/then/fi also. You must perform these modifications on all nodes in the cluster before continuing.

Start the modified script from /etc/rc.local. You must run the ocmstart.sh startup command as the root user because the oracm processes have their priorities (nice values) adjusted at startup. Remove the ocmstart.ts timestamp file before starting, or the script will fail.
Objectives
Choose the Target Node

Because the cluster manager is running, the installer can see all the nodes in the cluster during the installation. Choose both nodes of the cluster where you want the software to be copied during this installation. If the Cluster Node Selection window is not displayed, then check that the cluster manager is started properly. You can do this by checking for active oracm processes on both nodes:

[oracle@git-raclin01 /]# ps -ef|grep oracm
root      1621     1  0 May14 ?  00:00:00 oracm
root      1624  1621  0 May14 ?  00:00:00 oracm
root      1625  1624  0 May14 ?  00:00:00 oracm
root      1626  1624  0 May14 ?  00:00:00 oracm
...
[oracle@git-raclin02 /]# ps -ef|grep oracm
root      1627     1  0 May14 ?  00:00:00 oracm
root      1628  1627  0 May14 ?  00:00:00 oracm
root      1629  1628  0 May14 ?  00:00:00 oracm
root      1631  1629  0 May14 ?  00:00:00 oracm
...
File Locations

Because you had previously loaded Oracle software (Oracle Cluster Manager), the File Locations window displays an existing ORACLE_HOME. Accept the default Source and Destination file locations.
Product Selection

When the Available Products window appears, select Oracle9i Database 9.2.0.1 as the product to install. This must be done before the database files can be upgraded to release 9.2.0.2 (or higher).
Installation Type

In the Installation Type window, specify Custom as the installation method. Do not choose either of the other two methods, because they do not satisfy all the installation requirements for the RAC option.
Product Components

In the Available Product Components window, choose the Oracle9i Real Application Clusters 9.2.0.1.0 option. In addition, make sure that Oracle Partitioning 9.2.0.1.0 and Oracle Net Services 9.2.0.1.0 are also selected.
Component Locations

Unless you have a specific need to change the destination of non-ORACLE_HOME components that are listed in the Component Locations window, accept the default location that is displayed in the window.
Shared Configuration File

The shared configuration file name that you specify is written to the srvConfig.loc file located in /var/opt/oracle or $ORACLE_HOME/srvm/config. The file must exist before the group services daemon (GSD) is started. Create the file by using the UNIX touch command. Make sure the file is associated with the oracle user and the dba group. Make sure it is readable and writable by both. The following steps need to be performed once only.

[oracle@git-raclin01]# touch /quorum/srvm.dbf
[oracle@git-raclin01]# chown oracle:dba /quorum/srvm.dbf
[oracle@git-raclin01]# chmod 666 /quorum/srvm.dbf
Privileged Groups

In the Privileged Operating System Groups window, specify the UNIX dba group for both the OSDBA and OSOPER group name fields.
OMS Repository

In the Oracle Management Server Repository window, specify that the Oracle Management Server will use an existing repository.
Installation Summary

Check the information that is displayed in the Summary window. If you are satisfied that your choices are accurately reflected on it, then click the Install button to continue with the installation.
Installation Progress

When the Oracle Universal Installer starts installing the Oracle database distribution for Linux, the installation progress is displayed in the Install window. The distribution is large and takes time to install completely.
The root.sh Script

As indicated by the Setup Privileges window, you must run the root.sh script as the root user. Remember, because you are installing the software on a cluster, you must run the root.sh script on all the nodes to which the files are copied.
Net Configuration Assistant

When the Oracle Net Configuration Assistant starts, accept the default listener name LISTENER, the default protocol TCP, and the default port of 1521. When asked if you prefer another naming method (other than tnsnames.ora), answer no. Click the Finish button on the last page to continue.
EMCA

When the Enterprise Manager Configuration Assistant starts, choose Cancel, and then confirm your choice by clicking the No button. Enterprise Manager is configured after the database is created.
Installer Message

At this point in the installation, the Oracle Universal Installer generates an error. The error is generated by canceling one or more configuration tools. Click the OK button to proceed with the installation.
End of Installation
nly
e O
When the End of Installation window appears, quit the installer by clicking the Exit button.
Because the installation is complete, the Oracle Database 9.2.0.4.0 patch set must now be
applied.
To update the Oracle Universal Installer, start the installer from the existing Oracle home:

$ cd $ORACLE_HOME/bin
$ runInstaller

Point the installer to the location of the 9.2.0.4 patch in the File Locations screen, and then choose both nodes on the Cluster Node Selection screen. Accept the default destination for the product and install the product.
The 9.2.0.4 patch set includes patches for the Oracle9i Globalization, Oracle Core, UltraSearch, Spatial, SQL*Plus, SQLJ, JPublisher, Intermedia, OLAP, and Oracle Internet Directory products. This is not a complete software distribution, and you must install it on an existing Oracle9i 9.2.0.1.0 Oracle Server installation.
In the File Locations window, change the directory that is specified in the Source... field to point to the patch location. When you click the Next button, the Available Products window appears with the products that may be installed from the location that is specified. Choose Oracle9iR2 Patch Set 9.2.0.4.0 and continue.
Node Selection
You can use the 9.2.0.4 patch set to install the included patches on multiple nodes in a cluster when the base release (9.2.0.1.0) is already installed on those nodes. The Oracle Universal Installer detects whether the machine on which you are installing is part of the cluster. If it is, then you are prompted to select the nodes from the cluster on which you would like the patch set installed.
Finishing Up
At the end of the upgrade process, you are prompted to run the root.sh script. Note that you must run the script on both nodes in your cluster. When this is finished, click the OK button to dismiss the notification. After the upgrade has completed successfully, click the Exit button to quit the installer. Before continuing with the database creation, you must start Group Services on each node. Use the gsdctl command to do this:

$ gsdctl start
Successfully started GSD on local node

Repeat this on the second node. If Group Services is not running on both nodes, database creation with DBCA will not be possible.
Objectives
Starting DBCA
nly
e O
The Database Configuration Assistant can create single-instance or cluster databases. From an Xterm, log on to one of the nodes of your cluster and launch DBCA as shown below:

$ cd $ORACLE_HOME/bin
$ ./dbca

The Welcome window appears as shown in the slide. You have the option to create a single-instance database or a cluster database. Click the “Oracle cluster database” option button. Click the Next button to continue.
Creating a Database
The Operations window is displayed next. Click the “Create a database” option button, and then click the Next button to continue.
Node Selection
The Node Selection window is displayed next. Because you are creating a cluster database, choose both the nodes. Click the Select All button to choose both the nodes of the cluster. Each node must be highlighted before continuing. Click the Next button to proceed.
Database Templates
In the Database Templates window, you must choose a template for the creation of the database.
Click the New Database option button and then click the Next button to continue.
Database Identification
In the Database Identification window, you must enter the database name in the Global Database Name field. A system identifier (SID) prefix is required and DBCA will suggest a name. This prefix is used to generate unique SID names for the two instances that comprise the cluster database. If you do not want to use the system-supplied prefix, then enter a prefix of your choice. Click the Next button to continue.
You should clear all database features and example schemas unless you know that they are needed. Some of the features have related tablespaces. If you deselect them, you will also be asked to confirm deletion of the associated tablespace. Click the Next button to continue.
Database Features
Standard database features include Oracle JVM, Intermedia, Oracle Text, and Oracle XML. You should clear these additional features unless you know that they are needed. Click the OK button to return to the Database Features window. Click the Next button to continue. Confirm the deletion of any related tablespaces.
Database Connections
Next, in the Database Connection Options window, you can choose how users will connect to the
database. The default is dedicated server mode. Click the Next button to accept the default value.
Initialization Parameters
The Initialization Parameters window is displayed next. The Memory tab is displayed. Accept
the default parameters on the Memory tab. Click the File Locations tab to review or specify
various Oracle file locations.
File Locations
After clicking the File Locations tab, specify the location of the server parameter file. Enter an OCFS file if the cluster file system is used. If raw devices are used, then enter a raw file. Click the Next button to continue. If you have properly set the environment variables ORACLE_HOME and ORACLE_BASE, this will be a review only.
Database Storage
By using the Database Storage window, determine the location of control files, data files, redo
logs, and so on. To begin, expand the Controlfile folder that is located in the navigation pane on
the left.
Tablespaces
The pane on the left of the window lists all the tablespaces that are used when the cluster database is created. Choose a tablespace and click the Storage tab on the details pane. In the example above, the SYSTEM tablespace details are listed. You may edit them to suit your needs. By clicking the General tab, you can adjust the tablespace file size if the default is too small (or too large). Review each tablespace and verify that the size and storage settings are suitable for your purposes.
Repeat the steps for each log group that is listed. You will require at least two redo log groups for each thread. For a two-node cluster, you will require a minimum of four redo logs. Review all entries carefully and click the Next button to continue.
DBCA Summary
Review the information in the Database Configuration Assistant Summary window that is displayed next. Click the OK button to continue.
Database Passwords
After the database is created, DBCA prompts you to set passwords for the users SYS and SYSTEM. Click the Exit button to exit DBCA.
DBCA should copy the password file for the database to the second node. If it does not, you can copy it manually using the Linux rcp command. You can use the example below as a guide:

$ cd $ORACLE_HOME/dbs
$ rcp orapwDbname1 node2_name:$ORACLE_HOME/dbs/orapwDbname2

(where Dbname is the database name)
Objectives
GSD Management
Clients of GSD, such as SRVCTL, DBCA, and Enterprise Manager, interact with the daemon to perform various manageability operations on the nodes in your cluster. You must start the GSD on all the nodes in your Real Application Clusters database before you use SRVCTL commands or attempt to employ the other tools across the cluster. However, you need only one GSD on each node, no matter how many cluster databases you create.

The name of the daemon is gsd and it is located in the $ORACLE_HOME/bin directory. Start the daemon with the gsdctl command as shown in the example. Logging information is written to the $ORACLE_HOME/srvm/log/gsdaemon.log file.
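In addition to start, the gsdctl utility accepts stat and stop operations. A brief sketch of typical usage on one node:

$ gsdctl start      # start the daemon on this node
$ gsdctl stat       # check whether GSD is running
$ gsdctl stop       # stop the daemon on this node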
The repository contains a cluster-wide perspective of your database and its instances. This information is used by SRVCTL database management commands as well as by several other tools. For example, node and instance mappings, which are needed for discovery and monitoring operations that are performed by Enterprise Manager and its intelligent agents, are generated by SRVCTL. Many of these tools run SRVCTL commands to complete the operations that are requested through their graphical user interface (GUI).

SRVCTL works with the GSD to manage and retrieve cluster and database configuration information that is stored in the shared disk location.
To see the online command syntax and options for each SRVCTL command, enter:

srvctl command option -h

where command option is one of the valid options (verbs), such as start, stop, or status. The slide shows the srvctl command and two of its options:

$ srvctl
$ srvctl config -h
-h   Print usage
-V   Show version

Note: To use SRVCTL, you must already have created the configuration information for the database that you want to administer by using either DBCA or the srvctl add command.
SRVCTL can create or modify configuration information for an entire cluster database or for specific instances. The following types of information can be created or modified with SRVCTL:
• Define a new cluster database configuration or remove obsolete database configuration information.
• Add information about a new instance to a cluster database configuration or remove instance information from a cluster database.
• Rename an instance name within a cluster database configuration.
• Change the node where an instance runs in a cluster database configuration.
• Set and unset the definitions that are used to assign environment variables for an entire cluster database.
• Set and unset the definitions that are used to assign environment variables for an instance in a cluster database configuration.
The example in the slide shows the creation of the database U1 on a UNIX system. Use the srvctl remove db command to delete the static configuration for an RAC database. The following syntax deletes the RAC database that is identified by the name that you provide:

$ srvctl remove db -d db_name

The second example in the slide shows the removal of repository information for the database called U2:

$ srvctl remove db -d U2
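For reference, a minimal sketch of how the first example (registering the database U1) is typically expressed; the Oracle home path shown is an assumption:

$ srvctl add database -d U1 -o /home/ora920    # register database U1 and its Oracle home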
Use the srvctl add instance command to add configuration information for an instance; this does not create the database or the instance. The following syntax adds an instance, which is named instance_name, to the specified database on the node that you identify with node_name:

$ srvctl add instance -d db_name -i instance_name -n node_name

The example in the slide adds the instance U2N3 on node RACLIN3 to the configuration information for database U2.

The srvctl remove instance command deletes static configuration information for an RAC instance. Use the following syntax to delete the configuration for the instance that is identified by the database name that you provide:

$ srvctl remove instance -d db_name -i instance_name

The second example in the slide removes the instance U2N1 from the configuration information for database U2.

Note: It is recommended that you use the Instance Management feature of DBCA to add and delete cluster databases and instances.
The SRVCTL start, stop, and status commands can be used to work across the cluster or on individual nodes. You can use these commands to:
• Start and stop cluster databases
• Start and stop cluster database instances
• Obtain the status of a cluster database instance
The specific commands to accomplish these tasks are covered in the following pages.
Note: Your database and instance information must be available in the configuration repository before you use SRVCTL to perform these operations.
srvctl start -d db_name [-i inst,...] [-n node,...] [-s stage,...] [-x stage,...]

where:
-d db_name identifies the database against which the command is executed;
-i inst,... is the name of the instance, or a comma-separated list of instances, that are started (the default is all instances that are defined for db_name);
-n node,... is the name of the node, or a comma-separated list of nodes, on which the instances are started (the default is all nodes with instances that are defined for db_name).
The srvctl start command also accepts the following optional arguments:

srvctl start ... [-c 'connstr'] [-o options] [-h]

where:
-c 'connstr' defines the connect string for the startup operation (the default is / as sysdba);
-o options lists the startup command options, such as force, nomount, pfile= (with an appropriate path and parameter file name), and so on;
-h displays the help information for the command or option.
The srvctl stop command uses similar syntax, with shutdown options such as TRANSACTIONAL in place of MOUNT, and so on. The slide shows some typical examples of this command.
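A sketch of typical start and stop commands that follow the syntax above; the database and instance names are assumptions:

$ srvctl start -d U1                    # start all instances of database U1
$ srvctl start -d U1 -i U1N1            # start only instance U1N1
$ srvctl stop -d U1 -o immediate        # shut down all instances immediately
$ srvctl status -d U1                   # report the status of each instance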
Use the srvctl config command to display the configuration of your cluster databases. There are two formats for this command. The first includes no subcommands or options and lists all the cluster databases in your environment:

$ srvctl config

The second format, which includes the -d db_name syntax, lists the instances for the named database. The slide shows an example of this format.

Use the srvctl getenv command to obtain environment information for either an entire RAC database or a specific instance. The output from a command that uses the following syntax contains environment information for the entire RAC database that is identified by the db_name value that you provide:

$ srvctl getenv database -d db_name

A command with the following syntax displays environment information for a specific instance:

$ srvctl getenv database -d db_name -i instance_name
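A short sketch of the two config formats described above; the database name U1 is an assumption:

$ srvctl config                 # list all cluster databases in the repository
$ srvctl config -d U1           # list the instances and nodes for database U1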
If you use client-side parameter files to start each instance, you need a client-side parameter file for each instance. You may also want to use a common parameter file, for values that are identical on all instances, and include it in the instance-specific files with the IFILE parameter.

It is recommended that you use a server parameter file for your RAC database. This binary file is maintained on a shared disk and contains generic entries for values that are common to all instances and a separate parameter entry for each instance that requires a unique value.

If you build your RAC database with DBCA, then you have the option to create a server parameter file concurrently with the database. Select the “Create server parameter file (spfile)” box under the File Locations tab on the Initialization Parameters page and provide the shared disk pathname in the Persistent Parameters Filename field.

You can also create a server parameter file manually if you have built or migrated your RAC database without DBCA, or if you did not select the “Create server parameter file (spfile)” option.
CREATE SPFILE='/dev/vx/rdsk/oracle/U1_raw_spfile_5m'
  FROM PFILE='$ORACLE_HOME/dbs/initU1.ora';

ALTER SYSTEM
  SET sort_area_retained_size = 131072
  SCOPE = SPFILE
  SID = 'U1N1';
To create the server parameter file manually, use the CREATE SPFILE command as shown in the first example. If you are using a shared file system, then name the default SPFILE location in the command; otherwise, name the raw device or a link to it that is defined with the default filename. You can do this regardless of whether you have a running instance or an open database.

When initially created, all parameters in a server parameter file have identical values regardless of which instance uses it to start up. To add instance-specific values, you must use an ALTER SYSTEM command with a SCOPE clause set to SPFILE (or BOTH) and the SID clause set to the required instance name. You can also set a database-wide value in your server parameter file by setting the SID value to the wildcard value ('*') as shown in the third example, which also includes a comment. You can remove parameters from the SPFILE with the ALTER SYSTEM RESET command.
During a default startup, an instance searches for a parameter file in the following order:
• An instance-specific server parameter file, spfilesid.ora
• A generic server parameter file, spfile.ora
• An instance-specific, client-side parameter file, initsid.ora
Even though server parameter files are in the search list, your RAC server parameter file will be on a shared disk and, therefore, not likely to be in the default location with the default name. In order to take advantage of the default behavior to locate your server parameter file, create a text file containing just one line: the SPFILE parameter. The value for this SPFILE parameter is the full name of the shared disk partition where you created the file. By locating and naming this text file as if it were a generic server parameter file, all instances that are started on the server will locate and use it during a default startup.

You may also be able to use the default behavior by creating a link (to the shared partition where the server parameter file is stored) and giving it the same name and location as the default text generic server parameter file.
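A minimal sketch of the link approach just described, assuming the raw partition used in the earlier CREATE SPFILE example:

$ ln -s /dev/vx/rdsk/oracle/U1_raw_spfile_5m $ORACLE_HOME/dbs/spfile.ora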
You can use the Enterprise Manager Console to perform database management tasks with the Management Server component. From the Navigator pane, you can view and manage both single- and multiple-instance databases by using essentially the same operations. Just as in single-instance databases, cluster databases and all of their related elements can be administered by using master/detail views and Navigator menus.

After the nodes are discovered by using the repository information that is added by DBCA or the SRVCTL utility, the Navigator tree displays cluster databases, their instances, and other related services, such as a listener. You can then use the Console to start, stop, and monitor services as well as to schedule jobs or register events, simultaneously performing these tasks on multiple nodes if you want. You can also use the Console to manage schemas, security, and the storage features of cluster databases.

Before using the Enterprise Manager Console, start the following components:
• An Oracle Intelligent Agent on each of the nodes
• Management Server
• Console
Each cluster database folder contains the instances and subfolders for Instance, Schema, Security, and Storage. By selecting objects within a Cluster Database subfolder, you can access property sheets to inspect and modify properties of these objects, just as with single-instance databases. All discovered instances are displayed under the Cluster Database Instances folder.

With cluster databases, only the subfolders of the Instance folder are different from those of single-instance databases. In the Instance folder, the instance database subfolders are split into two functional parts: Database-Specific File and Instance-Specific File Structures.

The available database-specific functionality includes in-doubt transactions and resource consumer groups. All instance-specific functionality appears beneath the individual instance icons within the Cluster Database Instances subfolder and includes:
• Configuration and stored configuration information management
• Session management
• Lock information
• Resource plan and resource plan schedule management
The Cluster Database Startup/Shutdown Results dialog box is automatically displayed during a startup (or shutdown) operation. You can also initiate it by performing the following steps:
1. In the Navigator pane, expand Databases.
2. Right-click a cluster database.
3. Select Results from the Options menu that appears.
The display is updated dynamically as the operation progresses and graphically reflects the following states: whether the component is functional (green flag) or stopped (red flag).
If the instances are started successfully, then the Cluster Database Started message box appears with a successful message.
When you shut down a cluster database, you can choose shutdown options, such as IMMEDIATE. Only when all instances are shut down is the database considered closed.

The Cluster Database Shutdown Progress dialog box displays the progress of the shutdown operation. After the instances are shut down successfully, as shown in the slide, the Cluster Database Stopped message box also appears with a successful message. If the shutdown fails, then the Cluster Database Stopped message box appears with a failure message.
Enterprise Manager displays the states of the cluster database components, such as the listeners and instances, for all nodes. The states of the components are indicated with the following graphical elements:
• Green flag: The component is functional.
• Red flag: The component is stopped.
• Timer: An operation is in progress and Enterprise Manager cannot determine the state of the component. This state typically occurs when the component startup or shutdown operation has not completed.
• Blank background: The component does not exist on this node or is not configured on the node.
Instance Management
You can perform much of the required instance management by using Enterprise Manager. These areas of management include:
• Configuration, sessions, and other instance-specific components
• Cluster-aware jobs and events
• Performance reports
From the Enterprise Manager Console, click the plus (+) in front of Instances. Next, click the plus (+) in front of Cluster Database Instances, and then click the plus (+) in front of the desired database instance. Finally, log in as the sys user and save this as a preferred credential. Repeat these steps for each RAC database instance. After completing these tasks, you can manage each individual instance from Enterprise Manager.
Tablespaces
You can use Enterprise Manager to manage database storage. To control tablespaces, click the plus (+) next to the Databases folder to expand the contents. Next, click the plus (+) next to the RAC database that you want to manage to expand the management areas and log in as the sys user. Expand Storage Management and then select Tablespaces.
Tablespace Map
To view the usage map for a specific tablespace, right-click the desired tablespace and choose Show Tablespace Map.
To create a new tablespace, specify the tablespace name and the data file name. Next, specify the directory where the file is located and the file size. Click the Storage tab and specify extent and segment space management. You can click the Show SQL button if you want to view the SQL command that will be issued when the Create button is clicked.

After the tablespace has been created, you can create a new table in it. Expand Schema from the Navigator pane. You can create the table and insert rows in this tablespace.
Node Statistics
You can use Enterprise Manager to display node performance data. To do this, launch the Performance Monitor and click the cluster database name. Click the Diagnostic Pack in the toolbar (the medicine bag) on the left and click the Performance Monitor (graphs). Expand Cluster Databases and then expand Nodes. Performance charts that can be displayed include:
• CPU utilization
• Memory/swap data
• I/O data
• File system information
• Process data
• Network data
• IPC data
Additional performance charts include:
• Locks
• Memory
• Top segments
• Response time
• Parallel query
Objectives
On the new node, create the oracle user as a member of the group dba so that Oracle Cluster File System may be properly installed. You must identify OCFS files for the redo logs and undo tablespaces. Make sure that there is at least one available rollback segment or a new undo tablespace.

You must edit the server parameter file and make the appropriate changes for the instance on the new node. This includes changes to the following:
• instance_name
• undo_tablespace
• Threads
• Redo logs
• Rollback segments
ALTER DATABASE ADD LOGFILE THREAD integer
  GROUP integer filespec [, GROUP integer filespec]...

• THREAD integer: Specifies the thread that is assigned to an instance
• GROUP integer: Specifies the group number of the redo log file group
• filespec: Specifies the name of an operating system file, plus size and reuse options
• PUBLIC: Specifies that the thread belongs to the public pool
CREATE [PUBLIC] ROLLBACK SEGMENT segment
  TABLESPACE tablespace
  STORAGE storage_clause

ALTER ROLLBACK SEGMENT segment
  ONLINE | OFFLINE | STORAGE storage_clause

• Create at least one rollback segment for each instance of a parallel server.
• Ensure that the rollback segments are created in a tablespace other than the SYSTEM tablespace to avoid contention.
• Create private rollback segments with a single instance operating in Exclusive mode before starting up multiple instances of a parallel server.
• Specify the rollback segment in the parameter file of the instance to be started.
• By using an instance that is already started, create the rollback segment with the CREATE ROLLBACK SEGMENT command. Omit the PUBLIC option.
• Start up the instance to bring the segment online or use the ALTER ROLLBACK SEGMENT command to bring the rollback segment online.
• If a private rollback segment is specified in more than one parameter file, then only the first instance that acquires the rollback segment can be started.
Instance Management
To add a new instance, launch the Database Configuration Assistant. Click the Instance
Management option button to add (or delete) an instance. Click the Next button to continue.
Adding an Instance
Next, you are prompted to add or delete an instance. Click the Add Instance option button. Click the Next button to continue.
Next, provide the username and password for a user with SYSDBA privileges. Click the Next button to continue.
Instance Name
A default instance name and node are displayed on this window. You can change the default
name if you want. If you see that the instance name and node are correct, then continue by
clicking the Next button.
In the Database Storage window, you can also add a tablespace under the Tablespaces folder or a rollback segment under the Rollback Segments folder. Click the Finish button to proceed.
Because it is not meaningful to use /dev/raw1 as a database filename, use database filenames that are as meaningful as with databases on file systems, for example by creating appropriately named symbolic links (this is especially helpful when a large number of files are present).

Create raw device special files with the mknod command. The first argument to mknod is the filename, the second is either the letter b or c, indicating a block or character device (raw devices are character devices), the third is the major device number (always 162 for raw devices), and the fourth is the minor device number. The minor device number ranges between 0 and 254 and must be unique among all files with the same major device number. Minor device number 0 is used for /dev/raw, which must not be modified. Make sure that the minor device number is incremented for each file that is created.

You must bind the raw devices that are created above to block devices as part of each boot sequence. This is done by using the raw command, which requires the rawio package.

# raw /dev/<raw device> /dev/<block device>

The raw devices must be readable and writable by the oracle user and dba group. Use the chown command as shown in the slide to do this.
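A minimal sketch of the sequence described above; the device names, minor number, and permissions shown are assumptions:

# mknod /dev/raw1 c 162 1            # character device, major 162, minor 1
# raw /dev/raw1 /dev/sdd1            # bind the raw device to the block device
# chown oracle:dba /dev/raw1         # make it accessible to the oracle user
# chmod 660 /dev/raw1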
Because Transparent Application Failover (TAF) is implemented in the Oracle Call Interface (OCI) library, you need not change the client application to use TAF. Because most TAF functionality is implemented in the client-side network libraries (OCI), the client must use the Oracle Net OCI libraries to take advantage of TAF functionality. Therefore, to implement TAF in RAC, make sure that you use JDBC OCI instead of PL/SQL packages.

Because TAF was designed for RAC, it is much easier to configure TAF for that environment. However, TAF is not restricted to RAC environments. You can also use TAF for single-instance Oracle databases. In addition, you can use TAF for Oracle Real Application Clusters Guard, replicated systems, and Data Guard.
If you configure TAF in your tnsnames.ora files, then you must add the TAF parameters manually. This is because Oracle Net Manager does not provide support for TAF configurations.

The FAILOVER_MODE parameter has a set of subparameters that control how a failover will occur if a client is disconnected from the original connection that was made with the connect descriptor. The subparameters, which are covered in detail on the following pages, include:
• TYPE: (Required) Identifies one of the three types of Oracle Net failover available by default to OCI applications
• METHOD: Determines how fast failover occurs from the primary node to the backup node
• BACKUP: Identifies a different net service name for backup connections
• RETRIES: Limits the number of times to attempt to connect after a failover
• DELAY: Specifies the amount of time in seconds to wait between connect attempts
Failover Types
Three types of Oracle Net failover functionality are available by default to OCI applications:
• SESSION: Causes a failed session to fail over to a new session. If a user’s connection is lost, then a new session is automatically created for the user. This type of failover does not attempt to perform any actions after connecting the user to the new session. This option is your best choice for applications that primarily perform DML transactions and short queries.
• SELECT: Causes a failed session to fail over to a new session and continue any interrupted queries. After automatically connecting the user to a backup session, this option enables users with open cursors to continue fetching on them after failure. However, this mode involves overhead on the client side in normal select operations. You should use this option when an instance failure could result in having to re-create output that is already generated by a long-running query.
• NONE: No failover functionality is used. Although this is the default, you can specify this type explicitly to prevent failover from happening. This option is typically useful for testing purposes rather than for implementing failover in a production environment.
Failover Methods
The METHOD subparameter takes one of two values: BASIC and PRECONNECT. The latter is only of use with Real Application Clusters (unlike the TYPE options, which can be used for other failover situations, such as standby databases or reconnections to the same instance).

The BASIC option requires a session to make a new connection when it fails over from its original instance connection. This option causes no overhead on the backup instance until a failover occurs. This allows you to use the backup instance for nonapplication work, such as database maintenance, without impacting the failover status. However, the failover processing can be slow because all of the disconnected sessions will attempt to reconnect to the failover instance concurrently, overburdening the listener on that instance.

The PRECONNECT option provides faster failover by creating a failover connection on the standby instance concurrently with each connection to the primary instance. When the primary instance fails, the connections are switched to one of the existing connections on the standby instance. This requires minimal work by the listener for that instance and avoids the overhead of creating new session connections. Unlike the BASIC option, the PRECONNECT option imposes a load on the standby instance, which must be able to support all connections from every supported instance.
RAC1 =
 (DESCRIPTION=
  (LOAD_BALANCE=OFF)(FAILOVER=ON)
  (ADDRESS=
   (PROTOCOL=TCP)(HOST=aaacme1)(PORT=1521))
  (ADDRESS=
   (PROTOCOL=TCP)(HOST=aaacme2)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=rac.us.acme.com)
   (SERVER=DEDICATED)
   (FAILOVER_MODE=
    (BACKUP=RAC2)
    (TYPE=SESSION)(METHOD=PRECONNECT)
    (RETRIES=180)(DELAY=5))))
Connections that are made through the RAC1 alias are to dedicated servers because of the SERVER binding value. The RAC1 alias is for the primary instance, as indicated by the INSTANCE_ROLE subparameter value. The primary instance runs on the aaacme1 node because this is the first address that is listed in the DESCRIPTION clause and client load balancing, which could select either address, is disabled.

Failover is enabled with the FAILOVER parameter, and the secondary instance is identified with the RAC2 alias in the BACKUP clause (the connect descriptor for RAC2 is shown on the next page). A failed-over session would be directed to preestablished failover connections because of the METHOD subparameter setting and would make up to 180 attempts to complete the reconnection, with a 5-second pause between each attempt.

Note: You could use other options, such as shared instead of dedicated servers, or the BASIC rather than the PRECONNECT method, without interfering with the TAF operations.
RAC2 =
 (DESCRIPTION=
  (LOAD_BALANCE=OFF)(FAILOVER=ON)
  (ADDRESS=
   (PROTOCOL=TCP)(HOST=aaacme2)(PORT=1521))
  (ADDRESS=
   (PROTOCOL=TCP)(HOST=aaacme1)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=rac.us.acme.com)
   (INSTANCE_ROLE=SECONDARY)
   (SERVER=DEDICATED)
   (FAILOVER_MODE=
    (BACKUP=RAC1)
    (TYPE=SESSION)(METHOD=PRECONNECT)
    (RETRIES=180)(DELAY=5))))
The RAC2 connect descriptor differs from RAC1 in the following ways:
• The aaacme2 address is listed first, so connections through this alias are made to the instance on aaacme2 because there is no load balancing to redirect the request to the second address.
• The INSTANCE_ROLE value is defined as SECONDARY. This prevents connections through the alias unless the primary instance has failed and the instance on aaacme2 has assumed the primary role.
• The BACKUP value is the alias RAC1 so that connections to the instance on aaacme2 can fail back to the instance on aaacme1, if necessary.
sales1.us.acme.com=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (LOAD_BALANCE=on)
   (ADDRESS= . . . )
   (ADDRESS= . . . )
   (ADDRESS= . . . ))
  (CONNECT_DATA=
   (SERVICE_NAME=sales.us.acme.com)
   (SERVER=shared)))
Connection load balancing directs each connection request to the node with the least processing load and the fewest active connections. If you have configured shared servers, then the connection is made to the dispatcher with the fewest current users on the selected node. The example in the slide shows the configuration for connection load balancing across a three-node cluster with shared servers enabled.
(DESCRIPTION=
 (LOAD_BALANCE=ON)
 (ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1521))
 (ADDRESS=(PROTOCOL=tcp)(HOST=host2)(PORT=1521))
 (ADDRESS=(PROTOCOL=tcp)(HOST=host3)(PORT=1521))
 (CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))

(DESCRIPTION=
 (ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1521))
 (CONNECT_DATA=
  (SERVICE_NAME=sales.us.acme.com)
  (INSTANCE_NAME=S1)))
If you omit the LOAD_BALANCE clause, or set LOAD_BALANCE to OFF, NO, or FALSE, then the addresses will be tried in the order that is listed until a successful connection is made.

The second example’s DESCRIPTION clause causes a connection to be made specifically to the instance with its INSTANCE_NAME initialization parameter set to the value S1. This option enables connections to a specific instance based on the work that is being performed while connected. This usage supports functionally partitioned databases.
Parallel execution is available across all the instances of a Real Application Clusters database. This method allows the optimizer to determine whether it will spread the work across query processes that are associated with one instance or with multiple instances. Therefore, depending on the workload, queries, data manipulation language (DML), and data definition language (DDL) statements may execute in parallel on a single node, across multiple nodes, or across all nodes in the cluster database.

In some cases, the parallel optimizer may choose to use only one node to satisfy a request. Generally, the optimizer will try to limit the work to the node where the query coordinator process executes (node affinity) to reduce cross-instance message traffic. However, if multiple nodes are employed, then they all continue to work until the entire operation is completed.
You can monitor parallel execution with the following dynamic performance views:
• V$PQ_SESSTAT: Information about all parallel execution sessions
• V$PQ_SLAVE: Active parallel query slave statistics
• V$PQ_TQSTAT: Statistics about the rows that are processed at each stage of the SQL statement by each slave. Statistics are compiled after a query finishes and are available only for the current session.
• V$PX_SESSION: Information about all parallel execution sessions, including query coordinator information
IEEE 1394 Shared Disks
IEEE 1394 is a standard that defines a high-speed serial bus. This bus is more commonly known
as FireWire, a name that was coined by Apple Computer, Inc. FireWire is similar in principle to
Universal Serial Bus (USB), but runs at speeds of up to 400 megabits per second and the
transmission mode provides much greater bandwidth than USB. The original intent of FireWire
was to provide an interface for devices, such as digital video cameras, that transfer a large
amount of data. External IDE drive enclosures are available and include FireWire ports. It is now
possible to share inexpensive IDE drives between systems supporting FireWire devices. Linux
supports FireWire devices that are Open Host Controller Interface (OHCI)–compatible.
Used in conjunction with Oracle Cluster File System (OCFS), FireWire-connected IDE drives
provide an economical method of sharing disks for RAC on Linux. Currently, FireWire devices
allow a maximum of four concurrent system logins (connections), so that the maximum number
of nodes in the cluster is limited to four. This restriction, in addition to the current transfer speed of 400 Mbps, precludes implementation in large production environments, but the configuration is ideal for building low-cost development or test systems. To prepare FireWire IDE disks for RAC, perform
the following steps:
The setup that is tested for this class is Red Hat Advanced Server 2.1 with Oracle 9.2.0.2. If you are interested in using another distribution of Linux or Oracle9i, then check the certified configurations:
c. Click the View Certifications by Product link.
d. Select Real Application Clusters from the Product Group list.
e. Select RAC on Linux from the operating system list.
f. Choose your processor type (x86 or Itanium) from the Platform list.
g. Select the proper Oracle version link (9.2 or 9.0.1).
For RAC on Linux to work properly with FireWire, all the nodes in the cluster must be
logged in to the external FireWire hard drive concurrently. Be aware that not all FireWire
adapters or FireWire drive enclosures work properly with RAC. The adapter must be OHCI
and IEEE 1394 compliant. The FireWire disk enclosure must contain a chipset that supports
multiple simultaneous logins. The best drive enclosures for this purpose contain the Oxford
OXFW911 chipset. This is the predominant chipset that is found in FireWire drive enclosures
but there are others, so you must be careful. Install the adapters in your systems and cable the
drive as directed by the hardware documentation.
If you are using Red Hat 2.1 Application Server, then you must install a kernel that supports
FireWire disk devices. The first kernel that incorporated this support was the 2.4.19 test
kernel. At the time of writing this course, the 2.4.20 kernel is available. This kernel is
preferable because it is a production kernel. To get and install the 2.4.20 kernel, perform the
following steps:
b. Transfer or copy the archive to the root directory of each node, then gunzip and untar
the archive as the root user.
# pwd
/
# gunzip linux-2.4.20rc2-orafw-up.tar.gz
# tar -xvf linux-2.4.20rc2-orafw-up.tar
c. Edit the /boot/grub/grub.conf file to allow the new kernel to be included in the Grub boot menu. Add an entry under the splashimage identifier and above the original kernel entry as indicated below. Make sure that the root device matches the one that was used in the original configuration.

# vi /boot/grub/grub.conf
default=0
timeout=10
splashimage=(hd0,1)/boot/grub/splash.xpm.gz
title Firewire Kernel 2.4.20
kernel /boot/vmlinuz-2.4.20-orafw ro root=/dev/hda2
title Red Hat Linux Advanced Server (2.4.9-e.3) # Original Grub entry
root (hd0,1)
kernel /boot/vmlinuz-2.4.9-e.3 ro root=/dev/hda2 hdc=ide-scsi
initrd /boot/initrd-2.4.9-e.3.img
Several kernel modules must be loaded in order for the shared disk to be recognized. The
modules that must be loaded are ohci1394 and sbp2 (serial bus protocol). The sbp2
module is a low-level SCSI driver for IDE buses. In addition, the proper high-level SCSI
device module must be loaded. Your choices include sd_mod (disk), st (tape), sr_mod
(CD-ROM), and sg (generic disc burner/scanner). Use the sd_mod module. Edit the
/etc/rc.local file and add the following three lines in the order specified:
# vi /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
modprobe ohci1394
modprobe sbp2
modprobe sd_mod
In addition, the sbp2 module must be configured to support multiple logins. Edit the
/etc/modules.conf file and set the sbp2_exclusive_login parameter equal to 0
(allow multiple logins) as shown in the example. The default value is 1 (single login only).
# vi /etc/modules.conf
options sbp2 sbp2_exclusive_login=0
After restarting the system, run the dmesg command and look for 1394- and sbp2-related entries to verify that the FireWire adapter and shared disk are recognized and the node is logged in.

# dmesg
...
ohci1394: $Rev: 758 $ Ben Collins <[email protected]>
PCI: Found IRQ 12 for device 00:09.0
ohci1394_0: OHCI-1394 1.0 (PCI): IRQ=[12] MMIO=[e8000000-e80007ff] Max Packet=[2048]
...
ieee1394: sbp2: Logged into SBP-2 device
ieee1394: sbp2: Node[01:1023]: Max speed [S400] - Max payload [2048]
scsi0 : IEEE-1394 SBP-2 protocol driver (host: ohci1394)
$Rev: 792 $ James Goodwin <[email protected]>
SBP-2 module load options:
- Max speed supported: S400
- Max sectors per I/O supported: 255
- Max outstanding commands supported: 64
- Max outstanding commands per lun supported: 1
- Serialized I/O (debug): no
- Exclusive login: no
Vendor: QUANTUM   Model: Bigfoot TX6.0AT   Rev:
Type: Direct-Access   ANSI SCSI revision: 06
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
After the disk is recognized, install OCFS, Cluster Manager, and Oracle 9.2.0.4 as detailed in
the lessons and workshop.
Exercise 2: Preparing the Operating System
1. Verify host names and IP addresses on both the nodes. There should be an entry for each
node and an entry for each interconnect. Ping the other host and interconnect to test the
network.
First node
[root@stc-raclin01]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 stc-raclin01 localhost.localdomain localhost
148.2.65.101 stc-raclin01 stc-raclin01 rac1
148.2.65.102 stc-raclin02 stc-raclin02 rac2
192.168.1.12 racic02 ic2
192.168.1.11 racic01 ic1
Second node
[root@stc-raclin02 root]# cat /etc/hosts
ly
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 stc-raclin02 localhost.localdomain
Onlocalhost
148.2.65.102 stc-raclin02 rac2
se
U
148.2.65.101 stc-raclin01 rac1
192.168.1.11
192.168.1.12
racic01 ic1
racic02 ic2
AI
[root@stc-raclin02 root]# ping stc-raclin01
& O
al
PING stc-raclin01 from 148.2.65.102 : 56(84) bytes of data.
rn
64 bytes from stc-raclin01 : icmp_seq=0 ttl=64 time=436 usec
e
t
...
In
[root@stc-raclin02 root]# ping racic01
e
PING racic01 (138.2.65.11) from 138.2.65.12 : 56(84) bytes of data.
cl
64 bytes from racic01 (138.2.65.11): icmp_seq=0 ttl=64 time=635 usec
a
...
Or
2. Set the shared memory and semaphore kernel parameters on both nodes by editing the /etc/sysconfig/oracle file as shown.

First Node
[root@stc-raclin01]# vi /etc/sysconfig/oracle
# Shared memory and Semaphore memory settings
SHMMAX=47483648
SHMMNI=4096
SHMALL=2097152
SEMMSL=1250
SEMMNS=32000
SEMOPM=100
SEMMNI=256
"/etc/sysconfig/oracle" 622C written

Second Node
[root@stc-raclin02]# vi /etc/sysconfig/oracle
# Shared memory and Semaphore memory settings
SHMMAX=47483648
SHMMNI=4096
SHMALL=2097152
SEMMSL=1250
SEMMNS=32000
SEMOPM=100
SEMMNI=256
"/etc/sysconfig/oracle" 622C written
3. Create the UNIX dba and oinstall groups and the oracle user on both nodes. In addition, create the /home/ora920 (ORACLE_HOME) and /var/opt/oracle directories if they don’t already exist. Note that the cluster software expects /var/opt/oracle to exist before the installation begins. Perform these tasks as the root user.

First Node
[root@stc-raclin01]# groupadd -g 500 dba
[root@stc-raclin01]# groupadd -g 501 oinstall
[root@stc-raclin01]# useradd -u 500 -d /home/oracle -g "dba" -G \
"oinstall" -m -s /bin/bash oracle
[root@stc-raclin01]# passwd oracle
[root@stc-raclin01]# mkdir /home/ora920; chmod 775 /home/ora920
[root@stc-raclin01]# chown oracle:dba /home/ora920
[root@stc-raclin01]# mkdir /var/opt/oracle
[root@stc-raclin01]# chown oracle:dba /var/opt/oracle
[root@stc-raclin01]# chmod 775 /var/opt/oracle
4. Verify the kernel version on both the nodes with the uname -r command.

First node
[root@stc-raclin01]# uname -r
2.4.19-64GB-SMP
Second Node
[root@stc-raclin02]# uname -r
2.4.19-64GB-SMP
5. List the contents of the /archives/ocfs-1.0.9 directory and ensure that the OCFS
kernel RPM matches the kernel version that is given by the uname -r command in Step 4.
Install the ocfs-support RPM with the rpm command. This must be done on both the nodes.
First node
[root@stc-raclin01]# cd /archives/ocfs-1.0.9
[root@stc-raclin01]# ls -al
-rw-rw-r--  1 root root    734 Jul  1 16:44 FIXES
-rw-r--r--  1 root root    773 Jul  1 17:30 README.TXT
-rw-r--r--  1 root root    252 Jul  1 17:31 README.TXT.2
-rw-r--r--  1 root root 173029 Jul  1 17:23 ocfs-2.4.19-4GB-1.0.9-4.i586.rpm
-rw-r--r--  1 root root 173578 Jul  1 17:23 ocfs-2.4.19-4GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root root 173498 Jul  1 17:23 ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root root   4861 Jul  1 16:35 ocfs-best-practices.txt
-rw-r--r--  1 root root  38373 Jul  1 17:23 ocfs-support-1.0.9-4.i586.rpm
-rw-r--r--  1 root root 136722 Jul  1 17:23 ocfs-tools-1.0.9-4.i586.rpm
[root@stc-raclin01]# rpm -i ocfs-support-1.0.9-4.i586.rpm

Second node
[root@stc-raclin02 root]# cd /archives/ocfs-1.0.9
[root@stc-raclin02 /archives]# ls -al
-rw-rw-r--  1 root root    734 Jul  1 16:44 FIXES
-rw-r--r--  1 root root    773 Jul  1 17:30 README.TXT
-rw-r--r--  1 root root    252 Jul  1 17:31 README.TXT.2
-rw-r--r--  1 root root 173029 Jul  1 17:23 ocfs-2.4.19-4GB-1.0.9-4.i586.rpm
-rw-r--r--  1 root root 173578 Jul  1 17:23 ocfs-2.4.19-4GB-SMP-1.0.9-4.i586.rpm
-rw-r--r--  1 root root 173498 Jul  1 17:23 ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
6. Next, install the OCFS kernel RPM and the ocfs-tools RPM with the rpm command. Again, perform this operation on both the nodes. After completing this step, restart both the nodes.

First node
[root@stc-raclin01]# rpm -i ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
[root@stc-raclin01]# rpm -i ocfs-tools-1.0.9-4.i586.rpm
[root@stc-raclin01 /]# init 6   (to restart)

Second node
[root@stc-raclin02]# rpm -i ocfs-2.4.19-64GB-SMP-1.0.9-4.i586.rpm
[root@stc-raclin02]# rpm -i ocfs-tools-1.0.9-4.i586.rpm
[root@stc-raclin02]# init 6   (to restart)
7. From a root VNC or Vncviewer session, start the OCFS tool, ocfstool and create the
/etc/ocfs.conf configuration file.
7.1. From the menu bar, select Tasks, and then Generate Config.
7.2. Choose the second Ethernet interface by using the drop-down menu and selecting eth1.
Accept the default port, 7000, and enter the interconnect node name that corresponds to
the entry in the /etc/hosts file. Refer to Step 1. Click the OK button to continue.
You must perform the configuration file generation on both the nodes.
First node
Second node
8. Create two directories called /ocfs and /quorum respectively on both nodes. These directories will be used to mount the shared disks. The directories must be owned by oracle and the group must be dba. Set the permissions for the directories to 775.

First node
[root@stc-raclin01 /]# mkdir /ocfs /quorum
[root@stc-raclin01 /]# chown oracle:dba /ocfs /quorum
[root@stc-raclin01 /]# chmod 775 /ocfs /quorum

Second node
[root@stc-raclin02 /]# mkdir /ocfs /quorum
[root@stc-raclin02 /]# chown oracle:dba /ocfs /quorum
[root@stc-raclin02 /]# chmod 775 /ocfs /quorum
9. Use fdisk to partition the shared disk. Do this once on one node only. The disk should be represented by the disk device /dev/sdd. Create two partitions, a large one to be used for the Oracle data files (/ocfs) and another smaller one to be used for the quorum and server manager/group services shared files (/quorum). After starting fdisk, enter p to print the partition table of the shared disk. There should be no partitions. If by chance there are existing partitions, use the d option to delete them before proceeding. Enter n to create a new partition and then enter p for primary. Make this partition 1. Use the majority of the cylinders for the data disk because you need only a few cylinders for the quorum disk.

Enter n to create another partition and enter p to create a primary partition. Make this partition 2 and use the remaining cylinders that are available for this disk, which will become the quorum file system. Type w, when finished, to write the new partition table.
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (701-4427, default 701): 701
Last cylinder or +size or +sizeM or +sizeK (701-4427, default 4427): 4427

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
[root@stc-raclin01 root]#
10.1. From a VNC session on the first node, start ocfstool and select Tasks from the menu bar. From the Tasks menu, choose Format. Choose the SCSI device sdd1. If you do not see sdd1 in the list, it may be necessary to reboot the node.
10.2. Accept the volume name of Oracle. Change the mountpoint to /quorum. Change the user to oracle and the group to dba. Finally, set the protection to 0777 and click OK. Click Yes when the tool prompts you to confirm that you intend to proceed.

Alternatively, you can perform this from the command line:

[root@stc-raclin01 /]# mkfs.ocfs -F -b 128 -L oracle -m /ocfs -u oracle \
-g dba -p 0775 /dev/sdd1

10.3. Repeat the steps above to create a second OCFS volume using the device /dev/sdd2. Specify the mount point as /ocfs, with a volume name of ocfs, and with the user and group set to oracle and dba, respectively. Set the protection field to 0777. Click Yes when the tool prompts you to confirm that you intend to proceed.
First node
[root@stc-raclin01]# load_ocfs
/sbin/insmod ocfs node_name=racic01 ip_address=192.168.1.11 ip_port=7000
cs=1823 guid=98C704EBD14F6EBC68660060976E5460
[root@stc-raclin01 root]# ocfstool

Second node
[root@stc-raclin02 /]# load_ocfs
/sbin/insmod ocfs node_name=racic02 node_number=0 ip_address=192.168.1.12
ip_port=7000 cs=1840 guid=E09B019CBFEB8579C8540050FC969760
[root@stc-raclin02 /]# ocfstool

If you can mount the OCFS volume from both the nodes, then the tasks have been successfully completed.
Exercise 3: Oracle Cluster Management System
1. Oracle Cluster Manager 9.2.0.1 will now be installed. From a VNC session, log in to the Linux system as the oracle user and change directory to the /archives/Disk1 directory. Execute runInstaller and install OCMS in the /home/ora920 directory. This must be done on both nodes.
[oracle@stc-raclin01]$ cd /archives/Disk1
[oracle@stc-raclin01]$ ./runInstaller
1.2. Specify dba as the UNIX group to use for the installation.
1.3. From a terminal window, execute the /tmp/orainstRoot.sh as the root user.
[root@stc-raclin01]# /tmp/orainstRoot.sh
Creating Oracle Inventory pointer file (/etc/oraInst.loc)
Changing groupname of /home/ora920/oraInventory to dba
1.5. From the list of available products, choose Oracle Cluster Manager 9.2.0.1.0.
1.6. Enter the names of the two nodes that are in your cluster. Check the /etc/hosts
file to ensure accuracy.
[root@stc-raclin01]# cat /etc/hosts
# Node names
127.0.0.1 stc-raclin01 localhost.localdomain localhost
148.2.65.101 stc-raclin01.us.oracle.com stc-raclin01
148.2.65.102 stc-raclin02.us.oracle.com stc-raclin02
# Interconnect names
192.168.1.12 racic02 ic2
192.168.1.11 racic01 ic1
1.7. Enter the names of the interconnects for each node. Again, refer to the /etc/hosts
file to ensure accuracy.
1.8. Accept the default Watchdog parameter value. It will be disabled later in favor of the
hangcheck-timer.
1.9. Specify the quorum.dbf file on the shared OCFS partition /quorum as the
quorum disk device.
1.11. The Install window displays the progress of the installation.
Now, repeat steps 1.1 through 1.12 to install Oracle Cluster Manager 9.2.0.1.0 on
the second node. Do not attempt to start Cluster Manager yet.
2. Configure the hangcheck-timer kernel module to load at system startup by adding the following line to the /etc/rc.local file on both the nodes.

First node
[root@stc-raclin01 root]# vi /etc/rc.local
#!/bin/sh
...
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Second node
[root@stc-raclin02 root]# vi /etc/rc.local
#!/bin/sh
...
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
3. Disable the mechanism that is used to start the Oracle watchdog daemon at system startup. As the root user, change directory to the $ORACLE_HOME/oracm/bin directory and find the ocmstart.sh file. This file starts the watchdog daemon and timer. Edit the ocmstart.sh file and eliminate the watchdog-related commands. Use the # symbol to comment out these lines. Perform these activities on both the nodes. There is a sample ocmstart.sh file in the /archives directory.
First node
[root@stc-raclin01]# cd $ORACLE_HOME/oracm/bin
[root@stc-raclin01]# vi ocmstart.sh
# watchdogd's default backup file
# WATCHDOGD_BAK_FILE=$ORACLE_HOME/oracm/log/wdd.log.bak
# Get arguments
# watchdogd_args=`grep '^watchdogd' $OCMARGS_FILE |\
#   sed -e 's+^watchdogd *++'`
...
# Check watchdogd's existance
# if watchdogd status | grep 'Watchdog daemon active' >/dev/null
# then
#   echo 'ocmstart.sh: Error: watchdogd is already running'
#   exit 1
# fi
...
# Backup the old watchdogd log
# if test -r $WATCHDOGD_LOG_FILE
# then
#   mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args
...

Second node
[root@stc-raclin02]# cd $ORACLE_HOME/oracm/bin
[root@stc-raclin02]# vi ocmstart.sh
...
# then
#   mv $WATCHDOGD_LOG_FILE $WATCHDOGD_BAK_FILE
# fi
# Startup watchdogd
# echo watchdogd $watchdogd_args
# watchdogd $watchdogd_args
...
4. It is now time to update OCMS from 9.2.0.1 to 9.2.0.4. To install the 9.2.0.4 patch set, you must first start the installer as the oracle user as shown below.

[oracle@stc-raclin01]$ /archives/Disk1/runInstaller

4.1. Upon reaching the File Locations window, change the directory that is specified in the Source... field to point to the patch location, /archives/Patch_Linux_9204/stage. Choose the products.jar file in the Browse window. When you click the Next button, the Available Products window appears with the products that may be installed from the location that is specified. Choose Oracle9iR2 Cluster Manager 9.2.0.4.0 and continue.
4.2. Select the Oracle9iR2 Cluster Manager 9.2.0.4.0 option button and continue.
4.3. Provide the node (host) names from the /etc/hosts file in the Public Node
Information window.
4.4. Provide the interconnect names from the /etc/hosts file in the Private Node
Information window.
4.5. If prompted, specify the /quorum/quorum.dbf file on the shared OCFS partition
/quorum as the quorum disk device.
4.6. Review the Summary window to make sure that everything is correct and continue.
The End of Installation window advises you when the Cluster Manager upgrade is complete.
Before continuing to the next step, perform the Cluster Manager upgrade (steps 4.1
through 4.6) on the second node.
Second node
[oracle@stc-raclin02]# cat /home/ora920/oracm/admin/cmcfg.ora
CmDiskFile=/quorum/quorum.dbf
ClusterName=rac9202
PollInterval=1000
MissCount=210
PrivateNodeNames=racic02 racic01
PublicNodeNames=stc-raclin02 stc-raclin01
ServicePort=9998
HostName=stc-raclin02
[oracle@stc-raclin02]# vi ocmargs.ora
6. Create and initialize the quorum file, /quorum/quorum.dbf, using the dd command. Make sure that it is created only once and that it is owned by the root user. Adjust the permissions as shown.
First node only!
# dd if=/dev/zero of=/quorum/quorum.dbf bs=4096 count=65
# chown root /quorum/quorum.dbf
# chmod 666 /quorum/quorum.dbf
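As a quick sanity check (not part of the original exercise), confirm the ownership and permissions with the ls command; the file should be owned by root and readable and writable by all users:
# ls -l /quorum/quorum.dbf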
7. It is now time to automate some of the OCFS and Cluster Manager commands so that they load at system startup. Verify that the /etc/rc.local file contains the lines shown below.
First node
[root@stc-raclin01]# vi /etc/rc.local
#!/bin/sh
# Set Oracle environment variables now as OCFS and CM are loaded as the root user.
ORACLE_HOME=/home/ora920
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/bin
export PATH
...
[ "$?" -eq "0" ] && echo "Group Services Started Successfully"
Second node
[root@stc-raclin02]# vi /etc/rc.local
#!/bin/sh
...
#*********Add the lines below to your /etc/rc.local file************
# Set Oracle environment variables now as OCFS and CM are loaded as the root user.
ORACLE_HOME=/home/ora920
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/oracm/bin:$ORACLE_HOME/bin
export PATH
echo "Loading OCFS Module"
su - root -c "/sbin/load_ocfs"
[ "$?" -eq "0" ] && echo "OCFS Module loaded"
8. Restart both the nodes so that the changes take effect.
First node
[root@stc-raclin01 /]# init 6 (to restart)
Second node
[root@stc-raclin02 /]# init 6 (to restart)
9. Check that the cluster file systems were mounted during boot by using the mount command. Also make sure that Cluster Manager is running; use the ps and grep commands to look for oracm processes.
First Node
# mount
/dev/sda6 on / type ext3 (rw)
...
/dev/sdd2 on /ocfs type ocfs (rw)
/dev/sdd1 on /quorum type ocfs (rw)
# ps -ef|grep oracm
root 21028 1296 0 09:35 ? 00:00:00 oracm
...
root 12168 1296 0 18:51 ? 00:00:00 oracm
Second Node
# mount
/dev/sda6 on / type ext3 (rw)
...
/dev/sdd2 on /ocfs type ocfs (rw)
/dev/sdd1 on /quorum type ocfs (rw)
# ps -ef|grep oracm
root 21028 1296 0 09:35 ? 00:00:00 oracm
...
root 12168 1296 0 18:51 ? 00:00:00 oracm
Exercise 4: Installing Oracle on Linux
1. The Oracle installer, runInstaller, is node aware. This means that Oracle software can
be loaded on multiple nodes from one installer at the same time. For this to work properly,
Oracle Cluster Manager must be working on both nodes. This was done in the Lesson 3
exercise. In addition, user equivalence must be in effect for the user performing the
installation, which is oracle in this exercise.
1.1. Edit the /etc/inetd.conf file as the root user. Find the entry for shell and make sure that it is uncommented. To make inetd reread inetd.conf, find the PID of the inetd process and send it the HUP signal with the kill command, as sketched after the listings below. In addition, create or edit the /etc/hosts.equiv file and enter the host name of the other node.
First node
[root@stc-raclin01]# vi /etc/inetd.conf
...
# nntp stream tcp nowait news /usr/sbin/tcpd /usr/sbin/leafnode
# smtp stream tcp nowait root /usr/sbin/sendmail sendmail -L sendmail -Am
#
# Shell, login, exec and talk are BSD protocols.
# The option "-h" permits ``.rhosts'' files for superuser. Please look at
# man-page of rlogind and rshd to see more configuration possibilities about
# .rhosts files.
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -L
[root@stc-raclin01]$ vi /etc/hosts.equiv
stc-raclin02
Second node
[root@stc-raclin02]$ vi /etc/inetd.conf
...
# nntp stream tcp nowait news /usr/sbin/tcpd /usr/sbin/leafnode
# smtp stream tcp nowait root /usr/sbin/sendmail sendmail -L sendmail -Am
#
# Shell, login, exec and talk are BSD protocols.
# The option "-h" permits ``.rhosts'' files for superuser. Please look at
# man-page of rlogind and rshd to see more configuration possibilities about
# .rhosts files.
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -L
[root@stc-raclin02]# ps -ef|grep inetd
root 1066 1 0 Nov17 ? 00:00:00 /usr/sbin/inetd
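The transcripts above do not show the HUP step itself. A minimal sketch, using the inetd PID of 1066 reported by ps on the second node (substitute the PID reported on each of your own nodes, and repeat on the first node):
[root@stc-raclin02]# kill -HUP 1066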
1.2. Restart both the nodes and test the user equivalency. Perform an rlogin as oracle
or use rsh to run a remote command. If you are not prompted for a password, then
the configuration is correct.
[root@stc-raclin01]# su – oracle
[oracle@stc-raclin01]$ rlogin stc-raclin02
[oracle@stc-raclin02]$
[oracle@stc-raclin02]$ exit
[oracle@stc-raclin01]$
[oracle@stc-raclin01]$ rsh stc-raclin02 uname -a
Linux stc-raclin02 2.4.19-64GB-SMP Fri Feb 21 13:07:49 PST 2003 i686
2. The Oracle database installation will be done by the oracle user on the first and second
nodes. Prepare the users’ environment by creating the .bash_profile file for Oracle
database–related environment variables. Set ORACLE_HOME to /home/ora920 on both
nodes and ORACLE_SID to RACDB1 on the first node and RACDB2 on the second node.
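Only the second node's settings are reproduced in the excerpt below. A minimal sketch of the first node's .bash_profile, assuming it differs only in the instance name (ORACLE_SID=RACDB1, as stated above):
First node
export ORACLE_HOME=/home/ora920
export ORACLE_BASE=/home/ora920
export ORACLE_SID=RACDB1
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib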
Second node
export ORACLE_HOME=/home/ora920
export ORACLE_BASE=/home/ora920
export ORACLE_SID=RACDB2
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib
3. Create the shared server manager/group services file using the touch command. Make sure that it is owned by the oracle user and that its group is dba. This should be done from one node only. Change the file permissions to 666 with the chmod command.
First node only
[root@stc-raclin01]# touch /quorum/srvm.dbf
[root@stc-raclin01]# chown oracle:dba /quorum/srvm.dbf
[root@stc-raclin01]# chmod 666 /quorum/srvm.dbf
4.1. In the Cluster Node Selection window, select the local node in your cluster.
4.2. Make sure that the installer is using the products.jar file that is found in the Disk1 directory.
4.3. In the Available Products window, click the Oracle9i Database 9.2.0.1.0 option
button.
4.4. In the Installation Types window, click the Custom option button.
4.5. Make sure that Oracle9i Real Application Clusters 9.2.0.1 is selected.
4.6. Although it is possible to install the listed components in some place other than
ORACLE_HOME, there is no need to do so. Accept the default destination for the
Oracle Universal Installer and JRE components.
4.7. Enter the configuration file name that will be used by both the nodes. Use the
/quorum/srvm.dbf file that you created in step three of this lesson exercise.
4.8. Enter the group name dba for both the database administrator and database operator
groups.
4.9. In the Oracle Management Server Repository window, indicate that you will use an
existing repository.
4.10. In the Create Database window, select the No option button to defer database
creation. It will be performed by using DBCA after the 9.2.0.2.0 database patch is
applied.
4.11. Wait for the installation to complete. Monitor the progress from the Install window.
4.12. Just before the installation completes, you are prompted to execute the root.sh
script. Open a terminal window as the root user and execute the root.sh script
from $ORACLE_HOME.
[root@stc-raclin01] # cd /home/ora920
[root@stc-raclin01] # ./root.sh
4.13. After the binaries are installed, you are prompted to configure network services with
NETCA. Click on Yes.
4.13.2. Defer Directory Services configuration by clicking on the No… radio button.
4.13.3. On the next screen, accept the default listener name, LISTENER and click Next to
continue.
4.13.4. On the Select Protocols screen, TCP will already be selected. Click on the Next
button to continue.
4.13.5. Accept the default port number of 1521. Click on the Next button to continue.
4.13.6. The next screen asks whether you would like to configure another listener. Click the No radio button to continue. The next screen informs you that listener configuration is complete; click the Next button to continue. On the Naming Methods Configuration screen, click the No radio button. This preserves tnsnames.ora as the preferred naming method. Click Finish on the next screen to exit the Network Configuration Assistant.
4.14. After the binaries are installed, you are prompted to configure Enterprise Manager with EMCA. Cancel this operation.
4.15. The next window advises that the installation is completed but some configuration
tools did not complete. This is normal; exit the installer. If the Enterprise Manager
console appears upon exit, cancel the operation.
4.16. View the /var/opt/oracle/srvConfig.loc file on both the nodes and make
sure that the server manager/group services shared file is properly specified.
[root@stc-raclin01]# cat /var/opt/oracle/srvConfig.loc
srvconfig_loc=/quorum/srvm.dbf
4.17. As the oracle user, stop the listener before applying the 9.2.0.4 patch.
[oracle@stc-raclin01]$ lsnrctl stop
5. The Universal Installer must be upgraded before the 9.2.0.4 database patch can be applied. To do this, start the installer from $ORACLE_HOME/bin.
[oracle@stc-raclin01]$ cd $ORACLE_HOME/bin
[oracle@stc-raclin01]$ ./runInstaller
ern
Int
cle
ra
O
Oracle9i Database: Real Application Clusters on Linux B-38
5.1. Click Next on the Welcome screen and choose both nodes in the Cluster Node Selection screen. Click the Next button to continue.
5.2. In the Available Products window, click the Oracle Universal Installer radio button and click Next to continue.
5.3. Make sure the 9.2.0.4 patch location appears in the Source field.
5.5. Review the information in the Summary page and click the Install button to apply the installer upgrade.
5.6. After the upgrade is complete, exit the installer. Before the new installer can be used, you must run the following command as the oracle user on both nodes:
First Node
$ cd $ORACLE_BASE/oui/bin/linux
$ ln -s libclntsh.so.9.0 libclntsh.so
Second Node
$ cd $ORACLE_BASE/oui/bin/linux
$ ln -s libclntsh.so.9.0 libclntsh.so
6. The next step is to apply the Oracle9iR2 9.2.0.4.0 patch on both nodes. Start
runInstaller from the Disk1 directory.
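The command itself is not reproduced at this point in the exercise. A minimal sketch, assuming the patch set Disk1 directory sits under the same /archives staging area used earlier (substitute the actual path of your patch set media):
[oracle@stc-raclin01]$ cd /archives/Patch_Linux_9204/Disk1
[oracle@stc-raclin01]$ ./runInstaller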
6.3. In the Available Products window, choose the Oracle9iR2 Patch Set 9.2.0.4.0 option button.
6.5. Just before the upgrade is finished, you will be prompted to run the root.sh script
as the root user. This needs to be done on both nodes.
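The commands are not reproduced here; a minimal sketch, assuming the same $ORACLE_HOME (/home/ora920) used earlier in this exercise:
First node
[root@stc-raclin01]# cd /home/ora920
[root@stc-raclin01]# ./root.sh
Second node
[root@stc-raclin02]# cd /home/ora920
[root@stc-raclin02]# ./root.sh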
6.6. You must check for the existence of several directories on the second node. Sometimes these directories are not properly copied during the install, and problems will arise during database creation if they are not there. Check for them and create them if necessary.
Second Node only
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/rdbms/audit
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/rdbms/log
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/network/log
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/Apache/Apache/logs
[oracle@stc-raclin02]$ mkdir -p $ORACLE_HOME/Apache/Jserv/logs
Exercise 5: Building the Database
1. The database is now ready to be created. Use the Database Configuration Assistant (DBCA) to do this. When using DBCA to install a cluster database, the dbca executable becomes a client of GSD. You will need to initialize the shared configuration file and start GSD, as sketched after the process listing below. Use the ps and grep commands to make sure that GSD is successfully started on both nodes.
On both nodes
$ ps -ef|grep -i gsd
oracle 1296 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...
oracle 1297 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...
oracle 1298 1295 0 10:47 ? 00:00:00 /home/.../jre -DPROGRAM=gsd ...
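The commands that initialize the shared configuration file and start GSD are not reproduced in this excerpt. A minimal sketch, assuming the standard 9.2 srvconfig and gsdctl utilities and the /quorum/srvm.dbf file created in Exercise 4:
First node only (initialize the shared configuration file)
$ srvconfig -init
Both nodes (start and check the Global Services Daemon)
$ gsdctl start
$ gsdctl stat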
If DBCA is started without GSD running, an error results.
2. As the oracle user, change directory to $ORACLE_HOME/bin and start DBCA. When the opening screen appears, choose the Oracle cluster database radio button. Use the -datafileDestination option to let dbca know where the data files should be created.
[oracle@stc-raclin01]$ cd $ORACLE_HOME/bin
[oracle@stc-raclin01]$ dbca -datafileDestination /ocfs
2.1. On the Welcome screen, select the Oracle cluster database option and click Next to continue. On the next screen, make sure that both the nodes in your cluster are highlighted.
2.2. On the Operations screen, click the “Create a database” button.
2.3. Select the New Database radio button from the Database Templates screen.
2.4. You are prompted for a global database name and SID prefix. Enter RACDB in both
fields.
2.5. The Database Configuration Assistant: Step 6 of 10: Database Features window opens. Clear all check boxes and confirm the deletion of the associated tablespaces. Choose Human Resources and Sales History under Example Schemas.
2.6. Click Standard database features, uncheck all options, and confirm the deletion of the associated tablespaces. Close the Standard database features window and click Next.
2.7. Choose the Dedicated Server Mode radio button on the Database Connection Options screen.
2.8. Click the Memory folder on the Initialization Parameters screen. Click the Custom
radio button and accept the default values.
2.9. Click the File Locations tab next. Review the file locations by clicking the File
Locations Variables button. Click the Next button to continue.
2.10. Click on Controlfile in the Storage tree on the left. Remove control03.ctl and
control04.ctl by highlighting each one and pressing the delete key.
2.11. On the Options tab, change the maximum number of instances to 4 and the maximum log history to 100.
2.12. Expand Tablespaces on the left side and select the SYSTEM tablespace. Click the General tab and change the size to 110 MB.
2.13. Click the Storage folder tab and click the Managed in the Dictionary radio button. Set Initial to 32 KB, set Next to 128 KB, and set Increment Size by 0.
2.14. Select TEMP in the Storage tree and change the size to 10 MB.
2.15. Select UNDOTBS1 in the Storage tree and change the size to 50 MB. Select UNDOTBS2 and set its size to 50 MB also. Click OK to accept the new file sizes and return to the Database Storage window.
2.16. Click the Next button on the Database Storage window and a review window will appear. You can browse the file locations, tablespaces, parameters, and other settings that will be used in the database creation. When you are finished, click the OK button and the database creation will begin.
2.17. Click the Finish button in the Creation Options window. When the Summary window
opens, review the summary information and click the OK button. The Progress
window will appear.
2.18. When the cluster database has been created, you are prompted for passwords for the
SYS and SYSTEM accounts. For classroom purposes, make both passwords
oracle. Click the Exit button to close the window. Congratulations, you are
finished.
3. Your cluster database should now be up and running. Enter the following query at the SQL
prompt:
SQL> SELECT instance_number inst_no, instance_name inst_name,
parallel, status, database_status db_status, active_state state,
host_name host FROM gv$instance;
DB_STATUS indicates the database state, STATUS indicates the startup condition of the
database, and PAR (parallel) indicates whether the database is operating in cluster mode.
If your output resembles the sample output sketched below, your cluster database is running normally and SRVCTL is configured properly.
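The sample output itself is not reproduced in this excerpt. For this two-node cluster, output along the following lines would be expected (a sketch only; column widths will differ):
INST_NO INST_NAME  PAR STATUS DB_STATUS STATE  HOST
------- ---------- --- ------ --------- ------ ------------
      1 RACDB1     YES OPEN   ACTIVE    NORMAL stc-raclin01
      2 RACDB2     YES OPEN   ACTIVE    NORMAL stc-raclin02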