Oracle RAC 11g on Enterprise Linux with NFS Shared Storage
Content
Overview
Architecture
Enterprise Linux Installation and Setup
Create Accounts
NFS Configuration
Enabling SSH User Equivalency
Install Oracle Clusterware
Install Oracle Database Software
Create Listener Configuration
Create the Cluster Database
Transparent Application Failover (TAF)
Facts Sheet RAC
Troubles during the Installation
Overview
In the past, it was not easy to become familiar with Oracle Real Application Clusters (RAC): the price of the
hardware required for a typical production RAC configuration put this goal out of reach for most people.
Shared storage file systems, or even cluster file systems (e.g. OCFS2), are primarily used
in a storage area network where all nodes directly access the storage on the
shared file system. This makes it possible for nodes to fail without affecting access to
the file system from the other nodes. Shared disk file systems are normally used in a
high-availability cluster.
At the heart of Oracle RAC is a shared disk subsystem. All nodes in the cluster must be
able to access all of the data, redo log files, control files and parameter files for all nodes
in the cluster. The data disks must be globally available to allow all nodes to access the
database. Each node has its own redo log and control files but the other nodes must be able to access them in order to
recover that node in the event of a system failure.
Architecture
The following RAC Architecture should only be used for test environments.
For our RAC test environment, we use a normal Linux server acting as a shared storage server via NFS, and we use
NFS to provide the shared storage for the RAC installation. NFS (Network File System) is a platform-independent
technology created by Sun Microsystems that allows shared access to files stored on other computers via an
interface called the Virtual File System (VFS), which runs on top of TCP/IP.
Network Configuration
Each node must have one static IP address for the public network and one static IP address for the private cluster
interconnect. The private interconnect should only be used by Oracle. Note that the /etc/hosts settings are the same
for both nodes Gentic and Cellar.
Host Gentic
Host Cellar
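For illustration only, an /etc/hosts along the following lines would match the interfaces reported later by
runcluvfy.sh; the -priv/-vip names and the virtual IP addresses themselves are assumptions and must be adapted
to your own network:
# Public network (eth0)
192.168.138.35   gentic
192.168.138.36   cellar
# Private interconnect (eth1)
192.168.137.35   gentic-priv
192.168.137.36   cellar-priv
# Virtual IPs (managed by Oracle Clusterware, assumed addresses)
192.168.138.45   gentic-vip
192.168.138.46   cellar-vip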
Note that the virtual IP addresses only need to be defined in the /etc/hosts file (or your DNS) for both nodes. The
public virtual IP addresses will be configured automatically by Oracle when you run the Oracle Universal Installer,
which starts Oracle's Virtual Internet Protocol Configuration Assistant (VIPCA). All virtual IP addresses will be activated
when the srvctl start nodeapps -n <node_name> command is run. This is the Host Name/IP Address that will be
configured in the clients' tnsnames.ora file.
About IP Addresses
Virtual IP address A public internet protocol (IP) address for each node, to be used as the Virtual IP
address (VIP) for client connections. If a node fails, then Oracle Clusterware fails
over the VIP address to an available node. This address should be in the
/etc/hosts file on each node. The VIP should not be in use at the time of the
installation, because this is an IP address that Oracle Clusterware manages.
1. The new node re-arps the world indicating a new MAC address for the
address. For directly connected clients, this usually causes them to see errors
on their connections to the old address.
2. Subsequent packets sent to the VIP go to the new node, which will send error
RST packets back to the clients. This results in the clients getting errors
immediately.
This means that when the client issues SQL to the node that is now down, or
traverses the address list while connecting, rather than waiting on a very long
TCP/IP time-out (~10 minutes), the client receives a TCP reset. In the case of SQL,
this is ORA-3113. In the case of connect, the next address in tnsnames is used.
Going one step further, you can make use of Transparent Application Failover (TAF). With
TAF successfully configured, it is possible to avoid ORA-3113 errors altogether.
Public IP address The public IP address name must be resolvable to the hostname. You can register
both the public IP and the VIP address with the DNS. If you do not have a DNS,
then you must make sure that both addresses (the public IP and the VIP) are in the /etc/hosts
file on all cluster nodes.
Private IP address A private IP address for each node serves as the private interconnect address for
internode cluster communication only. The following must be true for each private
IP address:
Both RAC nodes should also keep their clocks synchronized, so configure NTP on each node to point at the same time server.
/etc/ntp.conf
server swisstime.ethz.ch
restrict swisstime.ethz.ch mask 255.255.255.255 nomodify notrap noquery
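To have the time service started at boot on Enterprise Linux, the standard init script can be enabled, for example:
root> chkconfig ntpd on
root> service ntpd restart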
The kernel parameters will need to be defined on every node within the cluster every time the machine is booted. This
section focuses on configuring both Linux servers - getting each one prepared for the Oracle RAC 11g installation. This
includes verifying enough swap space, setting shared memory and semaphores, setting the maximum number of file
handles, setting the IP local port range, setting shell limits for the oracle user and activating all kernel parameters for
the system.
/etc/sysctl.conf
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
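In addition to the values above, the text also mentions shared memory, semaphore and file handle settings; typical
generic values (assumptions taken from Oracle's standard recommendations, not from this installation) would also
go into /etc/sysctl.conf:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
The parameters can then be activated without a reboot:
root> sysctl -p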
/etc/pam.d/login
# For Oracle
session required /lib/security/pam_limits.so
session required pam_limits.so
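The shell limits for the oracle user are normally defined in /etc/security/limits.conf; the usual entries, matching
the ulimit call in the .bash_profile below (assumed values), are:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536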
Create Accounts
Create the following groups and the oracle user on all three hosts (gentic, cellar and the NFS server opal).
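A matching pair of groups could be created as follows; GID 500 for oinstall is taken from the useradd command
below, the dba GID is an assumption:
root> groupadd -g 500 oinstall
root> groupadd -g 501 dba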
root> useradd -u 400 -g 500 -G dba -c "Oracle Owner" -d /home/oracle -s /bin/bash oracle
root> passwd oracle
$HOME/.bash_profile
#!/bin/bash
TZ=MET; export TZ
PATH=${PATH}:$HOME/bin
ENV=$HOME/.bashrc
BASH_ENV=$HOME/.bashrc
USERNAME=`whoami`
POSTFIX=/usr/local/postfix
# LANG=en_US.UTF-8
LANG=en_US
COLUMNS=130
LINES=45
DISPLAY=192.168.138.11:0.0
export USERNAME ENV COLUMNS LINES TERM PS1 PS2 PATH POSTFIX BASH_ENV LANG DISPLAY
if [ `tty` != "/dev/tty1" ]
then
# TERM=linux
TERM=vt100
else
# TERM=linux
TERM=vt100
fi
if [ -t 0 ]
then
stty erase "^H" kill "^U" intr "^C" eof "^D"
stty cs8 -parenb -istrip hupcl ixon ixoff tabs
fi
PATH=${POSTFIX}/bin:${POSTFIX}/sbin:${POSTFIX}/sendmail:${ORACLE_HOME}/bin
PATH=${PATH}:${ORA_CRS_HOME}/bin:/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=${PATH}:/usr/local/sbin:/usr/bin/X11:/usr/X11R6/bin
PATH=${PATH}:.
export PATH
: > $HOME/.bash_history
cat .lastlogin
term=`tty`
echo -e "Last login at `date '+%H:%M, %h %d'` on $term" >.lastlogin
echo -e " "
if [ $LOGNAME = "root" ]
then
echo -e "WARNING: YOU ARE SUPERUSER !!!"
echo -e " "
fi
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
if [ $USER = "oracle" ]
then
ulimit -u 16384 -n 65536
fi
umask 022
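The PATH above refers to ${ORACLE_HOME} and ${ORA_CRS_HOME}, so both must be exported in the login environment as
well. A minimal sketch, using the CRS home from this article and an assumed database home:
ORACLE_BASE=/u01/app/oracle
ORA_CRS_HOME=$ORACLE_BASE/crs
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORACLE_BASE ORA_CRS_HOME ORACLE_HOME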
$HOME/.bashrc
alias more=less
alias up='cd ..'
alias kk='ls -la | less'
alias ll='ls -la'
alias ls='ls -F'
alias ps='ps -ef'
alias home='cd $HOME'
alias which='type -path'
alias h='history'
#
# Do not produce core dumps
#
# ulimit -c 0
PS1="`whoami`@\h:\w> "
export PS1
PS2="> "
export PS2
NFS Configuration
The Oracle Clusterware shared files are the Oracle Cluster Registry (OCR) and the CRS voting disk. They will be
installed by the Oracle Installer on the shared disk on the NFS server. Besides these two shared files, all Oracle
datafiles will also be created on the shared disk.
Create and export Shared Directories on NFS-Server (Opal)
root@opal> mkdir -p /u01/crscfg
root@opal> mkdir -p /u01/votdsk
root@opal> mkdir -p /u01/oradat
root@opal> chown -R oracle:oinstall /u01/crscfg
root@opal> chown -R oracle:oinstall /u01/votdsk
root@opal> chown -R oracle:oinstall /u01/oradat
root@opal> chmod -R 775 /u01/crscfg
root@opal> chmod -R 775 /u01/votdsk
root@opal> chmod -R 775 /u01/oradat
/etc/exports
/u01/crscfg *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/u01/votdsk *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/u01/oradat *(rw,sync,insecure,root_squash,no_subtree_check)
Export Options:
rw Allow both read and write requests on this NFS volume. The default is to disallow
any request which changes the filesystem. This can also be made explicit by using
the ro option.
sync Reply to requests only after the changes have been committed to stable storage.
In this and future releases, sync is the default, and async must be explicitly
requested if needed. To help make system administrators aware of this change,
'exportfs' will issue a warning if neither sync nor async is specified.
no_wdelay This option has no effect if async is also set. The NFS server will normally delay
committing a write request to disc slightly if it suspects that another related write
request may be in progress or may arrive soon. This allows multiple write requests
to be committed to disc with the one operation which can improve performance. If
an NFS server received mainly small unrelated requests, this behaviour could
actually reduce performance, so no_wdelay is available to turn it off. The default
can be explicitly requested with the wdelay option.
no_root_squash root_squash map requests from uid/gid 0 to the anonymous uid/gid.
no_root_squash turns off root squashing.
insecure The insecure option allows clients with NFS implementations that don't use a
reserved port for NFS to connect to the server.
no_subtree_check This option disables subtree checking. Subtree checking adds another level of security, but
can be unreliable in some circumstances.
In order to perform this check, the server must include some information about the
location of the file in the "filehandle" that is given to the client. This can cause
problems with accessing files that are renamed while a client has them open
(though in many simple cases it will still work).
Subtree checking is also used to make sure that files inside directories to which only
root has access can only be accessed if the filesystem is exported with
no_root_squash (see below), even if the file itself allows more general access.
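After editing /etc/exports, re-export the file systems so that the new entries become active, for example:
root@opal> exportfs -ra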
root@opal> exportfs -v
/u01/crscfg <world>(rw,no_root_squash,no_subtree_check,insecure_locks,anonuid=65534,anongid=65534)
/u01/votdsk <world>(rw,no_root_squash,no_subtree_check,insecure_locks,anonuid=65534,anongid=65534)
/u01/oradat <world>(rw,wdelay,insecure,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/etc/fstab
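The fstab entries for the NFS mounts on the two RAC nodes could look roughly like this; the mount options are the
generic Oracle recommendations for NFS (assumptions, not values taken from this installation) and are explained
below:
opal:/u01/crscfg /u01/crscfg nfs rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,noac 0 0
opal:/u01/votdsk /u01/votdsk nfs rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,noac 0 0
opal:/u01/oradat /u01/oradat nfs rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
Create the mount points on both RAC nodes and mount the file systems:
root> mkdir -p /u01/crscfg /u01/votdsk /u01/oradat
root> mount -a -t nfs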
Mount Options:
timeo The default value for UDP is 7 tenths of a second. The default value for TCP is 60 seconds.
After the first timeout, the timeout is doubled after each successive timeout until a maximum
timeout of 60 seconds is reached or enough retransmissions have occurred to cause a
major timeout.
Then, if the filesystem is hard mounted, each new timeout cascade restarts at twice the initial
value of the previous cascade, again doubling at each retransmission. The maximum timeout is
always 60 seconds.
rsize The number of bytes NFS uses when reading files from an NFS server. The rsize is negotiated
between the server and client to determine the largest block size that both can support. The
value specified by this option is the maximum size that could be used; however, the actual size
used may be smaller. Note: Setting this size to a value less than the largest supported block
size will adversely affect performance.
wsize The number of bytes NFS uses when writing files to an NFS server. The wsize is negotiated
between the server and client to determine the largest block size that both can support. The
value specified by this option is the maximum size that could be used; however, the actual size
used may be smaller. Note: Setting this size to a value less than the largest supported block
size will adversely affect performance.
actimeo Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value.
There is no default value.
nfsvers Use an alternate RPC version number to contact the NFS daemon on the remote host. This
option is useful for hosts that can run multiple NFS servers. The default value depends on
which kernel you are using.
noac Disable all forms of attribute caching entirely. This exacts a significant performance penalty
but it allows two different NFS clients to get reasonable results when both clients are actively
writing to a common export on the server.
Enabling SSH User Equivalency
Installing Oracle Clusterware and the Oracle Database software is only performed from one node in a RAC cluster.
When running the Oracle Universal Installer (OUI) on that node, it uses the ssh and scp commands to run remote
commands on, and copy files (the Oracle software) to, all other nodes within the RAC cluster.
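User equivalency is set up by generating an RSA key pair for the oracle user on each node (accept the default file
name and an empty passphrase), along these lines:
oracle> mkdir -p ~/.ssh
oracle> chmod 700 ~/.ssh
oracle> ssh-keygen -t rsa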
Host Cellar
Host Gentic
Host Cellar
oracle@cellar> cd ~/.ssh
oracle@cellar> scp id_rsa.pub gentic:/home/oracle/.ssh/authorized_keys
Host Gentic
oracle@gentic> cd ~/.ssh
oracle@gentic> scp id_rsa.pub cellar:/home/oracle/.ssh/authorized_keys
Host Cellar
Host Gentic
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv2TjN0KTuvqxr3XBHG2JFecCqZ0aPqGO/8cqBtdg
X9qQuLIP5zGpKGrDcRVULvLncGSifVbDvV89LGFnXiv0FZ+8PHD1snGX5M4YyUMcv362wAaW3g2k
Gp1ky0jQias5CZKtC42f94qt6rU1gm4E6Xh7U2QsLkEC0gPiYlGR2Zey4X01Eb18kM55eeGSFjoo
v58T99MjdHFmxEWWvckhwudYZ4sFYbGxqJgywKtSNT0WI9HAGL3LNLBBjmLbbAnxrI1iDqTGMQIq
zTf+p/E+2K/LrG9oUrN3qdT0EGciD0lcxO6Ke7O/npnCscRoUKPlIChsIN4ruJxikurOMzb37Q==
oracle@gentic
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwQpfO1b5wSF99b/XRZny/xC9/d2l1Y2oF+YT3Qle
8VumvmNBawCmSucUd9q8Jp6PdgTJLpMO60BwbhsrlqCqAUZ2iCgLBsFvAGjQMrBy1b01yRDGlfi3
pyH1FycuzcyD6S+WSa4CH0A7obAr71CDThzU8LRvGMftXsYN+yKPFYhoXUbw0OC7MQs0BfVKaUo/
CXhMKTYUqPdALm0I0TdlQ2uYpg7iXLIxAVV+qB4jH5RaMWRrFETtp9OErkkACA5O/lb8Fy0gYcDs
M6Sqnv9Nw596vSKn7CXATu8C9HgIbpwdGVc+TwEiQKdMKbgT7z5Ep8LFHrwSm8GtSChR/ILdvw==
oracle@cellar
Host Gentic
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwQpfO1b5wSF99b/XRZny/xC9/d2l1Y2oF+YT3Qle
8VumvmNBawCmSucUd9q8Jp6PdgTJLpMO60BwbhsrlqCqAUZ2iCgLBsFvAGjQMrBy1b01yRDGlfi3
pyH1FycuzcyD6S+WSa4CH0A7obAr71CDThzU8LRvGMftXsYN+yKPFYhoXUbw0OC7MQs0BfVKaUo/
CXhMKTYUqPdALm0I0TdlQ2uYpg7iXLIxAVV+qB4jH5RaMWRrFETtp9OErkkACA5O/lb8Fy0gYcDs
M6Sqnv9Nw596vSKn7CXATu8C9HgIbpwdGVc+TwEiQKdMKbgT7z5Ep8LFHrwSm8GtSChR/ILdvw==
oracle@cellar
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv2TjN0KTuvqxr3XBHG2JFecCqZ0aPqGO/8cqBtdg
X9qQuLIP5zGpKGrDcRVULvLncGSifVbDvV89LGFnXiv0FZ+8PHD1snGX5M4YyUMcv362wAaW3g2k
Gp1ky0jQias5CZKtC42f94qt6rU1gm4E6Xh7U2QsLkEC0gPiYlGR2Zey4X01Eb18kM55eeGSFjoo
v58T99MjdHFmxEWWvckhwudYZ4sFYbGxqJgywKtSNT0WI9HAGL3LNLBBjmLbbAnxrI1iDqTGMQIq
zTf+p/E+2K/LrG9oUrN3qdT0EGciD0lcxO6Ke7O/npnCscRoUKPlIChsIN4ruJxikurOMzb37Q==
oracle@gentic
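Before starting the installer it is worth checking that each node can run a command on the other without a
password prompt, for example:
oracle@gentic> ssh cellar date
oracle@cellar> ssh gentic date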
So, what exactly is the Oracle Clusterware responsible for? It contains all of the cluster and database configuration
metadata along with several system management features for RAC. It allows the DBA to register and invite an Oracle
instance (or instances) to the cluster. During normal operation, Oracle Clusterware will send messages (via a special
ping operation) to all nodes configured in the cluster, often called the «heartbeat». If the heartbeat fails for any of the
nodes, it checks with the Oracle Clusterware configuration files (on the shared disk) to distinguish between a real node
failure and a network failure.
Host Gentic
Before installing the clusterware, check that the prerequisites have been met using the runcluvfy.sh utility
in the clusterware root directory.
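A typical pre-installation check for the two nodes is shown below as an example of the usual cluvfy syntax:
oracle> ./runcluvfy.sh stage -pre crsinst -n gentic,cellar -verbose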
Interfaces found on subnet "192.168.138.0" that are likely candidates for a private interconnect:
cellar eth0:192.168.138.36
gentic eth0:192.168.138.35
Interfaces found on subnet "192.168.137.0" that are likely candidates for a private interconnect:
cellar eth1:192.168.137.36
gentic eth1:192.168.137.35
WARNING:
Could not find a suitable set of interfaces for VIPs.
Install Clusterware
Make sure that the X11 server is started and reachable.
Start the Installer and make sure that there are no errors shown in the Installer window.
oracle> ./runInstaller
Enter the cellar node using the [Add] button.
Specify eth0 as the Public Interface.
Enter /u01/crscfg/crs_registry for the Registry (OCR).
Enter /u01/votdsk/voting_disk for the voting disk.
Host Gentic
oracle> cd /u01/app/oraInventory
oracle> su
root> ./orainstRoot.sh
Host Cellar
oracle> cd /u01/app/oraInventory
oracle> su
root> ./orainstRoot.sh
Host Gentic
root> cd /u01/app/oracle/crs
root> ./root.sh
Linux gentic 2.6.18-8.el5PAE #1 SMP Tue Jun 5 23:39:57 EDT 2007 i686 i686 i386 GNU/Linux
Last login at 10:03, Sep 19 on /dev/pts/1
Host Cellar
root> cd /u01/app/oracle/crs
root> ./root.sh
Linux cellar 2.6.18-8.el5PAE #1 SMP Tue Jun 5 23:39:57 EDT 2007 i686 i686 i386 GNU/Linux
Last login at 10:10, Sep 19 on /dev/pts/0
Done.
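Once root.sh has completed on both nodes, the state of the cluster can be checked, for instance with:
root> /u01/app/oracle/crs/bin/olsnodes -n
root> /u01/app/oracle/crs/bin/crs_stat -t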
You will not use the «Create Database» option when installing the software. You will, instead, create the database
using the Database Creation Assistant (DBCA) after the install.
Host Cellar