Upgrading RAC From 10g To 11g-Linux
By Bhavin Hingu
CLICK HERE for the "Step By Step: Upgrade 10gR2 RAC to 11gR2 RAC on Linux"
This document shows, step by step, the upgrade of a 3-node 10gR2 RAC database on ASM to 11gR1 RAC. The upgrade path below allows upgrading 10gR2 CRS to 11gR1 CRS with no downtime using a rolling upgrade. A rolling upgrade of ASM is possible only starting from Release 11gR1 (11.1.0.6) and is not backward compatible, which means the rolling upgrade method cannot be used to upgrade 10gR2 ASM to 11gR1 ASM.
Upgrading the 10gR2 RAC database itself requires an outage; the total downtime may be further avoided or minimized by using a Logical Standby Database in the upgrade process (not covered here).
                                  Existing 10gR2 RAC Setup (Before Upgrade)   Target 11gR1 RAC Setup (After Upgrade)
Clusterware                       Oracle 10gR2 Clusterware 10.2.0.3           Oracle 11gR1 Clusterware 11.1.0.6
ASM Binaries                      10gR2 RAC 10.2.0.3                          11gR1 RAC 11.1.0.6
Cluster Name                      Lab                                         Lab
Cluster Nodes                     node1, node2, node3                         node1, node2, node3
Clusterware Home                  /u01/app/oracle/crs (CRS_HOME)              /u01/app/oracle/crs (CRS_HOME)
Clusterware Owner                 oracle:(oinstall, dba)                      oracle:(oinstall, dba)
VIPs                              node1vip, node2vip, node3vip                node1vip, node2vip, node3vip
SCAN                              N/A                                         N/A
SCAN_LISTENER Host/port           N/A                                         N/A
OCR and Voting Disks Storage Type Raw Devices                                 Raw Devices
OCR Disks                         /dev/raw/raw1, /dev/raw/raw2                /dev/raw/raw1, /dev/raw/raw2
Upgrade Path:
Upgrade 10gR2 (10.2.0.3) Clusterware to 11gR1 (11.1.0.6): Rolling Upgrade, No Downtime.
Upgrade 10gR2 ASM_HOME (10.2.0.3) to 11gR1 RAC (11.1.0.6): ASM and Database Downtime required.
Upgrade 10gR2 Database to 11gR1 RAC (11.1.0.6): Database Downtime required.
Pre‐Upgrade tasks:
Install/Upgrade RPMs required for 11gR1 RAC and 11gR2 RAC installation
Set 11gR1 specific Kernel Parameters
Update the TIMEZONE file version
Backing up ORACLE_HOMEs/database
Minimum Required RPMs for 11gR1 RAC on OEL 5.5 (All the 3 RAC Nodes):
binutils-2.17.50.0.6-2.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.3.4-2
gcc-4.1.1-52
gcc-c++-4.1.1-52
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
make-3.81-1.1
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
The command below verifies whether the specified RPMs are installed or not. Any missing RPMs can be installed from the OEL Media Pack.
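The verification command itself did not survive in this copy of the page; a minimal sketch, using the package names from the list above (rpm -q checks presence only, not the exact versions), could be:

```shell
# Check each required package and report any that are missing.
# Versions must still be compared manually against the list above.
required_rpms="binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
glibc glibc-common glibc-devel glibc-headers gcc gcc-c++ libaio libaio-devel \
libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel"
for p in $required_rpms; do
  rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
done
```

Run it as root on all 3 RAC nodes; any `MISSING:` line points at a package to install from the media pack.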
NOTE: The cvuqdisk RPM in 11gR1 has the same version as the one available in 10gR2 (cvuqdisk-1.0.1-1), so I did not have to replace the 10gR2 cvuqdisk with the 11gR1 one.
Excerpt from /etc/sysctl.conf:

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multithreaded applications.
kernel.core_uses_pid = 1

Apply the changes:

sysctl -p
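Only one parameter survived in this copy of the listing. For reference, a typical 11gR1 /etc/sysctl.conf set on OEL 5 looks like the sketch below; the values are my assumptions based on the 11gR1 install guide (kernel.shmmax in particular depends on your RAM), not this cluster's actual settings:

```conf
# Typical 11gR1 kernel parameters (example values; verify against the
# 11gR1 Installation Guide for your platform and memory size).
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```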
Verify the current version of the timezone file in the 10gR2 database. The patches referenced for the timezone file update are:
5746875
5632264
Backing Up ORACLE_HOMEs/database:
Steps I followed to back up the ORACLE_HOMEs before the upgrade (these apply to both 10g and 11gR1 databases):
On node1:
mkdir backup
cd backup
dd if=/dev/raw/raw1 of=ocr_disk_10gr2.bkp
dd if=/dev/raw/raw3 of=voting_disk_10gr2.bkp
cp /etc/inittab etc_inittab
mkdir etc_init_d
cd etc_init_d
cp /etc/init.d/init* .
On node2:
mkdir backup
cd backup
cp /etc/inittab etc_inittab
mkdir etc_init_d
cd etc_init_d
cp /etc/init.d/init* .
On node3:
mkdir backup
cd backup
cp /etc/inittab etc_inittab
mkdir etc_init_d
cd etc_init_d
cp /etc/init.d/init* .
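The same per-node file copies can be wrapped in a small helper; this is a sketch (the backup_files name is my own), run as root on each node:

```shell
# Back up a list of files into a backup dir, flattening each path the same
# way as the manual steps above (/etc/inittab -> etc_inittab).
backup_files() {
  dest="$1"; shift
  mkdir -p "$dest"
  for f in "$@"; do
    [ -e "$f" ] && cp "$f" "$dest/$(printf '%s' "$f" | sed 's|^/||; s|/|_|g')"
  done
}

# Example (on each node; the init.d script names are illustrative):
# backup_files /root/backup /etc/inittab /etc/init.d/init.crs \
#              /etc/init.d/init.crsd /etc/init.d/init.cssd /etc/init.d/init.evmd
```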
Step By Step: Upgrade Clusterware, ASM and Database from 10.2.0.3 to 11.1.0.6.
Upgrade 10gR2 CRS to 11gR1 CRS in Rolling Fashion (No Downtime):
On Node1:
crsctl enable crs   (run once the node comes back up after the reboot)
crsctl check crs    (the CRS stack will not be up, as CRS was disabled before the reboot)
/home/oracle/11gR1/clusterware/upgrade/preupdate.sh -crshome /u01/app/oracle/crs -crsuser oracle -noshutdown
./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade   (run this script at the end, as root)
Verify the CRS:
crsctl check crs                          (on all the RAC nodes; it should be up on all the nodes)
crsctl query crs activeversion            (run on node1; the active version should still be 10.2.0.3)
crsctl query crs activeversion            (run on node2)
crsctl query crs activeversion            (run on node3)
crsctl query crs softwareversion node1    (the software version should show 11.1.0.6)
crsctl query crs softwareversion node2
crsctl query crs softwareversion node3
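In a rolling upgrade the active version stays at the old release until every node has been upgraded. A small hypothetical helper (the get_crs_version name is my own; the bracketed output format is the usual crsctl style) can pull the version out for scripted comparison:

```shell
# Extract the bracketed version string from crsctl query output,
# e.g. "CRS active version on the cluster is [10.2.0.3.0]" -> "10.2.0.3.0".
get_crs_version() {
  echo "$1" | sed -n 's/.*\[\([0-9.]*\)\].*/\1/p'
}

# Usage on a real node:
#   active=$(get_crs_version "$(/u01/app/oracle/crs/bin/crsctl query crs activeversion)")
v=$(get_crs_version "CRS active version on the cluster is [10.2.0.3.0]")
echo "$v"
```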
On Node2:
./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade
On Node3:
./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade
The active version and software version should both show 11.1.0.6 at this stage.
HERE are the detailed screenshots of upgrading 10gR2 CRS to 11gR1 CRS.
Start the runInstaller from the 11gR1 database software install media and select the "Install Software Only" configuration option on the OUI screen.
./runInstaller -ignoreSysPrereqs   (from the 11gR1 software for Database)
Run root.sh on all the RAC nodes to finish the installation.
Invoke the DBUA from the new 11gR1 ASM_HOME to upgrade the 10gR2 ASM to 11gR1 ASM. Upgrading the ASM instance is pretty much just starting the ASM instances and ASM listeners from the newly installed 11gR1 ASM_HOME. DBUA simply copies the ASM pfile and password file from the 10gR2 dbs folder to the 11gR1 dbs folder, and copies the tnsnames.ora and listener.ora files as well. At the end, it modifies the ASM-related CRS resources (the ASM instances and all the listeners belonging to the 10gR2 ASM_HOME) to start from the new 11gR1 ASM_HOME, by modifying the ACTION_SCRIPT parameter to point to the 11gR1 racgwrap.
export ORACLE_HOME=/u01/app/oracle/asm11gr1
export ORACLE_SID=+ASM1
/u01/app/oracle/asm11gr1/bin/dbua
HERE are the detailed screenshots of upgrading 10gR2 ASM to 11gR1 ASM.
Issue #1:
During the ASM upgrade by DBUA, I was prompted to migrate the DB listeners running from the existing 10gR2 database HOME to the 11gR1 ASM HOME. I did not want the DB listener migrated to the ASM HOME, but the installer would not move ahead if I pressed "No". So I decided to move forward with the DB listener migration, and at the end of the ASM upgrade process I had to manually migrate this DB listener back to the 10gR2 DB home.
Here is how the DB listener LAB_LISTENER was moved back to the 10gR2 DB HOME from the 11gR1 ASM_HOME.
I made sure that the 10gR2 TNS_ADMIN still had the listener.ora and tnsnames.ora files with the information about LAB_LISTENER.
Updated the CRS resources related to LAB_LISTENER as shown below, to point the ACTION_SCRIPT to /u01/app/oracle/asm/bin/racgwrap:
cd /u01/app/oracle/crs/crs/public
/u01/app/oracle/crs/bin/crs_stat -p ora.node1.LAB_LISTENER_NODE1.lsnr > ora.node1.LAB_LISTENER_NODE1.lsnr.cap
/u01/app/oracle/crs/bin/crs_stat -p ora.node2.LAB_LISTENER_NODE2.lsnr > ora.node2.LAB_LISTENER_NODE2.lsnr.cap
/u01/app/oracle/crs/bin/crs_stat -p ora.node3.LAB_LISTENER_NODE3.lsnr > ora.node3.LAB_LISTENER_NODE3.lsnr.cap
In each .cap file, replaced
ACTION_SCRIPT=/u01/app/oracle/asm11gr1/bin/racgwrap
with
ACTION_SCRIPT=/u01/app/oracle/asm/bin/racgwrap
Ran the below commands to update the OCR with these changes.
/u01/app/oracle/crs/bin/crs_register -u ora.node1.LAB_LISTENER_NODE1.lsnr
/u01/app/oracle/crs/bin/crs_register -u ora.node2.LAB_LISTENER_NODE2.lsnr
/u01/app/oracle/crs/bin/crs_register -u ora.node3.LAB_LISTENER_NODE3.lsnr
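The manual edit of the three .cap files can also be scripted; this is a hypothetical sketch (the repoint_action_script name is my own; paths and resource names are taken from the steps above):

```shell
# Rewrite the ACTION_SCRIPT line in a CRS resource profile (.cap) file,
# before re-registering the resource with crs_register -u.
repoint_action_script() {
  # $1 = .cap file, $2 = new racgwrap path
  sed -i "s|^ACTION_SCRIPT=.*|ACTION_SCRIPT=$2|" "$1"
}

# Usage on a real node (from $CRS_HOME/crs/public):
# for n in 1 2 3; do
#   repoint_action_script "ora.node${n}.LAB_LISTENER_NODE${n}.lsnr.cap" /u01/app/oracle/asm/bin/racgwrap
#   /u01/app/oracle/crs/bin/crs_register -u "ora.node${n}.LAB_LISTENER_NODE${n}.lsnr"
# done
```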
Issue #2:
During the post-upgrade steps at the end of the upgrade process, DBUA could not start the ASM on node2, and so it terminated with the below error in the trace.log file.
I tried to manually start the ASM instance and it worked just fine, but the OCR was still showing "UNKNOWN" status for this resource. Even using crs_start/crs_stop to start or stop the ASM on node2 could not change the status from UNKNOWN. This means that the ASM resource entry for node2 in the OCR was somehow logically corrupted by DBUA. I had to unregister this entry and register it back with the same parameters to fix the issue. After doing that, the ASM was successfully started and stopped by the srvctl command.
ora.node2.ASM2.asm.cap:
NAME=ora.node2.ASM2.asm
TYPE=application
ACTION_SCRIPT=/u01/app/oracle/asm11gr1/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for ASM instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node2
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=180
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysasm
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=mount
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
/u01/app/oracle/crs/bin/crs_unregister ora.node2.ASM2.asm
/u01/app/oracle/crs/bin/crs_register ora.node2.ASM2.asm
srvctl add instance -d labdb -i labdb2 -n node2
srvctl modify instance -d labdb -i labdb2 -s +ASM2
srvctl modify service -d labdb -s oltp -n -i "labdb1,labdb2,labdb3"
Start the runInstaller from the 11gR1 database software install media and select the "Install Software Only" configuration option on the OUI screen.
Invoke the DBUA from the newly installed 11gR1 HOME for the database, using an X terminal:
/u01/app/oracle/db11gr1/bin/dbua
DBUA invalidates some of the objects in the database during the upgrade, and so it runs the utlrp.sql script at the end, as part of the post-upgrade steps, to make them valid again. I start running this script manually when the DBUA progress is at around 75%, so that when DBUA later runs it to recompile the INVALID objects, it has fewer objects left to recompile; this helps reduce the overall timing of the upgrade process as a whole.
Move the listener LAB_LISTENER from 10gR2 DB home to 11gR1 HOME.
DBUA did not move the DB listener LAB_LISTENER from the 10gR2 HOME to the 11gR1 HOME, so I had to move it manually as below:
Copy the 10gR2 listener.ora and tnsnames.ora files to the 11gR1 TNS_ADMIN.