
Exadata QFSDP patching

QFSDP stands for Quarterly Full Stack Download Patch. The following components are patched:


Cell nodes
IB Switches
Compute nodes
Oracle Grid Infrastructure and Database bundle patches
OJVM patch, if the Java component is used in the database
Unless instructed otherwise by Oracle, the components can be patched in any sequence. Read the patching instructions thoroughly. Ideally, the patching sequence should be cell nodes -> IB switches -> compute nodes -> GI and DB patching.

The patch can be applied in a rolling or non-rolling fashion. IB switches are patched in rolling fashion only.

All tasks are performed as the root user.

It is advisable to take an OS snapshot backup if a tape backup is not scheduled.

QFSDP patching can be done in two phases:

Phase I: apply the cell node, compute node, and IB switch patches.
Phase II: apply the GI and Database bundle patches.

The patchmgr utility cannot patch the node it is running on. So if you are using patchmgr to patch the compute nodes, the first compute node should be patched by executing patchmgr from one of the other nodes; for the rest of the compute nodes, patchmgr can then be run from the first compute node.

For cell node patching, the patchmgr utility is run from the first compute node.

Create a /root/dbs_group file with entries for all compute nodes.
Create a /root/cell_group file with entries for all cell nodes.
Create a /root/ib_group file with entries for all IB switches.
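For example (the hostnames below are illustrative; substitute your environment's names):

cat > /root/dbs_group <<EOF
exadb01
exadb02
EOF
cat > /root/cell_group <<EOF
exacel01
exacel02
exacel03
EOF
cat > /root/ib_group <<EOF
exasw-ib2
exasw-ib3
EOF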

Tasks that should be performed at least two weeks in advance:


Generate and review the Exachk report. Fix any issues that could cause patching to fail. I will post a separate blog on how to review the Exachk report.
Open an advisory SR with Oracle and upload the Exachk report to the SR.
Read Doc ID 888828.1 on support.oracle.com to get information about the patch you are going to apply.
Download the patch software from support.oracle.com and copy it to one of the compute nodes.
Unzip and merge the patch software as explained in the patch README file.
Create a QFSDP directory with three subdirectories, "ComputeNodePatch", "StorageIBPatch", and "BP", and copy the compute node patch, storage node patch, and Database/Grid patch software to the respective directories.
tar the /QFSDP/ComputeNodePatch directory and copy it to one other node. The patchmgr command for the first compute node will run from that node.
tar the /QFSDP/BP directory and copy it to each compute node. GI and Database patches will be applied locally from each compute node.
Reboot the cell nodes in rolling fashion and make sure they come up clean (without issues).
Run prechecks for all components and make sure everything is good to go.
On patching day:
Cell nodes precheck
Cell nodes patching
IB Switches precheck
IB Switches patching
Compute nodes precheck (with rpm update option)
Compute nodes patching
GI and BP precheck.
GI and BP patching.
Steps we followed for the Jan 2018 QFSDP:
Download p27011122_122010_Linux-x86-64_1of10.zip through p27011122_122010_Linux-x86-64_10of10.zip (10 zip files) from MOS and copy them to the /QFSDP directory on the first compute node.
Unzip all 10 patch files. This creates 10 26635229.tar.splita[a-j] files. Merge them using cat *.tar.* | tar -xvf - (run it from the /QFSDP directory). It creates a 27011122 directory, which contains Database, Infrastructure, SystemsManagement, and automation directories plus README.html and bundle.xml files.
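The same steps as shell commands, run from /QFSDP:

cd /QFSDP
for f in p27011122_122010_Linux-x86-64_*of10.zip; do unzip -q "$f"; done
cat *.tar.* | tar -xvf -
ls 27011122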
Create "ComputeNodePatch" , "StorageIBPatch","BP" directories under /QFSDP.
Copy the /QFSDP/27011122/Infrastructure/12.2.1.1.6/ExadataDatabaseServer_OL6/p27390990_122110_Linux-x86-64.zip file to /QFSDP/ComputeNodePatch (DO NOT UNZIP THE FILE).
Copy /QFSDP/27011122/Infrastructure/SoftwareMaintenanceTools/DBServerPatch/5.180130/p21634633_181300_Linux-x86-64.zip to /QFSDP/ComputeNodePatch/ and unzip it; this creates the dbserver_patch_5.180130 directory.
Copy /QFSDP/27011122/Infrastructure/12.2.1.1.6/ExadataStorageServer_InfiniBandSwitch/p27351065_122110_Linux-x86-64.zip to the /QFSDP/StorageIBPatch directory and unzip it. This creates the patch_12.2.1.1.6.180125.1 directory.
Copy the Grid and database related patches from the /QFSDP/27011122/Database/ directory to the /QFSDP/BP/ directory.
Create a tar of /QFSDP/ComputeNodePatch/:
cd /QFSDP
tar -cvf ComputeNodePatch.tar ./ComputeNodePatch
Copy /QFSDP/ComputeNodePatch.tar to the /QFSDP directory of one other node (node2).
Log on to the other node:
cd /QFSDP
tar -xvf ComputeNodePatch.tar
Create a tar of /QFSDP/BP/:
cd /QFSDP
tar -cvf BP.tar ./BP
Copy /QFSDP/BP.tar to the /QFSDP directory of all other nodes.
Log on to each of the other nodes:
cd /QFSDP
tar -xvf BP.tar
Reboot the cell nodes in rolling fashion at least two weeks in advance. This ensures the cell nodes come up clean after a reboot; if there is any issue with a cell node, it can be fixed beforehand and kept out of the patching window. Refer to Doc ID 1188080.1 for how to reboot cell nodes in rolling fashion; a sketch of the flow follows.
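A minimal sketch of the rolling reboot from that note, run on each cell in turn (verify every disk reports asmdeactivationoutcome = Yes before proceeding):

# Confirm grid disks can be taken offline safely
cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
# Inactivate grid disks and stop cell services
cellcli -e alter griddisk all inactive
cellcli -e alter cell shutdown services all
# Reboot the cell
shutdown -r now
# After the cell is back: reactivate grid disks and wait until
# asmmodestatus shows ONLINE before moving to the next cell
cellcli -e alter griddisk all active
cellcli -e list griddisk attributes name,asmmodestatus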
Check ILOM connectivity to all compute and cell nodes.
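For example, assuming the common <hostname>-ilom naming convention (adjust for your environment):

for h in $(cat /root/dbs_group /root/cell_group); do
  ping -c 1 ${h}-ilom > /dev/null && echo "${h}-ilom OK" || echo "${h}-ilom FAILED"
done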
Run prechecks:
Cell Nodes:
Log on as root to the compute node where the patch software was copied.
cd /QFSDP/StorageIBPatch/patch_12.2.1.1.6.180125.1
./patchmgr -cells /root/cell_group -patch_check_prereq -rolling -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
It should show SUCCESS status for everything. patchmgr will send an email to the recipient email id. Fix any errors that are reported.
IB Switches:
cd /QFSDP/StorageIBPatch/patch_12.2.1.1.6.180125.1
Create an ibswitches.lst file containing the IB switch names.
./patchmgr -ibswitches ibswitches.lst -upgrade -ibswitch_precheck -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
Compute Nodes:
Create a /root/dbs_group1 file containing all compute nodes except the node from which the command will be executed.
cd /QFSDP/ComputeNodePatch/dbserver_patch_5.180130
./patchmgr -dbnodes /root/dbs_group1 -cleanup
./patchmgr -dbnodes /root/dbs_group1 -precheck -nomodify_at_prereq -log_dir auto -target_version 12.2.1.1.6.180125.1 -iso_repo /QFSDP/ComputeNodePatch/p27390990_122110_Linux-x86-64.zip -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
The result log should be clean. If it shows rpm conflicts or failures for custom rpms, that should be okay.
Log on to the other node.
Create a /root/dbs_group1 file containing only the compute node from which the above script was run.
cd /QFSDP/ComputeNodePatch/dbserver_patch_5.180130
./patchmgr -dbnodes /root/dbs_group1 -cleanup
./patchmgr -dbnodes /root/dbs_group1 -precheck -nomodify_at_prereq -log_dir auto -target_version 12.2.1.1.6.180125.1 -iso_repo /QFSDP/ComputeNodePatch/p27390990_122110_Linux-x86-64.zip -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
The result log should be clean. If it shows rpm conflicts or failures for custom rpms, that should be okay.
Patching:
Cell Node:
Take the output of the imageinfo command.
Set ASM asm_power_limit=256 and disk_repair_time=24h.
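A minimal sketch of those settings, run as the grid owner in SQL*Plus (the DATA and RECO diskgroup names are examples; disk_repair_time is set per diskgroup):

sqlplus / as sysasm
ALTER SYSTEM SET asm_power_limit=256;
ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time'='24h';
ALTER DISKGROUP RECO SET ATTRIBUTE 'disk_repair_time'='24h';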
cd /QFSDP/StorageIBPatch/patch_12.2.1.1.6.180125.1
./patchmgr -cells /root/cell_group -reset_force
./patchmgr -cells /root/cell_group -cleanup
./patchmgr -cells /root/cell_group -patch_check_prereq -rolling -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
./patchmgr -cells /root/cell_group -patch -rolling -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
./patchmgr -cells /root/cell_group -cleanup
imageinfo
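To verify the new image version on all cells at once, dcli can be used (a sketch):

dcli -g /root/cell_group -l root 'imageinfo | grep "Active image version"'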
IB Switch patching:
cd /QFSDP/StorageIBPatch/patch_12.2.1.1.6.180125.1
./patchmgr -ibswitches ibswitches.lst -upgrade -ibswitch_precheck -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
./patchmgr -ibswitches ibswitches.lst -upgrade -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
Compute Node patching:
Blackout the targets in OEM.
First Node patching:
Log on to the first compute node.
Take system and database information before patching.
df -h
crontab -l
mount -l
ifconfig -a
uname -r
uname -a
cat /etc/fstab
imageinfo
ps -ef | grep ora_pmon |grep -v grep
ps -ef | grep tns |grep -v grep
<GRID_HOME>/bin/crsctl stat res -t
Copy all of the output to another node or your laptop, for example:
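A sketch that captures everything in one file and copies it off (the output file name and destination node are examples):

( df -h; crontab -l; mount -l; ifconfig -a; uname -r; uname -a; \
  cat /etc/fstab; imageinfo; ps -ef | grep -E 'ora_pmon|tns' | grep -v grep; \
  <GRID_HOME>/bin/crsctl stat res -t ) > /tmp/prepatch_$(hostname).out 2>&1
scp /tmp/prepatch_$(hostname).out node2:/tmp/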
Unmount the NFS mount points and comment them out in /etc/fstab.
umount <NFS Mount point>
Run df -h and make sure the NFS mount points are unmounted.
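One way to do this (a sketch; the mount point is an example, and umount is repeated for each NFS mount point):

cp /etc/fstab /etc/fstab.prepatch
umount /nfs/backup
sed -i 's|^\([^#].*[[:space:]]nfs[[:space:]].*\)|#\1|' /etc/fstab
df -h | grep nfs    # should return nothing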
<GI HOME>/bin/crsctl disable crs
<GI HOME>/bin/crsctl stop crs -f
Open the ILOM console.
Reboot the server:
shutdown -r now
Verify and ensure that all services are down
ps -eaf | grep d.bin | grep -v grep
ps -eaf | grep lsnr | grep -v grep
ps -eaf | grep pmon | grep -v grep
Log out from the first node and log on to the other node (where ComputeNodePatch.tar was extracted).
Precheck:
cd /QFSDP/ComputeNodePatch/dbserver_patch_5.180130
./patchmgr -dbnodes /root/dbs_group1 -precheck -modify_at_prereq -log_dir auto -target_version 12.2.1.1.6.180125.1 -iso_repo /QFSDP/ComputeNodePatch/p27390990_122110_Linux-x86-64.zip -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
Patch:
./patchmgr -dbnodes /root/dbs_group1 -upgrade -log_dir auto -target_version 12.2.1.1.6.180125.1 -iso_repo /QFSDP/ComputeNodePatch/p27390990_122110_Linux-x86-64.zip -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"
Repeat the compute node patching steps for the other nodes; this time patchmgr runs from the first (already patched) compute node, as sketched below.
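A sketch of that run (the group file now lists the remaining unpatched nodes):

# On the first, already patched compute node
cd /QFSDP/ComputeNodePatch/dbserver_patch_5.180130
# /root/dbs_group1 now contains every compute node except the first
./patchmgr -dbnodes /root/dbs_group1 -upgrade -log_dir auto -target_version 12.2.1.1.6.180125.1 -iso_repo /QFSDP/ComputeNodePatch/p27390990_122110_Linux-x86-64.zip -smtp_from "<Sender email id>" -smtp_to "<Recipient email id>"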
GI and DB patch: it is the same as non-Exadata GI and DB patching, typically driven with opatchauto; a sketch follows.
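A minimal sketch, assuming the GI bundle patch copied to /QFSDP/BP has been unzipped (the patch directory name is a placeholder; follow the bundle patch README for the exact steps):

# As root on each compute node, one node at a time
export PATH=$PATH:<GI_HOME>/OPatch
opatchauto apply /QFSDP/BP/<bundle patch directory>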
