Database_issues_post_patching
1. PURPOSE:
Patching operations can fail for various reasons. Typically, an operation fails because a database node is
down, there is insufficient space on the file system, or the database host cannot access the object store.
This topic includes information to help you determine the cause of the failure and fix the problem.
2. SCOPE:
This document helps troubleshoot issues encountered during and after patching of the operating system or database, such as a database instance not coming up or the database or database instance misbehaving after patching.
3. SCENARIOS:
On an Oracle Database Appliance deployment with release earlier than 19.12, if the Security Technical
Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to 19.12 or earlier,
the command odacli update-server -f version fails.
Hardware Models
Workaround
According to the analysis, the STIG V1R2 rule OL7-00-040420 tried to change the permission of the file
/etc/ssh/ssh_host_rsa_key from '640' to '600', which caused the error. During patching, run the
chmod 600 /etc/ssh/ssh_host_rsa_key command on both nodes.
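As an optional check, the permission change can be confirmed on each node after running chmod; the file should then show mode -rw------- (600):
# chmod 600 /etc/ssh/ssh_host_rsa_key
# ls -l /etc/ssh/ssh_host_rsa_key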
/u01/app/19.12.0.0/grid
DCS-10001:
PRGO-1022 : Working copy "OraGrid191200" already exists.....
Hardware Models
Workaround
Evaluate DBHome patching with Failed Internal error encountered: Internal RHP error encountered:
PRGO-1693 : The database patching cannot be completed
Evaluate DBHome patching with Failed Internal error encountered: Internal RHP error encountered:
PRCT-1003 : failed to run "rhphelper" on node "node1"
Hardware Models
Workaround
Verify the Alternate Archive Failed AHF-4940: One or more log archive
Hardware Models
Workaround
Hardware Models
Workaround
Stop and restart the DCS agent and run the pre-patch report again.
Create the pre-patch report again and check that the same error is not displayed in the report:
/opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.12.0.0.0
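As an illustration of the first step, the DCS agent can typically be restarted with systemctl, assuming the agent runs as the initdcsagent service (the usual service name on recent releases):
# systemctl restart initdcsagent
# systemctl status initdcsagent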
For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due
to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (about 3 hours and 20 minutes). The
following error message is displayed:
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds.
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target
patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug
fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Shut down and restart the failed database, then run the datapatch script manually to complete the database update.
db_home_path_the_database_is_running_on/OPatch/datapatch
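A minimal sketch of running datapatch manually, using placeholders for the home path and instance SID (substitute your own values):
# su - oracle
$ export ORACLE_HOME=<db_home_path_the_database_is_running_on>
$ export ORACLE_SID=<instance_sid>
$ $ORACLE_HOME/OPatch/datapatch -verbose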
If the database is an Oracle ACFS database that was patched to 19.12, then run the odacli list-dbstorages
command and locate the corresponding entries by db_unique_name. Check whether the DATA and RECO
destination locations exist in the result.
For DATA destination location, the value should be similar to the following:
/u02/app/oracle/oradata/db_unique_name
For the RECO destination location, use the value from the beginning up to the last forward slash (/). For example:
/u03/app/oracle
/u02/app/oracle/oradata/provDb0,/u03/app/oracle/,/u01/app/odaorahome,/u01/app/
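To review these entries, list the database storage configuration as referenced above; the db_unique_name values appear in the command output:
# odacli list-dbstorages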
PRGH-1153 : RHPHelper call to get runing nodes failed for DB: "GIS_IN"
Hardware Models
Workaround
Ensure that the database instances are running before you run the odacli update-dbhome command. Do
not manually stop the database before updating it.
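Before patching, the instance status can be confirmed, for example, with odacli or with srvctl from the database home (the database name below is a placeholder):
# odacli list-databases
# srvctl status database -d <db_unique_name>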
When a DB system node, which has Oracle Database Appliance 19.10, reboots, the ora packages
repository is not mounted automatically. The 19.10 DCS Agent does not mount the repositories, causing failures in operations that need repository access, such as patching.
Hardware Models
Workaround
When you restart the bare metal system, the DCS Agent on the bare metal system restarts NFS on both
nodes. Follow these steps to remount the repository on the DB system:
On the DB system VM, mount the pkgrepos directory on the first node (an illustrative mount command appears after these steps), then run:
cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
On the DB system VM, mount the pkgrepos directory on the second node, then run:
cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION
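The mount itself is not shown in the steps above; as an illustrative sketch, the pkgrepos directory exported by the bare metal system can be mounted over NFS before the VERSION file is copied (the source address is a placeholder for the bare metal node's address):
# mount <bare_metal_node_address>:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
# cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION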
Patch the DB system with the same steps as when patching the bare metal system:
The odacli update-server command may fail with the following message:
…
…
/u01/app/19.12.0.0/grid/crs/install/crsconfig_params
/u01/app/grid/crsdata/<node_name>/crsconfig/crs_postpatch_apply_oop_node_name_timestamp.log
This error shows that during the move of Oracle Grid Infrastructure stack to the new home location,
stopping the Clusterware on the earlier Oracle home fails. Confirm that it is the same error by checking
the error log for the following entry:
'node1' failed
…"
Hardware Models
Workaround
Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
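As an illustration, the Clusterware stack can typically be started with crsctl from the old grid home (the home path is a placeholder):
# <old_grid_home>/bin/crsctl start crs
# <old_grid_home>/bin/crsctl check crs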
# cat /var/lib/nfs/etab
/opt/oracle/oak/pkgrepos
192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
# exportfs -u host:/opt/oracle/oak/pkgrepos
# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
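To confirm the export has been removed (an optional check, not one of the listed steps), list the current exports again:
# exportfs
# cat /var/lib/nfs/etab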
After running steps 1-3 on both nodes, run the odacli update-server command and patch your
appliance.
The odacli update-storage command may fail with the following message:
This error shows that stopping the Clusterware may fail. Confirm that it is the same error by checking
the error log for the following entry:
'node1' failed
…"
Hardware Models
Workaround
Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.
# cat /var/lib/nfs/etab
/opt/oracle/oak/pkgrepos
192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
# exportfs -u host:/opt/oracle/oak/pkgrepos
# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
After running steps 1-3 on both nodes, run the odacli update-storage command and patch the storage.
Even when the odacli update-server job is successful, odacli describe-job output may show a message
about missing patches on the source home. For example:
Message: Contact Oracle Support Services to request patch(es) "bug #". The patched "OraGrid191100" is
missing the patches for bug "bug#" which is present in the source "OraGrid19000"
For release 19.11, a missing patch error for bug number 29511771 is expected. This patch contains Perl
version 5.28 for the source grid home. Oracle Database Appliance release 19.11 includes the later Perl
version 5.32 in the Oracle Grid Infrastructure clone files, and hence, you can ignore the error. For any
other missing patches reported in the odacli describe-job command output, contact Oracle Support to
request the patches for Oracle Clusterware release 19.11.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Review the error messages reported in the odacli describe-job command output for any missing patches
other than the patch with bug number 29511771, and contact Oracle Support to request the patches for
Oracle Clusterware release 19.11.
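The reported messages can be reviewed from the job output, for example (the job ID is a placeholder):
# odacli describe-job -i <job_id>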
Retrying odacli update-dbhome command with -imp option after update fails
When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-
dbhome command fails.
For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, the
following error message is displayed:
DCS-10001:Internal error encountered: Contact Oracle Support Services to request patch(es) "bug#".
Then supply the --ignore-missing-patch|-imp to retry the command.
You need not contact Oracle Support for the following bug numbers in the error message:
These patches contain the earlier versions of Perl 5.26 and Perl 5.28 for the source database home.
Oracle Database Appliance release 19.11 includes the later Perl version 5.32 in the database clone files,
and hence, you can ignore the error. You must rerun the odacli update-dbhome command with the -imp option.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Rerun the odacli update-dbhome command with the -imp option:
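A sketch of the retry, assuming an illustrative home ID and the 19.11 target version; substitute the ID reported by odacli list-dbhomes:
# odacli update-dbhome -i <dbhome_id> -v 19.11.0.0.0 -imp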
For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due
to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (about 3 hours and 20 minutes). The
following error message is displayed:
PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds.
"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target
patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug
fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"
Hardware Models
All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11
Workaround
Shut down and restart the failed database, then run the datapatch script manually to complete the database update.
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4/OPatch/datapatch
Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database
Appliance patching
During upgrade of Oracle Linux 6 to Oracle Linux 7 during Oracle Database Appliance upgrade from
release 18.8 to 19.x, an error is encountered.
Following are the errors reported when running the odacli update-server command:
This is because the STIG Oracle Linux 6 rules deployed on the Oracle Database Appliance system prevent the RDS/RDS_TCP modules from being loaded (due to the OL6-00-000126 rule).
Hardware Models
Workaround
Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release
19.10
Patching of database home of versions 11.2.0.4.180717, or 11.2.0.4.170814, or 11.2.0.4.180417 to
version 11.2.0.4.210119 may fail.
When DCS Agent version is 19.9, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.201020 (which was the Database home version released
with Oracle Database Appliance release 19.9)
When DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.210119 (which was the Database home version released
with Oracle Database Appliance release 19.10)
When DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.200114 (which was the Database home version released
with Oracle Database Appliance release 19.6)
This error occurs only when patching Oracle Database homes of versions 11.2.0.4.180717,
11.2.0.4.170814, or 11.2.0.4.180417 using the 19.10.0.0.0 version DCS Agent.
Hardware Models
Workaround
Patch your 11.2.0.4 Oracle Database home to any version earlier than 11.2.0.4.210119 (the version
released with Oracle Database Appliance release 19.10) while the DCS Agent is of a version earlier than
19.10.0.0.0, and then update the DCS Agent to 19.10.
Note that once you patch the DCS Agent to 19.10.0.0.0, patching of these old 11.2.0.4 homes will fail.
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.
The following error is seen when running the odacli update-dcscomponents command:
Hardware Models
Workaround
Run the odacli update-dcscomponents command again and the operation completes successfully.
Job details
----------------------------------------------------------------
ID: 765c5601-f4ad-44f0-a989-45a0b7432a0d
---------------------------------------- ------------------------------------------------------------------
Storage Firmware Patching         February 24, 2021 8:18:06 AM PST    February 24, 2021 8:18:48 AM PST    Failure
task:TaskSequential_140           February 24, 2021 8:18:06 AM PST    February 24, 2021 8:18:48 AM PST    Failure
Applying Firmware Disk Patches    February 24, 2021 8:18:28 AM PST    February 24, 2021 8:18:48 AM PST    Failure
Hardware Models
Workaround
Check the private network (ibbond0) and ping private IPs from each node.
If the private IPs are not ping-able, then restart the private network interfaces on both nodes and retry.
On Oracle Database Appliance high availability deployments, if the zookeeper status is not in leader or follower mode, then continue to the next job.
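Illustrative commands for the private network checks above (the peer address is a placeholder for the other node's private IP):
# ip addr show ibbond0
# ping -c 3 <peer_node_private_ip>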
The following messages are logged in the grid upgrade log file located under
/opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/ .
This is because when the root upgrade scripts run on the last node, the active version is not set to the
correct state.
Hardware Models
Workaround
/u01/app/19.0.0.0/grid/rootupgrade.sh -f
After the command completes, verify that the active version of the cluster is updated to UPGRADE
FINAL.
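One way to check the cluster upgrade state is with crsctl from the new grid home, for example:
# <new_grid_home>/bin/crsctl query crs activeversion -f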
Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.
Hardware Models
Workaround
To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use
the following SQL*Plus command:
SELECT t.table_name, ts.segment_space_management FROM dba_tables t, dba_tablespaces ts
WHERE ts.tablespace_name = t.tablespace_name AND t.table_name IN ('AUD$','FGA_LOG$');
TABLE_NAME                     SEGMEN
------------------------------ ------
FGA_LOG$                       AUTO
AUD$                           AUTO
If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION procedure to move them to a tablespace that uses automatic segment space management, for example SYSAUX:
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, audit_trail_location_value => 'SYSAUX');
END;
/
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD, audit_trail_location_value => 'SYSAUX');
END;
/
When running the command odacli update-dbhome -v release_number on database homes that have
Standard Edition High Availability enabled, an error is encountered.
Hardware Models
Workaround
On the running node, manually run the datapatch for non-CDB databases:
dbhomeLocation/OPatch/datapatch
When running the command odacli update-server -v release_number, the following error is
encountered:
Hardware Models
Workaround
Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the
grid_home/bin location. For example:
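As an illustrative sketch only (the grid home path, user, and group are placeholders for your environment), the temporary ownership change might look like the following; restore the original ownership after patching:
# chown grid:oinstall <grid_home>/bin/osdbagrp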
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
Workaround
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Run the command again until the output displays only the two warnings above. The status of Oracle
Clusterware should be Normal again.
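The overall Clusterware status can be checked on both nodes with a standard crsctl command, for example:
# <grid_home>/bin/crsctl check cluster -all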
The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".
Hardware Models
Workaround
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not
supported for either of the two known M.2 disk versions, 0112 and 0121. Patching the LSI controller
version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database
Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.
Hardware Models
Workaround
None
When patching the appliance on bare metal systems, the odacli update-server command fails with the
following error:
When patching the appliance on Virtualized Platform, patching fails with an error similar to the
following:
Hardware Models
Workaround
Restart patching once Oracle TFA Collector has stopped on all nodes.
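As an illustration, the status of Oracle TFA Collector can be checked with tfactl before restarting patching (the tfactl location may vary by installation):
# tfactl print status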