Database_issues_post_patching

1. PURPOSE:
Patching operations can fail for various reasons. Typically, an operation fails because a database node is
down, there is insufficient space on the file system, or the database host cannot access the object store.
This topic includes information to help you determine the cause of the failure and fix the problem.
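Before retrying a failed operation, it can help to rule out these common causes first; a minimal triage sketch (the file system paths and the object store URL below are examples, adjust them for your environment):

Grid_home/bin/crsctl check cluster -all        (confirm Clusterware is running on all nodes)
df -h / /u01 /u02 /u03                         (confirm free space on the appliance file systems)
curl -I https://objectstorage.example-region.oraclecloud.com        (confirm the host can reach the object store endpoint)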

2. SCOPE:
This document helps troubleshoot issues encountered during and after patching of the OS/Database, which can cause scenarios such as a database instance not coming up, or misbehavior of the database or database instance after patching.

3. SCENARIOS:

 Error in server patching


When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error is
encountered.

On an Oracle Database Appliance deployment with a release earlier than 19.12, if the Security Technical Implementation Guidelines (STIG) V1R2 is already deployed, then when you patch to 19.12 or earlier, the command odacli update-server -f version fails.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

As per the analysis, the STIG V1R2 rule OL7-00-040420 tries to change the permission of the file
/etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the
command chmod 600 /etc/ssh/ssh_host_rsa_key on both nodes.
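
To confirm the resulting permission on each node, list the file; the output should show read and write access for root only (-rw-------):

ls -l /etc/ssh/ssh_host_rsa_key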

This issue is tracked with Oracle bug 33168598.

 Error in prepatch report for the update-server command


When you patch the server to Oracle Database Appliance release 19.12, the odacli update-server command fails.

The following error message is displayed in the pre-patch report:

Evaluate GI patching    Failed    Internal error encountered:
/u01/app/19.12.0.0/gridDCS-10001:
PRGO-1022 : Working copy "OraGrid191200" already exists.....

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Run the odacli update-server command with the -f option.

/opt/oracle/dcs/bin/odacli update-server -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33261965.

 Error in prepatch report for the update-dbhome command


When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:

Evaluate DBHome patching with RHP    Failed    Internal error encountered: Internal RHP error encountered:
PRGO-1693 : The database patching cannot be completed in a rolling manner because the target patched home at
"/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_3" contains non-rolling bug fixes "32327201"
compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"....

Evaluate DBHome patching with RHP    Failed    Internal error encountered: Internal RHP error encountered:
PRCT-1003 : failed to run "rhphelper" on node "node1"
PRCT-1014 : Internal error: RHPHELP12102_main-02...

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Run the odacli update-dbhome command with the -f option.

/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33251523.


 AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:

Verify the Alternate Archive Destination is Configured to Prevent Database Hangs    Failed    AHF-4940: One or more log archive destination and alternate log archive destination settings are not as recommended

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Run the odacli update-dbhome command with the -f option.

/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33144170.

 Database clone error in prepatch report for the update-dbhome command


When you patch the server to Oracle Database Appliance release 19.12, the odacli update-dbhome command fails.

The following error message is displayed in the pre-patch report:

Is DB clone available    Failed    The DB clone for version 19.12.0.0.210720 cannot be found.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Stop and restart the DCS agent and run the pre-patch report again.

systemctl stop initdcsagent

systemctl start initdcsagent

Create the pre-patch report again and check that the same error is not displayed in the report:
/opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.12.0.0.0
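
To review the report details, you can pass the job ID returned by odacli create-prepatchreport to the odacli describe-prepatchreport command (the job ID below is a placeholder):

/opt/oracle/dcs/bin/odacli describe-prepatchreport -i job_id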

Run the odacli update-dbhome command.

/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -f

This issue is tracked with Oracle bug 33293991.

 Error in running the update-dbhome command


When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-
dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (about 3 hours 20 minutes). The following error message is displayed:

DCS-10001:Internal error encountered: PRCC-1021 :

One or more of the submitted commands did not execute successfully.

PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..

The rhp.log file contains the following entries:

"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target
patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug
fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Follow these steps:

Shut down and restart the failed database, and run the datapatch script manually to complete the database update.

db_home_path_the_database_is_running_on/OPatch/datapatch
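
To confirm that datapatch completed, you can query the SQL patch registry of the database; a minimal check, assuming a 19c database and SYSDBA access:

sqlplus / as sysdba
SQL> select patch_id, status, action_time from dba_registry_sqlpatch order by action_time;

The most recent entry should show STATUS as SUCCESS.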

If the database is an Oracle ACFS database that was patched to 19.12, then run the odacli list-dbstorages
command, and locate the corresponding entries by db_unique_name. Check whether the DATA and RECO
destination locations exist in the result.

For DATA destination location, the value should be similar to the following:

/u02/app/oracle/oradata/db_unique_name

For RECO, pre-process the values from the beginning to the last forward slash (/). For example:
/u03/app/oracle

addlFS = /u01/app/odaorahome,/u01/app/odaorabase0 (for single-node systems)

addlFS = /u01/app/odaorahome,/u01/app/odaorabase0,/u01/app/odaorabase1 (for high-availability systems)

Run the srvctl command db_home_path_the_database_is_running_on/bin/srvctl modify database -d db_unique_name -acfspath $data,$reco,$addlFS -diskgroup DATA. For example:

srvctl modify database -d provDb0 -acfspath /u02/app/oracle/oradata/provDb0,/u03/app/oracle/,/u01/app/odaorahome,/u01/app/odaorabase0 -diskgroup DATA
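
To verify the change, you can display the database configuration and confirm that the ACFS paths set above are listed (db_unique_name is a placeholder):

db_home_path_the_database_is_running_on/bin/srvctl config database -d db_unique_name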

This issue is tracked with Oracle bug 32740491.

 Error in updating dbhome


When you patch database homes to Oracle Database Appliance release 19.12, the odacli update-
dbhome command fails.

The following error message is displayed:

PRGH-1153 : RHPHelper call to get runing nodes failed for DB: "GIS_IN"

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Ensure that the database instances are running before you run the odacli update-dbhome command. Do
not manually stop the database before updating it.

This issue is tracked with Oracle bug 33114855.

 Error when patching DB systems


When patching DB systems on Oracle Database Appliance, an error is encountered.

When a DB system node running Oracle Database Appliance 19.10 reboots, the ora packages
repository is not mounted automatically. The 19.10 DCS Agent does not mount the repositories, causing
a failure in operations that need repository access, such as patching.

Hardware Models

All Oracle Database Appliance hardware models

Workaround
When you restart the bare metal system, the DCS Agent on the bare metal system restarts NFS on both
nodes. Follow these steps to remount the repository on the DB system:

Mount the pkgrepos directory on the DB system VM. On the first node, run these commands:

cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION

mount 192.168.17.2:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos

For InfiniBand environments:

mount 192.168.16.24:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos

On the second node, run these commands:

cp /opt/oracle/oak/pkgrepos/System/VERSION /opt/oracle/oak/conf/VERSION

mount 192.168.17.3:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos

For InfiniBand environments:

mount 192.168.16.25:/opt/oracle/oak/pkgrepos /opt/oracle/oak/pkgrepos
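
To confirm that the repository is mounted on the DB system node, check the mount table, for example:

mount | grep pkgrepos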

Patch the DB system with the same steps as when patching the bare metal system:

odacli update-dcsadmin -v 19.12.0.0.0

odacli update-dcscomponents -v 19.12.0.0.0

odacli update-dcsagent -v 19.12.0.0.0

odacli create-prepatchreport -v 19.12.0.0.0 -s

odacli update-server -v 19.12.0.0.0

odacli create-prepatchreport -v 19.12.0.0.0 -d -i id

odacli update-dbhome -v 19.12.0.0.0 -i id -f -imp

This issue is tracked with Oracle bug 33217680.

 Error in server patching


When patching Oracle Database Appliance, errors are encountered.

The odacli update-server command may fail with the following message:

Fail to patch GI with RHP : DCS-10001:Internal error encountered: PRGH-1057

: failure during move of an Oracle Grid Infrastructure home


PRCZ-4001 : failed to execute command

"/u01/app/19.12.0.0/grid/crs/install/rootcrs.sh" using the

privileged execution plugin "odaexec" on nodes "xxxxxxxx"

within 36,000 seconds

PRCZ-2103 : Failed to execute command

"/u01/app/19.12.0.0/grid/crs/install/rootcrs.sh" on node "xxxxxxxx" as user

"root". Detailed error: Using configuration parameter file:

/u01/app/19.12.0.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

/u01/app/grid/crsdata/<node_name>/crsconfig/crs_postpatch_apply_oop_node_name_timestamp.log

This error indicates that, during the move of the Oracle Grid Infrastructure stack to the new home location,
stopping the Clusterware on the earlier Oracle home fails. Confirm that it is the same error by checking
the error log for the following entry:

“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify

the logs.Retrying unmount

CRS-2675: Stop of 'ora.data.acfsclone.acfs' on

'node1' failed

CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'

Clean action is about to exhaust maximum waiting time

CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure

CRS-0267: Human intervention required to resume its availability.

CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'

Clean action is about to exhaust maximum waiting time

CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed

…"

Hardware Models

All Oracle Database Appliance hardware models


Workaround

Follow these steps on both nodes:

Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.

Locate all export points of /opt/oracle/oak/pkgrepos:

# cat /var/lib/nfs/etab

/opt/oracle/oak/pkgrepos
192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secur
e_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)

Clear references to export of clones:

# exportfs -u host:/opt/oracle/oak/pkgrepos

# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos
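
To confirm that the export has been cleared, check /var/lib/nfs/etab again; no entry for /opt/oracle/oak/pkgrepos should remain:

# grep pkgrepos /var/lib/nfs/etab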

After completing these steps on both nodes, run the odacli update-server command and patch your appliance.

This issue is tracked with Oracle bug 33284607.

 Error in storage patching


When patching Oracle Database Appliance, errors are encountered.

The odacli update-storage command may fail with the following message:

DCS-10001:Internal error encountered: Failed to stop cluster

This error shows that stopping the Clusterware may fail. Confirm that it is the same error by checking
the error log for the following entry:

“Error unmounting '/opt/oracle/oak/pkgrepos/orapkgs/clones'. Possible busy file system. Verify

the logs.Retrying unmount

CRS-2675: Stop of 'ora.data.acfsclone.acfs' on

'node1' failed

CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'

Clean action is about to exhaust maximum waiting time

CRS-2678: 'ora.data.acfsclone.acfs' on 'node1' has experienced an unrecoverable failure

CRS-0267: Human intervention required to resume its availability.

CRS-2679: Attempting to clean 'ora.data.acfsclone.acfs' on 'node1'


Clean action is about to exhaust maximum waiting time

CRS-2680: Clean of 'ora.data.acfsclone.acfs' on 'node1' failed

…"

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps on both nodes:

Restart the Clusterware manually from the old grid home, that is, the 19.10 or 19.11 home.

Locate all export points of /opt/oracle/oak/pkgrepos:

# cat /var/lib/nfs/etab

/opt/oracle/oak/pkgrepos
192.168.17.4(ro,sync,wdelay,hide,crossmnt,secure,root_squash,no_all_squash,no_subtree_check,secur
e_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)

Clear references to export of clones:

# exportfs -u host:/opt/oracle/oak/pkgrepos

# exportfs -u 192.168.17.4:/opt/oracle/oak/pkgrepos

After completing these steps on both nodes, run the odacli update-storage command and patch the storage.

This issue is tracked with Oracle bug 33284607.

 Retrying update-server command after odacli update-server command fails


When you patch to Oracle Database Appliance release 19.11, the odacli update-server command may report missing patches.

Even when the odacli update-server job is successful, odacli describe-job output may show a message
about missing patches on the source home. For example:

Message: Contact Oracle Support Services to request patch(es) "bug #". The patched "OraGrid191100" is
missing the patches for bug "bug #" which is present in the source "OraGrid19000"

For release 19.11, a missing patch error for bug number 29511771 is expected. This patch contains Perl
version 5.28 for the source grid home. Oracle Database Appliance release 19.11 includes the later Perl
version 5.32 in the Oracle Grid Infrastructure clone files, and hence, you can ignore the error. For any
other missing patches reported in the odacli describe-job command output, contact Oracle Support to
request the patches for Oracle Clusterware release 19.11.
Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Review the error messages reported in the odacli describe-job command output for any missing patches
other than the patch with bug number 29511771, and contact Oracle Support to request the patches for
Oracle Clusterware release 19.11.

This issue is tracked with Oracle bug 32973488.

 Retrying odacli update-dbhome command with -imp option after update fails
When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-
dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, the
following error message is displayed:

DCS-10001:Internal error encountered: Contact Oracle Support Services to request patch(es) "bug#".
Then supply the --ignore-missing-patch|-imp to retry the command.

You need not contact Oracle Support for the following bug numbers in the error message:

27138071 and 30508171, applicable to Oracle Database release 12.1

28581244 and 30508161, applicable to Oracle Database release 12.2

28628507 and 31225444, applicable to Oracle Database release 18c

29511771, applicable to Oracle Database release 19c

These patches contain the earlier versions of Perl 5.26 and Perl 5.28 for the source database home.
Oracle Database Appliance release 19.11 includes the later Perl version 5.32 in the database clone files,
and hence, you can ignore the error. You must rerun the odacli update-dbhome command with the -imp option.

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Rerun the odacli update-dbhome command with the -imp option:

# /opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.12.0.0.0 -imp

This issue is tracked with Oracle bug 32915897.


 Error in running the update-dbhome command
When you patch database homes to Oracle Database Appliance release 19.11, the odacli update-
dbhome command fails.

For Oracle Database Appliance release 19.11, when you run the odacli update-dbhome command, due to the inclusion of the non-rolling DST patch, the job waits for 12,000 seconds (about 3 hours 20 minutes). The following error message is displayed:

DCS-10001:Internal error encountered: PRCC-1021 :

One or more of the submitted commands did not execute successfully.

PRCC-1025 : Command submitted on node cdb1 timed out after 12,000 seconds..

The rhp.log file contains the following entries:

"PRGO-1693 : The database patching cannot be completed in a rolling manner because the target
patched home at "/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4" contains non-rolling bug
fixes "32327201" compared to the source home at "/u01/app/oracle/product/19.0.0.0/dbhome_1"

Hardware Models

All Oracle Database Appliance hardware models with Oracle Database Appliance release 19.11

Workaround

Shut down and restart the failed database, and run the datapatch script manually to complete the database update.

/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4/OPatch/datapatch

This issue is tracked with Oracle bug 32801095.

 Error in upgrading from Oracle Linux 6 to Oracle Linux 7 during Oracle Database
Appliance patching
When upgrading from Oracle Linux 6 to Oracle Linux 7 as part of Oracle Database Appliance patching from release 18.8 to 19.x, an error is encountered.

Following are the errors reported when running the odacli update-server command:

DCS-10059:Clusterware is not running on all nodes

The log file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_25383.trc has the following error:

KSIPC: ksipc_open: Failed to complete ksipc_open at process startup!!

KSIPC: ksipc_open: ORA-27504: IPC error creating OSD context

This occurs on systems where the STIG Oracle Linux 6 rules are deployed, because the RDS/RDS_TCP modules are not loaded (due to the OL6-00-000126 rule).
Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Edit the /etc/modprobe.d/modprobe.conf file.

Comment out the following lines:

# The RDS protocol is disabled

# install rds /bin/true

Restart the nodes.
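
After the restart, you can check whether the RDS modules now load; a quick check (module names as referenced above):

# modprobe rds
# modprobe rds_tcp
# lsmod | grep rds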

Run the odacli update-server command again.

This issue is tracked with Oracle bug 31881957.

 Error when patching 11.2.0.4 Database homes to Oracle Database Appliance release
19.10
Patching of database home of versions 11.2.0.4.180717, or 11.2.0.4.170814, or 11.2.0.4.180417 to
version 11.2.0.4.210119 may fail.

Following are the scenarios when this error may occur:

When DCS Agent version is 19.9, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.201020 (which was the Database home version released
with Oracle Database Appliance release 19.9)

When DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.210119 (which was the Database home version released
with Oracle Database Appliance release 19.10)

When DCS Agent version is 19.10, and you patch database homes from 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 to 11.2.0.4.200114 (which was the Database home version released
with Oracle Database Appliance release 19.6)

This error occurs only when patching Oracle Database homes of versions 11.2.0.4.180717, or
11.2.0.4.170814, or 11.2.0.4.180417 using the 19.10.0.0.0 version DCS Agent.

Hardware Models

All Oracle Database Appliance hardware models

Workaround
Patch your 11.2.0.4 Oracle Database home to any version earlier than 11.2.0.4.210119 (the version
released with Oracle Database Appliance release 19.10) while the DCS Agent is still at a version earlier
than 19.10.0.0.0, and then update the DCS Agent to 19.10.

Note that once you patch the DCS Agent to 19.10.0.0.0, patching of these old 11.2.0.4 homes will fail.

This issue is tracked with Oracle bug 32498178.

 Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message is displayed.

The following error is seen when running the odacli update-dcscomponents command:

# time odacli update-dcscomponents -v 19.10.0.0.0

DCS-10008:Failed to update DCScomponents: 19.10.0.0.0

Internal error while patching the DCS components :

DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer

to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for

details.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

This is a timing issue with setting up the SSH equivalence.

Run the odacli update-dcscomponents command again and the operation completes successfully.

This issue is tracked with Oracle bug 32553519.

 Error in updating storage when patching Oracle Database Appliance


When updating storage during patching of Oracle Database Appliance, an error is encountered.

The following error is displayed:

# odacli describe-job -i 765c5601-f4ad-44f0-a989-45a0b7432a0d

Job details

----------------------------------------------------------------

ID: 765c5601-f4ad-44f0-a989-45a0b7432a0d

Description: Storage Firmware Patching


Status: Failure

Created: February 24, 2021 8:15:21 AM PST

Message: ZK Wait Timed out. ZK is Offline

Task Name                         Start Time                        End Time                          Status
--------------------------------  --------------------------------  --------------------------------  -------
Storage Firmware Patching         February 24, 2021 8:18:06 AM PST  February 24, 2021 8:18:48 AM PST  Failure
task:TaskSequential_140           February 24, 2021 8:18:06 AM PST  February 24, 2021 8:18:48 AM PST  Failure
Applying Firmware Disk Patches    February 24, 2021 8:18:28 AM PST  February 24, 2021 8:18:48 AM PST  Failure

Hardware Models

Oracle Database Appliance X5-2 hardware models with InfiniBand

Workaround

Follow these steps:

Check the private network (ibbond0) and ping private IPs from each node.

If the private IPs are not ping-able, then restart the private network interfaces on both nodes and retry.

Check the zookeeper status.

On Oracle Database Appliance high availability deployments, if the zookeeper status is not in the leader
or follower mode, then continue to the next job.

This issue is tracked with Oracle bug 32550378.

 Error in Oracle Grid Infrastructure upgrade


Oracle Grid Infrastructure upgrade fails, though the rootupgrade.sh script ran successfully.

The following messages are logged in the grid upgrade log file located under
/opt/oracle/oak/log/<NODENAME>/patch/19.8.0.0.0/ .

ERROR: The clusterware active state is UPGRADE_AV_UPDATED

INFO: ** Refer to the release notes for more information **


INFO: ** and suggested corrective action **

This is because when the root upgrade scripts run on the last node, the active version is not set to the
correct state.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

As root user, run the following command on the second node:

/u01/app/19.0.0.0/grid/rootupgrade.sh -f

After the command completes, verify that the active version of the cluster is updated to UPGRADE
FINAL.

/u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion -f

The cluster upgrade state is [UPGRADE FINAL]

Run the Oracle Database Appliance server patching process again to upgrade Oracle Grid Infrastructure.

This issue is tracked with Oracle bug 31546654.

 Error when running ORAChk or updating the server or database home


When running Oracle ORAchk or the odacli create-prepatchreport, odacli update-server, or
odacli update-dbhome commands, an error is encountered.

The following messages may be displayed:

- Table AUD$[FGA_LOG$] should use Automatic Segment Space Management

Hardware Models

All Oracle Database Appliance hardware models

Workaround

To verify the segment space management policy currently in use by the AUD$ and FGA_LOG$ tables, use
the following SQL*Plus command:

select t.table_name, ts.segment_space_management
from dba_tables t, dba_tablespaces ts
where ts.tablespace_name = t.tablespace_name
and t.table_name in ('AUD$','FGA_LOG$');

The output should be similar to the following:


TABLE_NAME SEGMEN

------------------------------ ------

FGA_LOG$ AUTO

AUD$ AUTO

If one or both of the AUD$ or FGA_LOG$ tables return "MANUAL", use the

DBMS_AUDIT_MGMT package to move them to the SYSAUX tablespace:

BEGIN
  DBMS_AUDIT_MGMT.set_audit_trail_location(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,  -- this moves table AUD$
    audit_trail_location_value => 'SYSAUX');
END;
/

BEGIN
  DBMS_AUDIT_MGMT.set_audit_trail_location(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,  -- this moves table FGA_LOG$
    audit_trail_location_value => 'SYSAUX');
END;
/

This issue is tracked with Oracle bug 27856448.

 Error in patching database homes


An error is encountered when patching database homes on databases that have Standard Edition High
Availability enabled.

When running the command odacli update-dbhome -v release_number on database homes that have
Standard Edition High Availability enabled, an error is encountered.

WARNING::Failed to run the datapatch as db <db_name> is not in running state

Hardware Models

All Oracle Database Appliance hardware models with High-Availability deployments

Workaround

Follow these steps:

Locate the running node of the target database instance:


srvctl status database -database dbUniqueName
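
The output indicates the running node; it should be similar to the following (instance and node names here are placeholders):

Instance dbUniqueName1 is running on node node1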

Or, relocate the single-instance database instance to the required node:

odacli modify-database -g node_number (-th node_name)

On the running node, manually run the datapatch for non-CDB databases:

dbhomeLocation/OPatch/datapatch

For CDB databases, locate the PDB list using SQL*Plus.

select name from v$containers where open_mode='READ WRITE';

dbhomeLocation/OPatch/datapatch -pdbs pdb_names_found_in_previous_step_divided_by_comma

This issue is tracked with Oracle bug 31654816.

 Error in server patching


An error is encountered when patching the server.

When running the command odacli update-server -v release_number, the following error is
encountered:

DCS-10001:Internal error encountered: patchmetadata for 19.6.0.0.0 missing

target version for GI.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Follow these steps:

Change the file ownership temporarily to the appropriate grid user for the osdbagrp binary in the
grid_home/bin location. For example:

$ chown -R grid:oinstall /u01/app/18.0.0.0/grid/bin/osdbagrp

Run either the odacli update-registry -n gihome or the odacli update-registry -n system command.

This issue is tracked with Oracle bug 31125258.

 Server status not set to Normal when patching


When patching Oracle Database Appliance, an error is encountered.

When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name

Hardware Models

All Oracle Database Appliance hardware models

Workaround

Run the command:

Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all

Ignore the following two warnings:

Verifying OCR Integrity ...WARNING

PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.

Verifying Single Client Access Name (SCAN) ...WARNING

PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP

Run the command again till the output displays only the two warnings above. The status of Oracle
Clusterware should then be Normal again.

You can verify the status with the command:

Grid_home/bin/crsctl query crs activeversion -f
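
The output should be similar to the following (the version and patch level shown here are examples):

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [patch_level].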

This issue is tracked with Oracle bug 30099090.

 Error when patching to 12.1.0.2.190716 Bundle Patch


When patching Oracle Database release 12.1.0.2 to Oracle Database 12.1.0.2.190716 Bundle Patch, an
error is encountered.

The ODACLI job displays the following error:

DCS-10001:Internal error encountered: Failed to run SQL script: datapatch script.

The data patch log contains the entry "Prereq check failed, exiting without installing any patches.".

Hardware Models

All Oracle Database Appliance hardware models, bare metal deployments

Workaround

Install the same patch again.

This issue is tracked with Oracle bugs 30026438 and 30155710.



 Patching of M.2 drives not supported

Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.

These drives are displayed when you run the odacli describe-component command. Patching of neither
of the two known versions 0112 and 0121 of the M.2 disk is supported. Patching the LSI controller
version 13.00.00.00 to version 16.00.01.00 is also not supported. However, on some Oracle Database
Appliance X8-2 models, the installed LSI controller version may be 16.00.01.00.

Hardware Models

Oracle Database Appliance bare metal deployments

Workaround

None

This issue is tracked with Oracle bug 30249232.

 Error in patching Oracle Database Appliance


When applying the server patch for Oracle Database Appliance, an error is encountered.

Error Encountered When Patching Bare Metal Systems:

When patching the appliance on bare metal systems, the odacli update-server command fails with the
following error:

Please stop TFA before server patching.

To resolve this issue, follow the steps described in the Workaround.

Error Encountered When Patching Virtualized Platform:

When patching the appliance on Virtualized Platform, patching fails with an error similar to the
following:

INFO: Running prepatching on local node

WARNING: errors seen during prepatch on local node

ERROR: Unable to apply the patch 1

Check the prepatch log file generated in the directory /opt/oracle/oak/log/hostname/patch/18.8.0.0.0.


You can also view the prepatch log for the last run with the command ls -lrt prepatch_*.log. Check the
last log file in the command output.

In the log file, search for entries similar to the following:

ERROR: date_time_stamp: TFA is running on one or more nodes.


WARNING: date_time_stamp: Shutdown TFA and then restart patching

INFO: date_time_stamp: Read the Release Notes for additional information.

To resolve this issue, follow the steps described in the Workaround.

Hardware Models

All Oracle Database Appliance hardware models

Workaround

On Oracle Database Appliance bare metal systems, do the following:

Run tfactl stop on all the nodes in the cluster.
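
To confirm that Oracle TFA Collector is stopped on each node before restarting patching, you can check its status, for example:

tfactl print status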

Restart patching once Oracle TFA Collector has stopped on all nodes.

On Oracle Database Appliance Virtualized Platform, do the following:

Run /etc/init.d/init.tfa stop on all the nodes in the cluster.

Restart patching once Oracle TFA Collector has stopped on all nodes.

This issue is tracked with Oracle bug 30260318.
