
Copyright (c) 2022, Oracle. All rights reserved. Oracle Confidential.

Upgrading to 19c Oracle Grid Infrastructure on Gen 1 Exadata Cloud at Customer (Doc ID 2709296.1) To Bottom

Modified: Oct 21, 2020 Type: HOWTO

In this Document

Goal
Solution
Upgrade Oracle Grid Infrastructure on Gen 1 Exadata Cloud at Customer
Grid Infrastructure 19c Upgrade Prerequisites
Step 1.1 - Validate Minimum Software Requirements
Step 1.1.1 - Required Exadata Database Server Software
Step 1.1.2 - Required Grid Infrastructure Software
Step 1.1.3 - Required Database Software
Step 1.1.4 - List of One-off patches
Step 1.2 - Update and run Exachk
Step 1.3 - Validate HugePages Memory Allocation
Step 1.4 - Install the latest Cloud Tooling
Step 1.5 - Check for the availability of the upgrade patch
Step 1.6 - Run the prerequisite check
Step 1.7 - Configure the Administrative Interface Inactivity Timeout
Step 1.8 - Evaluate checklist for continuous application service during maintenance window
Upgrade Grid Infrastructure to 19c
Step 2.1 - Perform the upgrade
Step 2.2 - Verify the upgrade
Step 2.3 - Apply 19c OneOff Patches to the Grid Infrastructure Home
Step 2.4 - Perform DBFS Required Updates (DBFS only)
Post Upgrade
Step 3.1 - Manual post-upgrade tasks
Step 3.2 – Synchronize Grid Infrastructure Version in Database Cloud Service Console using REST API
Step 3.3 - Run Exachk
Step 3.4 - Remove Oracle Grid Infrastructure Software if no fallback required
Step 3.5 - Advance ASM Compatible Diskgroup Attribute
Troubleshooting a Failed Grid Infrastructure Upgrade
References

APPLIES TO:
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A to N/A

Information in this document applies to any platform.

GOAL
This document provides step-by-step instructions for upgrading Oracle Grid Infrastructure from version 12.2.0.1 or 18c to version 19c on Gen 1 Exadata Cloud at Customer.

SOLUTION

Upgrade Oracle Grid Infrastructure on Gen 1 Exadata Cloud at Customer


Overview

This document provides step-by-step instructions for upgrading Oracle Grid Infrastructure from release 12.2.0.1 or 18c to 19c on Oracle Exadata Cloud at Customer Gen 1.
Updates and additional patches may be required for your existing installation before upgrading to Oracle Grid Infrastructure 19c. The note box below provides a summary of
the software requirements to upgrade.

Summary of software requirements to upgrade to Oracle Grid Infrastructure 19c:

1. Current Grid Infrastructure version must be 12.2.0.1.190416, 18.6.0.0.190416 or higher.


2. Minimum Exadata System Software version 19.2 is required.
3. Minimum Cloud Tooling version 19.1.1.1.0_200727 is required.

With OCC release 20.3 and higher, you can create 19c DB services using the database service console and manage lifecycle operations, provided all the requirements are met.
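As a quick sanity check, the three minimums above can be compared mechanically. The sketch below is not part of the official tooling; it assumes GNU coreutils (`sort -V`) and uses version strings taken from this note's requirement list.

```shell
# Sketch: check whether an installed version meets a documented minimum.
# Assumes GNU sort -V handles these dotted patch-level strings, which it
# does for purely numeric components.
meets_minimum() {
    installed=$1; minimum=$2
    # Version-sort the pair; the first line is the lower version, so the
    # check passes when that lower version is the minimum itself.
    [ "$(printf '%s\n%s\n' "$installed" "$minimum" | sort -V | head -n1)" = "$minimum" ]
}

# Example values from the requirement list above.
meets_minimum "18.6.0.0.190416" "18.6.0.0.190416" && echo "GI OK"
meets_minimum "19.2.14.0.0.200517" "19.2" && echo "Exadata system software OK"
```

The same helper can be reused for the database home minimums in Step 1.1.3.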
Conventions

• The steps documented apply to Grid Infrastructure 12.2.0.1 or 18c upgrades to 19c unless specified otherwise.
• The new Grid Infrastructure home will be /u02/app/19.0.0.0/gridHome<N>, e.g. /u02/app/19.0.0.0/gridHome2.
• This document assumes that the current Grid Infrastructure version is 18.6.0.0.190416 (/u01/app/18.1.0.0/grid).
• The hostnames used in this document are fictitious.
• The examples are based on a two-node database cluster.

Assumptions:

• The files ~grid/dbs_group and ~oracle/dbs_group exist and contain the hostname of all database servers.
• Current Grid Infrastructure home can be either 12.2.0.1 or 18c Grid Infrastructure home.
• All Exachk-recommended best practices, for example memory management (HugePages) and interconnect settings (not using HAIP), are implemented prior to the
beginning of the upgrade.

NOTE: In the images and/or the document content below, the user information and data used represents fictitious data or data from the Oracle sample schema(s) or
Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

Grid Infrastructure 19c Upgrade Prerequisites


Execute the following steps prior to the planned maintenance window to ensure all prerequisites are met and to reduce the likelihood of experiencing an issue during the
upgrade steps. Furthermore, it is always recommended to upgrade your test systems and standby systems prior to production.
Run the following prerequisite checks at least one week prior to the planned maintenance window. All precheck and prerequisite issues must be resolved before the
planned maintenance window.

Step 1.1 - Validate Minimum Software Requirements

Step 1.1.1 - Required Exadata Database Server Software

• Exadata Database Server (DomU) system software version needs to be Exadata 19.2 or higher. As root execute imageinfo to verify the version:

$ ssh -i key.ppk -l opc IP_ADDRESS_OF_DATABASE_NODE

$ sudo su -

# imageinfo -ver

19.2.14.0.0.200517

• The recommended version is the latest listed in Document 2333222.1.


• Please refer to Document 2391164.1 for further information on how to upgrade DomU system software to meet the requirements.

Step 1.1.2 - Required Grid Infrastructure Software

• Required minimum Grid Infrastructure home release before upgrading to 19c Grid Infrastructure:

• 18.6.0.0.190416

• 12.2.0.1.190416

• To validate, please execute the following command as root:

$ ssh -i key.ppk -l opc IP_ADDRESS_OF_DATABASE_NODE

$ sudo su -

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches -oh=$(hostname -s):$(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2) | grep gridversion

INFO: gridversion detected : 18.6.0.0.0

• If you must update your Grid Infrastructure software to meet the patching requirements, then install the most recent release indicated in Document 2333222.1

• Please refer to Patching an Exadata DB System for further information on how to upgrade Grid Infrastructure on ExaC@C to meet the requirements.
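The `$(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2)` idiom above recurs throughout this note. The sketch below runs the same pipeline against a fabricated copy of olr.loc so the extraction can be inspected without touching a live node.

```shell
# Demonstrate the crs_home extraction used throughout this note, against
# a fabricated olr.loc (the paths below are example values only).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
olrconfig_loc=/u01/app/18.1.0.0/grid/cdata/exacc-node1.olr
crs_home=/u01/app/18.1.0.0/grid
EOF
# Same grep/cut pipeline, pointed at the sample file.
GI_HOME=$(grep ^crs_home "$tmp" | cut -d= -f2)
echo "$GI_HOME"
rm -f "$tmp"
```

On a real database node, replace "$tmp" with /etc/oracle/olr.loc to resolve the active Grid Infrastructure home.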

Step 1.1.3 - Required Database Software

• Required minimum Database home releases before upgrading to 19c Grid Infrastructure:

• 18.3.0.0.180717

• 12.2.0.1.180717

• 12.1.0.2.180831

• 11.2.0.4.190416

• To validate, please execute the following command as root:


$ ssh -i key.ppk -l opc IP_ADDRESS_OF_DATABASE_NODE

$ sudo su -

# dbaascli dbhome info |egrep "PATCH_LEVEL|HOME_LOC|DBs installed"

=====> Press Enter

HOME_LOC=/u02/app/oracle/product/11.2.0/dbhome_2

PATCH_LEVEL=11.2.0.4.190716


DBs installed= <db11_1>

HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_2

PATCH_LEVEL=12.2.0.1.181016

DBs installed= <db12_1>

HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_3

PATCH_LEVEL=12.2.0.1.191015

DBs installed= <db12_2>

• If you must update your Databases software to meet the patching requirements, then install the most recent release indicated in Document 2333222.1

• Please refer to Patching an Exadata DB System for further information on how to upgrade Database software to meet the requirements.

Step 1.1.4 - List of One-off patches

Evaluate whether any existing Grid Infrastructure one-off patches currently installed on top of 12.2.0.1 or 18.1.0.0 are already fixed in 19c. Please execute the following command as root:

$ ssh -i key.ppk -l opc IP_ADDRESS_OF_DATABASE_NODE

$ sudo su - grid

$ $ORACLE_HOME/OPatch/opatch lspatches

29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)

29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)

29301631;Database Release Update : 18.6.0.0.190416 (29301631)

28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)

28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)

For 12.2.0.1 GI the installed one-off patches should be:

29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)

29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)

29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)

28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)

26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

NOTE: If you have existing one-off patches that are not included in the above list, contact Oracle Support to evaluate whether those fixes are already integrated with the target 19c
GI or DB releases. Please note that the above examples are for 18.6.0.0.190416 and 12.2.0.1.190716; the patch numbers will be different if you are on a different bundle
patch level.
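One way to isolate one-offs mechanically is to diff the `opatch lspatches` patch IDs against the expected bundle list. The sketch below uses the 18.6.0.0.190416 patch IDs from the listing above plus one hypothetical one-off ID (99999999) to show what a leftover looks like.

```shell
# Compare installed patch IDs against the expected bundle baseline.
installed=$(mktemp); baseline=$(mktemp)
# Hypothetical installed set: the five bundle patches plus one one-off.
printf '%s\n' 29302264 29301643 29301631 28547619 28435192 99999999 | sort > "$installed"
# Expected 18.6.0.0.190416 bundle contents (from the listing above).
printf '%s\n' 29302264 29301643 29301631 28547619 28435192 | sort > "$baseline"
# comm -23 prints lines only in the installed list: candidate one-offs
# to raise with Oracle Support.
comm -23 "$installed" "$baseline"
rm -f "$installed" "$baseline"
```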

Step 1.2 - Update and run Exachk


Run the latest release of Exachk to validate software, hardware, firmware, and configuration best practices. Resolve any issues identified by Exachk before proceeding.
Review Document 1070954.1 and Document 2550798.1 for details.

NOTE: It is recommended to run Exachk before and after the upgrade. When doing this, Exachk may find recommendations for the compatible settings for database,
ASM, and diskgroup. At some point, it is recommended to change compatible settings, but a conservative approach is advised. This is because changing compatible
settings can result in not being able to downgrade/rollback later. It is therefore recommended to revisit compatible parameters sometime after the upgrade has finished,
when there is no chance for any downgrade and the system has been running stable for a longer period.

Step 1.3 - Validate HugePages Memory Allocation


At minimum, we recommend adding 1 GB to the existing HugePages setting to accommodate the additional 1 GB recommended for the ASM SGA. This is only possible if you
have free memory available; do not change the parameter USE_LARGE_PAGES if you do not have 1 GB of free space. See Document 361468.1 and Document 401749.1 for
more details on HugePages.
ASM initialization parameter values should be as follows:

Memory Config

Name                Value
use_large_pages     TRUE
memory_target       0
memory_max_target   unset
sga_target          3G
sga_max_size        unset

Execute the following commands to validate the parameter values:


1. Connect to a compute node as the opc user.

2. Start a grid-user command shell:

$ sudo su - grid

3. Use the following commands to display the current values for memory_target, memory_max_target, sga_target, sga_max_size and use_large_pages:

$ sqlplus '/ as sysasm'

SQL> set linesize 140

col sid for a20

col name for a40

col value for a40

select sid, name, value from v$spparameter where name in
('memory_target','memory_max_target','use_large_pages','sga_max_size','sga_target');

The output will be similar to:

SID NAME VALUE

---------- ---------------------------------------- ----------------------------------------

* sga_max_size

* use_large_pages TRUE

* sga_target 3221225472

* memory_target 0

* memory_max_target

Please note that memory_max_target should be unset, not 0. When the values do not match the recommended ASM initialization parameters in the table above,
change them as follows:

1. Connect to a compute node as the opc user.


2. Start a grid-user command shell:

$ sudo su - grid

3. Execute the following commands to change the values:

$ sqlplus '/ as sysasm'

SQL> alter system set use_large_pages=TRUE sid='*' scope=spfile;


SQL> alter system set sga_target = 3G scope=spfile sid='*';
SQL> alter system set memory_target=0 sid='*' scope=spfile;
SQL> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;

SQL> alter system reset memory_max_target sid='*' scope=spfile;

Note: When resetting parameters that are not in the SPFILE, you may receive an "ORA-32010: cannot find entry to delete in SPFILE" error, which is expected.

If changing sga_target from 2G to 3G, then 1 GB should be added to the HugePages pool if needed. Increasing the pool by 1 GB means adding 512 HugePages (of 2 MB
each) to the current allocation. The number of HugePages is configured and activated by setting nr_hugepages in the proc file system. To verify the current allocation and,
if needed, increase it by up to 512 HugePages, execute:

1. Connect to a compute node as the opc user.


2. Start a root-user command shell:

$ sudo su -

3. Use the following command to display the current Huge Pages value:

# grep vm.nr_hugepages /etc/sysctl.conf

vm.nr_hugepages = <hugepages>

4. Check the current free HugePages:

# grep HugePages_Free /proc/meminfo

HugePages_Free: <hugepages_free>

5. If HugePages_Free is equal to or higher than 512, then no adjustment to HugePages is necessary. Please skip the rest of this section and move to Step 1.4.

6. Calculate the new Huge Pages value (<hugepages>+(512-<hugepages_free>)=<new_hugepages>) and update all your database nodes.


Edit the file /etc/sysctl.conf. You can use the sample text below and your favorite Linux text editor:

# vi /etc/sysctl.conf

And replace the current Huge Pages value with the new one:

#vm.nr_hugepages = <hugepages>

vm.nr_hugepages = <new_hugepages>

This file is read during the boot process; the HugePages pool is usually guaranteed if requested at boot time.

7. Use sysctl to load and verify the new settings in all database nodes:

# sysctl -p | grep vm.nr_hugepages

The output will be similar to:

vm.nr_hugepages = <new_hugepages>
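The sizing rule from step 6 can be sketched as follows, with fabricated page counts standing in for the values read from /etc/sysctl.conf and /proc/meminfo.

```shell
# Worked example of new_hugepages = hugepages + (512 - hugepages_free).
hugepages=12000        # current vm.nr_hugepages   (fabricated example)
hugepages_free=200     # current HugePages_Free    (fabricated example)
if [ "$hugepages_free" -ge 512 ]; then
    new_hugepages=$hugepages          # enough free pages: no change needed
else
    new_hugepages=$(( hugepages + (512 - hugepages_free) ))
fi
echo "vm.nr_hugepages = $new_hugepages"
```

With these example counts, the line printed is the value you would place in /etc/sysctl.conf on every database node.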

Step 1.4 - Install the latest Cloud Tooling


Always update the cloud tooling software, if available, to the latest available release:

1. Connect to a compute node as the opc user.


2. Start a root-user command shell.

$ sudo su -

3. Patch your tooling to the latest version using the following command:

# dbaascli patch tools apply --patchid LATEST

Note: Make sure to update the tooling to a version higher than the minimum required specified in the overview section.

See Updating the Cloud Tooling on Exadata Cloud at Customer for further information.

Step 1.5 - Check for the availability of the upgrade patch


Perform the following check on your first database node:

1. Connect to a compute node as the opc user.


2. Start a root-user command shell:

$ sudo su -

3. Use the following command to display a list of available updates:

# /var/opt/oracle/exapatch/exadbcpatchmulti -list_upg_patches -oh=$(hostname -s):$(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2) | egrep "'current_version'|'patchid'" | sed 's/upgrade_//g;s/-GI//g;s/,//g'

4. The output will be similar to:

'current_version' => '18.6.0.0.0'

'patchid' => '19.0.0.0'

The "current_version" field indicates the current Grid Infrastructure version and patch level. For example, 'current_version' => '18.6.0.0.0' corresponds to Grid Infrastructure
18c with the Apr 2019 bundle patch; patch-level date suffixes use the YYMMDD format.
In the command output, ensure the target version you want to upgrade to is listed as an entry under 'patches'. Take note of the upgrade 'patchid' (for example: 19.0.0.0) so
that you can use it in the following dbaascli commands.
If you do not see the newer version listed in the output, please work with Oracle Support to stage the GI Upgrade image and make it available for your cloud environment.
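To see what the egrep/sed stage of step 3 does, it can be fed canned output. The raw 'patchid' value below ('upgrade_19.0.0.0-GI') is an assumption inferred from the sed substitutions; the real exadbcpatchmulti output may differ.

```shell
# Run only the filter stage of step 3 against canned output lines.
cat <<'EOF' | egrep "'current_version'|'patchid'" | sed 's/upgrade_//g;s/-GI//g;s/,//g'
'current_version' => '18.6.0.0.0',
'patchid' => 'upgrade_19.0.0.0-GI',
EOF
```

The sed expression strips the "upgrade_" prefix, the "-GI" suffix and trailing commas, leaving the bare 'patchid' => '19.0.0.0' used in later dbaascli commands.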

Step 1.6 - Run the prerequisite check


Perform the following check on your first database node:

1. Connect to a compute node as the opc user.


2. Start a root-user command shell:

$ sudo su -

3. Execute the prerequisite check on the first database node:

# dbaascli grid upgrade --version 19.0.0.0.0 --executePrereqs

The output will be similar to:

DBAAS CLI version 19.1.1.1.0

Executing command grid upgrade --version 19.0.0.0.0 --executePrereqs

Releasing locks.


INFO: Releasing ohome lock.

INFO: Released ohome lock successfully...

INFO: Releasing provisioning lock.

INFO: Released provisioning lock successfully.

Released locks.

Grid Upgrade Prereqs Execution Successful

Continue only after the prerequisite check succeeds. If the prerequisite check fails, then remedy all failures and rerun the prerequisite check until it succeeds.

For detailed error messages, please refer to the logs located under /var/opt/oracle/log/giUpgrade/ and /home/opc/.pilotBase/logs directories on the node from which
dbaascli was executed.

Check the 'Troubleshooting' section for any known issues with the GI upgrade prerequisite process.

NOTE: If you are unable to remedy any failure using the Troubleshooting section, then lodge a service request with Oracle Support.

4. Use crsctl command to obtain the configuration information of all the resources registered with CRS and their status:

# /u01/app/18.1.0.0/grid/bin/crsctl stat res -t

The output will be similar to:

--------------------------------------------------------------------------------

Name Target State Server State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE <SERVER_NAME> STABLE

ONLINE ONLINE <SERVER_NAME> STABLE

ora.<disk_group>.GHCHKPT.advm

OFFLINE OFFLINE <SERVER_NAME> STABLE

OFFLINE OFFLINE <SERVER_NAME> STABLE

ora.<disk_group>.dg

ONLINE ONLINE <SERVER_NAME> STABLE

ONLINE ONLINE <SERVER_NAME> STABLE

ora.LISTENER.lsnr

ONLINE ONLINE <SERVER_NAME> STABLE

ONLINE ONLINE <SERVER_NAME> STABLE

...

5. Check the cluster state on all DB nodes to ensure it is NORMAL:

ASMCMD> showclusterstate

Normal

# crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch
level is [3599760901].
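A scripted variant of this check can guard an automation step. The sketch below parses the activeversion text shown above; the sample string and its version/patch values are fabricated.

```shell
# Extract the upgrade state from `crsctl query crs activeversion -f` text.
sample='Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [123456789].'
state=$(printf '%s\n' "$sample" | sed -n 's/.*upgrade state is \[\([A-Z ]*\)\].*/\1/p')
if [ "$state" = "NORMAL" ]; then
    echo "cluster state OK"
else
    echo "resolve cluster state [$state] before continuing"
fi
```

On a real node, replace the sample string with the actual command output, e.g. `sample=$(crsctl query crs activeversion -f)`.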

Step 1.7 - Configure the Administrative Interface Inactivity Timeout


During the upgrade, some tasks take an extended amount of time to complete, so we want to prevent shell and SSH idle timeouts. Please check that the idle timeout is at
least 14400 seconds.

The host_access_control command provides the ability to update the shell and SSH client idle timeouts. The net effect of these two timeouts is the same, so typically the
shorter of the two will prevail.

1. Connect to a compute node as the opc user.


2. Start a root-user command shell:

$ sudo su -


3. Use the following command to display the idle-timeout:

# /opt/oracle.cellos/host_access_control idle-timeout --status

The output will be similar to:

[<date> <timestamp>] [INFO] [IMG-SEC-0402] Shell timeout is set to TMOUT=14400

[<date> <timestamp>] [INFO] [IMG-SEC-0403] SSH client idle timeout is set to ClientAliveInterval 600

If any of the above values is less than 14400, take note of the current value and make the necessary modifications:

# /opt/oracle.cellos/host_access_control idle-timeout --shell 14400

# /opt/oracle.cellos/host_access_control idle-timeout --client 14400

NOTE: Remember to restore the original value after the upgrade process has finished, "/opt/oracle.cellos/host_access_control idle-timeout -c 600" in our example.

Step 1.8 - Evaluate checklist for continuous application service during maintenance window
The following checklist is useful for preparing your applications and databases, even if you are not yet using Application Continuity, to maintain application service availability
throughout the maintenance window. The points discussed here provide significant value for preparing your systems to support continuous availability, reducing possible
downtime during planned maintenance activities and during unplanned outages should they occur.

The Grid Infrastructure software upgrade is automated and executed in a rolling fashion maintaining Database availability, but requiring Grid Infrastructure and RAC
instances to be restarted. To maximize application uptime, review and incorporate these configuration best practices that include:

1. Connect using clusterware managed services.


2. Configure for Fast Application Notification.
3. Configure with recommended TNS connect string and attributes.
4. Leverage Application Continuity or Transparent Application Continuity, if applicable, to maintain application service availability throughout the maintenance window.

For more information, refer to Continuous Availability - Application Checklist for Continuous Service for MAA Solutions MAA Technical brief.
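Point 3 refers to the MAA-recommended connect string. A sketch of such a TNS alias is shown below; the alias name, SCAN host, and service name are placeholders, and the exact attribute values should be taken from the MAA technical brief for your environment.

```
MYAPP_SVC =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 50)(RETRY_DELAY = 3)
    (TRANSPORT_CONNECT_TIMEOUT = 3)
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = exacc-scan.example.com)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = myapp_svc.example.com)))
```

The retry and timeout attributes let clients ride through the instance restarts that occur during the rolling upgrade instead of failing on first connect.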

Upgrade Grid Infrastructure to 19c

Step 2.1 - Perform the upgrade


Grid Infrastructure upgrades from 12.2.0.1 or 18.1.0.0 to 19c are always performed out of place and in a RAC rolling manner. The database remains online while each RAC
instance is restarted, and clusterware-managed services are drained and relocated automatically during this process.
The DBaaSCLI utility orchestrates the upgrade in a rolling manner across Oracle ExaC@C database servers. A rolling upgrade operates on nodes sequentially, allowing
services to remain up on one or more nodes throughout the maintenance window. Local connections are disconnected and all local instances are shut down; Oracle RAC
databases remain available through instances running on other nodes in the cluster.

The upgrade takes place first on the local node, from which DBaaSCLI was executed, and then moves sequentially from the lowest to the highest node number. You can find
the node number by executing 'olsnodes -n'.
The upgrade could take up to one hour per database node.
The new Oracle Grid Infrastructure 19c installation will be stored in /u02 filesystem. The Oracle Home for Oracle Grid Infrastructure software (Grid home) is located
in /u02/app/19.0.0.0/gridHome<instancenumber>. Oracle Base will be located in /u02/app/grid19.
Run the upgrade command on any of the database nodes. Use nohup to allow the command to keep running even if you lose your ssh connection to the compute node:

1. Connect to a compute node as the opc user.


2. Create a dbs_group file that contains the list of database nodes to upgrade. These are the servers listed in the output of the "olsnodes" command executed via sudo:

$ sudo -u grid $(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2)/bin/olsnodes > ~/dbs_group

$ cat ~/dbs_group

exacc-node1

exacc-node2

...

3. During the maintenance window, we recommend disabling all cron jobs. As the opc OS user, stop the cron service via sudo:

$ dcli -l opc -g ~/dbs_group sudo systemctl stop crond.service

4. To confirm that cron is not running on any database node, verify that there is one line for each node in the output of the following command:

$ dcli -l opc -g ~/dbs_group sudo systemctl status crond.service |grep Stopped

The output will be similar to:

exacc-node1: <date> <time> exacc-node1 systemd[1]: Stopped Command Scheduler.

exacc-node2: <date> <time> exacc-node2 systemd[1]: Stopped Command Scheduler.
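dcli is Exadata-specific tooling; conceptually, steps 3 and 4 mean "run one command per node listed in dbs_group". The dry-run sketch below echoes the per-node command instead of connecting over ssh, so it can be tried anywhere; the node names are fabricated.

```shell
# Dry-run equivalent of the dcli invocations above: one command per node.
printf 'exacc-node1\nexacc-node2\n' > ./dbs_group   # fabricated node list
while read -r node; do
    # dcli would execute this over ssh; here we only print it.
    echo "ssh -l opc $node sudo systemctl stop crond.service"
done < ./dbs_group
rm -f ./dbs_group
```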

5. Start a root-user command shell:

$ sudo su -

6. Use the following command to start the upgrade:


# nohup dbaascli grid upgrade --version 19.0.0.0.0 &

7. You can see the output in real time by running the following command:

# tail -f nohup.out

The output will be similar to:

DBAAS CLI version 19.1.1.1.0

Executing command grid upgrade --version 19.0.0.0.0

Releasing locks.

INFO: Releasing ohome lock.

INFO: Released ohome lock successfully...

INFO: Releasing provisioning lock.

INFO: Released provisioning lock successfully.

Released locks.

-----------------

Grid Upgrade Successful

For troubleshooting, please refer to section "Troubleshooting a Failed Grid Infrastructure Upgrade".

NOTE: After completing the upgrade, Oracle Clusterware restores resources (e.g. database, instances, services) to the same state they were in when clusterware
stopped during the upgrade.

Step 2.2 - Verify the upgrade


Check that the Grid Infrastructure is running on all of your database nodes.

1. Connect to a compute node as the opc user.


2. Start a root-user command shell:

$ sudo su -

3. Use the following command to display information about the status of the cluster:

# $(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2)/bin/crsctl check cluster -all

The output will be similar to:

*****************************************************************

exacc-node1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

*****************************************************************

exacc-node2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

***************************************************************

4. Check that the running CRS version matches your expectation after the upgrade:

# $(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2)/bin/crsctl query crs activeversion

The output will be similar to:

Oracle Clusterware active version on the cluster is [19.0.0.0.0]

5. Use crsctl command to obtain the configuration information of all the databases registered with CRS:

# $(grep ^crs_home /etc/oracle/olr.loc | cut -d= -f2)/bin/crsctl stat res -v -w "TYPE = ora.database.type" | egrep 'NAME=|^STATE=|TARGET=|TARGET_SERVER=' | sed '/NAME/ i\\'

The output will be similar to:

NAME=ora.<db_unique_name_1>.db

STATE=ONLINE on exacc-node1

TARGET=ONLINE


TARGET_SERVER=exacc-node1

STATE=ONLINE on exacc-node2

TARGET=ONLINE

TARGET_SERVER=exacc-node2

NAME=ora.<db_unique_name_2>.db

STATE=ONLINE on exacc-node1

TARGET=ONLINE

TARGET_SERVER=exacc-node1

STATE=ONLINE on exacc-node2

TARGET=ONLINE

TARGET_SERVER=exacc-node2

...

6. Use crsctl command to obtain the configuration information of all the resources registered with CRS and their status:

# /u01/app/18.1.0.0/grid/bin/crsctl stat res -t

For troubleshooting and further investigation, please see Clusterware Administration and Deployment Guide.
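The attribute filter used in step 5 can be examined on canned `crsctl stat res -v` text. The sketch keeps only the egrep stage (the trailing sed merely inserts a blank line before each NAME entry); the resource and attribute lines below are fabricated.

```shell
# Keep only the NAME/STATE/TARGET/TARGET_SERVER attributes, as in step 5.
cat <<'EOF' | egrep 'NAME=|^STATE=|TARGET=|TARGET_SERVER='
NAME=ora.db1.db
DEBUG=0
STATE=ONLINE on exacc-node1
TARGET=ONLINE
TARGET_SERVER=exacc-node1
EOF
```

Four of the five sample lines survive the filter; attributes such as DEBUG are dropped, leaving the per-database state summary shown earlier.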

Step 2.3 - Apply 19c OneOff Patches to the Grid Infrastructure Home
If there are one-offs that need to be applied to the environment and they are approved by Oracle Support, then apply them. Always consult the specific patch README for
current instructions.

NOTE: Grid Infrastructure one-off patch [31561819] needs to be applied to address a 19c database downgrade issue affecting releases 11.2 and 12.1. Apply the one-off
patch if the bug fix is not included in your Grid Infrastructure software home release update.

Step 2.4 - Perform DBFS Required Updates (DBFS only)


Execute the following command to confirm if DBFS is in place.

1. Connect to a compute node as the opc user.


2. Start a grid-user command shell:

$ sudo su - grid

3. Use the following command to verify if any DBFS resource is in place:

$ crsctl stat res -t |grep dbfs

If you find any resource, DBFS is in use and further actions are needed. Review section "Steps to Perform If Grid Home or Database Home Changes" in Document
1054431.1, as the shell script used to mount the DBFS filesystem may be located in the original Grid Infrastructure home and needs to be relocated.

Post Upgrade

Step 3.1 - Manual post-upgrade tasks


Execute the following manual post tasks:

1. Connect to a compute node as the opc user.


2. Start the cron service:

$ dcli -l opc -g ~/dbs_group sudo systemctl start crond.service

To confirm that cron is running on all database nodes, verify that there is one line for each node in the output of the following command:

$ dcli -l opc -g ~/dbs_group sudo systemctl status crond.service |grep "(running)"

The output will be similar to:

exacc-node1: Active: active (running) since <date> <time>; 25s ago

exacc-node2: Active: active (running) since <date> <time>; 25s ago

...

Step 3.2 – Synchronize Grid Infrastructure Version in Database Cloud Service Console using REST API
After a successful upgrade of the Grid Infrastructure, the software version information in the Database Cloud Service Console needs to be updated.
Follow the guidelines provided in the online documentation to synchronize the Grid Infrastructure version using the REST API.


NOTE:

Grid Version should contain 4 numbers separated by dots.

Valid GridVersion key value is: 19.0.0.0

The "action": "syncproperties" key/value is mandatory, and the other keys are optional.

Only the GridVersion key value needs to be updated after the GI upgrade.
The update should be verified in the Database Cloud Service Console by checking the list of available patches for 'Exadata Grid' under the Administration tile for DB
Services. The list should reflect any available patches for 19c Grid Infrastructure.

After the version information is updated, Database Cloud Service Console can be used to apply Exadata Grid PSUs for 19c Grid Infrastructure.
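The payload for the sync call can be sketched as below. The endpoint URL, authentication, and any optional keys are environment-specific and deliberately omitted; only the two keys documented in the note above appear, and the file name is arbitrary.

```shell
# Write a minimal sync payload containing only the documented keys.
cat > /tmp/sync_gi.json <<'EOF'
{
  "action": "syncproperties",
  "GridVersion": "19.0.0.0"
}
EOF
# Sanity-check that both documented keys are present in the payload.
grep -c '"action"\|"GridVersion"' /tmp/sync_gi.json
```

This file would then be posted to the REST endpoint described in the online documentation for your environment.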

Step 3.3 - Run Exachk


The Grid Infrastructure upgrade to 19c is now complete. Run the latest release of Exachk to validate that software, hardware, firmware, configuration, and Oracle Database
19c parameters meet best practices, and resolve any issues identified by Exachk before proceeding.
Review Document 1070954.1 and Document 2550798.1 for details.

Step 3.4 - Remove Oracle Grid Infrastructure Software if no fallback required


After the upgrade is complete and the databases and applications have been validated and in use for some time, the 12.2.0.1 or 18c Grid Infrastructure home can be removed
using the deinstall tool. Keep in mind that downgrade is not possible after removing the old Grid Infrastructure home.

Run the following commands on the first database server only. The deinstall tool performs the deinstallation on all database servers. Refer to the Oracle Grid Infrastructure
Installation Guide for 12c Release 2 or 18c for additional details on the deinstall tool.

1. Connect to a compute node as the opc user.


2. Start a grid-user command shell:

$ sudo su - grid

3. Before running the deinstall tool to remove the old grid homes, run deinstall as grid user with the -checkonly option to verify the actions it will perform:

$ export ORACLE_HOME=/<mount_point>/app/<gi_version>/grid

$ cd $ORACLE_HOME/deinstall

$ ./deinstall -checkonly

4. Ensure the following:

a) The Oracle Home selected for deinstall is the old 12.2 or 18c home.

b) The home is not a configured Grid Infrastructure home.

c) ASM is not detected in the Oracle Home.

5. Execute the following command as the opc user on only one database node:

$ exit

$ dcli -l opc -g ~/dbs_group sudo chown -R grid:oinstall /<mount_point>/app/<gi_version>/grid

6. To deinstall Grid Infrastructure, execute the following command as the grid user on only one database node:

$ sudo su - grid

$ export ORACLE_HOME=/<mount_point>/app/<gi_version>/grid

$ cd $ORACLE_HOME/deinstall

$ ./deinstall

The output will be similar to:

####################### DECONFIG CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u02/app/19.0.0.0/gridHome2

The following nodes are part of this cluster: exacc-node1,exacc-node2

Active Remote Nodes are exacc-node2

The cluster node(s) on which the Oracle home deinstallation will be performed are: exacc-node1,exacc-node2

Oracle Home selected for deinstall is: /u01/app/18.1.0.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The home being deconfigured is NOT a configured Grid Infrastructure home (/u02/app/19.0.0.0/gridHome2)

ASM was not detected in the Oracle Home

Oracle Grid Management database was not found in this Grid Infrastructure home

Do you want to continue (y - yes, n - no)? [n]: y

...

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################

Successfully detached Oracle home '/u01/app/18.1.0.0/grid' from the central inventory on the local node.

Failed to delete directory '/u01/app/18.1.0.0/grid' on the local node.

Successfully detached Oracle home '/u01/app/18.1.0.0/grid' from the central inventory on the remote nodes 'exacc-node2'.

Failed to delete directory '/u01/app/18.1.0.0/grid' on the remote nodes 'exacc-node2'.

Failed to delete directory '/u01/app/grid' on the remote nodes 'exacc-node2'.

Oracle Universal Installer cleanup completed with errors.

Review the permissions and contents of '/u01/app/grid' on node(s) 'exacc-node1,exacc-node2'.

If there are no Oracle home(s) associated with '/u01/app/grid', manually delete '/u01/app/grid' and its contents.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

Issues while removing directories are expected due to lack of privileges; just make sure that the Oracle home is successfully detached from the central inventory on the local and remote nodes.
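One way to confirm the detach succeeded is to check that the old home no longer appears in the central inventory (inventory.xml under the oraInventory location shown in the summary above, i.e. /u01/app/oraInventory/ContentsXML/inventory.xml). The sketch below uses a sample inventory file so it runs anywhere; the 18c path is the example home from the output above.

```shell
# Illustration with a sample inventory file; on a real node, point inv at
# /u01/app/oraInventory/ContentsXML/inventory.xml instead.
inv=$(mktemp)
cat > "$inv" <<'EOF'
<INVENTORY>
  <HOME NAME="OraGI19Home1" LOC="/u02/app/19.0.0.0/gridHome2"/>
</INVENTORY>
EOF
if ! grep -q '/u01/app/18.1.0.0/grid' "$inv"; then
  echo "old home detached from inventory"
fi
rm -f "$inv"
```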

If you are not immediately deinstalling the previous Grid Infrastructure, rename the old Grid Home directory on all nodes so that operators cannot mistakenly execute crsctl commands from the wrong Grid Infrastructure home.
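The rename itself is a single mv per node, run as root (for example via dcli, as used earlier in this note). The self-contained sketch below uses a scratch directory in place of the real old home (e.g. /u01/app/18.1.0.0/grid); the ".retired" suffix is just an illustrative naming choice.

```shell
# A scratch directory stands in for the old Grid Home so the snippet runs
# anywhere; on a real system substitute the actual old home path and run the
# mv as root on every node.
old_home="$(mktemp -d)/18.1.0.0/grid"
mkdir -p "$old_home"
mv "$old_home" "${old_home}.retired"
ls -d "${old_home}.retired" >/dev/null && echo "renamed"
```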

Step 3.5 - Advance ASM Compatible Diskgroup Attribute


As a highly recommended best practice, and in order to create new databases with the password file stored in an ASM diskgroup or to use ACFS, advance the COMPATIBLE.ASM and COMPATIBLE.ADVM attributes of your diskgroups to the Oracle ASM software version in use. Do not execute this step until you are committed to staying on Grid Infrastructure 19c; after advancing these attributes, downgrade is not possible.
Your diskgroup names may differ, so check your environment; the example below assumes DATAC1, RECOC1, ADG1C1, and ADG2C1. Execute the following commands as the grid user:

1. Connect to a compute node as the opc user.


2. Start a grid-user command shell:

$ sudo su - grid

3. Connect to the ASM instance as sysasm:

$ sqlplus / as sysasm

4. Use the following SQL query to display the diskgroups:

SQL> SELECT dg.name AS diskgroup, SUBSTR(a.name,1,18) AS name, SUBSTR(a.value,1,24) AS value

FROM V$ASM_DISKGROUP dg, V$ASM_ATTRIBUTE a

WHERE a.name like 'compatible.a%'

AND dg.group_number = a.group_number;

The output will be similar to:

DISKGROUP NAME VALUE

------------------------------ ---------------------------------------- ------------------------

DATAC1 compatible.asm 18.0.0.0.0

DATAC1 compatible.advm 18.0.0.0

RECOC1 compatible.asm 18.0.0.0.0

RECOC1 compatible.advm 18.0.0.0.0

ADG1C1 compatible.asm 12.1.0.2.0

ADG1C1 compatible.advm 12.1.0.2.0

ADG2C1 compatible.asm 12.1.0.2.0

ADG2C1 compatible.advm 12.1.0.2.0

5. Advance COMPATIBLE.ASM diskgroup attribute:

SQL> ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP ADG1C1 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP ADG2C1 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

6. Advance COMPATIBLE.ADVM diskgroup attribute:

SQL> ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP ADG1C1 SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';

SQL> ALTER DISKGROUP ADG2C1 SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';
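With several diskgroups, the ALTER statements in steps 5 and 6 can be generated in a loop rather than typed one by one. The sketch below emits the statements to standard output; the diskgroup names are this note's examples, so substitute the names returned by your own V$ASM_DISKGROUP query.

```shell
# Emit the ALTER DISKGROUP statements for both compatibility attributes on
# every diskgroup; redirect the output to a .sql file and run it in the
# sysasm session shown above.
for dg in DATAC1 RECOC1 ADG1C1 ADG2C1; do
  for attr in compatible.asm compatible.advm; do
    echo "ALTER DISKGROUP ${dg} SET ATTRIBUTE '${attr}' = '19.0.0.0.0';"
  done
done
```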

7. Use the following SQL query to display the diskgroups compatible version:

SQL> SELECT dg.name AS diskgroup, SUBSTR(a.name,1,18) AS name, SUBSTR(a.value,1,24) AS value

FROM V$ASM_DISKGROUP dg, V$ASM_ATTRIBUTE a

WHERE a.name like 'compatible.a%'

AND dg.group_number = a.group_number;

The output will be similar to:

DISKGROUP NAME VALUE

------------------------------ ---------------------------------------- ------------------------

DATAC1 compatible.asm 19.0.0.0.0

DATAC1 compatible.advm 19.0.0.0.0

RECOC1 compatible.asm 19.0.0.0.0

RECOC1 compatible.advm 19.0.0.0.0

ADG1C1 compatible.asm 19.0.0.0.0

ADG1C1 compatible.advm 19.0.0.0.0

ADG2C1 compatible.asm 19.0.0.0.0

ADG2C1 compatible.advm 19.0.0.0.0

Troubleshooting a Failed Grid Infrastructure Upgrade


Because the new software release is installed out of place (in a new home), a failed installation should not impact availability: failed installations can easily be rolled back and restarted. If you face any issues during installation, setup, or configuration of the environment, the following items summarize the most relevant troubleshooting documentation:

• Log locations: for detailed error messages, refer to the logs under /var/opt/oracle/log/giUpgrade/ and /home/opc/.pilotBase/logs on the node from which dbaascli was executed.

• Prerequisite fails with Unexpected Error related to cprops and giImageDownload: if the GI upgrade prerequisites command fails immediately with an 'Unexpected Error' showing 'cprops' and 'giImageDownload' errors, apply the following workaround.

Use the [--containerUrl] parameter with the GI upgrade prerequisites command, e.g.:

# dbaascli grid upgrade --version 19.0.0.0.0 --containerUrl <OSS_Container_URL> --executePrereqs

Use the <OSS_Container_URL> value defined in the configuration file /var/opt/oracle/exapatch/exadbcpatch.cfg.

• Resuming cron service after a failed upgrade: a failed upgrade will likely still have CRS and database instances running on all but one node. Connect to each node as root to verify whether the crsd.bin process is running. If it is running, restart cron while the failure is investigated so that backups can resume:

$ ssh -i key.ppk -l opc IP_ADDRESS_OF_DATABASE_NODE

$ sudo su -

# ps -ef |grep crsd | grep -v grep

root 99047 1 1 17:30 ? 00:02:29 /<filesystem>/app/<gi_version>/grid/bin/crsd.bin reboot

# systemctl start crond.service
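As an aside on the `grep -v grep` step above: bracketing the first character of the pattern (`[c]rsd.bin`) achieves the same effect in a single command, because the grep process's own command line no longer matches the pattern. A self-contained illustration on simulated ps output:

```shell
# The second sample line simulates the grep process's own entry in ps output;
# only the real crsd.bin line survives the match, with no "grep -v grep".
printf 'root 99047 1 crsd.bin reboot\nopc 12345 1 grep [c]rsd.bin\n' \
  | grep '[c]rsd.bin'
```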

• Resuming a failed upgrade: if the dbaascli grid upgrade fails, review the logs to find and fix the issue, then resume the upgrade by executing "dbaascli grid upgrade --version 19.0.0.0.0 --resume".

• Completing a failed upgrade: please refer to section "Completing Failed or Interrupted Installations and Upgrades" of the document "Grid Infrastructure Installation
and Upgrade Guide for Linux". After completing the upgrade, contact Oracle Support to update your /var/opt/oracle/creg/grid/grid.ini file.

• Downgrading: If a downgrade or revert is needed due to a failed 19c upgrade, work with Oracle support to revert grid infrastructure back to the original release.

REFERENCES
NOTE:1070954.1 - Oracle Exadata Database Machine EXAchk

NOTE:2550798.1 - Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk

NOTE:1054431.1 - Configuring DBFS on Oracle Exadata Database Machine

NOTE:2495335.1 - Updating the Cloud Tooling for Exadata Cloud Environment dbaastools_exa

NOTE:2333222.1 - Exadata Cloud Service Software Versions

NOTE:2709284.1 - Upgrading to 19c Oracle Database on Gen 1 Exadata Cloud at Customer

NOTE:2391164.1 - How to update the Exadata Image (OS) in Exadata Cloud at Customer

NOTE:401749.1 - Oracle Linux: Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration

NOTE:2522950.2 - Exadata Cloud Support Information Center

NOTE:361468.1 - HugePages on Oracle Linux 64-bit

Related Products

Oracle Cloud > Oracle Infrastructure Cloud > Oracle Cloud at Customer > Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) > Patching and Tool Issues (DomU) > Upgrade DB, GI, OS
