Grid Infrastructure Installation and Upgrade Guide Linux
E96272-18
Contributing Authors: Aparna Kamath, Douglas Williams, Mark Bauer, Prakash Jashnani, Janet Stern
Contributors: Markus Michalewicz, Aneesh Khandelwal, James Williams, Ian Cookson, Rajesh Dasari, Pallavi
Kamath, Donald Graves, Jonathan Creighton, Angad Gokakkar, Srinivas Poovala, Prasad Bagal, Balaji
Pagadala, Neha Avasthy, Mark Scardina, Hanlin Chien, Anil Nair, Apparsamy Perumal, Samarjeet Tomar,
Akshay Shah, Kevin Jernigan, Eric Belden, Mark Fuller, Barbara Glover, Saar Maoz, Binoy Sukumaran
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software" or "commercial computer software documentation" pursuant to the
applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use,
reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or
adaptation of i) Oracle programs (including any operating system, integrated software, any programs
embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle
computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the
license contained in the applicable contract. The terms governing the U.S. Government’s use of Oracle cloud
services are defined by the applicable contract for such services. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience xvii
Documentation Accessibility xvii
Set Up Java Access Bridge to Implement Java Accessibility xviii
Related Documentation xviii
Conventions xviii
Configure Additional Operating System Features 3-7
Installing Oracle Messaging Gateway 4-32
Installation Requirements for Programming Environments for Linux 4-32
Installation Requirements for Programming Environments for Linux x86-64 4-33
Installation Requirements for Programming Environments for IBM: Linux on
System z 4-33
Installation Requirements for Web Browsers 4-34
Checking Kernel and Package Requirements for Linux 4-34
Setting Clock Source for VMs on Linux x86-64 4-35
Installing the cvuqdisk RPM for Linux 4-35
Reviewing HugePages Memory Allocation 4-36
Disabling Transparent HugePages 4-37
Enabling the Name Service Cache Daemon 4-38
Verifying the Disk I/O Scheduler on Linux 4-39
Using Automatic SSH Configuration During Installation 4-40
Setting Network Time Protocol for Cluster Time Synchronization 4-40
Domain Delegation to Grid Naming Service 5-16
Choosing a Subdomain Name for Use with Grid Naming Service 5-17
Configuring DNS for Cluster Domain Delegation to Grid Naming Service 5-17
Configuration Requirements for Oracle Flex Clusters 5-18
Understanding Oracle Flex Clusters 5-19
About Oracle Flex ASM Clusters Networks 5-19
General Requirements for Oracle Flex Cluster Configuration 5-20
Oracle Flex Cluster DHCP-Assigned Virtual IP (VIP) Addresses 5-21
Oracle Flex Cluster Manually-Assigned Addresses 5-21
Grid Naming Service Cluster Configuration Example 5-22
Manual IP Address Configuration Example 5-23
Network Interface Configuration Options 5-24
Multiple Private Interconnects and Oracle Linux 5-25
Creating the OSRACDBA Group for Database Installations 6-15
Creating Operating System Oracle Installation User Accounts 6-15
Creating an Oracle Software Owner User 6-16
Modifying Oracle Owner User Groups 6-16
Identifying Existing User and Group IDs 6-17
Creating Identical Database Users and Groups on Other Cluster Nodes 6-17
Example of Creating Minimal Groups, Users, and Paths 6-18
Example of Creating Role-allocated Groups, Users, and Paths 6-20
Configuring Grid Infrastructure Software Owner User Environments 6-23
Environment Requirements for Oracle Software Owners 6-23
Procedure for Configuring Oracle Software Owner Environments 6-24
Checking Resource Limits for Oracle Software Installation Users 6-26
Setting Remote Display and X11 Forwarding Configuration 6-28
Preventing Installation Errors Caused by Terminal Output Commands 6-29
Enabling Intelligent Platform Management Interface (IPMI) 6-29
Requirements for Enabling IPMI 6-30
Configuring the IPMI Management Network 6-31
Configuring the Open IPMI Driver 6-31
Configuring the BMC 6-32
Example of BMC Configuration Using IPMItool 6-33
About the Grid Infrastructure Management Repository 8-10
Using an Existing Oracle ASM Disk Group 8-11
About Upgrading Existing Oracle Automatic Storage Management Instances 8-11
Selecting Disks to use with Oracle ASM Disk Groups 8-12
Specifying the Oracle ASM Disk Discovery String 8-12
Creating Files on a NAS Device for Use with Oracle Automatic Storage Management 8-13
Configuring Storage Device Path Persistence Using Oracle ASMFD 8-14
About Oracle ASM with Oracle ASM Filter Driver 8-14
Using Disk Groups with Oracle Database Files on Oracle ASM 8-15
Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM 8-15
Creating Disk Groups for Oracle Database Data Files 8-16
Creating Directories for Oracle Database Files 8-16
Configuring File System Storage for Oracle Database 8-17
Configuring NFS Buffer Size Parameters for Oracle Database 8-18
Checking TCP Network Protocol Buffer for Direct NFS Client 8-18
Creating an oranfstab File for Direct NFS Client 8-19
Enabling and Disabling Direct NFS Client Control of NFS 8-22
Enabling Hybrid Columnar Compression on Direct NFS Client 8-22
Creating Member Cluster Manifest File for Oracle Member Clusters 8-22
Configuring Oracle Automatic Storage Management Cluster File System 8-24
Checking OCFS2 Version Manually 8-25
Configuring the Software Binaries Using a Response File 9-31
Setting Ping Targets for Network Checks 9-32
About Deploying Oracle Grid Infrastructure Using Oracle Fleet Patching and Provisioning 9-32
Confirming Oracle Clusterware Function 9-34
Confirming Oracle ASM Function for Oracle Clusterware Files 9-35
Understanding Offline Processes in Oracle Grid Infrastructure 9-35
About the CVU Upgrade Validation Command Options 11-12
Example of Verifying System Upgrade Readiness for Grid Infrastructure 11-13
Using Dry-Run Upgrade Mode to Check System Upgrade Readiness 11-13
About Oracle Grid Infrastructure Dry-Run Upgrade Mode 11-13
Performing Dry-Run Upgrade Using Oracle Universal Installer 11-14
Understanding Rolling Upgrades Using Batches 11-16
Performing Rolling Upgrade of Oracle Grid Infrastructure 11-16
Upgrading Oracle Grid Infrastructure from an Earlier Release 11-17
Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable 11-19
Joining Inaccessible Nodes After Forcing an Upgrade 11-19
Changing the First Node for Install and Upgrade 11-20
About Upgrading Oracle Grid Infrastructure Using Oracle Fleet Patching and Provisioning 11-20
Applying Patches to Oracle Grid Infrastructure 11-21
About Individual Oracle Grid Infrastructure Patches 11-21
About Oracle Grid Infrastructure Software Patch Levels 11-22
About Zero-Downtime Oracle Grid Infrastructure Patching 11-22
Patching Oracle Grid Infrastructure 11-23
Applying Patches Using Zero-Downtime Oracle Grid Infrastructure Patching 11-23
Applying Patches During an Oracle Grid Infrastructure Installation or Upgrade 11-25
Applying Patches After an Oracle Grid Infrastructure Installation or Upgrade 11-25
Applying Patches when Oracle Clusterware Fails to Start 11-26
Patching and Switching Oracle Grid Infrastructure Homes 11-27
Updating Oracle Enterprise Manager Cloud Control Target Parameters 11-28
Updating the Enterprise Manager Cloud Control Target After Upgrades 11-28
Updating the Enterprise Manager Agent Base Directory After Upgrades 11-29
Registering Resources with Oracle Enterprise Manager After Upgrades 11-29
Unlocking and Deinstalling the Previous Release Grid Home 11-30
Checking Cluster Health Monitor Repository Size After Upgrading 11-31
Downgrading Oracle Clusterware to an Earlier Release 11-31
Options for Oracle Grid Infrastructure Downgrades 11-32
Restrictions for Oracle Grid Infrastructure Downgrades 11-33
Downgrading Oracle Clusterware to 18c 11-33
Downgrading Oracle Standalone Cluster to 18c 11-34
Downgrading Oracle Domain Services Cluster to 18c 11-35
Downgrading Oracle Member Cluster to 18c 11-38
Downgrading Oracle Grid Infrastructure to 18c when Upgrade Fails 11-40
Downgrading Oracle Clusterware to 12c Release 2 (12.2) 11-40
Downgrading Oracle Standalone Cluster to 12c Release 2 (12.2) 11-41
Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2) 11-42
Downgrading Oracle Member Cluster to 12c Release 2 (12.2) 11-45
Downgrading Oracle Grid Infrastructure to 12c Release 2 (12.2) when Upgrade
Fails 11-47
Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1) 11-48
Downgrading to Oracle Grid Infrastructure 11g Release 2 (11.2) 11-50
Downgrading Oracle Grid Infrastructure Using Online Abort Upgrade 11-51
Completing Failed or Interrupted Installations and Upgrades 11-52
Completing Failed Installations and Upgrades 11-53
Continuing Incomplete Upgrade of First Node 11-53
Continuing Incomplete Upgrades on Remote Nodes 11-54
Continuing Incomplete Installation on First Node 11-54
Continuing Incomplete Installation on Remote Nodes 11-55
Converting to Oracle Extended Cluster After Upgrading Oracle Grid Infrastructure 11-55
Creating a Password Response File A-14
Running Postinstallation Configuration Using a Password Response File A-15
Index
List of Figures
9-1 Oracle Cluster Domain 9-6
List of Tables
1-1 Server Hardware Checklist for Oracle Grid Infrastructure 1-1
1-2 Operating System General Checklist for Oracle Grid Infrastructure and Oracle RAC 1-3
1-3 Server Configuration Checklist for Oracle Grid Infrastructure 1-4
1-4 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC 1-5
1-5 User Environment Configuration for Oracle Grid Infrastructure 1-7
1-6 Oracle Grid Infrastructure Storage Configuration Checks 1-9
1-7 Oracle Grid Infrastructure Cluster Deployment Checklist 1-10
1-8 Oracle Universal Installer Checklist for Oracle Grid Infrastructure Installation 1-11
4-1 x86-64 Oracle Linux 8 Minimum Operating System Requirements 4-12
4-2 x86-64 Oracle Linux 7 Minimum Operating System Requirements 4-14
4-3 x86-64 Red Hat Enterprise Linux 8 Minimum Operating System Requirements 4-17
4-4 x86-64 Red Hat Enterprise Linux 7 Minimum Operating System Requirements 4-19
4-5 x86-64 SUSE Linux Enterprise Server 12 Minimum Operating System Requirements 4-21
4-6 x86-64 SUSE Linux Enterprise Server 15 Minimum Operating System Requirements 4-23
4-7 Red Hat Enterprise Linux 8 Minimum Operating System Requirements 4-26
4-8 Red Hat Enterprise Linux 7 Minimum Operating System Requirements 4-28
4-9 SUSE Linux Enterprise Server 12 Minimum Operating System Requirements 4-29
4-10 Requirements for Programming Environments for Linux X86–64 4-33
4-11 Requirements for Programming Environments for IBM: Linux on System z 4-34
5-1 Grid Naming Service Cluster Configuration Example 5-22
5-2 Manual Network Configuration Example 5-23
6-1 Installation Owner Resource Limit Recommended Ranges 6-27
7-1 Supported Storage Options for Oracle Grid Infrastructure 7-2
7-2 Platforms That Support Oracle ACFS and Oracle ADVM 7-3
8-1 Minimum Available Space Requirements for Oracle Standalone Cluster With GIMR Configuration 8-8
8-2 Minimum Available Space Requirements for Oracle Standalone Cluster Without GIMR
Configuration 8-8
8-3 Minimum Available Space Requirements for Oracle Member Cluster with Local ASM 8-9
8-4 Minimum Available Space Requirements for Oracle Domain Services Cluster 8-9
9-1 Image-Creation Options for Setup Wizard 9-3
9-2 Oracle ASM Disk Group Redundancy Levels for Oracle Extended Clusters with 2 Data Sites 9-8
11-1 Upgrade Checklist for Oracle Grid Infrastructure Installation 11-6
A-1 Response Files for Oracle Database and Oracle Grid Infrastructure A-3
B-1 Device Name Formats Based on Disk Type B-9
B-2 Disk Management Tasks Using ORACLEASM B-11
B-3 Minimum Operating System Resource Parameter Settings B-19
B-4 Commands to Display Kernel Parameter Values B-20
C-1 Examples of OFA-Compliant Oracle Base Directory Names C-4
C-2 Optimal Flexible Architecture Hierarchical File Path Examples C-6
Preface
This guide explains how to configure a server in preparation for installing and configuring an
Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage
Management).
It also explains how to configure a server and storage in preparation for an Oracle Real
Application Clusters (Oracle RAC) installation.
• Audience
• Documentation Accessibility
• Set Up Java Access Bridge to Implement Java Accessibility
Install Java Access Bridge so that assistive technologies on Microsoft Windows systems
can use the Java Accessibility API.
• Related Documentation
• Conventions
Audience
This guide provides configuration information for network and system administrators, and
database installation information for database administrators (DBAs) who install and
configure Oracle Clusterware and Oracle Automatic Storage Management in an Oracle Grid
Infrastructure for a cluster installation.
If you have a specialized system role and intend to install Oracle RAC, then this book is intended for the system administrators, network administrators, or storage administrators who configure a system in preparation for an Oracle Grid Infrastructure for a cluster installation, and who complete all configuration tasks that require operating system root privileges. After Oracle Grid Infrastructure installation and configuration are completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root during an Oracle RAC
installation.
This guide assumes that you are familiar with Oracle Database concepts.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documentation
For more information, see the following Oracle resources:
Related Topics
• Oracle Real Application Clusters Installation Guide for Linux and UNIX
• Oracle Database Installation Guide
• Oracle Clusterware Administration and Deployment Guide
• Oracle Real Application Clusters Administration and Deployment Guide
• Oracle Database Concepts
• Oracle Database New Features Guide
• Oracle Database Licensing Information
• Oracle Database Release Notes
• Oracle Database Examples Installation Guide
• Oracle Database Administrator's Reference for Linux and UNIX-Based Operating
Systems
• Oracle Automatic Storage Management Administrator's Guide
• Oracle Database Upgrade Guide
• Oracle Database 2 Day DBA
• Oracle Application Express Installation Guide
Conventions
The following text conventions are used in this document:
Convention Meaning
boldface Boldface type indicates graphical user interface elements associated
with an action, or terms defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or placeholder variables for
which you supply particular values.
monospace Monospace type indicates commands within a paragraph, URLs, code
in examples, text that appears on the screen, or text that you enter.
New Features
Review new features available with Oracle Grid Infrastructure 19c.
• Support for Dry-Run Validation of Oracle Clusterware Upgrade
• Multiple ASMB
• Parity Protected Files
• Secure Cluster Communication
• Zero-Downtime Oracle Grid Infrastructure Patching
• Zero-Downtime Oracle Grid Infrastructure Patching Using Oracle Fleet Patching
and Provisioning
• Resupport of Direct File Placement for OCR and Voting Disks
• Optional Install for the Grid Infrastructure Management Repository
Related Topics
• Oracle Database New Features Guide
Related Topics
• Using Dry-Run Upgrade Mode to Check System Upgrade Readiness
Use dry-run upgrade mode of Oracle Grid Infrastructure installation wizard,
gridSetup.sh, to check readiness for Oracle Clusterware upgrades.
Multiple ASMB
If, for example, the +ASM1 instance has the disk group dg1 mounted but not dg2, and +ASM2 has dg2 mounted but not dg1, a database can now use both dg1 and dg2 by connecting to both Oracle ASM instances simultaneously. Instead of a single ASMB process, the database can run multiple ASMBn processes. This feature increases the availability of the Oracle Real Application Clusters (Oracle RAC) stack by allowing a database to use multiple disk groups even when a given Oracle ASM instance does not have all of them mounted.
Related Topics
• Zero-Downtime Oracle Grid Infrastructure Patching
Optional Install for the Grid Infrastructure Management Repository
Starting with this release, installing the Grid Infrastructure Management Repository (GIMR) is optional for Oracle Standalone Cluster deployments. Having an optional installation for the GIMR allows for more flexible storage space management and faster deployment, especially during the installation of test and development systems.
Related Topics
• About the Grid Infrastructure Management Repository
Every Oracle Domain Services Cluster contains a Grid Infrastructure Management
Repository (GIMR), but GIMR configuration is optional for Oracle Standalone Cluster.
Deprecated Features
Review features that are deprecated starting with Oracle Grid Infrastructure 19c.
For more information about deprecated features, parameters, and views, refer to Oracle
Database Upgrade Guide.
• Deprecation of Addnode Script
The addnode script is deprecated in Oracle Grid Infrastructure 19c. The functionality of
adding nodes to clusters is available in the installer wizard.
The addnode script may be removed in a future release. Instead of using the addnode
script (addnode.sh or addnode.bat), add nodes by using the installer wizard. The installer
wizard provides many enhancements over the addnode script. Using the installer wizard
simplifies management by consolidating all software lifecycle operations into a single tool.
• Deprecation of clone.pl Script
The clone.pl script is deprecated in Oracle Database 19c. The functionality of
performing a software-only installation, using the gold image, is available in the installer
wizard.
The clone.pl script may be removed in a future release. Instead of using the clone.pl
script, Oracle recommends that you install the extracted gold image as a home, using the
installer wizard.
• Deprecation of Cluster Domain - Member Clusters
Starting with Oracle Grid Infrastructure 19c (19.5), Member Clusters, which are part of
the Oracle Cluster Domain architecture, are deprecated.
Deprecating certain clustering features with limited adoption allows Oracle to focus on
improving core scaling, availability, and manageability across all features and
functionality. Oracle Cluster Domains consist of a Domain Services Cluster (DSC) and
Member Clusters. The deprecation of Member Clusters affects the clustering used with
the DSC, but not its ability to host services for other production clusters. Oracle
recommends that you align your next software or hardware upgrade with the transition off
Cluster Domain - Member Clusters.
Related Topics
• Oracle Database Upgrade Guide
Desupported Features
Review features that are desupported with Grid Infrastructure 19c.
For more information about desupported features, parameters, and views, refer to Oracle
Database Upgrade Guide.
• Desupport of Leaf Nodes in Flex Cluster Architecture
Leaf nodes are no longer supported in the Oracle Flex Cluster Architecture in
Oracle Grid Infrastructure 19c.
In Oracle Grid Infrastructure 19c (19.1) and later releases, all nodes in an Oracle
Flex Cluster function as hub nodes. The capabilities that leaf nodes offered in the
original implementation of the Oracle Flex Cluster architecture can be served equally
well by hub nodes. Therefore, leaf nodes are no longer supported.
• Desupport of Oracle Real Application Clusters for Standard Edition 2 (SE2)
Database Edition
Starting with Oracle Database 19c, Oracle Real Application Clusters (Oracle RAC)
is not supported in Oracle Database Standard Edition 2 (SE2).
Upgrading Oracle Database Standard Edition databases that use Oracle Real
Application Clusters (Oracle RAC) functionality from earlier releases to Oracle
Database 19c is not possible. To upgrade those databases to Oracle Database
19c, either remove the Oracle RAC functionality before starting the upgrade, or
upgrade from Oracle Database Standard Edition to Oracle Database Enterprise
Edition. For more information about each step, including how to reconfigure your
system after an upgrade, see My Oracle Support Note 2504078.1: "Desupport of
Oracle Real Application Clusters (RAC) with Oracle Database Standard Edition
19c."
Related Topics
• Oracle Database Upgrade Guide
• My Oracle Support Document 2504078.1
Other Changes
Review other changes for Oracle Grid Infrastructure 19c.
• Rapid Home Provisioning (RHP) Name Change
Starting with Oracle Database 19c and Oracle Grid Infrastructure 19c, Rapid
Home Provisioning is renamed to Fleet Patching and Provisioning (FPP).
• Operating System Package Names
To simplify the installation of operating system packages required for an Oracle
Database and Oracle Grid Infrastructure installation on Linux, starting with 19c,
only the operating system package names will be listed and not the exact package
version. Install or update to the latest version of these packages from the minimum
supported Linux distribution. Only packages that are officially released by Oracle or
your operating system vendor are supported.
1
Oracle Grid Infrastructure Installation
Checklist
Use checklists to plan and carry out Oracle Grid Infrastructure (Oracle Clusterware and
Oracle Automatic Storage Management) installation.
Oracle recommends that you use checklists as part of your installation planning process.
Using this checklist can help you to confirm that your server hardware and configuration meet
minimum requirements for this release, and to ensure you carry out a successful installation.
• Server Hardware Checklist for Oracle Grid Infrastructure
Review server hardware requirements for Oracle Grid Infrastructure installation.
• Operating System Checklist for Oracle Grid Infrastructure and Oracle RAC
Review the checklist for operating system requirements for Oracle Grid Infrastructure
installation.
• Server Configuration Checklist for Oracle Grid Infrastructure
Use this checklist to check minimum server configuration requirements for Oracle Grid
Infrastructure installations.
• Network Checklist for Oracle Grid Infrastructure
Review this network checklist for Oracle Grid Infrastructure installation to ensure that you
have required hardware, names, and addresses for the cluster.
• User Environment Configuration Checklist for Oracle Grid Infrastructure
Use this checklist to plan operating system users, groups, and environments for Oracle
Grid Infrastructure installation.
• Storage Checklist for Oracle Grid Infrastructure
Review the checklist for storage hardware and configuration requirements for Oracle Grid
Infrastructure installation.
• Cluster Deployment Checklist for Oracle Grid Infrastructure
Review the checklist for planning your cluster deployment Oracle Grid Infrastructure
installation.
• Installer Planning Checklist for Oracle Grid Infrastructure
Review the checklist for planning your Oracle Grid Infrastructure installation before
starting Oracle Universal Installer.
Table 1-1 Server Hardware Checklist for Oracle Grid Infrastructure
Check Task
Server make and architecture: Confirm that server makes, models, core architecture, and host bus adaptors (HBA) are supported to run with Oracle Grid Infrastructure and Oracle RAC.
Runlevel: 3 or 5
Server Display Cards: At least 1024 x 768 display resolution for Oracle Universal Installer. Confirm display monitor.
Minimum Random Access Memory (RAM): At least 8 GB RAM for Oracle Grid Infrastructure installations.
Intelligent Platform Management Interface (IPMI): IPMI cards installed and configured, with IPMI administrator account information available to the person running the installation. Ensure baseboard management controller (BMC) interfaces are configured, and have an administration account username and password to provide when prompted during installation.
Table 1-2 Operating System General Checklist for Oracle Grid Infrastructure and
Oracle RAC
Check Task
Operating system general requirements: OpenSSH installed manually, if you do not have it installed already as part of a default Linux installation.
Linux x86-64 operating system requirements: Review the system requirements section for a list of minimum package requirements. Use the same operating system kernel and packages on each cluster member node. (A command for displaying the running kernel appears after this table.) The following kernels are supported:
• Oracle Linux 8.1 with the Unbreakable Enterprise Kernel 6:
5.4.17-2011.0.7.el8uek.x86_64 or later
Oracle Linux 8 with the Red Hat Compatible kernel:
4.18.0-80.el8.x86_64 or later
• Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4:
4.1.12-124.19.2.el7uek.x86_64 or later
Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5:
4.14.35-1818.1.6.el7uek.x86_64 or later
Oracle Linux 7.7 with the Unbreakable Enterprise Kernel 6:
5.4.17-2011.4.4.el7uek.x86_64 or later
Oracle Linux 7.5 with the Red Hat Compatible Kernel:
3.10.0-862.11.6.el7.x86_64 or later
• Red Hat Enterprise Linux 8: 4.18.0-80.el8.x86_64 or later
• Red Hat Enterprise Linux 7.5: 3.10.0-862.11.6.el7.x86_64 or later
• SUSE Linux Enterprise Server 12 SP3: 4.4.162-94.72-default or
later
SUSE Linux Enterprise Server 15: 4.12.14-23-default or later
Review the system requirements section for a list of minimum package
requirements.
IBM: Linux on System z operating system requirements: The following IBM: Linux on System z kernels are supported:
• Red Hat Enterprise Linux 8.3: 4.18.0-240.el8.s390x or later
• Red Hat Enterprise Linux 7.4: 3.10.0-693.el7.s390x or later
• SUSE Linux Enterprise Server 12: 4.4.73-5-default s390x or later
Review the system requirements section for a list of minimum package requirements.
Oracle Database Preinstallation RPM for Oracle Linux: If you use Oracle Linux, then Oracle recommends that you run an Oracle Database preinstallation RPM for your Linux release to configure your operating system for Oracle Database and Oracle Grid Infrastructure installations.
Disable Transparent HugePages: Oracle recommends that you disable Transparent HugePages and use standard HugePages for enhanced performance.
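To compare a cluster node against the kernel releases listed in this table, you can display the running kernel, for example:
# uname -r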
Table 1-3 Server Configuration Checklist for Oracle Grid Infrastructure
Check Task
Disk space allocated to the temporary file system: At least 1 GB of space in the temporary disk space (/tmp) directory.
Swap space allocation relative to RAM: Between 4 GB and 16 GB of RAM: equal to RAM. More than 16 GB of RAM: 16 GB.
Note: If you enable HugePages for your Linux servers, then you should deduct the memory allocated to HugePages from the available RAM before calculating swap space.
HugePages memory allocation: Allocate memory to HugePages large enough for the System Global Areas (SGA) of all databases planned to run on the cluster, and to accommodate the System Global Area for the Grid Infrastructure Management Repository. (An example command for reviewing the current values appears after this table.)
Mount point paths for the software binaries: Oracle recommends that you create an Optimal Flexible Architecture configuration as described in the appendix "Optimal Flexible Architecture" in Oracle Grid Infrastructure Installation and Upgrade Guide for your platform.
Ensure that the Oracle home (the Oracle home path you select for Oracle Database) uses only ASCII characters: The ASCII character restriction includes installation owner user names, which are used as a default for some home paths, as well as other directory names you may select for paths.
Set locale (if needed): Specify the language and the territory, or locale, in which you want to use Oracle components. A locale is a linguistic and cultural environment in which a system or program is running. NLS (National Language Support) parameters determine the locale-specific behavior on both servers and clients. The locale setting of a component determines the language of the user interface of the component, and the globalization behavior, such as date and number formatting.
Set Network Time Protocol for Cluster Time Synchronization: Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. Ensure that you set the time zone synchronization across all cluster nodes using either an operating system configured network time protocol (NTP) or Oracle Cluster Time Synchronization Service.
Check Shared Memory File System Mount: By default, your operating system includes an entry in /etc/fstab to mount /dev/shm. However, if your Cluster Verification Utility (CVU) or Oracle Universal Installer (OUI) checks fail, then ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
• rw and exec permissions set on it
• Without noexec or nosuid set on it
Note: Your operating system usually sets these options as the default permissions. If they are set by the operating system, then they are not listed on the mount options.
Symlinks: Oracle home or Oracle base cannot be symlinks, nor can any of their parent directories, all the way up to the root directory.
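As a quick way to review the current HugePages and swap figures referenced in this table (the values reported are system specific), you can inspect /proc/meminfo, for example:
# grep -i -e hugepages -e swaptotal /proc/meminfo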
Related Topics
• Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration guidelines
created to ensure well-organized Oracle installations, which simplifies administration,
support and maintenance.
Table 1-4 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC
Check Task
Public network hardware:
• Public network switch (redundant switches recommended) connected to a public gateway and to the public interface ports for each cluster member node.
• Ethernet interface card (redundant network cards recommended, bonded as one Ethernet port name).
• The switches and network interfaces must be at least 1 GbE.
• The network protocol is Transmission Control Protocol (TCP) and Internet Protocol (IP).
Private network hardware for the interconnect:
• Private dedicated network switches (redundant switches recommended), connected to the private interface ports for each cluster member node.
Note: If you have more than one private network interface card for each server, then Oracle Clusterware automatically associates these interfaces for the private network using Grid Interprocess Communication (GIPC) and Grid Infrastructure Redundant Interconnect, also known as Cluster High Availability IP (HAIP).
• The switches and network interface adapters must be at least 1 GbE.
• The interconnect must support the user datagram protocol (UDP).
• Jumbo Frames (Ethernet frames greater than 1500 bytes) are not an IEEE standard, but can reduce UDP overhead if properly configured. Oracle recommends the use of Jumbo Frames for interconnects. However, be aware that you must load-test your system, and ensure that they are enabled throughout the stack.
Oracle Flex ASM Network Hardware: Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its own dedicated private networks. Each network can be classified PUBLIC, PRIVATE+ASM, PRIVATE, or ASM. Oracle ASM networks use the TCP protocol.
Cluster Names and Addresses: Determine and configure the following names and addresses for the cluster:
• Cluster name: Decide a name for the cluster, and be prepared to enter it during installation. The cluster name should have the following characteristics:
Globally unique across all hosts, even across different DNS domains.
At least one character long and less than or equal to 15 characters long.
Consist of the same character set used for host names, in accordance with RFC 1123: Hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9). If you use third-party vendor clusterware, then Oracle recommends that you use the vendor cluster name.
• Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS name and fixed address in DNS for the GNS VIP, and configure a subdomain on your DNS delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is mandatory with dynamic public networks (DHCP, autoconfiguration).
• Single Client Access Name (SCAN) and addresses
Using Grid Naming Service Resolution: Do not configure SCAN names and addresses in your DNS. SCAN names are managed by GNS.
Using Manual Configuration and DNS resolution: Configure a SCAN name to resolve to three addresses on the domain name service (DNS).
Node Public, Private, and Virtual IP Names and Addresses: If you are not using GNS, then configure the following for each node:
• Public node name and address, configured in the DNS and in /etc/hosts (for example, node1.example.com, address 192.0.2.10). The public node name should be the primary host name of each node, which is the name displayed by the hostname command.
• Private node address, configured on the private interface for each node.
The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members. Oracle recommends that the network you select for the private network uses an address range defined as private by RFC 1918.
• Public node virtual IP name and address (for example, node1-vip.example.com, address 192.0.2.11).
If you are not using dynamic networks with GNS and subdomain delegation, then determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
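As a minimal illustration of the manual (non-GNS) configuration described in the last row of this table, using the example public and virtual IP values shown there plus a hypothetical RFC 1918 private address, the /etc/hosts entries for one node might look like the following. The public and VIP entries are normally also registered in DNS; the private address is typically resolved only within the cluster.
192.0.2.10     node1.example.com       node1
192.0.2.11     node1-vip.example.com   node1-vip
192.168.0.10   node1-priv.example.com  node1-priv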
Related Topics
• Understanding Oracle Flex Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid
Infrastructure cluster configurations are Oracle Flex Clusters deployments.
Table 1-5 User Environment Configuration for Oracle Grid Infrastructure
Check Task
Review Oracle Inventory (oraInventory) and OINSTALL Group Requirements: The Oracle Inventory directory is the central inventory of Oracle software installed on your system. It should be the primary group for all Oracle software installation owners. Users who have the Oracle Inventory group as their primary group are granted the OINSTALL privilege to read and write to the central inventory.
• If you have an existing installation, then OUI detects the existing oraInventory directory from the /etc/oraInst.loc file, and uses this location.
• If you are installing Oracle software for the first time, then OUI creates an Oracle base and central inventory, and creates an Oracle inventory using information in the following priority:
– In the path indicated in the ORACLE_BASE environment variable set for the installation owner user account.
– In an Optimal Flexible Architecture (OFA) path (u[01–99]/app/owner where owner is the name of the user account running the installation), if that user account has permissions to write to that path.
– In the user home directory, in the path /app/owner, where owner is the name of the user account running the installation.
Ensure that the group designated as the OINSTALL group is available as the primary group for all planned Oracle software installation owners.
Create operating system groups and users for standard or role-allocated system privileges: Create operating system groups and users depending on your security requirements, as described in this installation guide.
Set resource limits settings and other requirements for Oracle software installation owners.
Group and user names must use only ASCII characters.
Note: Do not delete an existing daemon user. If a daemon user has been deleted, then you must add it back.
Unset Oracle Software Environment Variables: If you have an existing Oracle software installation, and you are using the same user to install this installation, then unset the following environment variables: $ORACLE_HOME; $ORA_NLS10; $TNS_ADMIN.
If you have set $ORA_CRS_HOME as an environment variable, then unset it before starting an installation or upgrade. Do not use $ORA_CRS_HOME as a user environment variable, except as directed by Oracle Support.
Configure the Oracle Software Owner Environment: Configure the environment of the oracle or grid user by performing the following tasks:
• Set the default file mode creation mask (umask) to 022 in the shell startup file.
• Set the DISPLAY environment variable.
Determine root privilege delegation option for installation: During installation, you are asked to run configuration scripts as the root user. You can either run these scripts manually as root when prompted, or during installation you can provide configuration information and passwords using a root privilege delegation option.
To run root scripts automatically, select Automatically run configuration scripts during installation. To use the automatic configuration option, the root user credentials for all cluster member nodes must use the same password.
• Use root user credentials
Provide the superuser password for cluster member node servers.
• Use sudo
sudo is a UNIX and Linux utility that allows members of the sudoers list privileges to run individual commands as root. Provide the user name and password of an operating system user that is a member of sudoers, and is authorized to run sudo on each cluster member node.
To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the user name and password when prompted during installation.
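The following is only a sketch of the sudo option described in the preceding row; the grid user name and the unrestricted command list are illustrative, and your security policy may be stricter. A system administrator could add an entry such as the following with visudo:
grid ALL=(root) ALL
During installation, you then supply that user's name and password when the installer prompts for sudo credentials.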
Table 1-6 Oracle Grid Infrastructure Storage Configuration Checks
Check Task
Minimum disk space (local or shared) for Oracle Grid Infrastructure Software:
• At least 12 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches.
At least 10 GB for Oracle Database Enterprise Edition.
• Allocate additional storage space as per your cluster configuration, as described in Oracle Clusterware Storage Space Requirements.
Select Oracle ASM Storage Options: During installation, based on the cluster configuration, you are asked to provide Oracle ASM storage paths for the Oracle Clusterware files. These path locations must be writable by the Oracle Grid Infrastructure installation owner (Grid user). These locations must be shared across all nodes of the cluster on Oracle ASM because the files in the Oracle ASM disk group created during installation must be available to all cluster member nodes.
• For Oracle Standalone Cluster deployment, shared storage, either Oracle ASM or shared file system, is locally mounted on each of the cluster nodes.
• For Oracle Domain Services Cluster deployment, Oracle ASM storage is shared across all nodes, and is available to Oracle Member Clusters.
Oracle Member Cluster for Oracle Databases can either use storage services from the Oracle Domain Services Cluster or local Oracle ASM storage shared across all the nodes.
Oracle Member Cluster for Applications always uses storage services from the Oracle Domain Services Cluster.
Before installing Oracle Member Cluster, create a Member Cluster Manifest file that specifies the storage details.
Voting files are files that Oracle Clusterware uses to verify cluster node membership and status. Oracle Cluster Registry files (OCR) contain cluster and database configuration information for Oracle Clusterware.
Select Grid Infrastructure Management Repository (GIMR) Storage Option: Depending on the type of cluster you are installing, you can choose to either host the Grid Infrastructure Management Repository (GIMR) for a cluster on the same cluster or on a remote cluster.
Note: Starting with Oracle Grid Infrastructure 19c, configuring GIMR is optional for Oracle Standalone Cluster deployments.
For Oracle Standalone Cluster deployment, you can specify the same or separate Oracle ASM disk group for the GIMR.
For Oracle Domain Services Cluster deployment, the GIMR must be configured on a separate Oracle ASM disk group.
Oracle Member Clusters use the remote GIMR of the Oracle Domain Services Cluster. You must specify the GIMR details when you create the Member Cluster Manifest file before installation.
Related Topics
• Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum
disk space requirements based on the redundancy type, for installing Oracle
Clusterware files for various Oracle Cluster deployments.
Related Topics
• About the Grid Infrastructure Management Repository
Every Oracle Domain Services Cluster contains a Grid Infrastructure Management
Repository (GIMR), but GIMR configuration is optional for Oracle Standalone
Cluster.
• Creating Member Cluster Manifest File for Oracle Member Clusters
Create a Member Cluster Manifest file to specify the Oracle Member Cluster
configuration for the Grid Infrastructure Management Repository (GIMR), Grid
Naming Service, Oracle ASM storage server, and Oracle Fleet Patching and
Provisioning configuration.
• About Oracle Standalone Clusters
An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure services and
Oracle ASM locally and requires direct access to shared storage.
• About Oracle Cluster Domain and Oracle Domain Services Cluster
An Oracle Cluster Domain is a choice of deployment architecture for new clusters,
introduced in Oracle Clusterware 12c Release 2.
• About Oracle Member Clusters
Oracle Member Clusters use centralized services from the Oracle Domain
Services Cluster and can host databases or applications. Oracle Member Clusters
can be of two types - Oracle Member Clusters for Oracle Databases or Oracle
Member Clusters for applications.
• Understanding Cluster Configuration Options
Review these topics to understand the cluster configuration options available in
Oracle Grid Infrastructure 19c.
Table 1-7 Oracle Grid Infrastructure Cluster Deployment Checklist
Check Task
Configure an Oracle Cluster that hosts all Oracle Grid Infrastructure services and Oracle ASM locally and accesses storage directly: Deploy an Oracle Standalone Cluster.
Use the Oracle Extended Cluster option to extend an Oracle RAC cluster across two or more separate sites, each equipped with its own storage.
Configure an Oracle Cluster Domain to standardize, centralize, and optimize your Oracle Real Application Clusters (Oracle RAC) deployment: Deploy an Oracle Domain Services Cluster.
To run Oracle Real Application Clusters (Oracle RAC) or Oracle RAC One Node database instances, deploy Oracle Member Cluster for Oracle Databases.
To run highly-available software applications, deploy Oracle Member Cluster for Applications.
Related Topics
• Understanding Cluster Configuration Options
Review these topics to understand the cluster configuration options available in Oracle
Grid Infrastructure 19c.
Table 1-8 Oracle Universal Installer Checklist for Oracle Grid Infrastructure
Installation
Check Task
Read the Release Notes: Review release notes for your platform, which are available for your release at the following URL:
http://www.oracle.com/technetwork/indexes/documentation/index.html
Review the Licensing Information: You are permitted to use only those components in the Oracle Database media pack for which you have purchased licenses. For more information, see:
Oracle Database Licensing Information User Manual
Run OUI with CVU and use fixup scripts: Oracle Universal Installer is fully integrated with Cluster Verification Utility (CVU), automating many CVU prerequisite checks. Oracle Universal Installer runs all prerequisite checks and creates fixup scripts when you run the installer.
You can also run CVU commands manually to check system readiness. For more information, see:
Oracle Clusterware Administration and Deployment Guide
Download and run Oracle ORAchk for runtime and upgrade checks, or runtime health checks: The Oracle ORAchk utility provides system checks that can help to prevent issues after installation. These checks include kernel requirements, operating system resource allocations, and other system requirements.
Use the Oracle ORAchk Upgrade Readiness Assessment to obtain an automated upgrade-specific system health check for upgrades. For example:
./orachk -u -o pre
The Oracle ORAchk Upgrade Readiness Assessment automates many of the manual pre- and post-upgrade checks described in Oracle upgrade documentation.
Oracle ORAchk is supported on Windows platforms in a Cygwin environment only. For more information, see:
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2550798.1&parent=DOCUMENTATION&sourceId=USERGUIDE
Ensure cron jobs do not run during installation: If the installer is running when daily cron jobs start, then you may encounter unexplained installation problems if your cron job is performing cleanup, and temporary files are deleted before the installation is finished. Oracle recommends that you complete installation before daily cron jobs are run, or disable daily cron jobs that perform cleanup until after the installation is completed.
Obtain Your My Oracle Support account information: During installation, you require a My Oracle Support user name and password to configure security updates, download software updates, and other installation tasks. You can register for My Oracle Support at the following URL:
https://support.oracle.com/
Check running Oracle processes, and shut down processes if necessary:
• On a node with a standalone database not using Oracle ASM: You do not need to shut down the database while you install Oracle Grid Infrastructure.
• On a node with a standalone Oracle Database using Oracle ASM: Stop the existing Oracle ASM instances. The Oracle ASM instances are restarted during installation.
• On an Oracle RAC Database node: This installation requires an upgrade of Oracle Clusterware, as Oracle Clusterware is required to run Oracle RAC. As part of the upgrade, you must shut down the database one node at a time as the rolling upgrade proceeds from node to node.
2
Checking and Configuring Server Hardware
for Oracle Grid Infrastructure
Verify that servers where you install Oracle Grid Infrastructure meet the minimum
requirements for installation.
This section provides minimum server requirements to complete installation of Oracle Grid
Infrastructure. It does not provide system resource guidelines, or other tuning guidelines for
particular workloads.
• Logging In to a Remote System Using X Window System
Use this procedure to run Oracle Universal Installer (OUI) by logging on to a remote
system where the runtime setting prohibits logging in directly to a graphical user interface
(GUI).
• Checking Server Hardware and Memory Configuration
Use this procedure to gather information about your server configuration.
Logging In to a Remote System Using X Window System
Use this procedure to run Oracle Universal Installer (OUI) by logging on to a remote system where the runtime setting prohibits logging in directly to a graphical user interface (GUI).
Note:
If you log in as another user (for example, oracle or grid), then repeat this procedure for that user as well.
1. Start an X Window System session. If you are using an X Window System terminal
emulator from a PC or similar system, then you may need to configure security settings to
permit remote hosts to display X applications on your local system.
2. Enter a command using the following syntax to enable remote hosts to display X
applications on the local X server:
# xhost + RemoteHost
where RemoteHost is the fully qualified remote host name. For example:
# xhost + somehost.example.com
somehost.example.com being added to the access control list
3. If you are not installing the software on the local system, then use the ssh
command to connect to the system where you want to install the software:
# ssh -Y RemoteHost
RemoteHost is the fully qualified remote host name. The -Y flag ("yes") enables
remote X11 clients to have full access to the original X11 display. For example:
# ssh -Y somehost.example.com
4. If you are not logged in as the root user, and you are performing configuration
steps that require root user privileges, then switch the user to root.
Note:
For more information about remote login using X Window System, refer to
your X server documentation, or contact your X server vendor or system
administrator. Depending on the X server software that you are using, you
may have to complete the tasks in a different order.
Checking Server Hardware and Memory Configuration
Use this procedure to gather information about your server configuration.
1. Determine the size of the physical RAM installed on the system.
If the size of the physical RAM installed in the system is less than the required size, then you must install more memory before continuing.
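For example, you can display the total physical memory on Linux with:
# grep MemTotal /proc/meminfo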
2. Determine the size of the configured swap space:
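For example, you can display the configured swap space with:
# grep SwapTotal /proc/meminfo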
If necessary, see your operating system documentation for information about how
to configure additional swap space.
3. Determine the amount of space available in the /tmp directory:
# df -h /tmp
If the free space available in the /tmp directory is less than what is required, then
complete one of the following steps:
• Delete unused files from the /tmp directory to meet the disk space
requirement.
Note:
If you perform this step after installing Oracle software, then do not
remove /tmp/.oracle or /var/tmp/.oracle directories or their files.
• When you set the Oracle user's environment, also set the TMP and TMPDIR
environment variables to the directory you want to use instead of /tmp.
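For example (the directory shown is only a placeholder; use any writable location with sufficient space), you might add lines such as the following to the Oracle user's shell startup file:
TMP=/mount_point/tmp
TMPDIR=$TMP
export TMP TMPDIR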
4. Determine the amount of free RAM and disk swap space on the system:
# free
5. Determine whether the system architecture can run the software:
# uname -m
Verify that the processor architecture matches the Oracle software release that you want to install. For example, you should see the following for an x86-64 system:
x86_64
If you do not see the expected output, then you cannot install the software on this
system.
6. Verify that shared memory (/dev/shm) is mounted properly with sufficient size:
# df -h /dev/shm
The df -h command displays the file system on which /dev/shm is mounted, and also displays in GB the total size and free size of shared memory.
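To also confirm the mount options required by the server configuration checklist (type tmpfs, without noexec or nosuid), you can run, for example:
# mount | grep /dev/shm
and check that the options shown in parentheses do not include noexec or nosuid.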
3
Automatically Configuring Oracle Linux with
Oracle Database Preinstallation RPM
Use Oracle Database Preinstallation RPM to simplify operating system configuration in
preparation for Oracle software installations.
Oracle recommends that you install Oracle Linux and use Oracle Database Preinstallation
RPM to configure your operating systems for Oracle Database and Oracle Grid Infrastructure
installations.
• About the Oracle Database Preinstallation RPM
If your Linux distribution is Oracle Linux, or Red Hat Enterprise Linux, and you are an
Oracle Linux support customer, then you can complete most preinstallation configuration
tasks by using the Oracle Database Preinstallation RPM for your release.
• Overview of Oracle Linux Configuration with Oracle Database Preinstallation RPM
Use Oracle Database Preinstallation RPM to simplify operating system configuration, and
to ensure that you have required kernel packages.
• Installing the Oracle Database Preinstallation RPM Using ULN
Use this procedure to subscribe to Unbreakable Linux Network (ULN) Oracle Linux
channels for your Oracle software.
• Installing Oracle Database Preinstallation RPM During an Oracle Linux Installation
Use this procedure to install a new Oracle Linux installation and to perform system
configuration with the Oracle Database Preinstallation RPM:
• Installing Oracle Database Preinstallation RPM Using the Oracle Linux yum Server
Install Oracle Linux and configure your Oracle Linux installation for security errata or bug
fix updates using the Oracle Linux yum server.
• Configure Additional Operating System Features
Oracle recommends that you configure your operating system before starting installation
with additional features, such as IPMI or additional programming environments.
About the Oracle Database Preinstallation RPM
If your Linux distribution is Oracle Linux, or Red Hat Enterprise Linux, and you are an Oracle Linux support customer, then you can complete most preinstallation configuration tasks by using the Oracle Database Preinstallation RPM for your release. When installed, the Oracle Database Preinstallation RPM does the following:
• Creates an oracle user, and creates the oraInventory (oinstall) and OSDBA
(dba) groups for that user
• As needed, sets sysctl.conf settings, system startup parameters, and driver
parameters to values based on recommendations from the Oracle Database
Preinstallation RPM program
• Sets hard and soft resource limits
• Sets other recommended parameters, depending on your kernel version
• Sets numa=off in the kernel for Linux x86_64 machines.
Configure Oracle Database Preinstallation RPM only once on your operating system
when you install Oracle Database or Oracle Grid Infrastructure for the first time on
your system. For subsequent installations on the same system, do not install Oracle
Database Preinstallation RPM again.
Do not install Oracle Database Preinstallation RPM on Oracle Engineered Systems,
such as Oracle Exadata Database Machine. Oracle Engineered Systems include
integrated system software that contain the required version of the operating system
kernel and all software packages.
The Oracle Database Preinstallation RPM for your Oracle Linux distributions and database
release automatically installs any additional packages needed for installing Oracle Grid
Infrastructure and Oracle Database, and configures your server operating system
automatically, including setting kernel parameters and other basic operating system
requirements for installation. For more information about Oracle Linux and Oracle Database
Preinstallation RPM, refer to:
http://docs.oracle.com/en/operating-systems/linux.html
Configuring a server using Oracle Linux and the Oracle Database Preinstallation RPM
consists of the following steps:
1. Install Oracle Linux.
2. Register your Linux distribution with Oracle Unbreakable Linux Network (ULN) or
download and configure the yum repository for your system using the Oracle Linux yum
server for your Oracle Linux release.
3. Install the Oracle Database Preinstallation RPM with the RPM for your Oracle Grid
Infrastructure and Oracle Database releases, and update your Linux release.
4. Create role-allocated groups and users with identical names and ID numbers.
5. Complete network interface configuration for each cluster node candidate.
6. Complete system configuration for shared storage access as required for each standard
or core node cluster candidate.
After these steps are complete, you can proceed to install Oracle Grid Infrastructure and
Oracle Database.
https://yum.oracle.com/oracle-linux-isos.html
https://edelivery.oracle.com/linux
Note:
Ensure that you use the latest available update release for Oracle Linux.
2. Register your server with Unbreakable Linux Network (ULN). By default, you are
registered for the Oracle Linux Latest channel for your operating system and
hardware.
• Oracle Linux 7
https://docs.oracle.com/en/operating-systems/oracle-linux/uln-user/
• Oracle Linux 8
https://docs.oracle.com/en/operating-systems/oracle-linux/8/software-
management/
3. Log in to Unbreakable Linux Network:
https://linux.oracle.com
4. Start a terminal session and, as root, enter the command that installs the Oracle
Database Preinstallation RPM for your platform (typical commands are shown after this
list). For example:
• Oracle Linux 7
Note:
Use the -y option if you want yum to skip the package confirmation
prompt.
• Oracle Linux 8
You should see output indicating that you have subscribed to the Oracle Linux
channel, and that packages are being installed.
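For reference, typical installation commands, assuming the oracle-database-preinstall-19c package name used elsewhere in this chapter, are similar to the following:
Oracle Linux 7:
# yum install oracle-database-preinstall-19c
Oracle Linux 8:
# dnf install oracle-database-preinstall-19c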
The Oracle Database Preinstallation RPM automatically creates a standard (not
role-allocated) Oracle installation owner and groups, and sets up other kernel
configuration settings as required for Oracle installations.
5. Check the RPM log file to review the system configuration changes. For example:
/var/log/oracle-database-preinstall-19c/backup/timestamp/
orakernel.log
https://yum.oracle.com/oracle-linux-isos.html
https://edelivery.oracle.com/linux
Note:
Ensure that you use the latest available update release for Oracle Linux.
2. Start the Oracle Linux installation and respond to installation screens with values
appropriate for your environment.
3. Review the first software selection screen, which lists task-specific software options. At
the bottom of the screen, there is an option to customize now or customize later. Select
Customize now, and click Next.
4. On Oracle Linux, select Servers on the left of the screen and System administration
tools on the right of the screen (options may vary between releases).
The Packages in System Tools window opens.
5. Select the Oracle Database Preinstallation RPM package box from the package list. For
example, select a package similar to the following:
For Oracle Linux 7:
oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
For Oracle Linux 8:
oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm
If you do not have an Oracle Database Preinstallation RPM package option that is current
for your Oracle Database release, because you are using an Oracle Linux installation that
is previous to your Oracle Database and Oracle Grid Infrastructure release, then install
the RPM for your release manually after completing the operating system installation.
6. Close the optional package window and click Next.
7. Complete the other screens to finish the Oracle Linux installation.
Note:
Use the -y option if you want yum to skip the package confirmation
prompt.
• Oracle Linux 8
Note:
Use the -y option if you want yum to skip the package confirmation
prompt.
You should see output indicating that you have subscribed to the Oracle Linux
channel, and that packages are being installed.
The Oracle Database Preinstallation RPM automatically creates a standard (not role-
allocated) Oracle installation owner and groups and sets up other kernel
configuration settings as required for Oracle installations. If you plan to use job-role
separation, then create the extended set of database users and groups depending on
your requirements.
4
Configuring Operating Systems for Oracle
Grid Infrastructure on Linux
Complete operating system configuration requirements and checks for Linux operating
systems before you start installation.
• Guidelines for Linux Operating System Installation
Operating system guidelines to be aware of before proceeding with an Oracle installation.
• Reviewing Operating System and Software Upgrade Best Practices
These topics provide general planning guidelines and platform-specific information about
upgrades and migration.
• Reviewing Operating System Security Common Practices
Secure operating systems are an important basis for general system security.
• About Installation Fixup Scripts
Oracle Universal Installer detects when the minimum requirements for an installation are
not met, and creates shell scripts, called fixup scripts, to finish incomplete system
configuration steps.
• About Operating System Requirements
Depending on the products that you intend to install, verify that you have the required
operating system kernel and packages installed.
• Using Oracle RPM Checker on IBM: Linux on System z
Use the Oracle RPM Checker utility to verify that you have the required Red Hat
Enterprise Linux or SUSE packages installed on the operating system before you start
the Oracle Database or Oracle Grid Infrastructure installation.
• Operating System Requirements for x86-64 Linux Platforms
The Linux distributions and packages listed in this section are supported for this release
on x86-64.
• Operating System Requirements for IBM: Linux on System z
The Linux distributions and packages listed in this section are supported for this release
on IBM: Linux on System z.
• Additional Drivers and Software Packages for Linux
Information about optional drivers and software packages.
• Checking Kernel and Package Requirements for Linux
Verify your kernel and packages to see if they meet minimum requirements for
installation.
• Setting Clock Source for VMs on Linux x86-64
Oracle recommends that you set the clock source to tsc for better performance in virtual
environments (VM) on Linux x86-64.
• Installing the cvuqdisk RPM for Linux
If you do not use an Oracle Database Preinstallation RPM, and you want to use the
Cluster Verification Utility, then you must install the cvuqdisk RPM.
• Reviewing HugePages Memory Allocation
Review this information if your operating system has HugePages enabled.
If you do not install the Oracle Database Preinstallation RPM, then Oracle recommends that
you install your Linux operating system with the default software packages (RPMs).
A default Linux installation includes most of the required packages and helps you limit
manual verification of package dependencies. Oracle recommends that you do not customize
the RPMs during installation.
Refer to the official Oracle Linux documentation for more information about installing Oracle
Linux:
• Oracle Linux 7 Installation Guide
• Oracle Linux 7: Managing Software
• Oracle Linux 8 Installation Guide
• Oracle Linux 8: Managing Software
Oracle Database Preinstallation RPMs are available from the Oracle Linux Network or on
the Oracle Linux DVDs. Using the Oracle Database Preinstallation RPM is not required, but
Oracle recommends that you use it to save time when setting up your cluster servers.
When installed, the Oracle Database Preinstallation RPM does the following:
• Automatically downloads and installs any additional RPM packages needed for
installing Oracle Grid Infrastructure and Oracle Database, and resolves any
dependencies
• Creates an oracle user, and creates the oraInventory (oinstall) and OSDBA
(dba) groups for that user
• As needed, sets sysctl.conf settings, system startup parameters, and driver
parameters to values based on recommendations from the Oracle Database
Preinstallation RPM program
• Sets hard and soft resource limits
• Sets other recommended parameters, depending on your kernel version
• Sets numa=off in the kernel for Linux x86_64 machines.
Note:
Configure Oracle Database Preinstallation RPM only once on your operating system
when you install Oracle Database or Oracle Grid Infrastructure for the first time on
your system. For subsequent installations on the same system, do not install Oracle
Database Preinstallation RPM again.
Do not install Oracle Database Preinstallation RPM on Oracle Engineered Systems,
such as Oracle Exadata Database Machine. Oracle Engineered Systems include
integrated system software that contains the required version of the operating system
kernel and all software packages.
Transparent HugePages can cause memory allocation delays during runtime, which can
result in node restarts in Oracle RAC environments, or performance issues or delays for
Oracle Database single instances. Oracle continues to recommend using standard
HugePages for Linux.
Transparent HugePages memory is enabled by default with Oracle Linux 6 and later, Red Hat
Enterprise Linux 6 and later, and SUSE 11 and later kernels.
If you install Oracle Database Preinstallation RPM, then it disables Transparent HugePages.
Transparent Hugepages are similar to standard HugePages. However, while standard
HugePages allocate memory at startup, Transparent Hugepages memory uses the
khugepaged thread in the kernel to allocate memory dynamically during runtime, using
swappable HugePages.
HugePages allocates non-swappable memory for large page tables using memory-mapped
files. HugePages are not enabled by default. If you enable HugePages, then you should
deduct the memory allocated to HugePages from the available RAM before calculating swap
space. Refer to your distribution documentation and to Oracle Technology Network and My
Oracle Support for more information.
During Oracle Grid Infrastructure installation, the Grid Infrastructure Management Repository
(GIMR) is configured to use HugePages. Because the Grid Infrastructure Management
Repository database starts before all other databases installed on the cluster, if the space
allocated to HugePages is insufficient, then the System Global Area (SGA) of one or more
databases may be mapped to regular pages, instead of Hugepages, which can adversely
affect performance. Configure the HugePages memory allocation to a size large enough to
accommodate the sum of the SGA sizes of all the databases you intend to install on the
cluster, as well as the Grid Infrastructure Management Repository.
Caution:
Always create a backup of existing databases before starting any
configuration change.
Refer to Oracle Database Upgrade Guide for more information about required
software updates, pre-upgrade tasks, post-upgrade tasks, compatibility, and
interoperability between different releases.
Related Topics
• Oracle Database Upgrade Guide
Note:
Confirm that the server operating system is supported, and that kernel and
package requirements for the operating system meet or exceed the minimum
requirements for the Oracle Database release to which you want to migrate.
Manual, Command-Line Copy for Migrating Data and Upgrading Oracle Database
You can copy files to the new server and upgrade it manually. If you use this
procedure, then you cannot use Oracle Database Upgrade Assistant. However, you
can revert to your existing database if you encounter upgrade issues.
1. Copy the database files from the computer running the previous operating system
to the one running the new operating system.
2. Re-create the control files on the computer running the new operating system.
3. Manually upgrade the database using command-line scripts and utilities.
See Also:
Oracle Database Upgrade Guide to review the procedure for upgrading the
database manually, and to evaluate the risks and benefits of this option
See Also:
Oracle Database Upgrade Guide to review the Export/Import method for migrating
data and upgrading Oracle Database
# cd Grid_home/bin
# srvctl relocate service -drain_timeout
2. Disable the automatic startup of Oracle High Availability Services, when the server
reboots, on the first node.
# cd Grid_home/bin
# ./crsctl disable crs
4. Update the operating system to a version that is supported for your Oracle Grid
Infrastructure release.
Refer to your operating system documentation for more information about upgrading the
operating system.
5. Reboot your Oracle Grid Infrastructure server after the operating system upgrade is
complete.
6. Relink the Oracle Grid Infrastructure binaries:
a. As the root user, unlock the Oracle Grid Infrastructure installation.
# cd Grid_home/crs/install
# ./rootcrs.sh -unlock
b. As the Oracle Grid Infrastructure installation owner (grid user), relink the binaries:
$ cd Grid_home
$ ./relink
c. As the root user, add the Oracle Database libraries and lock the Oracle Grid
Infrastructure installation.
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# ./rootcrs.sh -lock
# ./rootcrs.sh -updateosfiles
8. As the root user, enable the automatic startup of Oracle High Availability Services,
when the server reboots, on the first node.
# cd Grid_home/bin
# ./crsctl enable crs
Ensure that your operating system deployment is in compliance with common security
practices as described in your operating system vendor security guide.
Note:
Using fixup scripts does not ensure that all the prerequisites for installing Oracle
Database are met. You must still verify that all the preinstallation requirements are
met to ensure a successful installation.
Oracle Universal Installer is fully integrated with Cluster Verification Utility (CVU), automating
many prerequisite checks for your Oracle Grid Infrastructure or Oracle Real Application
Clusters (Oracle RAC) installation. You can also manually perform various CVU verifications
by running the cluvfy command.
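For example, a manual preinstallation check for Oracle Grid Infrastructure can be run with a command similar to the following, where the node names are placeholders:
$ cluvfy stage -pre crsinst -n node1,node2 -verbose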
Related Topics
• Oracle Clusterware Administration and Deployment Guide
Note:
Oracle does not support running different operating system versions on
cluster members, unless an operating system is being upgraded. You cannot
run different operating system version binaries on members of the same
cluster, even if each operating system is supported.
On Red Hat Enterprise Linux, the utility checks and also installs all required RPMs. After the
verification is complete, you can remove the RPM Checker utility. For example:
On Red Hat Enterprise Linux 7:
# rpm -e ora-val-rpm-RH7-DB-19c.s390x
On SUSE Linux Enterprise Server 12:
# rpm -e ora-val-rpm-S12-DB-19c.s390x
• The Unbreakable Enterprise Kernel for Oracle Linux can be installed on x86-64
servers running either Oracle Linux or Red Hat Enterprise Linux. As of Oracle
Linux 5 Update 6, the Unbreakable Enterprise Kernel is the default system kernel.
An x86 (32-bit) release of Oracle Linux including the Unbreakable Enterprise
Kernel is available with Oracle Linux 5 update 7 and later.
• 32-bit packages in these requirements lists are needed only if you intend to use
32-bit client applications to access 64-bit servers that have the same operating
system.
• Oracle Database 12c Release 2 (12.2) and later does not require the compiler
packages gcc and gcc-c++ for Oracle Database or Oracle Grid Infrastructure
installations.
• These operating system requirements do not apply to Oracle Engineered
Systems, such as Oracle Exadata Database Machine. Oracle Engineered
Systems include integrated system software that contains the required version of
the operating system kernel and all software packages. Verify that you have the
minimum required Exadata image. Refer to My Oracle Support Note 888828.1 for
more information.
Related Topics
• My Oracle Support Note 888828.1
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the
required SSH software.
Oracle Linux 8 Minimum supported versions:
• Oracle Linux 8.1 with the Unbreakable Enterprise Kernel 6:
5.4.17-2011.0.7.el8uek.x86_64 or later
• Oracle Linux 8 with the Red Hat Compatible Kernel:
4.18.0-80.el8.x86_64 or later
Note:
Oracle recommends that you update
Oracle Linux to the latest available
version and release level.
Packages for Oracle Linux 8: Subscribe to the Oracle Linux 8 channel on the Unbreakable
Linux Network, or configure a yum repository from the Oracle Linux yum server website, and
then install the Oracle Database Preinstallation RPM, oracle-database-preinstall-19c. The
Oracle Database Preinstallation RPM, oracle-database-preinstall-19c, automatically installs
all required packages listed in the table below, their dependencies for Oracle Grid
Infrastructure and Oracle Database installations, and also performs other system
configuration. If you install the Oracle Database Preinstallation RPM,
oracle-database-preinstall-19c, then you do not have to install these packages, as the
Oracle Database Preinstallation RPM automatically installs them.
bc
binutils
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libXrender
libX11
libXau
libXi
libXtst
libgcc
libnsl
librdmacm
libstdc++
libstdc++-devel
libxcb
libibverbs
make
policycoreutils
policycoreutils-python-utils
smartmontools
sysstat
Note:
If you intend to use 32-bit client applications to access 64-bit servers, then you must
also install the latest 32-bit versions of the packages listed in this table.
Optional Packages for Oracle Linux 8: Based on your requirement, install the latest
released versions of the following packages:
Patches and Known Issues:
• For a list of the latest Oracle Database Release Updates (RU) and Release Update
Revisions (RUR) patches for Oracle Linux 8 and Red Hat Enterprise Linux 8, visit My
Oracle Support
• For a list of known issues and open bugs for Oracle Linux 8 and Red Hat Enterprise
Linux 8, read the Oracle Database Release Notes
KVM virtualization: Kernel-based virtual machine (KVM), also known as KVM virtualization,
is certified on Oracle Database 19c for all supported Oracle Linux 8 distributions. For more
information on supported virtualization technologies for Oracle Database, refer to the
virtualization matrix:
https://www.oracle.com/database/technologies/virtualization-matrix.html
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the
required SSH software.
Oracle Linux 7 Minimum supported versions:
• Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4:
4.1.12-124.19.2.el7uek.x86_64 or later
• Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5:
4.14.35-1818.1.6.el7uek.x86_64 or later
• Oracle Linux 7.7 with the Unbreakable Enterprise Kernel 6:
5.4.17-2011.4.4.el7uek.x86_64 or later
• Oracle Linux 7.5 with the Red Hat Compatible Kernel:
3.10.0-862.11.6.el7.x86_64 or later
Note:
Oracle recommends that you update
Oracle Linux to the latest available
version and release level.
Packages for Oracle Linux 7: Subscribe to the Oracle Linux 7 channel on the Unbreakable
Linux Network, or configure a yum repository from the Oracle Linux yum server website, and
then install the Oracle Database Preinstallation RPM, oracle-database-preinstall-19c. The
Oracle Database Preinstallation RPM, oracle-database-preinstall-19c, automatically installs
all required packages listed in the table below, their dependencies for Oracle Grid
Infrastructure and Oracle Database installations, and also performs other system
configuration. If you install the Oracle Database Preinstallation RPM,
oracle-database-preinstall-19c, then you do not have to install these packages, as the
Oracle Database Preinstallation RPM automatically installs them.
bc
binutils
compat-libcap1
compat-libstdc++-33
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libXrender
libXrender-devel
libX11
libXau
libXi
libXtst
libgcc
libstdc++
libstdc++-devel
libxcb
make
policycoreutils
policycoreutils-python
smartmontools
sysstat
Note:
If you intend to use 32-bit client applications to access 64-bit servers, then you must
also install (where available) the latest 32-bit versions of the packages listed in this table.
Optional Packages for Oracle Linux 7: Based on your requirement, install the latest
released versions of the following packages:
Table 4-3 x86-64 Red Hat Enterprise Linux 8 Minimum Operating System
Requirements
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the required
SSH software.
Red Hat Enterprise Linux 8: Minimum supported versions:
• Red Hat Enterprise Linux 8: 4.18.0-80.el8.x86_64 or later
Packages for Red Hat Enterprise Linux 8: Install the latest released versions of the following
packages:
bc
binutils
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libXrender
libX11
libXau
libXi
libXtst
libgcc
libnsl
librdmacm
libstdc++
libstdc++-devel
libxcb
libibverbs
make
smartmontools
sysstat
Note:
If you intend to use 32-bit client applications to
access 64-bit servers, then you must also
install the latest 32-bit versions of the
packages listed in this table.
Optional Packages for Red Hat Enterprise Linux 8: Based on your requirement, install the
latest released versions of the following packages:
Patches and Known Issues:
• For a list of the latest Oracle Database Release Updates (RU) and Release Update
Revisions (RUR) patches for Oracle Linux 8 and Red Hat Enterprise Linux 8, visit My
Oracle Support
• For a list of known issues and open bugs for Oracle Linux 8 and Red Hat Enterprise
Linux 8, read the Oracle Database Release Notes
Table 4-4 x86-64 Red Hat Enterprise Linux 7 Minimum Operating System
Requirements
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the required
SSH software.
Red Hat Enterprise Linux 7: Minimum supported versions:
• Red Hat Enterprise Linux 7.5: 3.10.0-862.11.6.el7.x86_64 or later
Packages for Red Hat Enterprise Linux 7: Install the latest released versions of the following
packages:
bc
binutils
compat-libcap1
compat-libstdc++-33
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libX11
libXau
libXi
libXtst
libXrender
libXrender-devel
libgcc
libstdc++
libstdc++-devel
libxcb
make
smartmontools
sysstat
Note:
If you intend to use 32-bit client applications to
access 64-bit servers, then you must also
install (where available) the latest 32-bit
versions of the packages listed in this table.
Optional Packages for Red Hat Enterprise Linux 7: Based on your requirement, install the
latest released versions of the following packages:
Table 4-5 x86-64 SUSE Linux Enterprise Server 12 Minimum Operating System
Requirements
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the required
SSH software.
SUSE Linux Enterprise Server: Minimum supported version:
SUSE Linux Enterprise Server 12 SP3: 4.4.162-94.72-default or later
Packages for SUSE Linux Enterprise Server 12: Install the latest released versions of the
following packages:
bc
binutils
glibc
glibc-devel
libX11
libXau6
libXtst6
libcap-ng-utils
libcap-ng0
libcap-progs
libcap1
libcap2
libelf-devel
libgcc_s1
libjpeg-turbo
libjpeg62
libjpeg62-turbo
libpcap1
libpcre1
libpcre16-0
libpng16-16
libstdc++6
libtiff5
libaio-devel
libaio1
libXrender1
make
mksh
pixz
rdma-core
rdma-core-devel
smartmontools
sysstat
xorg-x11-libs
xz
Note:
If you intend to use 32-bit client applications to
access 64-bit servers, then you must also
install (where available) the latest 32-bit
versions of the packages listed in this table.
Optional Packages for SUSE Linux Enterprise Server 12: Based on your requirement, install
the latest released versions of the following packages:
Table 4-6 x86-64 SUSE Linux Enterprise Server 15 Minimum Operating System
Requirements
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the required
SSH software.
SUSE Linux Enterprise Server: Minimum supported version:
SUSE Linux Enterprise Server 15: 4.12.14-23-default or later
Packages for SUSE Linux Enterprise Server 15: Install the latest released versions of the
following packages:
bc
binutils
glibc
glibc-devel
insserv-compat
libaio-devel
libaio1
libX11-6
libXau6
libXext-devel
libXext6
libXi-devel
libXi6
libXrender-devel
libXrender1
libXtst6
libcap-ng-utils
libcap-ng0
libcap-progs
libcap1
libcap2
libelf1
libgcc_s1
libjpeg8
libpcap1
libpcre1
libpcre16-0
libpng16-16
libstdc++6
libtiff5
libgfortran4
mksh
make
pixz
rdma-core
rdma-core-devel
smartmontools
sysstat
xorg-x11-libs
xz
Note:
If you intend to use 32-bit client applications to
access 64-bit servers, then you must also
install (where available) the latest 32-bit
versions of the packages listed in this table.
Optional Packages for SUSE Linux Enterprise Server 15: Based on your requirement, install
the latest released versions of the following packages:
Patches and Known Issues:
• For a list of the latest Release Updates (RU) and Release Update Revisions (RUR)
patches for SUSE Linux Enterprise Server 15, visit My Oracle Support
• For a list of known issues and open bugs for SUSE Linux Enterprise Server 15, read the
Oracle Database Release Notes
For example, to install the latest bc package using yum on Oracle Linux or Red Hat
Enterprise Linux, run the following command:
$ yum install bc
On SUSE Linux Enterprise Server, to install the latest bc package using YaST, run the
following command:
$ yast --install bc
Note:
32-bit packages in these requirements lists are needed only if you intend to
use 32-bit client applications to access 64-bit servers.
The platform-specific hardware and software requirements included in this guide were
current when this guide was published. However, because new platforms and
operating system software versions may be certified after this guide is published,
review the certification matrix on the My Oracle Support website for the most up-to-
date list of certified hardware platforms and operating system versions:
https://support.oracle.com/
• Supported Red Hat Enterprise Linux 8 Distributions for IBM: Linux on System z
Use the following information to check supported Red Hat Enterprise Linux 8
distributions:
• Supported Red Hat Enterprise Linux 7 Distributions for IBM: Linux on System z
Use the following information to check supported Red Hat Enterprise Linux 7
distributions:
• Supported SUSE Linux Enterprise Server 12 Distributions for IBM: Linux on
System z
Use the following information to check supported SUSE Linux Enterprise Server
12 distributions:
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the
required SSH software.
Red Hat Enterprise Linux 8: Red Hat Enterprise Linux 8.3: 4.18.0-240.el8.s390x or later
Packages for Red Hat Enterprise Linux 8: The following packages (or later versions) must be
installed:
bc-1.07.1-5.el8 (s390x)
binutils-2.30-79.el8 (s390x)
elfutils-libelf-0.180-1.el8 (s390x)
elfutils-libelf-devel-0.180-1.el8 (s390x)
fontconfig-2.13.1-3.el8 (s390x)
gcc-8.3.1-5.1.el8 (s390x)
gcc-c++-8.3.1-5.1.el8 (s390x)
glibc-2.28-127.el8_3.2 (s390x)
glibc-devel-2.28-127.el8_3.2 (s390x)
ksh-20120801-254.el8 (s390x)
libnsl2-1.2.0-2.20180605git4a062cf.el8 (s390x)
libX11-1.6.8-3.el8 (s390x)
libXau-1.0.9-3.el8 (s390x)
libXaw-1.0.13-10.el8 (s390x)
libXi-1.7.10-1.el8 (s390x)
libXrender-0.9.10-7.el8 (s390x)
libXtst-1.2.3-7.el8 (s390x)
libaio-0.3.112-1.el8 (s390x)
libaio-devel-0.3.112-1.el8 (s390x)
libattr-devel-2.4.48-3.el8 (s390x)
libgcc-8.3.1-5.1.el8 (s390x)
libgfortran-8.3.1-5.1.el8 (s390x)
libibverbs-29.0-3.el8 (s390x)
libnsl-2.28-127.el8_3.2 (s390x)
librdmacm-29.0-3.el8 (s390x)
libstdc++-8.3.1-5.1.el8 (s390x)
libstdc++-devel-8.3.1-5.1.el8 (s390x)
libxcb-1.13.1-1.el8 (s390x)
make-4.2.1-10.el8 (s390x)
net-tools-2.0-0.52.20160912git.el8 (s390x)
pam-1.3.1-11.el8 (s390x)
pam-devel-1.3.1-11.el8 (s390x)
policycoreutils-2.9-9.el8 (s390x)
policycoreutils-python-utils-2.9-9.el8.noarch
smartmontools-7.1-1.el8 (s390x)
sysstat-11.7.3-5.el8 (s390x)
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the
required SSH software.
Red Hat Enterprise Linux 7: Red Hat Enterprise Linux 7.4: 3.10.0-693.el7.s390x or later
Packages for Red Hat Enterprise Linux 7: The following packages (or later versions) must be
installed:
binutils-2.25.1-31.base.el7 (s390x)
compat-libcap1-1.10-7.el7 (s390x)
gcc-4.8.5-16.el7 (s390x)
gcc-c++-4.8.5-16.el7 (s390x)
glibc-2.17-196.el7 (s390)
glibc-2.17-196.el7 (s390x)
glibc-devel-2.17-196.el7 (s390)
glibc-devel-2.17-196.el7 (s390x)
ksh-20120801-34.el7 (s390x)
libX11-1.6.5-1.el7 (s390)
libX11-1.6.5-1.el7 (s390x)
libXaw-1.0.13-4.el7 (s390x)
libXft-2.3.2-2.el7 (s390)
libXi-1.7.9-1.el7 (s390)
libXi-1.7.9-1.el7 (s390x)
libXmu-1.1.2-2.el7 (s390)
libXp-1.0.2-2.1.el7 (s390)
libXtst-1.2.3-1.el7 (s390)
libXtst-1.2.3-1.el7 (s390x)
libaio-0.3.109-13.el7 (s390)
libaio-0.3.109-13.el7 (s390x)
libaio-devel-0.3.109-13.el7 (s390x)
libattr-devel-2.4.46-12.el7 (s390)
libattr-devel-2.4.46-12.el7 (s390x)
libgcc-4.8.5-16.el7 (s390)
libgcc-4.8.5-16.el7 (s390x)
libgfortran-4.8.5-16.el7 (s390x)
libstdc++-4.8.5-16.el7 (s390x)
libstdc++-devel-4.8.5-16.el7 (s390x)
make-3.82-23.el7 (s390x)
pam-1.1.8-18.el7 (s390x)
pam-devel-1.1.8-18.el7 (s390x)
sysstat-10.1.5-12.el7 (s390x)
Item Requirements
SSH Requirement Ensure that OpenSSH is installed on your servers. OpenSSH is the required
SSH software.
SUSE Linux Enterprise Server 12: 4.4.73-5-default s390x or later
Packages for SUSE Linux Enterprise Server 12: The following packages (or later versions)
must be installed:
binutils-2.26.1-9.12.1 (s390x)
gcc-4.8-6.189 (s390x)
gcc-c++-4.8-6.189 (s390x)
glibc-2.22-61.3 (s390x)
glibc-2.22-61.3 (s390)
glibc-devel-2.22-61.3 (s390x)
glibc-devel-2.22-61.3 (s390)
libaio-devel-0.3.109-17.15 (s390x)
libaio1-0.3.109-17.15 (s390x)
libaio1-0.3.109-17.15 (s390)
libX11-6-1.6.2-11.1 (s390x)
libXau6-1.0.8-4.58 (s390x)
libXaw7-1.0.12-4.1 (s390x)
libXext6-1.3.2-3.61 (s390x)
libXext6-1.3.2-3.61 (s390)
libXft2-2.3.1-9.32 (s390x)
libXft2-2.3.1-9.32 (s390)
libXi6-1.7.4-17.1 (s390x)
libXi6-1.7.4-17.1 (s390)
libXmu6-1.1.2-3.60 (s390x)
libXp6-1.0.2-3.58 (s390x)
libXp6-1.0.2-3.58 (s390)
libXtst6-1.2.2-7.1 (s390x)
libXtst6-1.2.2-7.1 (s390)
libcap2-2.22-13.1 (s390x)
libstdc++48-devel-4.8.5-30.1 (s390)
libstdc++48-devel-4.8.5-30.1 (s390x)
libstdc++6-6.2.1+r239768-2.4 (s390)
libstdc++6-6.2.1+r239768-2.4 (s390x)
libxcb1-1.10-3.1 (s390x)
libxcb1-1.10-3.1 (s390)
mksh-50-2.13 (s390x)
make-4.0-4.1 (s390x)
Installing OCFS2
An OCFS2 installation consists of two parts, the kernel module and the tools module.
The supported version of the OCFS2 kernel module depends on the version of Unbreakable
Enterprise Kernel available with Oracle Linux 7. Run the following command to install the
latest version of the OCFS2 kernel module:
# yum install kernel-uek ocfs2
OCFS2 Release 1.8.6-9 is the supported version of the OCFS2 tools module for this release.
After you install the OCFS2 kernel module, run the following command to install the OCFS2
tools module:
# yum install ocfs2-tools-1.8.6-9
Note:
Each cluster node should run the same version of OCFS2 modules and a
compatible version of Unbreakable Enterprise Kernel.
https://oss.oracle.com/projects/ocfs2/
http://www.unixodbc.org
Review the minimum supported ODBC driver releases, and install ODBC drivers of the
following or later releases for all Linux distributions:
unixODBC-2.3.4 or later
Note:
Oracle Messaging Gateway does not support the integration of Advanced
Queuing with TIBCO Rendezvous on IBM: Linux on System z.
Related Topics
• Oracle Database Advanced Queuing User's Guide
gcc
gcc-c++
gcc-info
gcc-locale
gcc48
gcc48-info
gcc48-locale
gcc48-c++
Note:
If you intend to use 32-bit client
applications to access 64-bit
servers, then you must also install
the latest 32-bit versions of the
packages listed in this table.
Oracle XML Developer's Kit (XDK): Oracle XML Developer's Kit is supported with the same
compilers as OCCI.
Pro*COBOL: Micro Focus Visual COBOL for Eclipse 2.3 - Update 2
1. To determine the distribution and version of Linux installed, enter one of the following
commands:
# cat /etc/oracle-release
# cat /etc/redhat-release
# cat /etc/os-release
# lsb_release -id
2. To determine if the required kernel errata is installed, enter the following command:
# uname -r
4.14.35-1902.0.18.el7uek.x86_64
Review the required errata level for your distribution. If the errata level is previous
to the required minimum errata update, then obtain and install the latest kernel
update from your Linux distributor.
3. To determine whether the required packages are installed, enter commands
similar to the following:
# rpm -q package_name
Alternatively, if you require specific system architecture information, then enter the
following command:
# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep package_name
You can also combine a query for multiple packages, and review the output for the
correct versions. For example:
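For example, a combined query using package names taken from the requirements tables in this chapter might look like:
# rpm -q binutils glibc glibc-devel ksh libaio libaio-devel make sysstat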
If a package is not installed, then install it from your Linux distribution media or download
the required package version from your Linux distributor's website.
1. As the root user, check whether the tsc clock source is available on your system:
# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
xen tsc acpi_pm
2. If the tsc clock source is available, then set tsc as the current clock source.
# echo "tsc">/sys/devices/system/clocksource/clocksource0/current_clocksource
3. Verify that the tsc clock source is now the current clock source:
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
4. Using any text editor, append the clocksource directive to the GRUB_CMDLINE_LINUX line
in the /etc/default/grub file to retain this clock source setting even after a reboot.
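For example, the resulting line in /etc/default/grub might look similar to the following (the other boot options shown are illustrative and depend on your existing configuration):
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet clocksource=tsc"
After editing the file, regenerate the GRUB configuration (for example, # grub2-mkconfig -o /boot/grub2/grub.cfg) so that the setting persists across reboots.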
Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive
the error message "Package cvuqdisk not installed" when you run Cluster Verification Utility.
Use the cvuqdisk RPM for your hardware (for example, x86_64).
1. Locate the cvuqdisk RPM package in the directory Grid_home/cv/rpm, where Grid_home
is the Oracle Grid Infrastructure home directory.
2. Copy the cvuqdisk package to each node on the cluster. You should ensure that
each node is running the same version of Linux.
3. Log in as root.
4. Use the following command to find if you have an existing version of the cvuqdisk
package:
If you have an existing version of cvuqdisk, then enter the following command to
deinstall the existing version:
# rpm -e cvuqdisk
5. Set the environment variable CVUQDISK_GRP to point to the group that owns
cvuqdisk, typically oinstall. For example:
6. In the directory where you have saved the cvuqdisk RPM, use the command rpm
-iv package to install the cvuqdisk package. For example:
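For reference, the commands for steps 4 through 6 would typically be similar to the following; the cvuqdisk file name shown is illustrative, so use the version present in your Grid_home/cv/rpm directory:
# rpm -qi cvuqdisk
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -iv cvuqdisk-19.0.0.0.0-1.rpm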
Note:
Although Transparent HugePages is disabled on UEK2 and later UEK kernels,
Transparent HugePages may be enabled by default on your Linux system.
Transparent HugePages memory is enabled by default with Oracle Linux 6 and later, Red Hat
Enterprise Linux 6 and later, and SUSE 11 and later kernels.
Transparent HugePages can cause memory allocation delays during runtime. To avoid
performance issues, Oracle recommends that you disable Transparent HugePages on all
Oracle Database servers. Oracle recommends that you instead use standard HugePages for
enhanced performance.
To check if Transparent HugePages is enabled, run one of the following commands as the
root user:
Red Hat Enterprise Linux kernels:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
Other kernels:
# cat /sys/kernel/mm/transparent_hugepage/enabled
The following is a sample output that shows Transparent HugePages is being used, because
the [always] flag is enabled.
[always] never
Note:
If Transparent HugePages is removed from the kernel, then neither the
/sys/kernel/mm/transparent_hugepage nor the /sys/kernel/mm/redhat_transparent_hugepage
file exists.
1. For Oracle Linux 7 and Red Hat Enterprise Linux 7, add or modify the
transparent_hugepage=never parameter in the /etc/default/grub file:
transparent_hugepage=never
For example:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet numa=off
transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
Note:
The file name may vary for your operating systems. Check your
operating system documentation for the exact file name and the steps to
disable Transparent HugePages.
2. Run the grub2-mkconfig command to regenerate the grub.cfg file:
# grub2-mkconfig -o /boot/grub2/grub.cfg
To check to see if nscd is set to load when the system is restarted, enter the command
chkconfig --list nscd. For example:
nscd is turned on for run level 3, and turned off for run level 5. nscd should be
turned on for both run level 3 and run level 5.
To change the configuration to ensure that nscd is on for both run level 3 and run level
5, enter the following command as root:
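For example, assuming the chkconfig utility referenced above, the command would be similar to:
# /sbin/chkconfig --level 35 nscd on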
To restart nscd with the new setting, enter the following command as root:
nscd
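On systemd-based distributions such as Oracle Linux 7, the nscd service can also be restarted with the following command (assuming nscd is installed as a systemd service):
# systemctl restart nscd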
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
In this example, the default disk I/O scheduler is Deadline and ASM_DISK is the Oracle
Automatic Storage Management (Oracle ASM) disk device.
On some virtual environments (VM) and special devices such as fast storage devices, the
output of the above command may be none. The operating system or VM bypasses the
kernel I/O scheduling and submits all I/O requests directly to the device. Do not change the
I/O Scheduler settings on such environments.
If the default disk I/O scheduler is not Deadline, then set it using a rules file:
1. Using a text editor, create a UDEV rules file for the Oracle ASM devices:
# vi /etc/udev/rules.d/60-oracle-schedulers.rules
2. Add the following line to the rules file and save it:
3. On clustered systems, copy the rules file to all other nodes on the cluster. For example:
4. Load the rules file and restart the UDEV service. For example:
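The following is an illustrative sketch for steps 2 through 4, assuming the Oracle ASM devices are SCSI disks matched by KERNEL=="sd?" and that node2 is another cluster node; adapt the device match, file name, and node names to your environment:
ACTION=="add|change", KERNEL=="sd?", ATTR{queue/scheduler}="deadline"
# scp /etc/udev/rules.d/60-oracle-schedulers.rules root@node2:/etc/udev/rules.d/
# udevadm control --reload-rules
# udevadm trigger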
Note:
Oracle configuration assistants use SSH for configuration operations from
local to remote nodes. Oracle Enterprise Manager also uses SSH. RSH is no
longer supported.
You can configure SSH from the OUI interface during installation for the user account
running the installation. The automatic configuration creates passwordless SSH
connectivity between all cluster member nodes. Oracle recommends that you use the
automatic procedure if possible.
To enable the script to run, you must remove stty commands from the profiles of any
existing Oracle software installation owners you want to use, and remove other
security measures that are triggered during a login, and that generate messages to the
terminal. These messages, mail checks, and other displays prevent Oracle software
installation owners from using the SSH configuration script that is built into OUI. If they
are not disabled, then SSH must be configured manually before an installation can be
run.
In rare cases, Oracle Clusterware installation can fail during the "AttachHome"
operation when the remote node closes the SSH connection. To avoid this problem,
set the timeout wait to unlimited by setting the following parameter in the SSH daemon
configuration file /etc/ssh/sshd_config on all cluster nodes:
LoginGraceTime 0
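After you change /etc/ssh/sshd_config, restart the SSH daemon so that the new setting takes effect; on systemd-based systems this is typically done with:
# systemctl restart sshd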
Oracle Clusterware requires the same time zone environment variable setting on all cluster
nodes. During installation, the installation process picks up the time zone (TZ) environment
variable setting of the Grid installation owner on the node where Oracle Universal Installer
(OUI) runs, and uses that time zone value on all nodes as the default TZ environment
variable setting for all processes managed by Oracle Clusterware. The time zone default is
used for databases, Oracle ASM, and any other managed processes. You have two options
for time synchronization:
• An operating system configured network time protocol (NTP) such as chronyd or ntpd
• Oracle Cluster Time Synchronization Service
Oracle Cluster Time Synchronization Service is designed for organizations whose cluster
servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time
Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP
daemons, then ctssd starts up in active mode and synchronizes time among cluster
members without contacting an external time server.
Note:
If you have NTP daemons on your server but you cannot configure them to synchronize time
with a time server, and you want to use Cluster Time Synchronization Service to provide
synchronization service in the cluster, then deactivate and deinstall NTP.
When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization
Service is installed in active mode and synchronizes the time across the nodes. If NTP is
found configured, then the Cluster Time Synchronization Service is started in observer mode,
and no active time synchronization is performed by Oracle Clusterware within the cluster.
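To check whether an NTP service is currently active on a node, you can query the service status; for example, on systemd-based systems (assuming chronyd or ntpd is the configured daemon):
# systemctl status chronyd
# systemctl status ntpd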
5
Configuring Networks for Oracle Grid
Infrastructure and Oracle RAC
Check that you have the networking hardware and internet protocol (IP) addresses required
for an Oracle Grid Infrastructure for a cluster installation.
• About Oracle Grid Infrastructure Network Configuration Options
Ensure that you have the networking hardware and internet protocol (IP) addresses
required for an Oracle Grid Infrastructure for a cluster installation.
• Understanding Network Addresses
During installation, you are asked to identify the planned use for each network interface
that Oracle Universal Installer (OUI) detects on your cluster node.
• Network Interface Hardware Minimum Requirements
Review these requirements to ensure that you have the minimum network hardware
technology for Oracle Grid Infrastructure clusters.
• Private IP Interface Configuration Requirements
Requirements for private interfaces depend on whether you are using single or multiple
interfaces.
• IPv4 and IPv6 Protocol Requirements
Oracle Grid Infrastructure and Oracle RAC support the standard IPv6 address notations
specified by RFC 2732 and global and site-local IPv6 addresses as defined by RFC
4193.
• Oracle Grid Infrastructure IP Name and Address Requirements
Review this information for Oracle Grid Infrastructure IP Name and Address
requirements.
• Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Broadcast communications (ARP and UDP) must work properly across all the public and
private interfaces configured for use by Oracle Grid Infrastructure.
• Multicast Requirements for Networks Used by Oracle Grid Infrastructure
For each cluster member node, the Oracle mDNS daemon uses multicasting on all
interfaces to communicate with other nodes in the cluster.
• Domain Delegation to Grid Naming Service
If you are configuring Grid Naming Service (GNS) for a standard cluster, then before
installing Oracle Grid Infrastructure you must configure DNS to send to GNS any name
resolution requests for the subdomain served by GNS.
• Configuration Requirements for Oracle Flex Clusters
Understand Oracle Flex Clusters and their configuration requirements.
• Grid Naming Service Cluster Configuration Example
Review this example to understand Grid Naming Service configuration.
• Manual IP Address Configuration Example
If you choose not to use GNS, then before installation you must configure public, virtual,
and private IP addresses.
After installation, if you modify the interconnect for Oracle Real Application Clusters
(Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you must
change the interconnect to a private IP address, on a subnet that is not used with a
public IP address, nor marked as a public subnet by oifcfg. Oracle does not support
changing the interconnect to an interface using a subnet that you have designated as
a public subnet.
You should not use a firewall on the network with the private network IP addresses,
because this can block interconnect traffic.
Note:
Starting with Oracle Grid Infrastructure 18c, using VIP is optional for Oracle
Clusterware deployments. You can specify VIPs for all or none of the cluster
nodes. However, specifying VIPs for selected cluster nodes is not supported.
Select an address for your VIP that meets the following requirements:
• The IP address and host name are currently unused (it can be registered in a
DNS, but should not be accessible by a ping command)
• The VIP is on the same subnet as your public interface
If you are not using Grid Naming Service (GNS), then determine a virtual host name
for each node. A virtual host name is a public node name that reroutes client requests
sent to the node if the node is down. Oracle Database uses VIPs for client-to-database
connections, so the VIP address must be publicly accessible. Oracle recommends that
you provide a name in the format hostname-vip. For example: myclstr2-vip.
Clients configured to use IP addresses for Oracle Database releases prior to Oracle
Database 11g release 2 can continue to use their existing connection addresses; using SCAN
is not required. When you upgrade to Oracle Clusterware 12c release 1 (12.1) or later
releases, the SCAN becomes available, and you should use the SCAN for connections to
Oracle Database 11g release 2 or later databases. When an earlier release of Oracle
Database is upgraded, it registers with the SCAN listeners, and clients can start using the
SCAN to connect to that database. The database registers with the SCAN listener through
the remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter must be
set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address for the SCAN, for
example, using HOST= SCAN_name.
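For example, using the sample SCAN name that appears later in this chapter and the default listener port 1521 (an assumption), the parameter setting would be similar to:
REMOTE_LISTENER=mycluster-scan.example.com:1521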
The SCAN is optional for most deployments. However, clients using Oracle Database 11g
release 2 and later policy-managed databases using server pools must access the database
using the SCAN. This is required because policy-managed databases can run on
different servers at different times, so connecting to a particular node by using the
virtual IP address for a policy-managed database is not possible.
Provide SCAN addresses for client access to the cluster. These addresses must be
configured as round robin addresses on the domain name service (DNS), if DNS is
used. Oracle recommends that you supply three SCAN addresses.
Identify public and private interfaces. Oracle Universal Installer configures public
interfaces for use by public and virtual IP addresses, and configures private IP
addresses on private interfaces. The private subnet that the private interfaces use
must connect all the nodes you intend to have as cluster members. The SCAN must
be in the same subnet as the public interface.
Related Topics
• Oracle Real Application Clusters Administration and Deployment Guide
Storage Networks
Oracle Automatic Storage Management and Oracle Real Application Clusters require
network-attached storage.
Oracle Automatic Storage Management (Oracle ASM): The network interfaces used for
Oracle Clusterware files are also used for Oracle ASM.
Third-party storage: Oracle recommends that you configure additional interfaces for storage.
When you define multiple interfaces, Oracle Clusterware creates from one to four
highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage
Management (Oracle ASM) instances use these interface addresses to ensure highly
available, load-balanced interface communication between nodes. The installer
enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for
private network communication, providing load-balancing across the set of interfaces
you identify for the private network. If a private interconnect interface fails or becomes
non-communicative, then Oracle Clusterware transparently moves the corresponding
HAIP address to one of the remaining functional interfaces.
• Each private interface should be on a different subnet.
• Each cluster member node must have an interface on each private interconnect
subnet, and these subnets must connect to every node of the cluster.
For example, you can have private networks on subnets 192.168.0 and 10.0.0, but
each cluster member node must have an interface connected to the 192.168.0 and
10.0.0 subnets.
• Endpoints of all designated interconnect interfaces must be completely reachable
on the network. There should be no node that is not connected to every private
network interface.
You can test if an interconnect interface is reachable using ping.
• You can use IPv4 and IPv6 addresses for the interfaces with Oracle Clusterware
Redundant interconnects.
Note:
During installation, you can define up to four interfaces for the private
network. The number of HAIP addresses created during installation is based
on both physical and logical interfaces configured for the network adapter.
After installation, you can define additional interfaces. If you define more than
four interfaces as private network interfaces, then be aware that Oracle
Clusterware activates only four of the interfaces at a time. However, if one of
the four active interfaces fails, then Oracle Clusterware transitions the HAIP
addresses configured to the failed interface to one of the reserve interfaces
in the defined set of private interfaces.
Related Topics
• Oracle Clusterware Administration and Deployment Guide
public network as IPv4 or IPv6 types of addresses. You can configure an IPv6 cluster by
selecting VIP and SCAN names that resolve to addresses in an IPv6 subnet for the cluster,
and selecting that subnet as public during installation. After installation, you can also
configure cluster member nodes with a mixture of IPv4 and IPv6 addresses.
If you install using static virtual IP (VIP) addresses in an IPv4 cluster, then the VIP names you
supply during installation should resolve only to IPv4 addresses. If you install using static
IPv6 addresses, then the VIP names you supply during installation should resolve only to
IPv6 addresses.
During installation, you cannot configure the cluster with VIP and SCAN names that resolve
to both IPv4 and IPv6 addresses. You cannot configure VIPs and SCANS on some cluster
member nodes to resolve to IPv4 addresses, and VIPs and SCANs on other cluster member
nodes to resolve to IPv6 addresses. Oracle does not support this configuration.
Note:
Link-local and site-local IPv6 addresses as defined in RFC 1884 are not supported.
Note:
All name servers in the DNS configuration for a cluster must resolve to all
host names used in the cluster, such as cluster node host name, VIP host
name, and SCAN host name.
Note:
Select your cluster name carefully. After installation, you can only change the
cluster name by reinstalling Oracle Grid Infrastructure.
domain, GNS processes the requests and responds with the appropriate addresses for
the name requested. To use GNS, you must specify a static IP address for the GNS
VIP address.
For example:
Copy the GNS Client data file to a secure path on the GNS Client node where you run the
GNS Client cluster installation. The Oracle installation user must have permissions to access
that file. Oracle recommends that no other user is granted permissions to access the GNS
Client data file. During installation, you are prompted to provide a path to that file.
For example:
Related Topics
• Oracle Clusterware Administration and Deployment Guide
Note:
The SCAN and cluster name are entered in separate fields during installation, so
cluster name requirements do not apply to the SCAN name.
Oracle strongly recommends that you do not configure SCAN VIP addresses in the
hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve
SCANs, then the SCAN can resolve to one IP address only.
Configuring SCANs in a DNS or a hosts file is the only supported configuration.
Configuring SCANs in a Network Information Service (NIS) is not supported.
Note:
All name servers in the DNS configuration for a cluster must be able to resolve all of the
host names used in the cluster, such as the cluster node host names, VIP host names, and
the SCAN host name.
The following example shows how to use the nslookup command to confirm that the DNS is
correctly associating the SCAN with the addresses:
$ nslookup mycluster-scan.example.com
Name: mycluster-scan.example.com
Address: 192.0.2.201
Name: mycluster-scan.example.com
Address: 192.0.2.202
Name: mycluster-scan.example.com
Address: 192.0.2.203
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
Oracle strongly recommends that you do not configure SCAN VIP addresses in the
hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve
SCANs, then the SCAN can resolve to one IP address only.
Configuring SCANs in a DNS or a hosts file is the only supported configuration.
Configuring SCANs in a Network Information Service (NIS) is not supported.
Oracle recommends that the subdomain name is distinct from your corporate domain. For
example, if your corporate domain is mycorp.example.com, the subdomain for GNS might be
rac-gns.mycorp.example.com.
If the subdomain is not distinct, then it should be for the exclusive use of GNS. For example,
if you delegate the subdomain mydomain.example.com to GNS, then there should be no other
domains that share it such as lab1.mydomain.example.com.
In the DNS, the delegation requires an address record for the GNS virtual IP address, similar to
the following:
mycluster-gns-vip.example.com A 192.0.2.1
The subdomain is then delegated to the GNS VIP with a name server record similar to the
following, where cluster01.example.com is the delegated subdomain:
cluster01.example.com NS mycluster-gns-vip.example.com
3. When using GNS, you must configure resolv.conf on the nodes in the cluster
(or the file on your system that provides resolution information) to contain name
server entries that are resolvable to corporate DNS servers. The total timeout
period configured, which is a combination of options attempts (retries) and options timeout
(exponential backoff), should be less than 30 seconds. For example, where
xxx.xxx.xxx.42 and xxx.xxx.xxx.15 are valid name server addresses in your
network, provide an entry similar to the following in /etc/resolv.conf:
options attempts:2
options timeout:1
nameserver xxx.xxx.xxx.42
nameserver xxx.xxx.xxx.15
In addition, ensure that the hosts entry in the /etc/nsswitch.conf file is configured similar to
the following:
hosts: files dns nis
Be aware that use of NIS is a frequent source of problems when doing cable pull tests,
as host name and user name resolution can fail.
The Oracle ASM network can be configured during installation, or configured or modified after
installation.
• Can provide database clients that are running on nodes of the Oracle ASM cluster
remote access to Oracle ASM for metadata, and allow database clients to perform
block I/O operations directly to Oracle ASM disks. The hosts running the Oracle
ASM server and the remote database client must both be cluster nodes.
About Oracle Flex ASM Cluster with Oracle IOServer (IOS) Configuration
An Oracle IOServer instance provides Oracle ASM file access for Oracle Database
instances on nodes of Oracle Member Clusters that do not have connectivity to Oracle
ASM managed disks. IOS enables you to configure Oracle Member Clusters on such
nodes. On the storage cluster, the IOServer instance on each node opens up network
ports to which clients send their I/O. The IOServer instance receives data packets from
the client and performs the appropriate I/O to Oracle ASM disks similar to any other
database client. On the client side, databases can use direct NFS (dNFS) to
communicate with an IOServer instance. However, no client side configuration is
required to use IOServer, so you are not required to provide a server IP address or
any additional configuration information. On nodes and clusters that are configured to
access Oracle ASM files through IOServer, the discovery of the Oracle IOS instance
occurs automatically.
To install an Oracle Member Cluster, the administrator of the Oracle Domain Services
Cluster creates an Oracle Member Cluster using a crsctl command that creates a
Member Cluster Manifest file. During Oracle Grid Infrastructure installation, if you
choose to install an Oracle Member Cluster, then the installer prompts you for the
Member Cluster Manifest file. An attribute in the Member Cluster Manifest file specifies
if the Oracle Member Cluster is expected to access Oracle ASM files through an
IOServer instance.
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
Syntax
You can create manual addresses using alphanumeric strings.
Example 5-1 Examples of Manually-Assigned Addresses
mycloud001nd; mycloud046nd; mycloud046-vip; mycloud348nd; mycloud784-vip
Identity       | Home Node | Host Node                      | Given Name                                     | Type    | Address     | Address Assigned By        | Resolved By
GNS VIP        | None      | Selected by Oracle Clusterware | mycluster-gns-vip.example.com                  | virtual | 192.0.2.1   | Fixed by net administrator | DNS
Node 1 Public  | Node 1    | node1                          | node1                                          | public  | 192.0.2.101 | Fixed                      | GNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | node1-vip                                      | virtual | 192.0.2.104 | DHCP                       | GNS
Node 1 Private | Node 1    | node1                          | node1-priv                                     | private | 192.168.0.1 | Fixed or DHCP              | GNS
Node 2 Public  | Node 2    | node2                          | node2                                          | public  | 192.0.2.102 | Fixed                      | GNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | node2-vip                                      | virtual | 192.0.2.105 | DHCP                       | GNS
Node 2 Private | Node 2    | node2                          | node2-priv                                     | private | 192.168.0.2 | Fixed or DHCP              | GNS
SCAN VIP 1     | none      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.201 | DHCP                       | GNS
SCAN VIP 2     | none      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.202 | DHCP                       | GNS
SCAN VIP 3     | none      | Selected by Oracle Clusterware | mycluster-scan.mycluster.cluster01.example.com | virtual | 192.0.2.203 | DHCP                       | GNS
For example, with a two-node cluster where each node has one public and one private
interface, and you have defined a SCAN domain address to resolve on your DNS to one of
three IP addresses, you might have the configuration shown in the following table for your
network interfaces:
Identity       | Home Node | Host Node                      | Given Name     | Type    | Address     | Address Assigned By | Resolved By
Node 1 Public  | Node 1    | node1                          | node1          | public  | 192.0.2.101 | Fixed               | DNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | node1-vip      | virtual | 192.0.2.104 | Fixed               | DNS and hosts file
Node 1 Private | Node 1    | node1                          | node1-priv     | private | 192.168.0.1 | Fixed               | DNS and hosts file, or none
Node 2 Public  | Node 2    | node2                          | node2          | public  | 192.0.2.102 | Fixed               | DNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | node2-vip      | virtual | 192.0.2.105 | Fixed               | DNS and hosts file
Node 2 Private | Node 2    | node2                          | node2-priv     | private | 192.168.0.2 | Fixed               | DNS and hosts file, or none
SCAN VIP 1     | none      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.201 | Fixed               | DNS
SCAN VIP 2     | none      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.202 | Fixed               | DNS
SCAN VIP 3     | none      | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.203 | Fixed               | DNS
You do not need to provide a private name for the interconnect. If you want name
resolution for the interconnect, then you can configure private IP names in the hosts
file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the
interface defined during installation as the private interface (eth1, for example), and on
the subnet used for the private network.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so
they are not fixed to a particular node. To enable VIP failover, the configuration shown
in the preceding table defines the SCAN addresses and the public and VIP addresses
of both nodes on the same subnet, 192.0.2.0/24.
Note:
All host names must conform to the RFC 952 standard, which permits
alphanumeric characters, but does not allow underscores ("_").
net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth0.rp_filter = 1
See Also:
https://support.oracle.com/rs?type=doc&id=1286796.1 for more information about
rp_filter for multiple private interconnects and Linux Kernel 2.6.32+
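To apply the rp_filter settings shown above without restarting the system, a hedged sketch
(assuming the entries have been added to /etc/sysctl.conf, or to a file under /etc/sysctl.d on
distributions that use that layout) is:

# sysctl -p

This reloads kernel parameters from /etc/sysctl.conf so that the reverse path filtering values
take effect immediately.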
6
Configuring Users, Groups and Environments
for Oracle Grid Infrastructure and Oracle
Database
Before installation, create operating system groups and users, and configure user
environments.
• Creating Groups, Users and Paths for Oracle Grid Infrastructure
Log in as root, and use the following instructions to locate or create the Oracle Inventory
group, and to create a software owner for Oracle Grid Infrastructure and directories for the
Oracle home.
• Oracle Installations with Standard and Job Role Separation Groups and Users
A job role separation configuration of Oracle Database and Oracle ASM is a configuration
with groups and users to provide separate groups for operating system authentication.
• Creating Operating System Privileges Groups
The following sections describe how to create operating system groups for Oracle Grid
Infrastructure and Oracle Database:
• Creating Operating System Oracle Installation User Accounts
Before starting installation, create Oracle software owner user accounts, and configure
their environments.
• Configuring Grid Infrastructure Software Owner User Environments
Understand the software owner user environments to configure before installing Oracle
Grid Infrastructure.
• Enabling Intelligent Platform Management Interface (IPMI)
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to
computer hardware and firmware that system administrators can use to monitor system
health and manage the system.
• Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
If the oraInst.loc file does not exist, then create the Oracle Inventory group.
• About Oracle Installation Owner Accounts
Select or create an Oracle installation owner for your installation, depending on the
group and user management plan you want to use for your installations.
• Restrictions for Oracle Software Installation Owners
Review the following restrictions for users created to own Oracle software.
• Identifying an Oracle Software Owner User Account
You must create at least one software owner user account the first time you install
Oracle software on the system. Either use an existing Oracle software user
account, or create an Oracle software owner user account for your installation.
• About the Oracle Base Directory for the grid User
Review this information about creating the Oracle base directory on each cluster
node.
• About the Oracle Home Directory for Oracle Grid Infrastructure Software
Review this information about creating the Oracle home directory location on each
cluster node.
• About Creating the Oracle Home and Oracle Base Directory
Create Grid home and Oracle base home directories on each cluster node.
inventory_loc=central_inventory_location
inst_group=group
Use the more command to determine if you have an Oracle central inventory on your
system. For example:
# more /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
Use the command grep groupname /etc/group to confirm that the group specified as
the Oracle Inventory group still exists on the system. For example:
# grep oinstall /etc/group
Note:
Do not put the oraInventory directory under the Oracle base directory for a new
installation, because that can result in user permission errors for other installations.
Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
If the oraInst.loc file does not exist, then create the Oracle Inventory group.
Members of the OINSTALL group are granted privileges to write to the Oracle central
inventory (oraInventory), and other system privileges for Oracle installation owner users.
An Oracle installation owner should always have the group you want to have designated as
the OINSTALL group (oinstall) as its primary group. Ensure that this group is available as
the primary group for all planned Oracle software installation owners. By default, if an
oraInst.loc file does not exist and an Oracle central inventory (oraInventory) is not
identified, then the installer designates the primary group of the installation owner running the
installation as the OINSTALL group.
The following example creates the oraInventory group oinstall, with the group ID number
54321:
# /usr/sbin/groupadd -g 54321 oinstall
Note:
For installations on Oracle Clusterware, group and user IDs must be identical on all
nodes in the cluster. Ensure that the group and user IDs you want to use are
available on each cluster member node, and confirm that the primary group for
each Oracle Grid Infrastructure for a cluster installation owner has the same name
and group ID.
• If an Oracle software owner user exists, but you want to use a different operating
system user, with different group membership, to separate Oracle Grid
Infrastructure administrative privileges from Oracle Database administrative
privileges.
In Oracle documentation, a user created to own only Oracle Grid Infrastructure
software installations is called the Grid user (grid). This user owns both the Oracle
Clusterware and Oracle Automatic Storage Management binaries. A user created to
own either all Oracle installations, or one or more Oracle database installations, is
called the Oracle user (oracle). You can have only one Oracle Grid Infrastructure
installation owner, but you can have different Oracle users to own different
installations.
Oracle software owners must have the Oracle Inventory group as their primary group,
so that each Oracle software installation owner can write to the central inventory
(oraInventory), and so that OCR and Oracle Clusterware resource permissions are set
correctly. The database software owner must also have the OSDBA group and (if you
create them) the OSOPER, OSBACKUPDBA, OSDGDBA, OSRACDBA, and
OSKMDBA groups as secondary groups.
$ export ORACLE_HOME=/u01/app/19.0.0/grid
• If you try to administer an Oracle home or Grid home instance using sqlplus,
lsnrctl, or asmcmd commands while the environment variable $ORACLE_HOME is set
to a different Oracle home or Grid home path, then you encounter errors. For
example, when you start SRVCTL from a database home, $ORACLE_HOME should
be set to that database home, or SRVCTL fails. The exception is when you are
using SRVCTL in the Oracle Grid Infrastructure home. In that case, $ORACLE_HOME
is ignored, and the Oracle home environment variable does not affect SRVCTL
commands. In all other cases, you must change $ORACLE_HOME to the instance that you
want to administer.
• To create separate Oracle software owners and separate operating system privileges
groups for different Oracle software installations, note that each of these users must have
the Oracle central inventory group (oraInventory group) as their primary group.
Members of this group are granted the OINSTALL system privileges to write to the Oracle
central inventory (oraInventory) directory, and are also granted permissions for various
Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware home to
which DBAs need write access, and other necessary privileges. Members of this group
are also granted execute permissions to start and stop Clusterware infrastructure
resources and databases. In Oracle documentation, this group is represented as
oinstall in code examples.
• Each Oracle software owner must be a member of the same central inventory
oraInventory group, and they must have this group as their primary group, so that all
Oracle software installation owners share the same OINSTALL system privileges. Oracle
recommends that you do not have more than one central inventory for Oracle
installations. If an Oracle software owner has a different central inventory group, then you
may corrupt the central inventory.
You can then use the id command to verify that the Oracle installation owners you intend to
use have the Oracle Inventory group as their primary group. For example:
$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54330(ra
cdba)
$ id grid
uid=54331(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),
54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)
For Oracle Restart installations, to successfully install Oracle Database, ensure that the grid
user is a member of the racdba group.
After you create operating system groups, create or modify Oracle user accounts in
accordance with your operating system authentication planning.
/u01/app/grid
Note:
Oracle home or Oracle base cannot be symlinks, nor can any of their parent
directories, all the way up to the root directory.
/u01/app/19.0.0/grid
Note:
Oracle home or Oracle base cannot be symlinks, nor can any of their parent
directories, all the way up to the root directory.
During installation, ownership of the entire path to the Grid home is changed to root
(/u01, /u01/app, /u01/app/19.0.0, /u01/app/19.0.0/grid). If you do not create a unique path
to the Grid home, then after the Grid install, you can encounter permission errors for other
installations, including any existing installations under the same path. To avoid placing the
application directory in the mount point under root ownership, you can create and select
paths such as the following for the Grid home:
/u01/19.0.0/grid
Caution:
For Oracle Grid Infrastructure for a cluster installations, note the following
restrictions for the Oracle Grid Infrastructure binary home (Grid home directory for
Oracle Grid Infrastructure):
• It must not be placed under one of the Oracle base directories, including the
Oracle base directory of the Oracle Grid Infrastructure installation owner.
• It must not be placed in the home directory of an installation owner. These
requirements are specific to Oracle Grid Infrastructure for a cluster installations.
Oracle Grid Infrastructure for a standalone server (Oracle Restart) can be installed
under the Oracle base for the Oracle Database installation.
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
Note:
Placing Oracle Grid Infrastructure for a cluster binaries on a cluster file
system is not supported.
If you plan to install an Oracle RAC home on a shared OCFS2 location, then
you must upgrade OCFS2 to at least version 1.4.1, which supports shared
writable memory maps.
Related Topics
• Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration
guidelines created to ensure well-organized Oracle installations, which simplifies
administration, support and maintenance.
Note:
To configure users for installation that are on a network directory service such as
Network Information Services (NIS), refer to your directory service documentation.
Related Topics
• Oracle Database Administrator’s Guide
• Oracle Automatic Storage Management Administrator's Guide
Create this group if you want a separate group of operating system users to have a
limited set of privileges for encryption key management such as Oracle Wallet Manager
management (the SYSKM privilege). To use this privilege, add the Oracle Database
installation owners as members of this group.
• The OSRACDBA group for Oracle Real Application Clusters Administration (typically,
racdba)
Create this group if you want a separate group of operating system users to have a
limited set of Oracle Real Application Clusters (RAC) administrative privileges (the
SYSRAC privilege). To use this privilege:
– Add the Oracle Database installation owners as members of this group.
– For Oracle Restart configurations, if you have a separate Oracle Grid Infrastructure
installation owner user (grid), then you must also add the grid user as a member of
the OSRACDBA group of the database to enable Oracle Grid Infrastructure
components to connect to the database.
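For example, a minimal sketch of adding an existing grid user to a racdba group as described in
this list (the group name racdba is the typical default, and -a appends the group without removing
existing secondary groups):

# /usr/sbin/usermod -a -G racdba grid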
Related Topics
• Oracle Database Administrator’s Guide
• Oracle Database Security Guide
During installation, you are prompted to provide a password for the ASMSNMP user. You can
create an operating system authenticated user, or you can create an Oracle Database user
called asmsnmp. In either case, grant the user SYSDBA privileges.
Create the OSDBA group using the group name dba, unless a group with that name
already exists:
# /usr/sbin/groupadd -g 54322 dba
The following example shows how to create the user grid with the user ID 54331; with
the primary group oinstall; and with secondary groups dba, asmdba, backupdba,
dgdba, kmdba, and racdba:
# /usr/sbin/useradd -u 54331 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba grid
You must note the user ID number for installation users, because you need it during
preinstallation.
For Oracle Grid Infrastructure installations, user IDs and group IDs must be identical
on all candidate nodes.
Warning:
Each Oracle software owner must be a member of the same central
inventory group. Do not modify the primary group of an existing Oracle
software owner account, or designate different groups as the OINSTALL
group. If Oracle software owner accounts have different groups as their
primary group, then you can corrupt the central inventory.
During installation, the user that is installing the software should have the OINSTALL
group as its primary group, and it must be a member of the operating system groups
appropriate for your installation. For example:
# /usr/sbin/usermod -g oinstall -G
dba,asmdba,backupdba,dgdba,kmdba,racdba[,oper] oracle
# id oracle
uid=54321(oracle) gid=54421(oinstall)
groups=54322(dba),54323(oper),54327(asmdba)
2. From the output, identify the user ID (uid) for the user and the group identities (gids) for
the groups to which it belongs.
Ensure that these ID numbers are identical on each node of the cluster. The user's
primary group is listed after gid. Secondary groups are listed after groups.
Note:
You are not required to use the UIDs and GIDs in this example. If a
group already exists, then use the groupmod command to modify it if
necessary. If you cannot use the same group ID for a particular group on
a node, then view the /etc/group file on all nodes to identify a group ID
that is available on every node. You must then change the group ID on
all nodes to the same group ID.
3. To create the Oracle Grid Infrastructure (grid) user, enter a command similar to
the following:
# /usr/sbin/useradd -u 54331 -g oinstall -G asmadmin,asmdba grid
In this command:
• The -u option specifies the user ID, which must be the user ID that you
identified earlier.
• The -g option specifies the primary group for the Grid user, which must be the
Oracle Inventory group (OINSTALL), which grants the OINSTALL system
privileges. In this example, the OINSTALL group is oinstall.
• The -G option specifies the secondary groups. The Grid user must be a
member of the OSASM group (asmadmin) and the OSDBA for ASM group
(asmdba).
Note:
If the user already exists, then use the usermod command to modify it if
necessary. If you cannot use the same user ID for the user on every
node, then view the /etc/passwd file on all nodes to identify a user ID
that is available on every node. You must then specify that ID for the
user on all of the nodes.
4. Set the password of the user that you created:
# passwd grid
• Creation of the Oracle Grid Infrastructure software owner (grid), and one Oracle
Database owner (oracle) with correct group memberships
• Creation and configuration of an Oracle base path compliant with OFA structure with
correct permissions
Enter the following commands to create a minimal operating system authentication
configuration:
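The following commands are a hedged sketch of such a minimal configuration, using the group and
user IDs shown elsewhere in this chapter (54321 for oinstall, 54322 for dba, 54321 for the oracle
user, and 54331 for the grid user); adjust the IDs to values that are available on all cluster
member nodes:

# groupadd -g 54321 oinstall
# groupadd -g 54322 dba
# useradd -u 54331 -g oinstall -G dba grid
# useradd -u 54321 -g oinstall -G dba oracle
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/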
After running these commands, you have the following groups and users:
• An Oracle central inventory group, or oraInventory group (oinstall). Members who have
the central inventory group as their primary group are granted the OINSTALL permission
to write to the oraInventory directory.
• One system privileges group, dba, for Oracle Grid Infrastructure, Oracle ASM and Oracle
Database system privileges. Members who have the dba group as their primary or
secondary group are granted operating system authentication for OSASM/SYSASM,
OSDBA/SYSDBA, OSOPER/SYSOPER, OSBACKUPDBA/SYSBACKUP, OSDGDBA/
SYSDG, OSKMDBA/SYSKM, OSDBA for ASM/SYSDBA for ASM, and OSOPER for
ASM/SYSOPER for Oracle ASM to administer Oracle Clusterware, Oracle ASM, and
Oracle Database, and are granted SYSASM and OSOPER for Oracle ASM access to the
Oracle ASM storage.
• An Oracle Grid Infrastructure for a cluster owner, or Grid user (grid), with the
oraInventory group (oinstall) as its primary group, and with the OSASM group (dba) as
the secondary group, with its Oracle base directory /u01/app/grid.
• An Oracle Database owner (oracle) with the oraInventory group (oinstall) as its
primary group, and the OSDBA group (dba) as its secondary group, with its Oracle base
directory /u01/app/oracle.
• /u01/app owned by grid:oinstall with 775 permissions before installation, and by root
after the root.sh script is run during installation. This ownership and permissions
enables OUI to create the Oracle Inventory directory, in the path /u01/app/
oraInventory.
• /u01 owned by grid:oinstall before installation, and by root after the root.sh script is
run during installation.
• /u01/app/19.0.0/grid owned by grid:oinstall with 775 permissions. These
permissions are required for installation, and are changed during the installation process.
• /u01/app/grid owned by grid:oinstall with 775 permissions. These permissions are
required for installation, and are changed during the installation process.
• /u01/app/oracle owned by oracle:oinstall with 775 permissions.
Note:
You can use one installation owner for both Oracle Grid Infrastructure and
any other Oracle installations. However, Oracle recommends that you use
separate installation owner accounts for each Oracle software installation.
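A hedged sketch of commands that create such a configuration follows. On a cluster, pass explicit
-g and -u ID values so that group and user IDs are identical on all nodes; the group and user names
used here match the descriptions in this example:

# groupadd oinstall
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# groupadd dba1
# groupadd oper1
# groupadd backupdba1
# groupadd dgdba1
# groupadd kmdba1
# groupadd dba2
# groupadd oper2
# groupadd backupdba2
# groupadd dgdba2
# groupadd kmdba2
# useradd -g oinstall -G asmadmin,asmdba grid
# useradd -g oinstall -G dba1,oper1,backupdba1,dgdba1,kmdba1,asmdba,asmoper oracle1
# useradd -g oinstall -G dba2,oper2,backupdba2,dgdba2,kmdba2,asmdba oracle2
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle1
# mkdir -p /u01/app/oracle2
# chown -R grid:oinstall /u01
# chown oracle1:oinstall /u01/app/oracle1
# chown oracle2:oinstall /u01/app/oracle2
# chmod -R 775 /u01/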
After running these commands, you have a set of administrative privileges groups and users
for Oracle Grid Infrastructure, and for two separate Oracle databases (DB1 and DB2):
• An Oracle Database software owner (oracle1), which owns the Oracle Database binaries
for DB1. The oracle1 user has the oraInventory group as its primary group, and the
OSDBA group for its database (dba1) and the OSDBA for ASM group for Oracle Grid
Infrastructure (asmdba) as secondary groups. In addition, the oracle1 user is a member
of asmoper, granting that user privileges to start up and shut down Oracle ASM.
• An OSDBA group (dba1). During installation, you identify the group dba1 as the
OSDBA group for the database installed by the user oracle1. Members of dba1
are granted the SYSDBA privileges for the Oracle Database DB1. Users who
connect as SYSDBA are identified as user SYS on DB1.
• An OSBACKUPDBA group (backupdba1). During installation, you identify the
group backupdba1 as the OSBACKUPDBA group for the database installed by the user
oracle1. Members of backupdba1 are granted the SYSBACKUP privileges for the
database installed by the user oracle1 to back up the database.
• An OSDGDBA group (dgdba1). During installation, you identify the group dgdba1
as the OSDGDBA group for the database installed by the user oracle1. Members
of dgdba1 are granted the SYSDG privileges to administer Oracle Data Guard for
the database installed by the user oracle1.
• An OSKMDBA group (kmdba1). During installation, you identify the group kmdba1
as the OSKMDBA group for the database installed by the user oracle1. Members
of kmdba1 are granted the SYSKM privileges to administer encryption keys for the
database installed by the user oracle1.
• An OSOPER group (oper1). During installation, you identify the group oper1 as
the OSOPER group for the database installed by the user oracle1. Members of
oper1 are granted the SYSOPER privileges (a limited set of the SYSDBA
privileges), including the right to start up and shut down the DB1 database. Users
who connect with OSOPER privileges are identified as user PUBLIC on DB1.
• An Oracle base /u01/app/oracle1 owned by oracle1:oinstall with 775
permissions. The user oracle1 has permissions to install software in this directory,
but in no other directory in the /u01/app path.
Example 6-3 Oracle Database DB2 Groups and Users Example
The command creates the following Oracle Database (DB2) groups and users:
• An Oracle Database software owner (oracle2), which owns the Oracle Database
binaries for DB2. The oracle2 user has the oraInventory group as its primary
group, and the OSDBA group for its database (dba2) and the OSDBA for ASM
group for Oracle Grid Infrastructure (asmdba) as secondary groups. However, the
oracle2 user is not a member of the asmoper group, so oracle2 cannot shut down
or start up Oracle ASM.
• An OSDBA group (dba2). During installation, you identify the group dba2 as the
OSDBA group for the database installed by the user oracle2. Members of dba2
are granted the SYSDBA privileges for the Oracle Database DB2. Users who
connect as SYSDBA are identified as user SYS on DB2.
• An OSBACKUPDBA group (backupdba2). During installation, you identify the
group backupdba2 as the OSBACKUPDBA group for the database installed by the user
oracle2. Members of backupdba2 are granted the SYSBACKUP privileges for the
database installed by the user oracle2 to back up the database.
• An OSDGDBA group (dgdba2). During installation, you identify the group dgdba2
as the OSDGDBA group for the database installed by the user oracle2. Members
of dgdba2 are granted the SYSDG privileges to administer Oracle Data Guard for
the database installed by the user oracle2.
• An OSKMDBA group (kmdba2). During installation, you identify the group kmdba2
as the OSKMDBA group for the database installed by the user oracle2. Members
of kmdba2 are granted the SYSKM privileges to administer encryption keys for the
database installed by the user oracle2.
• An OSOPER group (oper2). During installation, you identify the group oper2 as the
OSOPER group for the database installed by the user oracle2. Members of oper2 are
granted the SYSOPER privileges (a limited set of the SYSDBA privileges), including the
right to start up and shut down the DB2 database. Users who connect with OSOPER
privileges are identified as user PUBLIC on DB2.
• An Oracle base /u01/app/oracle2 owned by oracle2:oinstall with 775 permissions.
The user oracle2 has permissions to install software in this directory, but in no other
directory in the /u01/app path.
Caution:
If you have existing Oracle installations that you installed with the user ID
that is your Oracle Grid Infrastructure software owner, then unset all Oracle
environment variable settings for that user.
2. Enter a command similar to the following to enable remote hosts to display X applications
on the local X server, where hostname is the remote host:
$ xhost + hostname
3. If you are not logged in as the software owner user, then switch to the software
owner user you are configuring. For example, with the user grid:
$ su - grid
$ sudo -u grid -s
4. To determine the default shell for the user, enter the following command:
$ echo $SHELL
5. Open the user's shell startup file in any text editor:
• Bash shell:
$ vi .bash_profile
• Bourne shell or Korn shell:
$ vi .profile
• C shell:
% vi .login
6. Enter or edit the following line, specifying a value of 022 for the default file mode creation
mask:
umask 022
7. If the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variables are set in the file,
then remove these lines from the file.
8. Save the file, and exit from the text editor.
9. To run the shell startup script, enter one of the following commands:
• Bash shell:
$ . ./.bash_profile
• Bourne shell or Korn shell:
$ . ./.profile
• C shell:
% source ./.login
10. Use the following command to check the PATH environment variable:
$ echo $PATH
11. If you have an existing Oracle software installation, and you are using the same user to
install this installation, then unset the $ORACLE_HOME, $ORA_NLS10,
and $TNS_ADMIN environment variables.
If you have set $ORA_CRS_HOME as an environment variable, then unset it before
starting an installation or upgrade. Do not use $ORA_CRS_HOME as a user environment
variable, except as directed by Oracle Support.
12. If you are not installing the software on the local system, then enter a command similar to
the following to direct X applications to display on the local system:
• Bourne, Bash, or Korn shell:
$ export DISPLAY=local_host:0.0
• C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the system (your
workstation, or another client) on which you want to display the installer.
13. If the /tmp directory has less than 1 GB of free space, then identify a file system with at
least 1 GB of free space and set the TMP and TMPDIR environment variables to specify a
temporary directory on this file system:
Note:
You cannot use a shared file system as the location of the temporary file
directory (typically /tmp) for Oracle RAC installations. If you place /tmp
on a shared file system, then the installation fails.
a. Use the df -h command to identify a suitable file system with sufficient free
space.
b. If necessary, enter commands similar to the following to create a temporary
directory on the file system that you identified, and set the appropriate
permissions on the directory:
$ sudo -s
# mkdir /mount_point/tmp
# chmod 775 /mount_point/tmp
# exit
c. Enter commands similar to the following to set the TMP and TMPDIR
environment variables:
Bourne, Bash, or Korn shell:
$ TMP=/mount_point/tmp
$ TMPDIR=/mount_point/tmp
$ export TMP TMPDIR
C shell:
% setenv TMP /mount_point/tmp
% setenv TMPDIR /mount_point/tmp
14. To verify that the environment has been set correctly, enter the following
commands:
$ umask
$ env | more
Verify that the umask command displays a value of 22, 022, or 0022 and that the
environment variables you set in this section have the correct values.
Use the following ranges as guidelines for resource allocation to Oracle installation owners:
1. Log in as the installation owner.
2. Check the soft and hard limits for the file descriptor setting. Ensure that the result is in
the recommended range. For example:
$ ulimit -Sn
1024
$ ulimit -Hn
65536
3. Check the soft and hard limits for the number of processes available to a user. Ensure
that the result is in the recommended range. For example:
$ ulimit -Su
2047
$ ulimit -Hu
16384
4. Check the soft and hard limits for the stack setting. Ensure that the result is in the
recommended range. For example:
$ ulimit -Ss
10240
$ ulimit -Hs
32768
Note:
If you make changes to an Oracle installation user account and that user
account is logged in, then changes to the limits.conf file do not take effect
until you log these users out and log them back in. You must do this before
you use these accounts for installation.
Remote Display
Bourne, Korn, and Bash shells
$ export DISPLAY=hostname:0
C shell
% setenv DISPLAY hostname:0
For example, if you are using the Bash shell and if your host name is local_host, then
enter the following command:
$ export DISPLAY=local_host:0
X11 Forwarding
To ensure that X11 forwarding does not cause the installation to fail, use the following
procedure to create a user-level SSH client configuration file for Oracle installation
owner user accounts:
1. Using any text editor, edit or create the software installation owner's ~/.ssh/
config file.
2. Ensure that the ForwardX11 attribute in the ~/.ssh/config file is set to no. For
example:
Host *
ForwardX11 no
3. Ensure that the permissions on ~/.ssh are secured to the Oracle installation
owner user account. For example:
$ ls -al .ssh
total 28
drwx------ 2 grid oinstall 4096 Jun 21 2020 .
drwx------ 19 grid oinstall 4096 Jun 21 2020 ..
-rw-r--r-- 1 grid oinstall 1202 Jun 21 2020 authorized_keys
Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
stty intr ^C
fi
C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
Note:
If the remote shell can load hidden files that contain stty commands, then OUI
indicates an error and stops the installation.
Note:
IPMI operates on the physical hardware platform through the network
interface of the baseboard management controller (BMC). Depending on
your system configuration, an IPMI-initiated restart of a server can affect all
virtual environments hosted on the server. Contact your hardware and OS
vendor for more information.
Note:
If you configure IPMI, and you use Grid Naming Service (GNS) you still must
configure separate addresses for the IPMI interfaces. As the IPMI adapter is not
seen directly by the host, the IPMI adapter is not visible to GNS as an address on
the host.
# /sbin/modprobe ipmi_msghandler
# /sbin/modprobe ipmi_si
# /sbin/modprobe ipmi_devintf
3. (Optional) Run the command /sbin/lsmod |grep ipmi to confirm that the IPMI modules
are loaded. For example:
# /sbin/lsmod | grep ipmi
4. Open the /etc/rc.local file using a text editor, navigate to the end of the file, and
enter lines similar to the following, to run the modprobe commands in step 2
automatically on system restart:
# START IPMI ON SYSTEM RESTART
/sbin/modprobe ipmi_msghandler
/sbin/modprobe ipmi_si
/sbin/modprobe ipmi_devintf
On SUSE Linux Enterprise Server systems, add the modprobe commands above
to /etc/init.d/boot.local.
5. Check to ensure that the Linux system is recognizing the IPMI device, using the
following command:
ls -l /dev/ipmi0
If the IPMI device is dynamically loaded, then the output must be similar to the
following (the device numbers and timestamp vary by system):
crw------- 1 root root 246, 0 Mar 8 11:53 /dev/ipmi0
If you do see the device file output, then the IPMI driver is configured, and you can
ignore the following step.
If you do not see the device file output, then the udevd daemon is not set up to
create device files automatically. Proceed to the next step.
6. Determine the device major number for the IPMI device using the command grep
ipmi /proc/devices. For example:
1. Log in as root.
2. Verify that ipmitool can communicate with the BMC using the IPMI driver by using the
command bmc info, and looking for a device ID in the output. For example:
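Assuming the ipmitool utility is installed, a hedged sketch of this check (the Device ID value
varies by hardware):

# ipmitool bmc info
Device ID                 : 32
. . .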
If ipmitool is not communicating with the BMC, then review the section Configuring the
BMC and ensure that the IPMI driver is running.
3. Enable IPMI over LAN using the following procedure:
a. Determine the channel number for the channel used for IPMI over LAN. Beginning
with channel 1, run the following command until you find the channel that displays
LAN attributes (for example, the IP address):
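A hedged sketch, assuming ipmitool and starting with channel 1:

# ipmitool lan print 1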
. . .
IP Address Source : 0x01
IP Address : 140.87.155.89
. . .
b. Turn on LAN access for the channel found. For example, where the channel is 1:
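A hedged sketch, assuming ipmitool:

# ipmitool lan set 1 access on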
4. Configure IP address settings for IPMI using the static IP addressing procedure:
• Using static IP Addressing
If the BMC shares a network connection with ILOM, then the IP address must
be on the same subnet. You must set not only the IP address, but also the
proper values for netmask, and the default gateway. For example, assuming
the channel is 1:
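A hedged sketch, assuming ipmitool; the netmask and default gateway values shown are
illustrative and must match your network:

# ipmitool lan set 1 ipaddr 192.168.0.55
# ipmitool lan set 1 netmask 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 192.168.0.1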
Note that the specified address (192.168.0.55) is associated only with the
BMC, and does not respond to normal pings.
5. Establish an administration account with a username and password, using the
following procedure (assuming the channel is 1):
a. Set BMC to require password authentication for ADMIN access over LAN. For
example:
# ipmitool lan set 1 auth ADMIN MD5,PASSWORD
b. List the account slots on the BMC, and identify an unused slot less than the
maximum ID and not listed, for example, ID 4 in the following example. Note
that some slots may be reserved and not available for reuse on some
hardware.
In the example above, there are 20 possible slots, and the first unused slot is
number 4.
c. Assign the desired administrator user name and password and enable
messaging for the identified slot. (Note that for IPMI v1.5 the user name and
password can be at most 16 characters). Also, set the privilege level for that
slot when accessed over LAN (channel 1) to ADMIN (level 4). For example,
where username is the administrative user name, and password is the
password:
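A hedged sketch of these assignments, assuming ipmitool, user slot 4, and LAN channel 1 as in
this example:

# ipmitool user set name 4 username
# ipmitool user set password 4 password
# ipmitool user enable 4
# ipmitool channel setaccess 1 4 privilege=4 link=on ipmi=on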
d. Verify the setup using the command lan print 1. The output should appear similar
to the following. The entries shown reflect the settings made in the preceding
configuration steps, and comments or alternative options are indicated
within brackets []:
User ID : 4
User Name : username [This is the administration user]
Fixed Name : No
Access Available : call-in / callback
Link Authentication : enabled
IPMI Messaging : enabled
Privilege Level : ADMINISTRATOR
6. Verify that the BMC is accessible and controllable from a remote node in your cluster
using the bmc info command. For example, if node2-ipmi is the network host name
assigned the IP address of node2's BMC, then to verify the BMC on node node2 from
node1, with the administrator account username, enter the following command on node1:
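A hedged sketch, assuming ipmitool and the IPMI v1.5 LAN interface (-I lan); you can omit -P to
be prompted for the password instead:

# ipmitool -I lan -H node2-ipmi -U username -P password bmc info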
7. Repeat this process for each cluster member node. If the IPMI administrator
account credentials on each cluster member node are not identical, then IPMI will
fail during configuration.
7
Supported Storage Options for Oracle
Database and Oracle Grid Infrastructure
Review supported storage options as part of your installation planning process.
• Supported Storage Options for Oracle Grid Infrastructure
The following table shows the storage options supported for Oracle Grid Infrastructure
binaries and files:
• Oracle ACFS and Oracle ADVM
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) extends
Oracle ASM technology to support all of your application data in both single instance
and cluster configurations.
• Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
For all installations, you must choose the storage option to use for Oracle Grid
Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application
Clusters (Oracle RAC) databases.
• Guidelines for Using Oracle ASM Disk Groups for Storage
Plan how you want to configure Oracle ASM disk groups for deployment.
• Guidelines for Configuring Oracle ASM Disk Groups on NFS
Configuration guidelines for Automatic Storage Management (Oracle ASM) on NFS file
systems.
• Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
Oracle Grid Infrastructure and Oracle RAC only support cluster-aware volume managers.
• Using a Cluster File System for Oracle Clusterware Files
Starting with Oracle Grid Infrastructure 19c, you can use Oracle Automatic Storage
Management (Oracle ASM) or certified shared file system to store OCR files and voting
files.
• About NFS Storage for Data Files
Review this section for NFS storage configuration guidelines.
• About Direct NFS Client Mounts to NFS Storage Devices
Direct NFS Client integrates the NFS client functionality directly in the Oracle software to
optimize the I/O path between Oracle and the NFS server. This integration can provide
significant performance improvements.
• You can choose any combination of the supported storage options for each file type
provided that you satisfy all requirements listed for the chosen storage options.
• You can use Oracle ASM or shared file system to store Oracle Clusterware files.
• Direct use of raw or block devices is not supported. You can only use raw or block
devices under Oracle ASM.
Related Topics
• Oracle Database Upgrade Guide
Table 7-2 Platforms That Support Oracle ACFS and Oracle ADVM
Note:
If you use Security Enhanced Linux (SELinux) in enforcing mode with Oracle
ACFS, then ensure that you mount the Oracle ACFS file systems with an
SELinux default context. Refer to your Linux vendor documentation for
information about the context mount option.
Important:
You must apply patches to some of the Linux kernel versions for successful
Oracle Grid Infrastructure installation. Refer to the following notes for more
information:
• My Oracle Support Note 1369107.1 for more information and a complete
list of platforms and releases that support Oracle ACFS and Oracle
ADVM:
https://support.oracle.com/rs?type=doc&id=1369107.1
• Patch Set Updates for Oracle Products (My Oracle Support Note
854428.1) for current release and support information:
https://support.oracle.com/rs?type=doc&id=854428.1
– Starting with Oracle Database 18c, configuration assistants do not allow the creation
of Oracle Database homes on Oracle ACFS in an Oracle Restart configuration.
– Oracle Restart does not support Oracle ACFS resources on all platforms.
– Starting with Oracle Database 12c, Oracle Restart configurations do not support the
Oracle ACFS registry.
– On Linux, Oracle ACFS provides an automated mechanism to load and unload
drivers and mount and unmount Oracle ACFS file systems on system restart and
shutdown. However, Oracle ACFS does not provide automated recovery of mounted
file systems when the system is running. Oracle ACFS does not provide this
automated mechanism on operating systems other than Linux.
– Creating Oracle data files on an Oracle ACFS file system is not supported in Oracle
Restart configurations. Creating Oracle data files on an Oracle ACFS file system is
supported on Oracle Grid Infrastructure for a cluster configurations.
• Oracle ACFS and Oracle ADVM are not supported on IBM AIX Workload Partitions
(WPARs).
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
• You can choose any combination of the supported storage options for each file
type provided that you satisfy all requirements listed for the chosen storage
options.
• If you plan to install an Oracle RAC home on a shared OCFS2 location, then you
must upgrade OCFS2 to at least version 1.4.1, which supports shared writable
memory maps.
• To use Oracle ASM with Oracle RAC, and if you are configuring a new Oracle
ASM instance, then your system must meet the following conditions:
– All nodes on the cluster have Oracle Clusterware and Oracle ASM 19c
installed as part of an Oracle Grid Infrastructure for a cluster installation.
– Any existing Oracle ASM instance on any node in the cluster is shut down.
– To provide voting file redundancy, one Oracle ASM disk group is sufficient.
The Oracle ASM disk group provides three or five copies.
You can use NFS, with or without Direct NFS, to store Oracle Database data files.
Note:
If you install Oracle Database or Oracle RAC after you install Oracle Grid
Infrastructure, then you can either use the same disk group for database files, OCR,
and voting files, or you can use different disk groups. If you create multiple disk groups
before installing Oracle RAC or before creating a database, then you can do one of the
following:
• Place the data files in the same disk group as the Oracle Clusterware files.
• Use the same Oracle ASM disk group for data files and recovery files.
• Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting files, database files,
and recovery files are contained in the one disk group. If you create multiple disk groups for
storage, then you can place files in different disk groups.
With Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database
Configuration Assistant (DBCA) does not have the functionality to create disk groups for
Oracle ASM.
See Also:
Oracle Automatic Storage Management Administrator's Guide for information about
creating disk groups
Note:
All storage products must be supported by both your server and storage vendors.
Guidelines for Deploying Oracle ASM Disk Groups Without Quorum Disks
• To use an NFS file system, it must be on a supported NAS device. Log in to My Oracle
Support at the following URL, and click Certifications to find the most current information
about supported NAS devices:
https://support.oracle.com/
• NFS file systems must be mounted and available over NFS mounts before you start
installation. Refer to your vendor documentation to complete NFS configuration and
mounting.
• Direct NFS requires hard mounts. Hard mounting NFS filers prevents corruption which
could occur if the client connection were to time out. If an NFS filer hangs on an I/O
operation to a mirrored file, then the database and Oracle ASM cannot failover to the
surviving mirror copy. Therefore, Oracle recommends that you use external redundancy
when you deploy Oracle ASM disk groups on NFS storage.
• Oracle ASM Filter Driver and Oracle ACFS and Oracle ADVM are not supported with
NFS. These features are incompatible because of the nature of the operating system
interface for NFS-based storage.
Note:
Oracle ACFS does not support NFS disks in an Oracle ASM disk group,
but you can use NFS disks with Oracle ACFS using Oracle ACFS NAS
Maximum Availability eXtensions (Oracle ACFS NAS MAX).
• The performance of Oracle software and databases stored on Oracle ASM disk
groups on NFS depends on the performance of the network connection between
the Oracle server and the NAS device. Oracle recommends that you connect the
server to the NAS device using a private dedicated network connection, which
should be Gigabit Ethernet or better.
• You can configure Oracle ASM on NFS when you deploy an Oracle Standalone
Cluster configuration.
• You can specify separate NFS locations for Oracle ASM disk groups for Oracle
Clusterware files and the Grid Infrastructure Management Repository (GIMR).
• The user account with which you perform the installation (oracle or grid) must
have write permissions to create the files in the path that you specify.
• When you choose Oracle ASM on NFS, you cannot use Oracle Automatic Storage
Management Cluster File System (Oracle ACFS) for storage. This cluster
configuration cannot be used as an Oracle Fleet Patching and Provisioning Server.
Guidelines for Deploying Oracle ASM Disk Groups With Quorum Disks
• SAN-attached storage or iSCSI-attached devices are the preferred ways to
connect to quorum disks. If your standard deployment requires NFS to be used as
storage, then use soft mounts for NFS-based Oracle ASM quorum disks and hard
mounts for other Oracle ASM disks.
• You can use Direct NFS (dNFS) for storage of Oracle Database data files. dNFS
does not support soft mounts, so you cannot use dNFS for quorum failure groups.
Alternatively, use kernel-based NFS with a soft mount for NFS storage residing in
a quorum failure group.
• The quorum failure group feature in Oracle ASM enables use of NFS storage in an
Oracle ASM disk group without requiring a hard mount for NFS storage in the
quorum failure group. This capability is useful for Oracle Extended Clusters where
a third site is required for establishing quorum.
Related Topics
• Creating Files on a NAS Device for Use with Oracle Automatic Storage
Management
If you have a certified NAS storage device, then you can create zero-padded files
in an NFS mounted directory and use those files as disk devices in an Oracle ASM
disk group.
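As a brief, hedged illustration of that approach (the mount point, file name, and size are
hypothetical, and the referenced section contains the complete procedure):

# dd if=/dev/zero of=/nfs_mount/asmdisks/disk1 bs=1024k count=10000
# chown grid:asmadmin /nfs_mount/asmdisks/disk1
# chmod 660 /nfs_mount/asmdisks/disk1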
Note:
The performance of Oracle software and databases stored on NAS devices
depends on the performance of the network connection between the servers and
the network-attached storage devices.For better performance, Oracle recommends
that you connect servers to NAS devices using private dedicated network
connections. NFS network connections should use Gigabit Ethernet or better.
Note:
You can have only one active NFS Client implementation for each instance.
Enabling Direct NFS Client on an instance prevents you from using another NFS
Client implementation, such as kernel NFS Client.
See Also:
8
Configuring Storage for Oracle Grid
Infrastructure
Complete these procedures to configure Oracle Automatic Storage Management (Oracle
ASM) for Oracle Grid Infrastructure for a cluster.
Oracle Grid Infrastructure for a cluster provides system support for Oracle Database. Oracle
ASM is a volume manager and a file system for Oracle database files that supports single-instance
Oracle Database and Oracle Real Application Clusters (Oracle RAC) configurations.
Oracle Automatic Storage Management also supports a general purpose file system for your
application needs, including Oracle Database binaries. Oracle Automatic Storage
Management is Oracle's recommended storage management solution. It provides an
alternative to conventional volume managers and file systems.
Note:
Oracle ASM and shared file system are the supported storage management
solutions for Oracle Cluster Registry (OCR) and Oracle Clusterware voting files.
The OCR is a file that contains the configuration information and status of the
cluster. The installer automatically initializes the OCR during the Oracle Clusterware
installation. Database Configuration Assistant uses the OCR for storing the
configurations for the cluster databases that it creates.
Oracle Domain Services Cluster deployments store OCR and voting files and the Grid
Infrastructure Management Repository (GIMR) on separate Oracle ASM disk groups, and
hence require configuration of two separate Oracle ASM disk groups, one for OCR and
voting files and the other for the GIMR.
2. Determine whether you want to use Oracle ASM for Oracle Database files, recovery files,
and Oracle Database binaries. Oracle Database files include data files, control files, redo
log files, the server parameter file, and the password file.
Note:
• You do not have to use the same storage mechanism for Oracle Database
files and recovery files. You can use a shared file system for one file type
and Oracle ASM for the other.
• There are two types of Oracle Clusterware files: OCR files and voting files.
You can use either Oracle ASM or a shared file system to store OCR and
voting files on Oracle Standalone Cluster deployments, but you must use
Oracle ASM to store OCR and voting files on Oracle Domain Services
cluster deployments.
• If your database files are stored on a shared file system, then you can
continue to use the same for database files, instead of moving them to
Oracle ASM storage.
3. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files
in separate failure groups within a disk group. A quorum failure group, a special type of
failure group, contains mirror copies of voting files when voting files are stored in normal
or high redundancy disk groups. The disk groups that contain Oracle Clusterware files
(OCR and voting files) have a higher minimum number of failure groups than other disk
groups because the voting files are stored in quorum failure groups in the Oracle ASM
disk group.
A quorum failure group is a special type of failure group that is used to store the Oracle
Clusterware voting files. The quorum failure group is used to ensure that a quorum of the
specified failure groups are available. When Oracle ASM mounts a disk group that
contains Oracle Clusterware files, the quorum failure group is used to determine if the
disk group can be mounted in the event of the loss of one or more failure groups. Disks in
the quorum failure group do not contain user data; therefore, a quorum failure group is not
considered when determining redundancy requirements with respect to storing user data.
The redundancy levels are as follows:
• High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase
performance and provide the highest level of reliability. A high redundancy disk group
requires a minimum of three disk devices (or three failure groups). The effective disk
space in a high redundancy disk group is one-third the sum of the disk space in all of
its devices.
For Oracle Clusterware files, a high redundancy disk group requires a minimum of
five disk devices and provides five voting files and one OCR (one primary and two
secondary copies). For example, your deployment may consist of three regular failure
groups and two quorum failure groups. Note that not all failure groups can be quorum
failure groups, even though voting files need all five disks. With high
redundancy, the cluster can survive the loss of two failure groups.
While high redundancy disk groups do provide a high level of data protection,
you should consider the greater cost of additional storage devices before
deciding to select high redundancy disk groups.
• Normal redundancy
In a normal redundancy disk group, to increase performance and reliability,
Oracle ASM by default uses two-way mirroring. A normal redundancy disk
group requires a minimum of two disk devices (or two failure groups). The
effective disk space in a normal redundancy disk group is half the sum of the
disk space in all of its devices.
For Oracle Clusterware files, a normal redundancy disk group requires a
minimum of three disk devices and provides three voting files and one OCR
(one primary and one secondary copy). For example, your deployment may
consist of two regular failure groups and one quorum failure group. With
normal redundancy, the cluster can survive the loss of one failure group.
If you are not using a storage array providing independent protection against
data loss for storage, then Oracle recommends that you select normal
redundancy.
• External redundancy
An external redundancy disk group requires a minimum of one disk device.
The effective disk space in an external redundancy disk group is the sum of
the disk space in all of its devices.
Because Oracle ASM does not mirror data in an external redundancy disk
group, Oracle recommends that you use external redundancy with storage
devices such as RAID, or other similar devices that provide their own data
protection mechanisms.
• Flex redundancy
A flex redundancy disk group is a type of redundancy disk group with features
such as flexible file redundancy, mirror splitting, and redundancy change. A
flex disk group can consolidate files with different redundancy requirements
into a single disk group. It also provides the capability for databases to change
the redundancy of its files. A disk group is a collection of file groups, each
associated with one database. A quota group defines the maximum storage
space or quota limit of a group of databases within a disk group.
In a flex redundancy disk group, Oracle ASM uses three-way mirroring of
Oracle ASM metadata to increase performance and provide reliability. For
database data, you can choose no mirroring (unprotected), two-way mirroring
(mirrored), or three-way mirroring (high). A flex redundancy disk group
requires a minimum of three disk devices (or three failure groups).
• Extended redundancy
An extended redundancy disk group has features similar to a flex redundancy
disk group. Extended redundancy is available when you configure an Oracle
Extended Cluster. Extended redundancy extends Oracle ASM data protection
to cover failure of sites by placing enough copies of data in different failure
groups of each site. A site is a collection of failure groups. For extended
redundancy with three sites, for example, two data sites, and one quorum
failure group, the minimum number of disks is seven (three disks each for two
data sites and one quorum failure group outside the two data sites).
See Also:
Oracle Automatic Storage Management Administrator's Guide for more
information about file groups and quota groups for flex disk groups
Note:
You can alter the redundancy level of the disk group after a disk group is
created. For example, you can convert a normal or high redundancy disk group
to a flex redundancy disk group. Within a flex redundancy disk group, file
redundancy can change among three possible values: unprotected, mirrored, or
high.
4. Determine the total amount of disk space that you require for Oracle Clusterware files,
and for the database files and recovery files.
If an Oracle ASM instance is running on the system, then you can use an existing disk
group to meet these storage requirements. If necessary, you can add disks to an existing
disk group during the database installation.
See Oracle Clusterware Storage Space Requirements to determine the minimum number
of disks and the minimum disk space requirements for installing Oracle Clusterware files,
and installing the starter database, where you have voting files in a separate disk group.
5. Determine an allocation unit size.
Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is the
fundamental unit of allocation within a disk group. You can select the AU Size value from
1, 2, 4, 8, 16, 32, or 64 MB, depending on the specific disk group compatibility level. For
flex disk groups, the default value for AU size is set to 4 MB. For external, normal, and
high redundancies, the default AU size is 1 MB.
6. For Oracle Clusterware installations, you must also add additional disk space for the
Oracle ASM metadata. You can use the following formula to calculate the disk space
requirements (in MB) for OCR and voting files, and the Oracle ASM metadata:
Note:
You can define custom failure groups during installation of Oracle Grid
Infrastructure. You can also define failure groups after installation using
the GUI tool ASMCA, the command line tool asmcmd, or SQL
commands. If you define custom failure groups, then you must specify a
minimum of two failure groups for normal redundancy disk groups and
three failure groups for high redundancy disk groups.
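For illustration only, the following sketch shows one way to define custom failure groups with SQL, connected to the Oracle ASM instance as the SYSASM user; the disk group name, disk paths, and failure group names here are assumptions, not values prescribed by this guide:
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/sdb'
  FAILGROUP fg2 DISK '/dev/sdc'
  ATTRIBUTE 'au_size' = '4M';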
8. If you are sure that a suitable disk group does not exist on the system, then install
or identify appropriate disk devices to add to a new disk group. Use the following
guidelines when identifying appropriate disk devices:
• The disk devices must be owned by the user performing Oracle Grid
Infrastructure installation.
• All the devices in an Oracle ASM disk group must be the same size and have
the same performance characteristics.
• Do not specify multiple partitions on a single physical disk as a disk group
device. Oracle ASM expects each disk group device to be on a separate
physical disk.
• Although you can specify a logical volume as a device in an Oracle ASM disk
group, Oracle does not recommend using logical volumes, because they add a layer
of complexity that is unnecessary with Oracle ASM. Oracle recommends that if
you choose to use a logical volume manager, then use the logical volume
manager to represent a single logical unit number (LUN) without striping or
mirroring, so that you can minimize the effect on storage performance of the
additional storage layer.
9. If you use Oracle ASM disk groups created on Network File System (NFS) for
storage, then ensure that you follow recommendations described in Guidelines for
Configuring Oracle ASM Disk Groups on NFS.
Related Topics
• Storage Checklist for Oracle Grid Infrastructure
Review the checklist for storage hardware and configuration requirements for Oracle Grid
Infrastructure installation.
• Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum disk
space requirements based on the redundancy type, for installing Oracle Clusterware files
for various Oracle Cluster deployments.
• Configuring Storage Device Path Persistence Using Oracle ASMFD
Oracle ASM Filter Driver (Oracle ASMFD) maintains storage file path persistence and
helps to protect files from accidental overwrites.
Note:
Starting with Oracle Grid Infrastructure 19c, configuring GIMR is optional for Oracle
Standalone Cluster deployments. When upgrading to Oracle Grid Infrastructure
19c, a new GIMR is created only if the source Grid home has a GIMR configured.
Based on the cluster configuration you want to install, the Oracle Clusterware space
requirements vary for different redundancy levels. The following tables list the space
requirements for each cluster configuration and redundancy level.
Note:
The DATA disk group stores OCR and voting files, and the MGMT disk group stores
GIMR and Oracle Clusterware backup files.
The following values are the minimum space requirements for each redundancy level, broken down by DATA disk group, MGMT disk group, Oracle Fleet Patching and Provisioning, and total storage:
• External: DATA disk group 1 GB; MGMT disk group 28 GB, plus 5 GB for each node beyond four; Oracle Fleet Patching and Provisioning 1 GB; total storage 30 GB.
• Normal: DATA disk group 2 GB; MGMT disk group 56 GB, plus 5 GB for each node beyond four; Oracle Fleet Patching and Provisioning 2 GB; total storage 60 GB.
• High, Flex, or Extended: DATA disk group 3 GB; MGMT disk group 84 GB, plus 5 GB for each node beyond four; Oracle Fleet Patching and Provisioning 3 GB; total storage 90 GB.
• Oracle recommends that you use a separate disk group, other than DATA, for
GIMR and Oracle Clusterware backup files.
• The initial GIMR sizing for the Oracle Standalone Cluster is for up to four
nodes. You must add additional storage space to the disk group containing the
GIMR and Oracle Clusterware backup files for each new node added to the
cluster.
• By default, all new Oracle Standalone Cluster deployments are configured with
Oracle Fleet Patching and Provisioning for patching that cluster only. This
deployment requires a minimal ACFS file system that is automatically configured
in the same disk group as the GIMR.
• Oracle recommends that you use a separate disk group, other than DATA, for
Oracle Clusterware backup files.
• The initial sizing for the Oracle Standalone Cluster is for up to four nodes. You
must add additional storage space to the disk group containing Oracle Clusterware
backup files for each new node added to the cluster.
• By default, all new Oracle Standalone Cluster deployments are configured with
Oracle Fleet Patching and Provisioning for patching that cluster only. This
deployment requires a minimal ACFS file system that is automatically configured.
Table 8-3 Minimum Available Space Requirements for Oracle Member Cluster with
Local ASM
• For Oracle Member Cluster, the storage space for the GIMR is pre-allocated in the
centralized GIMR on the Oracle Domain Services Cluster as described in Table 8-5.
• Oracle recommends that you use a separate disk group, other than DATA, for Oracle
Clusterware backup files.
Table 8-4 Minimum Available Space Requirements for Oracle Domain Services
Cluster
The following values are the minimum space requirements for each redundancy level, broken down by DATA disk group, MGMT disk group, Trace File Analyzer, total storage, and additional space for Oracle Member Clusters:
• External: DATA disk group 1 GB, plus 1 GB for each Oracle Member Cluster; MGMT disk group 140 GB; Trace File Analyzer 200 GB; total storage 345 GB (excluding Oracle Fleet Patching and Provisioning); additional: Oracle Fleet Patching and Provisioning 100 GB, and GIMR 28 GB for each Oracle Member Cluster beyond four.
• Normal: DATA disk group 2 GB, plus 2 GB for each Oracle Member Cluster; MGMT disk group 280 GB; Trace File Analyzer 400 GB; total storage 690 GB (excluding Oracle Fleet Patching and Provisioning); additional: Oracle Fleet Patching and Provisioning 200 GB, and GIMR 56 GB for each Oracle Member Cluster beyond four.
• High, Flex, or Extended: DATA disk group 3 GB, plus 3 GB for each Oracle Member Cluster; MGMT disk group 420 GB; Trace File Analyzer 600 GB; total storage 1035 GB (excluding Oracle Fleet Patching and Provisioning); additional: Oracle Fleet Patching and Provisioning 300 GB, and GIMR 84 GB for each Oracle Member Cluster beyond four.
• By default, the initial space allocation for the GIMR is for the Oracle Domain Services
Cluster and four or fewer Oracle Member Clusters. You must add additional storage
space for each Oracle Member Cluster beyond four.
Related Topics
• About Oracle Standalone Clusters
An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure services and
Oracle ASM locally and requires direct access to shared storage.
• About Oracle Cluster Domain and Oracle Domain Services Cluster
An Oracle Cluster Domain is a choice of deployment architecture for new clusters,
introduced in Oracle Clusterware 12c Release 2.
• About Oracle Member Clusters
Oracle Member Clusters use centralized services from the Oracle Domain
Services Cluster and can host databases or applications. Oracle Member Clusters
can be of two types - Oracle Member Clusters for Oracle Databases or Oracle
Member Clusters for applications.
1. Connect to the Oracle ASM instance and start the instance if necessary:
$ $ORACLE_HOME/bin/asmcmd
ASMCMD> startup
2. Enter one of the following commands to view the existing disk groups, their redundancy
level, and the amount of free disk space in each one:
ASMCMD> lsdg
or
$ORACLE_HOME/bin/asmcmd -p lsdg
The lsdg command lists information about mounted disk groups only.
3. From the output, identify a disk group with the appropriate redundancy level and note the
free space that it contains.
4. If necessary, install or identify the additional disk devices required to meet the storage
requirements for your installation.
Note:
If you are adding devices to an existing disk group, then Oracle recommends that
you use devices that have the same size and performance characteristics as the
existing devices in that disk group.
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
• Oracle Database Upgrade Guide
Creating Files on a NAS Device for Use with Oracle Automatic Storage
Management
If you have a certified NAS storage device, then you can create zero-padded files in an NFS
mounted directory and use those files as disk devices in an Oracle ASM disk group.
Ensure that you specify the ASM discovery path for Oracle ASM disks.
During installation of Oracle Grid Infrastructure, Oracle Universal Installer (OUI) can create
files in the NFS mounted directory you specify. The following procedure explains how to
manually create files in an NFS mounted directory to use as disk devices in an Oracle ASM
disk group:
1. If necessary, create an exported directory for the disk group files on the NAS device.
2. Switch user to root.
3. Create a mount point directory on the local system.
For example:
# mkdir -p /mnt/oracleasm
4. To ensure that the NFS file system is mounted when the system restarts, add an entry for
the file system in the mount file /etc/fstab.
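For example, an /etc/fstab entry for such an NFS mount might look like the following; the server name, export path, and mount options shown here are placeholders, and the exact options you should use depend on your NAS vendor (see My Oracle Support Note 359515.1):
nas-server:/vol/oradata  /mnt/oracleasm  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0 0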
5. Enter a command similar to the following to mount the NFS on the local system:
# mount /mnt/oracleasm
6. Choose a name for the disk group to create, and create a directory for the files on the
NFS file system, using the disk group name as the directory name.
For example, if you want to set up a disk group for a sales database:
# mkdir /mnt/oracleasm/sales1
7. Use commands similar to the following to create the required number of zero-padded files
in this directory:
# dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k count=1000
This example creates 1 GB files on the NFS file system. You must create one, two, or
three files respectively to create an external, normal, or high redundancy disk group.
Note:
Creating multiple zero-padded files on the same NAS device does not guard
against NAS failure. Instead, create one file for each NAS device and mirror
them using the Oracle ASM technology.
8. Enter commands similar to the following to change the owner, group, and
permissions on the directory and files that you created:
In this example, the installation owner is grid and the OSASM group is asmadmin.
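A possible command sequence, assuming the /mnt/oracleasm/sales1 directory and disk files created in the previous steps:
# chown -R grid:asmadmin /mnt/oracleasm/sales1
# chmod 775 /mnt/oracleasm/sales1
# chmod 660 /mnt/oracleasm/sales1/disk*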
9. During Oracle Database installations, edit the Oracle ASM disk discovery string to
specify a regular expression that matches the file names you created.
For example:
/mnt/oracleasm/sales1/
Related Topics
• My Oracle Support Note 359515.1
Caution:
When you configure Oracle ASM, including Oracle ASMFD, do not modify or erase
the contents of the Oracle ASM disks, or modify any files, including the
configuration files.
Note:
Oracle ASMFD is supported on Linux x86–64 and Oracle Solaris operating
systems.
Related Topics
• Deinstalling Oracle ASMLIB On Oracle Grid Infrastructure
If Oracle ASM library driver (Oracle ASMLIB) is installed but you do not use it for device
path persistence, then deinstall Oracle ASMLIB.
• Oracle Automatic Storage Management Administrator's Guide
• Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM
Identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Oracle ASM disk group devices.
• Creating Disk Groups for Oracle Database Data Files
If you are sure that a suitable disk group does not exist on the system, then install or
identify appropriate disk devices to add to a new disk group.
• Creating Directories for Oracle Database Files
You can store Oracle Database and recovery files on a separate file system from the
configuration files.
Note:
If you define custom failure groups, then you must specify a minimum of two
failure groups for normal redundancy and three failure groups for high
redundancy.
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
1. Use the following command to determine the free disk space on each mounted file system:
# df -h
2. Identify the mount points that have appropriate free space for Oracle Database files and recovery files:
Database Files: Select one of the following:
• A single file system with at least 1.5 GB of free disk space
• Two or more file systems with at least 3.5 GB of free disk space in total
Recovery Files: Choose a file system with at least 2 GB of free disk space
If you are using the same file system for multiple file types, then add the disk space
requirements for each type to determine the total disk space requirement.
3. Note the names of the mount point directories for the file systems that you identified.
4. If the user performing installation has permissions to create directories on the disks
where you plan to install Oracle Database, then DBCA creates the Oracle Database file
directory, and the Recovery file directory. If the user performing installation does not have
write access, then you must create these directories manually.
For example, given the user oracle and Oracle Inventory Group oinstall, and using the
paths /u03/oradata/wrk_area for Oracle Database files, and /u01/oradata/
rcv_area for the recovery area, these commands create the recommended
subdirectories in each of the mount point directories and set the appropriate owner,
group, and permissions on them:
• Database file directory:
# mkdir -p /u03/oradata/wrk_area
# chown oracle:oinstall /u03/oradata/wrk_area
# chmod 775 /u03/oradata/wrk_area
• Recovery file directory (fast recovery area):
# mkdir -p /u01/oradata/rcv_area
# chown oracle:oinstall /u01/oradata/rcv_area
# chmod 775 /u01/oradata/rcv_area
For example, to use rsize and wsize buffer settings with the value 32768 for an
Oracle Database data files mount point, set mount point parameters to values similar
to the following:
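A representative /etc/fstab entry is shown below; the server and export names are placeholders, and you should confirm the complete option list for your platform and NFS appliance in My Oracle Support Note 359515.1:
nfs-server:/vol/DATA/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0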
Direct NFS Client issues writes at wtmax granularity to the NFS server.
Related Topics
• My Oracle Support note 359515.1
Oracle recommends that you set the value based on the link speed of your servers.
For example, perform the following steps:
1. As root, use a text editor to open /etc/sysctl.conf, and add or change the
following:
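The parameters involved are the kernel TCP socket buffer sizes; the values below are illustrative starting points only (an assumption, not a recommendation from this guide) and should be sized to the link speed of your servers:
net.core.rmem_default = 4194304
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 262144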
2. Apply your changes by running the following command:
# sysctl -p
3. Restart the network:
# /etc/rc.d/init.d/network restart
Create an oranfstab file with the following attributes for each NFS server that you want to
access using Direct NFS Client:
• server
The NFS server name.
For NFS setup with Kerberos authentication, the server attribute name must be the
fully-qualified name of the NFS server. This server attribute name is used to create
service principal for Ticket Granting Service (TGS) request from the Kerberos server. If
you are configuring external storage snapshot cloning, then the NFS server name
should be a valid host name. For all other scenarios, the NFS server name can be any
unique name.
• local
Up to four paths on the database host, specified by IP address or by name, as displayed
using the ifconfig command run on the database host.
• path
Up to four network paths to the NFS server, specified either by IP address, or by name,
as displayed using the ifconfig command on the NFS server.
• export
The exported path from the NFS server.
• mount
The corresponding local mount point for the exported volume.
• mnt_timeout
Specifies (in seconds) the time Direct NFS Client should wait for a successful mount
before timing out. This parameter is optional. The default timeout is 10 minutes (600).
• nfs_version
Specifies the NFS protocol version used by Direct NFS Client. Possible values are
NFSv3, NFSv4, NFSv4.1, and pNFS. The default version is NFSv3. If you select
NFSv4.x, then you must configure the value in oranfstab for nfs_version. Specify
nfs_version as pNFS, if you want to use Direct NFS with Parallel NFS.
• security_default
Specifies the default security mode applicable for all the exported NFS server paths for a
server entry. This parameter is optional. sys is the default value. See the description of
the security parameter for the supported security levels for the security_default
parameter.
• security
Specifies the security level, to enable security using Kerberos authentication
protocol with Direct NFS Client. This optional parameter can be specified per
export-mount pair. The supported security levels for the security_default and
security parameters are:
sys: UNIX level security AUTH_UNIX authentication based on user identifier
(UID) and group identifier (GID) values. This is the default value for security
parameters.
krb5: Direct NFS runs with plain Kerberos authentication. Server is
authenticated as the real server which it claims to be.
krb5i: Direct NFS runs with Kerberos authentication and NFS integrity. Server
is authenticated and each of the message transfers is checked for integrity.
krb5p: Direct NFS runs with Kerberos authentication and NFS privacy. Server
is authenticated, and all data is completely encrypted.
The security parameter, if specified, takes precedence over the
security_default parameter. If neither of these parameters are specified, then
sys is the default authentication.
For NFS server Kerberos security setup, review the relevant NFS server
documentation. For Kerberos client setup, review the relevant operating system
documentation.
• dontroute
Specifies that outgoing messages should not be routed by the operating system,
but instead sent using the IP address to which they are bound.
Note:
The dontroute option is a POSIX option, which sometimes does not
work on Linux systems with multiple paths in the same subnet.
• management
Enables Direct NFS Client to use the management interface for SNMP queries.
You can use this parameter if SNMP is running on separate management
interfaces on the NFS server. The default value is the server parameter value.
• community
Specifies the community string for use in SNMP queries. Default value is public.
The following examples show three possible NFS server entries in oranfstab. A single
oranfstab can have multiple NFS server entries.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
export: /vol/oradata1 mount: /mnt/oradata1
Example 8-2 Using Local and Path in the Same Subnet, with dontroute
Local and path in the same subnet, where dontroute is specified:
server: MyDataServer2
local: 192.0.2.0
path: 192.0.2.128
local: 192.0.2.1
path: 192.0.2.129
dontroute
export: /vol/oradata2 mount: /mnt/oradata2
server: MyDataServer3
local: LocalPath1
path: NfsPath1
local: LocalPath2
path: NfsPath2
local: LocalPath3
path: NfsPath3
local: LocalPath4
path: NfsPath4
dontroute
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
export: /vol/oradata6 mount: /mnt/oradata6
management: MgmtPath1
community: private
server: nfsserver
local: 192.0.2.0
path: 192.0.2.2
local: 192.0.2.3
path: 192.0.2.4
export: /private/oracle1/logs mount: /logs security: krb5
export: /private/oracle1/data mount: /data security: krb5p
export: /private/oracle1/archive mount: /archive security: sys
export: /private/oracle1/data1 mount: /data1
security_default: krb5i
Note:
If you remove an NFS path that an Oracle Database is using, then you must
restart the database for the change to take effect.
2. If SNMP is enabled on an interface other than the NFS server, then configure
oranfstab using the management parameter.
3. If SNMP is configured using a community string other than public, then configure
oranfstab file using the community parameter.
4. Ensure that libnetsnmp.so is installed by checking if snmpget is available.
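For example, you can perform a quick check as follows, assuming the Net-SNMP utilities are installed in a standard location:
$ which snmpget
$ snmpget -V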
Service, Oracle ASM storage server, and Oracle Fleet Patching and Provisioning
configuration.
Oracle Member Clusters use Oracle ASM storage from the Oracle Domain Services Cluster.
Grid Naming Service (GNS) without zone delegation must be configured so that the GNS
virtual IP address (VIP) is available for connection.
1. (Optional) If the Oracle Member Cluster accesses direct or indirect Oracle ASM storage,
then, enable access to the disk group. Connect to any Oracle ASM instance as SYSASM
user and run the command:
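The exact statement depends on your storage configuration. As an illustration only, enabling Oracle ASM File Access Control on a disk group named DATA (an assumed name, not a value from this guide) would look like this:
$ sqlplus / AS SYSASM
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'true';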
2. From the Grid home on the Oracle Domain Services Cluster, create the member cluster
manifest file:
cd Grid_home/bin
./crsctl create member_cluster_configuration member_cluster_name
-file cluster_manifest_file_name -member_type database|application
[-version member_cluster_version]
[-domain_services [asm_storage local|direct|indirect] [rhp] [acfs]]
• As root or grid user, export the Grid Naming Service (GNS) client data, to the
member cluster manifest file created earlier:
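A minimal sketch, assuming the manifest file created in the previous step is /tmp/member_cluster1.xml (verify the exact srvctl export gns options for your release):
$ srvctl export gns -clientdata /tmp/member_cluster1.xml -role client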
2. Change directory to the Grid home. For example:
$ cd /u01/app/19.0.0/grid
3. Ensure that the Oracle Grid Infrastructure installation owner has read and write
permissions on the storage mountpoint you want to use. For example, if you want
to use the mountpoint /u02/acfsmounts/:
$ ls -l /u02/acfsmounts
4. Start Oracle ASM Configuration Assistant as the grid installation owner. For
example:
./asmca
5. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk
group you created during installation. Click the ASM Cluster File Systems tab.
6. On the ASM Cluster File Systems page, right-click the Data disk, then select
Create ACFS for Database Use.
7. In the Create ACFS for Database window, enter the following information:
• Volume Name: Enter the name of the database home. The name must be
unique in your enterprise. For example: dbase_01
• Mount Point: Enter the directory path for the mount point. For
example: /u02/acfsmounts/dbase_01
Make a note of this mount point for future reference.
• Size (GB): Enter in gigabytes the size you want the database home to be. The
default is 12 GB, which is also the minimum recommended size.
• Owner Name: Enter the name of the Oracle Database installation owner you
plan to use to install the database. For example: oracle1
• Owner Group: Enter the OSDBA group whose members you plan to provide
when you install the database. Members of this group are given operating
system authentication for the SYSDBA privileges on the database. For
example: dba1
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
To check the OCFS2 version manually, enter the following commands:
modinfo ocfs2
rpm -qa | grep ocfs2
9
Installing Oracle Grid Infrastructure
Review this information for installation and deployment options for Oracle Grid Infrastructure.
Oracle Database and Oracle Grid Infrastructure installation software is available in multiple
media, and can be installed using several options. The Oracle Grid Infrastructure software is
available as an image, available for download from the Oracle Technology Network website,
or the Oracle Software Delivery Cloud portal. In most cases, you use the graphical user
interface (GUI) provided by Oracle Universal Installer to install the software. You can also use
Oracle Universal Installer to complete silent mode installations, without using the GUI. You
can also use Oracle Fleet Patching and Provisioning for subsequent Oracle Grid
Infrastructure and Oracle Database deployments.
• About Image-Based Oracle Grid Infrastructure Installation
Installation and configuration of Oracle Grid Infrastructure software is simplified with
image-based installation.
• Setup Wizard Installation Options for Creating Images
Before you start the setup wizards for your Oracle Database or Oracle Grid Infrastructure
installation, decide if you want to use any of the available image-creation options.
• Downloading the Software from Oracle Software Delivery Cloud Portal
You can download the software from Oracle Software Delivery Cloud.
• Understanding Cluster Configuration Options
Review these topics to understand the cluster configuration options available in Oracle
Grid Infrastructure 19c.
• Installing Oracle Grid Infrastructure for a New Cluster
Review these procedures to install the cluster configuration options available in this
release of Oracle Grid Infrastructure.
• Installing Oracle Grid Infrastructure Using a Cluster Configuration File
During installation of Oracle Grid Infrastructure, you have the option of either providing
cluster configuration information manually, or using a cluster configuration file.
• Installing Only the Oracle Grid Infrastructure Software
This installation option requires manual postinstallation steps to enable the Oracle Grid
Infrastructure software.
• About Deploying Oracle Grid Infrastructure Using Oracle Fleet Patching and Provisioning
Oracle Fleet Patching and Provisioning (Oracle FPP) is a software lifecycle management
method for provisioning and maintaining Oracle homes. Oracle Fleet Patching and
Provisioning enables mass deployment and maintenance of standard operating
environments for databases, clusters, and user-defined software types.
• Confirming Oracle Clusterware Function
After Oracle Grid Infrastructure installation, confirm that your Oracle Clusterware
installation is installed and running correctly.
• Confirming Oracle ASM Function for Oracle Clusterware Files
Confirm Oracle ASM is running after installing Oracle Grid Infrastructure.
Note:
You must extract the image software into the directory where you want your
Grid home to be located, and then run the Grid_home/gridSetup.sh
script to start the Oracle Grid Infrastructure Setup Wizard. Ensure that the
Grid home directory path you create is in compliance with the Oracle Optimal
Flexible Architecture recommendations.
Related Topics
• Installing Oracle Domain Services Cluster
Complete this procedure to install Oracle Grid Infrastructure software for Oracle
Domain Services Cluster.
• Installing Oracle Standalone Cluster
Complete this procedure to install Oracle Grid Infrastructure software for Oracle
Standalone Cluster.
• Installing Oracle Member Clusters
Complete this procedure to install Oracle Grid Infrastructure software for Oracle
Member Cluster for Oracle Database and Oracle Member Cluster for Applications.
Option Description
-createGoldImage Creates a gold image from the current Oracle home.
-destinationLocation Specify the complete path, or location, where the gold image will be created.
-exclFiles Specify the complete paths to the files to be excluded from the newly created gold image.
-help Displays help for all the available options.
For example:
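For illustration, the commands below sketch how a gold image might be created from an existing Oracle Database home and an existing Grid home, using the temporary locations described after the commands; treat them as a sketch rather than exact syntax for your release:
$ $ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /tmp/my_db_images
$ Grid_home/gridSetup.sh -createGoldImage -destinationLocation /tmp/my_grid_images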
Where:
/tmp/my_db_images is a temporary file location where the image zip file is created.
/tmp/my_grid_images is a temporary file location where the image zip file is created.
5. Select the operating system platform on which you want to install the software
from the Platform/Languages column.
6. Click Continue.
7. Review the license agreement.
8. Select the I reviewed and accept the Oracle License Agreement checkbox.
Click Continue.
9. Click Download to start downloading the software.
10. After you download the files, click View Digest to verify that the checksum
matches the value listed on the download page.
When you deploy an Oracle Standalone Cluster, you can also choose to configure it as an
Oracle Extended cluster. An Oracle Extended Cluster consists of nodes that are located in
multiple locations or sites.
[Figure: cluster storage architecture diagram showing the private network, SAN, and Oracle ASM network storage]
Related Topics
• About Oracle Member Clusters
Oracle Member Clusters use centralized services from the Oracle Domain
Services Cluster and can host databases or applications. Oracle Member Clusters
can be of two types - Oracle Member Clusters for Oracle Databases or Oracle
Member Clusters for applications.
Oracle Member Clusters cannot provide services to other clusters. For example, you cannot
configure and use a member cluster as a GNS server or Oracle Fleet Patching and
Provisioning Server.
Note:
Before running Oracle Universal Installer, you must specify the Oracle Domain
Services Cluster configuration details for the Oracle Member Cluster by creating the
Member Cluster Manifest file.
Oracle Member Cluster for Oracle Database does not support Oracle Database
12.1 or earlier, where Oracle Member Cluster is configured with Oracle ASM
storage as direct or indirect.
Related Topics
• Creating Member Cluster Manifest File for Oracle Member Clusters
Create a Member Cluster Manifest file to specify the Oracle Member Cluster
configuration for the Grid Infrastructure Management Repository (GIMR), Grid Naming
Service, Oracle ASM storage server, and Oracle Fleet Patching and Provisioning
configuration.
When you deploy an Oracle Standalone Cluster, you can also choose to configure the
cluster as an Oracle Extended Cluster. You can extend an Oracle RAC cluster across
two, or more, geographically separate sites, each equipped with its own storage. In the
event that one of the sites fails, the other site acts as an active standby.
Both Oracle ASM and the Oracle Database stack, in general, are designed to use
enterprise-class shared storage in a data center. Fibre Channel technology, however,
enables you to distribute compute and storage resources across two or more data
centers, and connect them through Ethernet cables and Fibre Channel, for compute
and storage needs, respectively.
You can configure an Oracle Extended Cluster when you install Oracle Grid
Infrastructure. You can also do so post installation using the ConvertToExtended script.
You manage your Oracle Extended Cluster using CRSCTL.
You can assign nodes and failure groups to sites. Sites contain failure groups, and
failure groups contain disks.
The following conditions apply when you select redundancy levels for Oracle Extended
Clusters:
Table 9-2 Oracle ASM Disk Group Redundancy Levels for Oracle Extended
Clusters with 2 Data Sites
Related Topics
• Converting to Oracle Extended Cluster After Upgrading Oracle Grid Infrastructure
Review this information to convert to an Oracle Extended Cluster after upgrading
Oracle Grid Infrastructure. Oracle Extended Cluster enables you to deploy Oracle
RAC databases on a cluster, in which some of the nodes are located in different
sites.
• Identifying Storage Requirements for Oracle Automatic Storage Management
To identify the storage requirements for using Oracle ASM, you must determine
the number of devices and the amount of free disk space that you require.
• Oracle Clusterware Administration and Deployment Guide
Note:
These installation instructions assume you do not already have any Oracle software
installed on your system. If you have already installed Oracle ASMLIB, then you
cannot install Oracle ASM Filter Driver (Oracle ASMFD) until you uninstall Oracle
ASMLIB. You can use Oracle ASMLIB instead of Oracle ASMFD for managing the
disks used by Oracle ASM.
1. As the grid user, download the Oracle Grid Infrastructure image files and extract
the files into the Grid home. For example:
$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
• Oracle home or Oracle base cannot be symlinks, nor can any of
their parent directories, all the way up to the root directory.
2. Configure the shared disks for use with Oracle ASM Filter Driver:
a. Log in as the root user and set the environment variable ORACLE_HOME to the
location of the Grid home.
For C shell:
$ su root
# setenv ORACLE_HOME /u01/app/19.0.0/grid
For Bash shell:
$ su root
# export ORACLE_HOME=/u01/app/19.0.0/grid
b. Use Oracle ASM command line tool (ASMCMD) to provision the disk devices
for use with Oracle ASM Filter Driver.
# cd /u01/app/19.0.0/grid/bin
# ./asmcmd afd_label DATA1 /dev/sdb --init
# ./asmcmd afd_label DATA2 /dev/sdc --init
# ./asmcmd afd_label DATA3 /dev/sdd --init
c. Verify the device has been marked for use with Oracle ASMFD.
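For example, you can list the Oracle ASMFD labels with the asmcmd afd_lslbl command (a quick check; output varies by release):
# ./asmcmd afd_lslbl /dev/sdb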
3. Log in as the grid user, and start the Oracle Grid Infrastructure installer by running the
following command:
$ /u01/app/19.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
4. Choose the option Configure Grid Infrastructure for a New Cluster, then click Next.
The Select Cluster Configuration window appears.
5. Choose the option Configure an Oracle Standalone Cluster, then click Next.
Select the Configure as Extended Cluster option to extend an Oracle RAC cluster
across two or more separate sites, each equipped with its own storage.
The Grid Plug and Play Information window appears.
6. In the Cluster Name and SCAN Name fields, enter the names for your cluster and
cluster scan that are unique throughout your entire enterprise network.
You can select Configure GNS if you have configured your domain name server (DNS)
to send to the GNS virtual IP address name resolution requests for the subdomain GNS
serves, as explained in this guide.
For cluster member node public and VIP network addresses, provide the information
required depending on the kind of cluster you are configuring:
• If you plan to use automatic cluster configuration with DHCP addresses configured
and resolved through GNS, then you only need to provide the GNS VIP names as
configured on your DNS.
• If you plan to use manual cluster configuration, with fixed IP addresses configured
and resolved on your DNS, then provide the SCAN names for the cluster, and the
public names, and VIP names for each cluster member node. For example, you can
choose a name that is based on the node names' common prefix. The cluster name
can be mycluster and the cluster SCAN name can be mycluster-scan.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your local
node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, OUI automatically fills in public and VIP fields. If your system
uses vendor clusterware, then OUI may fill additional fields.
• Host names and virtual host names are not domain-qualified. If you provide a domain
in the address field during installation, then OUI removes the domain from the
address.
• Interfaces identified as private for private IP addresses should not be accessible as
public interfaces. Using public interfaces for Cache Fusion can cause performance
problems.
• When you enter the public node name, use the primary host name of each node. In
other words, use the name displayed by the /bin/hostname command.
a. Click Add to add another node to the cluster.
b. Enter the second node's public name (node2), and virtual IP name (node2-
vip), then click OK. Provide the virtual IP (VIP) host name for all cluster
nodes, or none.
You are returned to the Cluster Node Information window. You should now see
all nodes listed in the table of cluster nodes.
c. Make sure all nodes are selected, then click the SSH Connectivity button at
the bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
d. Enter the operating system user name and password for the Oracle software
owner (grid). If you have configured SSH connectivity between the nodes,
then select the Reuse private and public keys existing in user home
option. Click Setup.
A message window appears, indicating that it might take several minutes to
configure SSH connectivity between the nodes. After a short period, another
message window appears indicating that passwordless SSH connectivity has
been established between the cluster nodes. Click OK to continue.
e. When returned to the Cluster Node Information window, click Next to continue.
The Specify Network Interface Usage window appears.
8. Select the usage type for each network interface displayed.
Verify that each interface has the correct interface type associated with it. If you
have network interfaces that should not be used by Oracle Clusterware, then set
the network interface type to Do Not Use. For example, if you have only two
network interfaces, then set the public interface to have a Use for value of Public
and set the private network interface to have a Use for value of ASM & Private.
Click Next. The Storage Option Information window appears.
9. Select storage option for Oracle Cluster Registry (OCR) and voting files:
a. Select Use Oracle Flex ASM for storage to store OCR and voting files on an
Oracle ASM disk group.
b. Select Use Shared File System to store OCR and voting files on a shared file
system, and then click Next. The Create Grid Infrastructure Management
Repository Option window appears.
10. Choose whether you want to create a Grid Infrastructure Management Repository
for your Oracle Standalone Cluster installation, then click Next.
If you choose Yes on this window, then the Grid Infrastructure Management
Repository Option window appears. Otherwise, the Create ASM Disk Group
window appears.
11. Choose whether you want to store the Grid Infrastructure Management Repository
in a separate Oracle ASM disk group, then click Next.
The Create ASM Disk Group window appears.
12. Provide the name and specifications for the Oracle ASM disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example
DATA.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
In the Add Disks section you should see the disks that you labeled in Step 2. If you
do not see the disks, click the Change Discovery Path button and provide a path
and pattern match for the disk, for example, /dev/sd*
During installation, disks labelled as Oracle ASMFD disks or Oracle ASMLIB disks
are listed as candidate disks when using the default discovery string. However, if the
disk has a header status of MEMBER, then it is not a candidate disk.
d. If you want to use Oracle ASM Filter Driver (Oracle ASMFD) to manage your Oracle
ASM disk devices, then select the option Configure Oracle ASM Filter Driver.
If you are installing on Linux systems, and you want to use Oracle ASM Filter Driver
(Oracle ASMFD) to manage your Oracle ASM disk devices, then you must deinstall
Oracle ASM library driver (Oracle ASMLIB) before starting Oracle Grid Infrastructure
installation.
When you have finished providing the information for the disk group, click Next.
13. If you selected to use a different disk group for the GIMR, then the Grid Infrastructure
Management Repository Option window appears. Provide the name and specifications
for the GIMR disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example DATA.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
When you have finished providing the information for the disk group, click Next.
The Specify ASM Password window appears.
14. Choose the same password for the Oracle ASM SYS and ASMSNMP account, or specify
different passwords for each account, then click Next.
The Failure Isolation Support window appears.
15. Select the option Do not use Intelligent Platform Management Interface (IPMI), then
click Next.
The Specify Management Options window appears.
16. If you have Enterprise Manager Cloud Control installed in your enterprise, then choose
the option Register with Enterprise Manager (EM) Cloud Control and provide the EM
configuration information. If you do not have Enterprise Manager Cloud Control installed
in your enterprise, then click Next to continue.
The Privileged Operating System Groups window appears.
17. Accept the default operating system group names for Oracle ASM administration and
click Next.
The Specify Install Location window appears.
18. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure
installation, then click Next. The Oracle base directory must be different from the Oracle
home directory.
If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid home
directory as directed in Step 1, then the default location for the Oracle base directory
should display as /u01/app/grid.
If you have not installed Oracle software previously on this computer, then the
Create Inventory window appears.
19. Change the path for the inventory directory, if required. Then, click Next.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The Root Script Execution Configuration window appears.
20. Select the option to Automatically run configuration scripts. Enter the
credentials for the root user or a sudo account, then click Next.
Alternatively, you can Run the scripts manually as the root user at the end of the
installation process when prompted by the installer.
The Perform Prerequisite Checks window appears.
21. If any of the checks have a status of Failed and are not Fixable, then you must
manually correct these issues. After you have fixed the issue, you can click the
Check Again button to have the installer recheck the requirement and update the
status. Repeat as needed until all the checks have a status of Succeeded. Click
Next.
The Summary window appears.
22. Review the contents of the Summary window and then click Install.
The installer displays a progress indicator enabling you to monitor the installation
process.
23. If you did not configure automation of the root scripts, then you are required to run
certain scripts as the root user, as specified in the Execute Configuration Scripts
window. Do not click OK until you have run all the scripts. Run the scripts on all
nodes as directed, in the order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity,
the examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command
to run the first script on node1:
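A possible command sequence for parts a and b, assuming the oraInventory directory /u01/app/oraInventory used elsewhere in this chapter:
[grid@node1 grid]$ cd /u01/app/oraInventory
[grid@node1 oraInventory]$ su
[root@node1 oraInventory]# ./orainstRoot.sh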
d. Enter the password for the root user, and then enter the following command to run
the first script on node2:
[root@node2 oraInventory]# ./orainstRoot.sh
e. After the orainstRoot.sh script finishes on node2, go to the terminal window you
opened in part a of this step. As the root user on node1, enter the following
commands to run the second script, root.sh:
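For example, assuming the Grid home /u01/app/19.0.0/grid created in Step 1 (the same commands, run as the root user on node2, apply to part f of this step):
[root@node1 oraInventory]# cd /u01/app/19.0.0/grid
[root@node1 grid]# ./root.sh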
Note:
You must run the root.sh script on the first node and wait for it to finish.
You can run root.sh scripts concurrently on all other nodes except for the
last node on which you run the script. Like the first node, the root.sh script
on the last node must be run separately.
f. After the root.sh script finishes on node1, go to the terminal window you opened in
part c of this step. As the root user on node2, enter the following commands:
After the root.sh script completes, return to the Oracle Universal Installer window
where the Installer prompted you to run the orainstRoot.sh and root.sh scripts.
Click OK.
The software installation monitoring window reappears.
24. Continue monitoring the installation until the Finish window appears. Then click Close to
complete the installation process and exit the installer.
Caution:
After installation is complete, do not remove manually or run cron jobs that
remove /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle
software is running on the server. If you remove these files, then the Oracle
software can encounter intermittent hangs. Oracle Clusterware installations can fail
with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Grid Infrastructure installation is complete, you can install Oracle Database
on a cluster node for high availability, or install Oracle RAC.
See Also:
Oracle Real Application Clusters Installation Guide or Oracle Database
Installation Guide for your platform for information on installing Oracle
Database
$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
• Oracle home or Oracle base cannot be symlinks, nor can any of
their parent directories, all the way up to the root directory.
2. Configure the shared disks for use with Oracle ASM Filter Driver:
a. Log in as the root user and set the environment variable ORACLE_HOME to the
location of the Grid home.
For C shell:
$ su root
# setenv ORACLE_HOME /u01/app/19.0.0/grid
For Bash shell:
$ su root
# export ORACLE_HOME=/u01/app/19.0.0/grid
b. Use Oracle ASM command line tool (ASMCMD) to provision the disk devices for use
with Oracle ASM Filter Driver.
# cd /u01/app/19.0.0/grid/bin
# ./asmcmd afd_label DATA1 /dev/sdb --init
# ./asmcmd afd_label DATA2 /dev/sdc --init
# ./asmcmd afd_label DATA3 /dev/sdd --init
c. Verify the device has been marked for use with Oracle ASMFD.
3. Log in as the grid user, and start the Oracle Grid Infrastructure installer by running the
following command:
$ /u01/app/19.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
4. Choose the option Configure Grid Infrastructure for a New Cluster, then click Next.
The Select Cluster Configuration window appears.
5. Choose the option Configure an Oracle Domain Services Cluster, then click Next.
The Grid Plug and Play Information window appears.
6. In the Cluster Name and SCAN Name fields, enter the names for your cluster and
cluster scan that are unique throughout your entire enterprise network.
You can select Configure GNS if you have configured your domain name server (DNS)
to send to the GNS virtual IP address name resolution requests for the subdomain GNS
serves, as explained in this guide.
For cluster member node public and VIP network addresses, provide the information
required depending on the kind of cluster you are configuring:
• If you plan to use automatic cluster configuration with DHCP addresses configured
and resolved through GNS, then you only need to provide the GNS VIP names as
configured on your DNS.
• If you plan to use manual cluster configuration, with fixed IP addresses configured
and resolved on your DNS, then provide the SCAN names for the cluster, and the
public names, and VIP names for each cluster member node. For example, you can
choose a name that is based on the node names' common prefix. This example uses
the cluster name mycluster and the cluster SCAN name of mycluster-scan.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your local
node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, OUI automatically fills in public and VIP fields. If your system
uses vendor clusterware, then OUI may fill additional fields.
• Host names and virtual host names are not domain-qualified. If you provide a
domain in the address field during installation, then OUI removes the domain
from the address.
• Interfaces identified as private for private IP addresses should not be
accessible as public interfaces. Using public interfaces for Cache Fusion can
cause performance problems.
• When you enter the public node name, use the primary host name of each
node. In other words, use the name displayed by the /bin/hostname
command.
a. Click Add to add another node to the cluster.
b. Enter the second node's public name (node2), and virtual IP name (node2-
vip), then click OK. Provide the virtual IP (VIP) host name for all cluster nodes,
or none.
You are returned to the Cluster Node Information window. You should now see
all nodes listed in the table of cluster nodes.
c. Make sure all nodes are selected, then click the SSH Connectivity button at
the bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
d. Enter the operating system user name and password for the Oracle software
owner (grid). If you have configured SSH connectivity between the nodes,
then select the Reuse private and public keys existing in user home
option. Click Setup.
A message window appears, indicating that it might take several minutes to
configure SSH connectivity between the nodes. After a short period, another
message window appears indicating that passwordless SSH connectivity has
been established between the cluster nodes. Click OK to continue.
e. When returned to the Cluster Node Information window, click Next to continue.
The Specify Network Interface Usage window appears.
8. Select the usage type for each network interface displayed.
Verify that each interface has the correct interface type associated with it. If you
have network interfaces that should not be used by Oracle Clusterware, then set
the network interface type to Do Not Use. For example, if you have only two
network interfaces, then set the public interface to have a Use for value of Public
and set the private network interface to have a Use for value of ASM & Private.
Click Next. The Create ASM Disk Group window appears.
9. Provide the name and specifications for the Oracle ASM disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example
DATA.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
In the Add Disks section you should see the disks that you labeled in Step 2. If
you do not see the disks, click the Change Discovery Path button and
provide a path and pattern match for the disk, for example, /dev/sd*.
During installation, disks labelled as Oracle ASMFD disks or Oracle ASMLIB disks
are listed as candidate disks when using the default discovery string. However, if the
disk has a header status of MEMBER, then it is not a candidate disk.
d. Check the option Configure Oracle ASM Filter Driver.
If you are installing on Linux systems, and you want to use Oracle ASM Filter Driver
(Oracle ASMFD) to manage your Oracle ASM disk devices, then you must deinstall
Oracle ASM library driver (Oracle ASMLIB) before starting Oracle Grid Infrastructure
installation.
When you have finished providing the information for the disk group, click Next.
The Grid Infrastructure Management Repository Option window appears.
10. Provide the name and specifications for the GIMR disk group.
a. In the Disk Group Name field, enter a name for the disk group, for example DATA1.
b. Choose the Redundancy level for this disk group. Normal is the recommended
option.
c. In the Add Disks section, choose the disks to add to this disk group.
d. Select the Configure Fleet Patching and Provisioning Server option to configure an
Oracle Fleet Patching and Provisioning Server as part of the Oracle Domain Services
Cluster. Oracle Fleet Patching and Provisioning enables you to install clusters, and
provision, patch, and upgrade Oracle Grid Infrastructure and Oracle Database
homes.
When you have finished providing the information for the disk group, click Next.
The Specify ASM Password window appears.
11. Choose the same password for the Oracle ASM SYS and ASMSNMP account, or specify
different passwords for each account, then click Next.
The Failure Isolation Support window appears.
12. Select the option Do not use Intelligent Platform Management Interface (IPMI), then
click Next.
The Specify Management Options window appears.
13. If you have Enterprise Manager Cloud Control installed in your enterprise, then choose
the option Register with Enterprise Manager (EM) Cloud Control and provide the EM
configuration information. If you do not have Enterprise Manager Cloud Control installed
in your enterprise, then click Next to continue.
You can manage Oracle Grid Infrastructure and Oracle Automatic Storage Management
(Oracle ASM) using Oracle Enterprise Manager Cloud Control. To register the Oracle
Grid Infrastructure cluster with Oracle Enterprise Manager, ensure that Oracle
Management Agent is installed and running on all nodes of the cluster.
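A quick, hedged way to confirm that the Management Agent is running on a node is to query its status from the agent installation; agent_home below is a placeholder for your Management Agent instance home:
$ agent_home/bin/emctl status agent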
The Privileged Operating System Groups window appears.
14. Accept the default operating system group names for Oracle ASM administration and
click Next.
The Specify Install Location window appears.
15. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure
installation, then click Next. The Oracle base directory must be different from the Oracle
home directory.
If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid
home directory as directed in Step 1, then the default location for the Oracle base
directory should display as /u01/app/grid.
If you have not installed Oracle software previously on this computer, then the
Create Inventory window appears.
16. Change the path for the inventory directory, if required. Then, click Next.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The Root Script Execution Configuration window appears.
17. Select the option to Automatically run configuration scripts. Enter the
credentials for the root user or a sudo account, then click Next.
Alternatively, you can Run the scripts manually as the root user at the end of the
installation process when prompted by the installer.
The Perform Prerequisite Checks window appears.
18. If any of the checks have a status of Failed and are not Fixable, then you must
manually correct these issues. After you have fixed the issue, you can click the
Check Again button to have the installer recheck the requirement and update the
status. Repeat as needed until all the checks have a status of Succeeded. Click
Next.
The Summary window appears.
19. Review the contents of the Summary window and then click Install.
The installer displays a progress indicator enabling you to monitor the installation
process.
20. If you did not configure automation of the root scripts, then you are required to run
certain scripts as the root user, as specified in the Execute Configuration Scripts
window. Do not click OK until you have run all the scripts. Run the scripts on all
nodes as directed, in the order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity,
the examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command
to run the first script on node1:
c. After the orainstRoot.sh script finishes on node1, open another terminal window,
and as the grid user, enter the following commands:
d. Enter the password for the root user, and then enter the following command to run
the first script on node2:
e. After the orainstRoot.sh script finishes on node2, go to the terminal window you
opened in part a of this step. As the root user on node1, enter the following
commands to run the second script, root.sh:
Note:
You must run the root.sh script on the first node and wait for it to finish. If
your cluster has three or more nodes, then root.sh can be run concurrently
on all nodes but the first. Node numbers are assigned according to the
order of running root.sh. If you want to create a particular node number
assignment, then run the root scripts in the order of the node assignments
you want to make, and wait for the script to finish running on each node
before proceeding to run the script on the next node. However, Oracle
system identifiers (SIDs) for your Oracle RAC databases do not follow the
node numbers.
f. After the root.sh script finishes on node1, go to the terminal window you opened in
part c of this step. As the root user on node2, enter the following commands:
After the root.sh script completes, return to the OUI window where the Installer
prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.
The software installation monitoring window reappears.
When you run root.sh during Oracle Grid Infrastructure installation, the Trace File
Analyzer (TFA) Collector is also installed in the Grid_home/tfa directory.
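For reference, if you run the scripts manually, the overall sequence looks like the following sketch. The inventory and Grid home paths are the ones used in this guide's examples, and the prompts are illustrative; run root.sh on node1 and wait for it to finish before starting it on node2:
[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@node1 ~]# /u01/app/19.0.0/grid/root.sh
[root@node2 ~]# /u01/app/19.0.0/grid/root.sh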
21. After root.sh runs on all the nodes, OUI runs Net Configuration Assistant (netca) and
Cluster Verification Utility. These programs run without user intervention.
22. During the installation, Oracle Automatic Storage Management Configuration Assistant
(asmca) configures Oracle ASM for storage.
23. Continue monitoring the installation until the Finish window appears. Then click Close to
complete the installation process and exit the installer.
Caution:
After installation is complete, do not manually remove, or run cron jobs that
remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while
Oracle software is running on the server. If you remove these files, then the
Oracle software can encounter intermittent hangs. Oracle Clusterware
installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Domain Services Cluster installation is complete, you can install
Oracle Member Clusters for Oracle Databases and Oracle Member Clusters for
Applications.
$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid.zip
grid.zip is the name of the Oracle Grid Infrastructure image zip file.
Note:
• You must extract the zip image software into the directory where you
want your Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the
local node only. During installation, the software is copied and
installed on all other nodes in the cluster.
• Oracle home or Oracle base cannot be symlinks, nor can any of
their parent directories, all the way up to the root directory.
2. Log in as the grid user, and start the Oracle Grid Infrastructure installer by running
the following command:
$ /u01/app/19.0.0/grid/gridSetup.sh
The installer starts and the Select Configuration Option window appears.
3. Choose the option Configure Grid Infrastructure for a New Cluster, then click Next.
The Select Cluster Configuration window appears.
4. Choose either the Configure an Oracle Member Cluster for Oracle Databases or
Configure an Oracle Member Cluster for Applications option, then click Next.
The Cluster Domain Services window appears.
5. Select the Manifest file that contains the configuration details about the management
repository and other services for the Oracle Member Cluster.
For Oracle Member Cluster for Oracle Databases, you can also specify the Grid Naming
Service and Oracle ASM Storage server details using a Member Cluster Manifest file.
Click Next.
6. If you selected to configure an Oracle Member Cluster for applications, then the
Configure Virtual Access window appears. Provide a Cluster Name and optional Virtual
Host Name.
The virtual host name serves as a connection address for the Oracle Member Cluster,
and provides service access to the software applications that you want the Oracle
Member Cluster to install and run.
Click Next.
The Cluster Node Information window appears.
7. In the Public Hostname column of the table of cluster nodes, you should see your local
node, for example node1.example.com.
The following is a list of additional information about node IP addresses:
• For the local node only, Oracle Universal Installer (OUI) automatically fills in public
and VIP fields. If your system uses vendor clusterware, then OUI may fill additional
fields.
• Host names and virtual host names are not domain-qualified. If you provide a domain
in the address field during installation, then OUI removes the domain from the
address.
• Interfaces identified as private for private IP addresses should not be accessible as
public interfaces. Using public interfaces for Cache Fusion can cause performance
problems.
• When you enter the public node name, use the primary host name of each node. In
other words, use the name displayed by the /bin/hostname command.
a. Click Add to add another node to the cluster.
b. Enter the second node's public name (node2), and virtual IP name (node2-vip), then
click OK. Provide the virtual IP (VIP) host name for all cluster nodes, or none.
You are returned to the Cluster Node Information window. You should now see all
nodes listed in the table of cluster nodes.
c. Make sure all nodes are selected, then click the SSH Connectivity button at the
bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
d. Enter the operating system user name and password for the Oracle software owner
(grid). If you have configured SSH connectivity between the nodes, then select the
Reuse private and public keys existing in user home option. Click Setup.
If you are using the same directory names as the examples in this book, then it
should show a value of /u01/app/oraInventory. The group name for the
oraInventory directory should show oinstall.
The Root Script Execution Configuration window appears.
13. Select the option to Automatically run configuration scripts. Enter the
credentials for the root user or a sudo account, then click Next.
Alternatively, you can Run the scripts manually as the root user at the end of the
installation process when prompted by the installer.
The Perform Prerequisite Checks window appears.
14. If any of the checks have a status of Failed and are not Fixable, then you must
manually correct these issues. After you have fixed the issue, you can click the
Check Again button to have the installer recheck the requirement and update the
status. Repeat as needed until all the checks have a status of Succeeded. Click Next.
The Summary window appears.
15. Review the contents of the Summary window and then click Install.
The installer displays a progress indicator enabling you to monitor the installation
process.
16. If you did not configure automation of the root scripts, then you are required to run certain
scripts as the root user, as specified in the Execute Configuration Scripts window. Do not
click OK until you have run all the scripts. Run the scripts on all nodes as directed, in the
order shown.
For example, on Oracle Linux you perform the following steps (note that for clarity, the
examples show the current user, node and directory in the prompt):
a. As the grid user on node1, open a terminal window, and enter the following
commands:
b. Enter the password for the root user, and then enter the following command to run
the first script on node1:
c. After the orainstRoot.sh script finishes on node1, open another terminal window,
and as the grid user, enter the following commands:
d. Enter the password for the root user, and then enter the following command to run
the first script on node2:
e. After the orainstRoot.sh script finishes on node2, go to the terminal window you
opened in part a of this step. As the root user on node1, enter the following
commands to run the second script, root.sh:
Note:
You must run the root.sh script on the first node and wait for it to
finish. If your cluster has three or more nodes, then root.sh can be
run concurrently on all nodes but the first. Node numbers are
assigned according to the order of running root.sh. If you want to
create a particular node number assignment, then run the root
scripts in the order of the node assignments you want to make, and
wait for the script to finish running on each node before proceeding
to run the script on the next node. However, Oracle system
identifiers (SIDs) for your Oracle RAC databases do not follow the
node numbers.
f. After the root.sh script finishes on node1, go to the terminal window you
opened in part c of this step. As the root user on node2, enter the following
commands:
After the root.sh script completes, return to the OUI window where the
Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click
OK.
The software installation monitoring window reappears.
When you run root.sh during Oracle Grid Infrastructure installation, the Trace File
Analyzer (TFA) Collector is also installed in the Grid_home/tfa directory.
17. After root.sh runs on all the nodes, OUI runs Net Configuration Assistant (netca)
and Cluster Verification Utility. These programs run without user intervention.
18. During installation of Oracle Member Cluster for Oracle Databases, if the Member
Cluster Manifest file does not include configuration details for Oracle ASM, then
Oracle Automatic Storage Management Configuration Assistant (asmca) configures
Oracle ASM for storage.
19. Continue monitoring the installation until the Finish window appears. Then click
Close to complete the installation process and exit the installer.
Caution:
After installation is complete, do not manually remove, or run cron jobs that
remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while
Oracle software is running on the server. If you remove these files, then the
Oracle software can encounter intermittent hangs. Oracle Clusterware
installations can fail with the error:
CRS-0184: Cannot communicate with the CRS daemon.
After your Oracle Grid Infrastructure installation is complete, you can install Oracle Database
on a cluster node for high availability, install other applications, or install Oracle RAC.
Related Topics
• Creating Member Cluster Manifest File for Oracle Member Clusters
Create a Member Cluster Manifest file to specify the Oracle Member Cluster
configuration for the Grid Infrastructure Management Repository (GIMR), Grid Naming
Service, Oracle ASM storage server, and Oracle Fleet Patching and Provisioning
configuration.
See Also:
Oracle Real Application Clusters Installation Guide or Oracle Database Installation
Guide for your platform for information on installing Oracle Database
To create a cluster configuration file manually, start a text editor, and create a file that
provides the name of the public and virtual IP addresses for each cluster member node, in
the following format:
node1 node1-vip
node2 node2-vip
.
.
.
Specify one node per line, separating the public node name and the VIP name with either a
space or a colon (:).
For example:
mynode1 mynode1-vip
mynode2 mynode2-vip
or:
mynode1:mynode1-vip
mynode2:mynode2-vip
#
# Cluster nodes configuration specification file
#
# Format:
# node [vip] [site-name]
#
# node - Node's public host name
# vip - Node's virtual host name
# site-name - Node's assigned site
#
# Specify details of one node per line.
# Lines starting with '#' will be skipped.
#
# (1) vip is not required for Oracle Grid Infrastructure software only
# installs and Oracle Member cluster for Applications
# (2) vip should be specified as AUTO if Node Virtual host names are
#     Dynamically assigned
# (3) site-name should be specified only when configuring Oracle Grid
#     Infrastructure with "Extended Cluster" option
#
# Examples:
# --------
# For installing GI software only on a cluster:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1
# node2
#
# For Standalone Cluster:
# ^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip
# node2 node2-vip
#
# For Standalone Extended Cluster:
# ^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip sitea
# node2 node2-vip siteb
#
# For Domain Services Cluster:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip
# node2 node2-vip
#
# For Member Cluster for Oracle Database:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1 node1-vip
# node2 node2-vip
#
# For Member Cluster for Applications:
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# node1
# node2
#
See Also:
Oracle Clusterware Administration and Deployment Guide for information about
cloning an Oracle Grid Infrastructure installation to other nodes that were not
included in the initial installation of Oracle Grid Infrastructure, and then adding them
to the cluster
Note:
You can use the gridSetup.sh command with the -applyRU and
-applyOneOffs flags to install Release Updates (RUs) and one-off patches
during an Oracle Grid Infrastructure installation or upgrade.
8. Configure the cluster using the Oracle Universal Installer (OUI) configuration
wizard or response files.
Related Topics
• Configuring Software Binaries for Oracle Grid Infrastructure for a Cluster
Configure the software binaries by starting Oracle Grid Infrastructure configuration
wizard in GUI mode.
• Configuring the Software Binaries Using a Response File
When you install or copy Oracle Grid Infrastructure software on any node, you can
defer configuration for a later time. Review this procedure for completing
configuration after the software is installed or copied on nodes, using the
configuration wizard (gridSetup.sh).
• Installing Oracle Grid Infrastructure for a New Cluster
Review these procedures to install the cluster configuration options available in
this release of Oracle Grid Infrastructure.
• Applying Patches During an Oracle Grid Infrastructure Installation or Upgrade
Starting with Oracle Grid Infrastructure 18c, you can download and apply Release
Updates (RUs) and one-off patches during an Oracle Grid Infrastructure
installation or upgrade.
$ ./gridSetup.sh
3. Provide information as needed for configuration. OUI validates the information and
configures the installation on all cluster nodes.
4. When you complete providing information, OUI shows you the Summary page, listing the
information you have provided for the cluster. Verify that the summary has the correct
information for your cluster, and click Install to start configuration of the local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
5. When prompted, run root scripts.
6. When you confirm that all root scripts are run, OUI checks the cluster configuration
status, and starts other configuration tools as needed.
To configure the Oracle Grid Infrastructure software binaries using a response file:
1. As the Oracle Grid Infrastructure installation owner (grid), start Oracle Universal Installer
in Oracle Grid Infrastructure configuration wizard mode from the Oracle Grid
Infrastructure software-only home using the following syntax, where filename is the
response file name:
$ ./gridSetup.sh -responseFile filename
For example:
$ cd /u01/app/19.0.0/grid
$ ./gridSetup.sh -responseFile /u01/app/grid/response/response_file.rsp
2. When you complete configuring values, OUI shows you the Summary page, listing all
information you have provided for the cluster. Verify that the summary has the correct
information for your cluster, and click Install to start configuration of the local node.
When configuration of the local node is complete, OUI copies the Oracle Grid
Infrastructure configuration file to other cluster member nodes.
3. When prompted, run root scripts.
4. When you confirm that all root scripts are run, OUI checks the cluster configuration
status, and starts other configuration tools as needed.
./gridSetup.sh oracle_install_crs_Ping_Targets=Host1|IP1,Host2|IP2
The ping utility contacts the comma-separated list of host names or IP addresses
Host1|IP1,Host2|IP2 to determine whether the public network is available. If none of
the hosts respond, then the network is considered to be offline. Use addresses outside the
cluster, such as the address of a switch or router.
For example:
./gridSetup.sh oracle_install_crs_Ping_Targets=192.0.2.1,192.0.2.2
Note:
Starting with Oracle Grid Infrastructure 19c, the feature formerly known as
Rapid Home Provisioning (RHP) is now Oracle Fleet Patching and
Provisioning (Oracle FPP).
Oracle Fleet Patching and Provisioning enables you to install clusters, and provision,
patch, scale, and upgrade Oracle Grid Infrastructure, Oracle Restart, and Oracle
Database homes. The supported versions are 11.2, 12.1, 12.2, 18c, and 19c. You can
also provision applications and middleware using Oracle Fleet Patching and
Provisioning.
Oracle Fleet Patching and Provisioning is a service in Oracle Grid Infrastructure that
you can use in either of the following modes:
• Central Oracle Fleet Patching and Provisioning Server
The Oracle Fleet Patching and Provisioning Server stores and manages
standardized images, called gold images. Gold images can be deployed to any
number of nodes across the data center. You can create new clusters and
databases on the deployed homes and can use them to patch, upgrade, and scale
existing installations.
The Oracle Fleet Patching and Provisioning Server can manage the following types of
installations:
– Software homes on the cluster hosting the Oracle Fleet Patching and Provisioning
Server itself.
– Oracle Fleet Patching and Provisioning Clients running Oracle Grid Infrastructure 12c
Release 2 (12.2), 18c, and 19c.
– Installations running Oracle Grid Infrastructure 11g Release 2 (11.2) and 12c Release
1 (12.1).
– Installations running without Oracle Grid Infrastructure.
The Oracle Fleet Patching and Provisioning Server can provision new installations and
can manage existing installations without requiring any changes to the existing
installations. The Oracle Fleet Patching and Provisioning Server can automatically share
gold images among peer servers to support enterprises with geographically distributed
data centers.
• Oracle Fleet Patching and Provisioning Client
The Oracle Fleet Patching and Provisioning Client can be managed from the Oracle Fleet
Patching and Provisioning Server, or directly by executing commands on the client itself.
The Oracle Fleet Patching and Provisioning Client is a service built into the Oracle Grid
Infrastructure and is available in Oracle Grid Infrastructure 12c Release 2 (12.2) and later
releases. The Oracle Fleet Patching and Provisioning Client can retrieve gold images
from the Oracle Fleet Patching and Provisioning Server, upload new images based on
the policy, and apply maintenance operations to itself.
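As a brief illustration only (hedged; the exact rhpctl syntax depends on your release), you can list the gold images and working copies that an Oracle Fleet Patching and Provisioning Server knows about:
$ rhpctl query image
$ rhpctl query workingcopy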
For example, a cluster-wide check of Oracle Clusterware status produces output of the following form:
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
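Output in this form is produced by a cluster-wide status check. Assuming the Grid home bin directory is in your path, the check can be run as follows:
$ crsctl check cluster -all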
Note:
After installation is complete, do not manually remove, or run cron jobs that
remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle
Clusterware is up. If you remove these files, then Oracle Clusterware could
encounter intermittent hangs, and you will encounter error CRS-0184: Cannot
communicate with the CRS daemon.
For example:
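A typical Oracle ASM status check, run as the grid user from the Grid home, with illustrative output (node names follow this guide's examples):
$ srvctl status asm
ASM is running on node1,node2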
Note:
To manage Oracle ASM or Oracle Net 11g Release 2 (11.2) or later installations,
use the srvctl binary in the Oracle Grid Infrastructure home for a cluster (Grid
home). If you have Oracle Real Application Clusters or Oracle Database installed,
then you cannot use the srvctl binary in the database home to manage Oracle
ASM or Oracle Net.
10
Oracle Grid Infrastructure Postinstallation
Tasks
Complete configuration tasks after you install Oracle Grid Infrastructure.
You are required to complete some configuration tasks after Oracle Grid Infrastructure is
installed. In addition, Oracle recommends that you complete additional tasks immediately
after installation. You must also complete product-specific configuration tasks before you use
those products.
Note:
This chapter describes basic configuration only. Refer to product-specific
administration and tuning guides for more detailed configuration and tuning
information.
Note:
If you are not a My Oracle Support registered user, then click Register
for My Oracle Support and register.
to add an Oracle Database for a standalone server or an Oracle RAC database, then you
should create the fast recovery area for database files.
• Checking the SCAN Configuration
The Single Client Access Name (SCAN) is a name that is used to provide service access
for clients to the cluster. Because the SCAN is associated with the cluster as a whole,
rather than to a particular node, the SCAN makes it possible to add or remove nodes
from the cluster without needing to reconfigure clients.
• Setting Resource Limits for Oracle Clusterware and Associated Databases and
Applications
After you have completed Oracle Grid Infrastructure installation, you can set resource
limits in the Grid_home/crs/install/s_crsconfig_nodename_env.txt file.
If you install other products in the same Oracle home directory subsequent to this installation,
then Oracle Universal Installer updates the contents of the existing root.sh script during the
installation. If you require information contained in the original root.sh script, then you can
recover it from the backed up root.sh file.
• About the Fast Recovery Area and the Fast Recovery Area Disk Group
The fast recovery area is a unified storage location for all Oracle Database files
related to recovery. Enabling rapid backups for recent data can reduce requests to
system administrators to retrieve backup tapes for recovery operations.
• Creating the Fast Recovery Area Disk Group
Procedure to create the fast recovery area disk group.
About the Fast Recovery Area and the Fast Recovery Area Disk Group
The fast recovery area is a unified storage location for all Oracle Database files related
to recovery. Enabling rapid backups for recent data can reduce requests to system
administrators to retrieve backup tapes for recovery operations.
Database administrators can define the DB_RECOVERY_FILE_DEST parameter to the
path for the fast recovery area to enable on disk backups and rapid recovery of data.
When you enable fast recovery in the init.ora file, Oracle Database writes all RMAN
backups, archive logs, control file automatic backups, and database copies to the fast
recovery area. RMAN automatically manages files in the fast recovery area by deleting
obsolete backups and archiving files no longer required for recovery.
Oracle recommends that you create a fast recovery area disk group. Oracle
Clusterware files and Oracle Database files can be placed on the same disk group,
and you can also place fast recovery files in the same disk group. However, Oracle
recommends that you create a separate fast recovery disk group to reduce storage
device contention.
The fast recovery area is enabled by setting the DB_RECOVERY_FILE_DEST
parameter. The size of the fast recovery area is set with
DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast recovery
area, the more useful it becomes. For ease of use, Oracle recommends that you
create a fast recovery area disk group on storage devices that can contain at least
three days of recovery information. Ideally, the fast recovery area is large enough to
hold a copy of all of your data files and control files, the online redo logs, and the
archived redo log files needed to recover your database using the data file backups
kept under your retention policy.
Multiple databases can use the same fast recovery area. For example, assume you
have created a fast recovery area disk group on disks with 150 GB of storage, shared
by 3 different databases. You can set the size of the fast recovery area for each database
depending on the importance of each database. For example, if database1 is your
least important database, database2 is of greater importance, and database3 is of
greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE
settings for each database to meet your retention target for each database: 30 GB for
database1, 50 GB for database2, and 70 GB for database3.
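A minimal sketch of setting these parameters for one of the databases, assuming a fast recovery area disk group named FRA and a 50 GB quota (illustrative values only); set the size before the destination:
$ sqlplus / as sysdba <<EOF
ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
EOF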
For example, to create the fast recovery area disk group using ASMCA:
$ cd /u01/app/19.0.0/grid/bin
$ ./asmca
In the Select Member Disks field, select eligible disks you want to add to the fast recovery
area, and click OK.
5. When the Fast Recovery Area disk group creation is complete, click Exit and click Yes to
confirm closing the ASMCA application.
After installation, when a client sends a request to the cluster, the Oracle Clusterware
SCAN listeners redirect client requests to servers in the cluster.
See Also:
Oracle Clusterware Administration and Deployment Guide for more
information about system checks and configurations
The resource limits apply to all Oracle Clusterware processes and Oracle databases
managed by Oracle Clusterware. For example, to set a higher number of processes
limit, edit the file and set the CRS_LIMIT_NPROC parameter to a high value.
---
#Do not modify this file except as documented above or under the
#direction of Oracle Support Services.
########################################################################
#
TZ=PST8PDT
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
CRS_LIMIT_STACK=2048
CRS_LIMIT_OPENFILE=65536
CRS_LIMIT_NPROC=65536
TNS_ADMIN=
parameter is FALSE, so that only the Oracle Database installation owner has read and write
permissions to the SGA. Group access to the SGA is removed by default. This change affects
all Linux and UNIX platforms.
If members of the OSDBA group require read access to the SGA, then you can change the
initialization parameter ALLOW_GROUP_ACCESS_TO_SGA setting from FALSE to TRUE.
Oracle strongly recommends that you accept the default permissions that limit access to the
SGA to the oracle user account.
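If members of the OSDBA group do require read access, a minimal sketch of changing this static parameter (it takes effect at the next instance restart) is:
$ sqlplus / as sysdba <<EOF
ALTER SYSTEM SET allow_group_access_to_sga = TRUE SCOPE=SPFILE SID='*';
EOF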
Related Topics
• Oracle Database Reference
When installing 11.2 databases on an Oracle Flex ASM cluster, the Oracle ASM
cardinality must be set to All.
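For example, assuming the srvctl utility in the Grid home, you can set and then confirm the cardinality as follows (an illustrative sketch; verify the options against your release):
$ srvctl modify asm -count ALL
$ srvctl config asm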
Note:
If you are installing Oracle Database 11g release 2 with Oracle Grid
Infrastructure 19c, then before running Oracle Universal Installer (OUI) for
Oracle Database, run the following command on the local node only:
Grid_home/oui/bin/runInstaller -ignoreSysPrereqs -updateNodeList
ORACLE_HOME=Grid_home
"CLUSTER_NODES={comma_separated_list_of_nodes}"
CRS=true LOCAL_NODE=local_node [-cfs]
./asmca
Follow the steps in the configuration wizard to create Oracle ACFS storage for the
earlier release Oracle Database home.
3. Install Oracle Database 11g release 2 (11.2) software-only on the Oracle ACFS file
system you configured.
4. From the 11.2 Oracle Database home, run Oracle Database Configuration
Assistant (DBCA) and create the Oracle RAC Database, using Oracle ASM as
storage for the database data files.
./dbca
Related Topics
• Oracle Automatic Storage Management Administrator's Guide
This setting updates the oratab file for Oracle ASM entries.
You can check the pinned nodes using the following command:
$ ./olsnodes -t -n
Note:
Restart Oracle ASM to load the updated oratab file.
See Also:
Oracle Automatic Storage Management Administrator's Guide for details about
configuring disk group compatibility for databases using Oracle Database 11g or
earlier software with Oracle Grid Infrastructure 19c
Caution:
Before relinking executables, you must shut down all executables that run in
the Oracle home directory that you are relinking. In addition, shut down
applications linked with Oracle shared libraries.
# cd /u01/app/19.0.0/grid/crs/install
# ./rootcrs.sh -unlock
2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries
using the command syntax make -f /u01/app/19.0.0/grid/rdbms/lib/
ins_rdbms.mk target, where target is the binaries that you want to relink. For
example, where you are updating the interconnect protocol from UDP to IPC, enter
the following command:
# su grid
$ make -f /u01/app/19.0.0/grid/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Note:
To relink binaries, you can also change to the grid installation owner and
run the command /u01/app/19.0.0/grid/bin/relink.
# ./rootcrs.sh -lock
# crsctl start crs
Note:
Do not delete directories in the Grid home. For example, do not delete the directory
Grid_home/OPatch. If you delete the directory, then the Grid infrastructure
installation owner cannot use OPatch to patch the Grid home, and OPatch displays
the error message "checkdir error: cannot create Grid_home/OPatch".
• As the grid user, create GIMR container database on any cluster node using the mgmtca
createGIMRContainer command from the Grid home directory.
• For Oracle Database 19c Release Update (19.6) or earlier releases:
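As a sketch only (the exact options depend on your Release Update and storage configuration, and are not reproduced here), the base invocation from the Grid home looks like this:
$ /u01/app/19.0.0/grid/bin/mgmtca createGIMRContainer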
11
Upgrading Oracle Grid Infrastructure
Oracle Grid Infrastructure upgrade consists of upgrade of Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM).
Oracle Grid Infrastructure upgrades can be rolling upgrades, in which a subset of nodes are
brought down and upgraded while other nodes remain active. Starting with Oracle ASM 11g
Release 2 (11.2), Oracle ASM upgrades can be rolling upgrades.
You can also use Oracle Fleet Patching and Provisioning to upgrade Oracle Grid
Infrastructure for a cluster.
• Understanding Out-of-Place Upgrade
With an out-of-place upgrade, the installer installs the newer version in a separate Oracle
Clusterware home.
• About Oracle Grid Infrastructure Upgrade and Downgrade
You have the ability to upgrade or downgrade Oracle Grid Infrastructure to a supported
release.
• Options for Oracle Grid Infrastructure Upgrades
When you upgrade to Oracle Grid Infrastructure 19c, you upgrade to an Oracle Flex
Cluster configuration.
• Restrictions for Oracle Grid Infrastructure Upgrades
Review the following information for restrictions and changes for upgrades to Oracle Grid
Infrastructure installations, which consists of Oracle Clusterware and Oracle Automatic
Storage Management (Oracle ASM).
• Preparing to Upgrade an Existing Oracle Clusterware Installation
If you have an existing Oracle Clusterware installation, then you upgrade your existing
cluster by performing an out-of-place upgrade. You cannot perform an in-place upgrade.
• Understanding Rolling Upgrades Using Batches
You can perform rolling upgrades of Oracle Grid Infrastructure in batches.
• Performing Rolling Upgrade of Oracle Grid Infrastructure
Review this information to perform rolling upgrade of Oracle Grid Infrastructure.
• About Upgrading Oracle Grid Infrastructure Using Oracle Fleet Patching and Provisioning
Oracle Fleet Patching and Provisioning (Oracle FPP) is a software lifecycle management
method for provisioning and patching Oracle homes.
• Applying Patches to Oracle Grid Infrastructure
After you have upgraded Oracle Grid Infrastructure 19c, you can install individual
software patches by downloading them from My Oracle Support.
• Updating Oracle Enterprise Manager Cloud Control Target Parameters
After upgrading Oracle Grid Infrastructure, upgrade the Enterprise Manager Cloud
Control target.
• Unlocking and Deinstalling the Previous Release Grid Home
After upgrading from previous releases, if you want to deinstall the previous release
Oracle Grid Infrastructure home, then you must first change the permission and
ownership of the previous release Grid home.
you initiate the upgrade. After upgrade is completed, the new Oracle Clusterware is
started on all the nodes.
Note that some services are disabled when one or more nodes are in the process of being
upgraded. All upgrades are out-of-place upgrades, meaning that the software binaries are
placed in a different Grid home from the Grid home used for the prior release.
You can downgrade from Oracle Grid Infrastructure 19c to Oracle Grid Infrastructure 18c,
Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid Infrastructure 12c Release 1
(12.1), and Oracle Grid Infrastructure 11g Release 2 (11.2). Be aware that if you downgrade
to a prior release, then your cluster must conform with the configuration requirements for that
prior release, and the features available for the cluster consist only of the features available
for that prior release of Oracle Clusterware and Oracle ASM.
You can perform out-of-place upgrades to an Oracle ASM instance using Oracle ASM
Configuration Assistant (ASMCA). In addition to running ASMCA using the graphical user
interface, you can run ASMCA in non-interactive (silent) mode.
Note:
You must complete an upgrade before attempting to use cluster backup files. You
cannot use backups for a cluster that has not completed upgrade.
See Also:
Oracle Database Upgrade Guide and Oracle Automatic Storage Management
Administrator's Guide for additional information about upgrading existing Oracle
ASM installations
Note:
To manage databases in existing earlier release database homes during the Oracle Grid
Infrastructure upgrade, use the srvctl utility from the existing database homes.
• To upgrade existing Oracle Clusterware installations to Oracle Grid Infrastructure 19c
cluster, your release must be greater than or equal to Oracle Grid Infrastructure 11g
Release 2 (11.2.0.4).
See Also:
Oracle Database Upgrade Guide for additional information about preparing for
upgrades
Related Topics
• Using CVU to Validate Readiness for Oracle Clusterware Upgrades
Oracle recommends that you use Cluster Verification Utility (CVU) to help to
ensure that your upgrade is successful.
Table 11-1 Upgrade Checklist for Oracle Grid Infrastructure Installation

Review Upgrade Guide: Review Oracle Database Upgrade Guide for deprecation and desupport
information that may affect upgrade planning.
Patch set (recommended): Install the latest patch set release for your existing installation. Review My
Oracle Support note 2180188.1 for the list of latest patches before upgrading Oracle Grid Infrastructure.

Install user account: Confirm that the installation owner you plan to use is the same as the installation
owner that owns the installation you want to upgrade.

Create a Grid home: Create a new Oracle Grid Infrastructure Oracle home (Grid home) where you can
extract the image files. All Oracle Grid Infrastructure upgrades (upgrades of existing Oracle Clusterware
and Oracle ASM installations) are out-of-place upgrades.

Instance names for Oracle ASM: Oracle Automatic Storage Management (Oracle ASM) instances must
use standard Oracle ASM instance names. The default ASM SID for a single-instance database is +ASM.

Cluster names and Site names: Cluster names must have the following characteristics:
• At least one character but no more than 15 characters in length.
• Hyphens (-), and single-byte alphanumeric characters (a to z, A to Z, and 0 to 9).
• It cannot begin with a numeric character.
• It cannot begin or end with the hyphen (-) character.

Operating System: Confirm that you are using a supported operating system, kernel release, and all
required operating system packages for the new Oracle Grid Infrastructure installation.

Network addresses for standard Oracle Grid Infrastructure: For standard Oracle Grid Infrastructure
installations, confirm the following network configuration:
• The private and public IP addresses are in unrelated, separate subnets. The private subnet should be
in a dedicated private subnet.
• The public and virtual IP addresses, including the SCAN addresses, are in the same subnet (the range
of addresses permitted by the subnet mask for the subnet network).
• Neither private nor public IP addresses use a link local subnet (169.254.*.*).

OCR on raw or block devices: Migrate OCR files from RAW or Block devices to Oracle ASM or a
supported file system. Direct use of RAW and Block devices is not supported. Run the ocrcheck command
to confirm Oracle Cluster Registry (OCR) file integrity. If this check fails, then repair the OCR before
proceeding.

Check space for GIMR: When upgrading to Oracle Grid Infrastructure 19c, a new GIMR is created only if
the source Grid home has a GIMR configured. Allocate additional storage space as described in Oracle
Clusterware Storage Space Requirements. When upgrading from Oracle Grid Infrastructure 12c Release 2
(12.2), the GIMR is preserved with its contents.
Oracle ASM password file: When upgrading from Oracle Grid Infrastructure 12c Release 1 (12.1), Oracle
Grid Infrastructure 12c Release 2 (12.2), or Oracle Grid Infrastructure 18c to Oracle Grid Infrastructure
19c, move the Oracle ASM password file from file system to Oracle ASM before proceeding with the
upgrade, using the following ASMCMD command:
ASMCMD [+] > pwcopy --asm current_location_of_ASM_password_file_in_OS_directory +target_disk_group_name/orapwASM
When upgrading from Oracle Grid Infrastructure 11g Release 2 (11.2) to Oracle Grid Infrastructure 19c,
move the Oracle ASM password file from file system to Oracle ASM after the upgrade.
Note: Set compatible.asm to at least 12.1.0.2 before moving the password file.

CVU Upgrade Validation: Use Cluster Verification Utility (CVU) to assist you with system checks in
preparation for starting an upgrade.

Unset Environment variables: As the user performing the upgrade, unset the environment variables
$ORACLE_HOME and $ORACLE_SID.
Check that the $ORA_CRS_HOME environment variable is not set. Do not use $ORA_CRS_HOME as an
environment variable, except under explicit direction from Oracle Support.
Refer to the Checks to Complete Before Upgrading Oracle Grid Infrastructure for a complete list of
environment variables to unset.

Check system upgrade readiness with dry-run upgrade: Run the Oracle Grid Infrastructure installation
wizard, gridSetup.sh, in dry-run upgrade mode to perform system readiness checks for Oracle Grid
Infrastructure upgrades.

Oracle ORAchk Upgrade Readiness Assessment: Download and run the Oracle ORAchk Upgrade
Readiness Assessment to obtain an automated upgrade-specific health check for upgrades to Oracle Grid
Infrastructure. See My Oracle Support note 1457357.1, which is available at the following URL:
https://support.oracle.com/rs?type=doc&id=1457357.1

Back Up the Oracle software before upgrades: Before you make any changes to the Oracle software,
Oracle recommends that you create a backup of the Oracle software and databases.

HugePages memory allocation: Allocate memory to HugePages large enough for the System Global
Areas (SGA) of all databases planned to run on the cluster, and to accommodate the System Global Area
for the Grid Infrastructure Management Repository.

Remove encryption of Oracle ACFS File Systems Before Upgrading from Oracle Grid Infrastructure 11g
Release 2 (11.2.0.4): To avoid data corruption, ensure that encryption of Oracle ACFS file systems is
removed before upgrading from Oracle Grid Infrastructure 11g Release 2 (11.2.0.4).
If you do not want to remove Oracle ACFS file system encryption, see My Oracle Support note 2147979.1
for an alternative:
https://support.oracle.com/rs?type=doc&id=2147979.1
Related Topics
• Checks to Complete Before Upgrading Oracle Grid Infrastructure
Complete the following tasks before upgrading Oracle Grid Infrastructure.
• Oracle Clusterware Storage Space Requirements
Use this information to determine the minimum number of disks and the minimum disk
space requirements based on the redundancy type, for installing Oracle Clusterware files
for various Oracle Cluster deployments.
Related Topics
• Moving Oracle Clusterware Files from NFS to Oracle ASM
You can move Oracle Cluster Registry (OCR) and voting files from Network File System
(NFS) to Oracle Automatic Storage Management (Oracle ASM) disk groups.
• Checks to Complete Before Upgrading Oracle Grid Infrastructure
Complete the following tasks before upgrading Oracle Grid Infrastructure.
• My Oracle Support Note 2180188.1
$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_SID
For C shell:
$ unsetenv ORACLE_BASE
$ unsetenv ORACLE_HOME
$ unsetenv ORACLE_SID
1. As the grid user, create the Oracle ASM disk group using ASMCA.
$ ./asmca
Follow the steps in the ASMCA wizard to create the Oracle ASM disk group, for
example, DATA.
2. Move the voting files to the Oracle ASM disk group you created:
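A hedged sketch of the command, run as root from the Grid home bin directory, using the DATA disk group created in step 1:
# ./crsctl replace votedisk +DATA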
3. Check the status of the Oracle Cluster Registry (OCR):
$ ./ocrcheck
4. As the root user, move the OCR files to the Oracle ASM disk group you created:
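A hedged sketch of the commands, run from the Grid home bin directory; the new Oracle ASM location is added first, and the old NFS location (shown here only as an illustration) is then removed:
# ./ocrconfig -add +DATA
# ./ocrconfig -delete /nfs/oracle/ocr_file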
designating the release to the level of the platform-specific patch. For example:
19.0.0.0.0.
• -fixup [-method sudo -user user_name [-location dir_path] | -method root]
Use the -fixup option to indicate that you want to generate instructions for any required
steps you need to complete to ensure that your cluster is ready for an upgrade. The
default location is the CVU work directory.
The -fixup -method option defines the method by which root scripts are run. The
-method flag requires one of the following options:
– sudo: Run as a user on the sudoers list.
– root: Run as the root user.
If you select sudo, then enter the -location option to provide the path to Sudo on the
server, and enter the -user option to provide the user account with Sudo privileges.
• -fixupnoexec
If this option is specified, then on verification failure, the fix-up data is generated and the
instructions for manual execution of the generated fix-ups are displayed.
• -verbose
Use the -verbose flag to produce detailed output of individual checks.
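Putting these options together, a pre-upgrade readiness check might look like the following; the source and destination Grid home paths and the destination version are illustrative values:
$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /u01/app/18.0.0/grid \
  -dest_crshome /u01/app/19.0.0/grid \
  -dest_version 19.0.0.0.0 -fixup -verbose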
Related Topics
• Oracle Database Upgrade Guide
Oracle Grid Infrastructure dry-run upgrade, create a new Grid home with the
necessary user group permissions, extract the Oracle Grid Infrastructure 19c gold
image to the new Grid home, and then start the installer with the -dryRunForUpgrade flag.
Dry-run upgrades are not supported on Oracle Grid Infrastructure for a standalone
server (Oracle Restart) configurations.
Note:
The installer does not perform an actual upgrade in the dry-run upgrade
mode. You can relaunch the installer, without any flag, from any of the cluster
nodes to upgrade Oracle Grid Infrastructure if dry-run is successful.
The Grid infrastructure dry-run upgrade flow is similar to a regular upgrade, but the
installer does not run any configuration tool.
Related Topics
• Performing Dry-Run Upgrade Using Oracle Universal Installer
Run the Oracle Grid Infrastructure installer in dry-run upgrade mode to determine
if the system is ready for upgrade.
$ mkdir -p /u01/app/19.0.0/grid
$ chown grid:oinstall /u01/app/19.0.0/grid
$ cd /u01/app/19.0.0/grid
$ unzip -q download_location/grid_home.zip
Note:
You must extract the image file into the directory where you want your Grid
home to be located.
2. Start the Oracle Grid Infrastructure installation wizard in dry-run upgrade mode.
$ /u01/app/19.0.0/grid/gridSetup.sh -dryRunForUpgrade
3. Select Upgrade Oracle Grid Infrastructure option to perform dry-run upgrade for Oracle
Grid Infrastructure (Oracle Clusterware and Oracle ASM).
4. Select installation options as prompted. Oracle recommends that you
configure root script automation, so that the rootupgrade.sh script can be run
automatically during the dry-run upgrade.
5. Run root scripts, either automatically or manually:
• Running root scripts automatically
If you have configured root script automation, then the installer will run the
rootupgrade.sh script automatically on the local node.
• Running root scripts manually
If you have not configured root script automation, then when prompted, run the
rootupgrade.sh script on the local node.
If you run root scripts manually, then run the script only on the local node.
6. Check the gridSetupActions<timestamp>.log log file for errors and fix errors reported in
the log file.
7. Exit the installer on the Finish screen.
Note:
The gridSetup.sh -dryRunForUpgrade command registers the new Oracle
Grid Infrastructure home in the Oracle Inventory (oraInventory).
8. Relaunch the installer from the same Grid home without any flag to start an actual
upgrade.
$ /u01/app/19.0.0/grid/gridSetup.sh
Note:
If you need to apply patches before performing the upgrade, then use the
OPatch utility to apply patches on each cluster node.
Related Topics
• Upgrading Oracle Grid Infrastructure from an Earlier Release
Complete this procedure to upgrade Oracle Grid Infrastructure (Oracle
Clusterware and Oracle Automatic Storage Management) from an earlier release.
When you upgrade Oracle Grid Infrastructure without using root user automation, you
upgrade the entire cluster. You cannot select or de-select individual nodes for upgrade.
Oracle does not support attempting to add additional nodes to a cluster during a rolling
upgrade. Oracle recommends that you leave Oracle RAC instances running when
upgrading Oracle Clusterware. When you start the root script on each node, the
database instances on that node are shut down and then the rootupgrade.sh script
starts the instances again.
mkdir -p /u01/app/19.0.0/grid
chown grid:oinstall /u01/app/19.0.0/grid
cd /u01/app/19.0.0/grid
unzip -q download_location/grid_home.zip
Note:
• You must extract the image software into the directory where you want your
Grid home to be located.
• Download and copy the Oracle Grid Infrastructure image files to the local
node only. During upgrade, the software is copied and installed on all other
nodes in the cluster.
2. Start the Oracle Grid Infrastructure wizard by running the following command:
/u01/app/19.0.0/grid/gridSetup.sh
Note:
Oracle Clusterware must always be the later release, so you cannot upgrade
Oracle ASM to a release that is more recent than Oracle Clusterware.
Related Topics
• Oracle Clusterware Administration and Deployment Guide
To resolve this problem, run the rootupgrade.sh command with the -force flag using the
following syntax:
Grid_home/rootupgrade.sh -force
For example:
# /u01/app/19.0.0/grid/rootupgrade.sh -force
This command forces the upgrade to complete. Verify that the upgrade has completed by
using the command crsctl query crs activeversion. The active release should be the
upgrade release.
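For example (the output shown is illustrative):
$ /u01/app/19.0.0/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]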
The force cluster upgrade has the following limitations:
• All active nodes must be upgraded to the newer release.
• All inactive nodes (whether accessible or inaccessible) can either be upgraded or left at the earlier release.
• For inaccessible nodes, after patch set upgrades, you can delete the node from the
cluster. If the node becomes accessible later, and the patch version upgrade path is
supported, then you can upgrade it to the new patch version.
$ cd /u01/app/19.0.0/grid/
3. Run the following command, where upgraded_node is one of the cluster nodes that is
upgraded successfully:
For upgrade:
See Also:
Oracle Clusterware Administration and Deployment Guide for information about
setting up the Oracle Fleet Patching and Provisioning Server and Client, creating
and using gold images for provisioning and patching Oracle Grid Infrastructure,
Oracle Database, and Oracle Restart homes.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information
about patching Oracle Grid Infrastructure using Oracle Fleet Patching and
Provisioning.
Note:
The success of zero-downtime patching depends on the availability of the
system resources. If sufficient system resources are not available and if the
patching software estimates that Oracle Grid Infrastructure will remain shut
down for more than 20 seconds, then the patching software stops the
patching process. In this case, you need to patch Oracle Grid Infrastructure
when sufficient system resources are available.
If you are using Oracle ASM Filter Driver (Oracle ASMFD) or Oracle ACFS for database
storage, then operating system drivers are updated in either of the following scenarios:
1. You update your operating system kernel and restart the cluster node.
2. You run the rootcrs.sh -updateosfiles command on each cluster node and restart the
cluster nodes, if the operating system drivers fail to install.
You can check the active operating system driver version on a cluster node using the crsctl
query driver activeversion [-all] command and available operating system driver
version using the crsctl query driver softwareversion [-all] [-f] command.
If the patch that you are installing contains operating system driver updates, then you must
use the -skipDriverUpdate option during zero-downtime patching; otherwise, the patching process
fails. The -skipDriverUpdate option does not affect the existing operating system driver,
even if the Release Update (RU) contains driver updates.
After you apply the patches using zero-downtime patching, the active and available operating
system driver versions are different until either of the above two scenarios is true.
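For example, to compare the active and available driver versions on all cluster nodes:
$ crsctl query driver activeversion -all
$ crsctl query driver softwareversion -all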
Related Topics
• Oracle Automatic Storage Management Cluster File System Administrator's Guide
Note:
You can use zero-downtime patching only for out-of-place patching of Oracle
Grid Infrastructure 19c Release Update (RU) 19.11 or later releases with
Oracle RAC or Oracle RAC One Node databases of 19c or later releases. If
your Oracle RAC or Oracle RAC One Node database release is older than
19c, then the database instances stop during zero-downtime patching.
1. Download the Oracle Database Release Update (RU) 19.11 or later from My
Oracle Support.
2. As the grid user, download the Oracle Grid Infrastructure 19c image files and
extract the files into a new Oracle Grid Infrastructure home directory.
$ mkdir -p /u01/app/19.11.0/grid
$ chown grid:oinstall /u01/app/19.11.0/grid
$ cd /u01/app/19.11.0/grid
$ unzip -q download_location/grid.zip
Note:
The new Oracle Grid Infrastructure home path must be different from the
current Oracle Grid Infrastructure home path.
3. Start the Oracle Grid Infrastructure installer with the -switchGridHome flag to
switch to the patched Oracle Grid Infrastructure home after the installation.
$ /u01/app/19.11.0/grid/gridSetup.sh -switchGridHome
[-applyRU patch_directory_location] [-applyOneOffs
comma_separated_list_of_patch_directory_locations]
4. Follow the steps in the configuration wizard to complete the Oracle Grid
Infrastructure installation.
During configuration, do not select the option to Automatically run configuration
scripts.
5. When prompted, run the root.sh script with the -transparent and -nodriverupdate flags on the first node.
6. Run the root.sh script with the -transparent and -nodriverupdate flags on all
other nodes.
All Oracle Grid Infrastructure services start running from the new Grid home after
the installation is complete.
7. Verify that the patching has completed.
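For example, a sketch of a quick check that the new patch level is active on the cluster (output varies by patch level):
$ /u01/app/19.11.0/grid/bin/crsctl query crs activeversion -f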
Related Topics
• Zero-Downtime Grid Infrastructure Patching Using Oracle FPP
$ cd /u01/app/19.0.0/grid
5. Apply Release Updates (RUs) and any one-off patches during the installation or upgrade
process:
Note:
You can apply RUs and one-off patches separately or together in the same
command.
6. Complete the remaining steps in the Oracle Grid Infrastructure configuration wizard to
complete the installation or upgrade.
Related Topics
• Oracle Grid Infrastructure Upgrading Oracle Restart for Linux and Unix-Based Operating
Systems
with a list of the most recent Release Updates (RUs) and Release Update
Revisions (RURs).
Place the patches in an accessible directory, such as /tmp.
2. Review the patch documentation for the patch you want to apply, and complete all
required steps before starting the patch upgrade.
3. Follow the instructions in the patch documentation to apply the patch on the first
node.
Wait for the patching to finish on the first node before patching other cluster nodes.
4. Apply the patch on all other cluster nodes, one at a time.
Related Topics
• OPatch User's Guide
# $ORACLE_HOME/crs/install/rootcrs.sh -unlock
5. As the grid user, follow the instructions in the patch documentation to apply the
patch on the first node.
Wait for the patching to finish on the first node before patching other cluster nodes.
6. As the root user, lock the Oracle Grid Infrastructure home on the first node.
# $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh
# $ORACLE_HOME/crs/install/rootcrs.sh -lock
7. Update Oracle Local Registry (OLR) to the patch level on the first node.
# $ORACLE_HOME/bin/clscfg -localpatch
9. Update Oracle Cluster Registry (OCR) to the patch level on the first node.
# $ORACLE_HOME/bin/clscfg -patch
$ mkdir -p /u01/app/19.11.0/grid
$ chown grid:oinstall /u01/app/19.11.0/grid
$ cd /u01/app/19.11.0/grid
$ unzip -q download_location/grid.zip
Note:
The new Oracle Grid Infrastructure home path must be different from the
current Oracle Grid Infrastructure home path.
3. Start the Oracle Grid Infrastructure installer with the -switchGridHome flag to
switch to the patched Oracle Grid Infrastructure home after the installation and the
optional -applyRU flag to apply Release Updates (RUs) during the installation.
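For example, a sketch in which /u01/stage/RU_patch is a hypothetical directory containing the unzipped Release Update:
$ /u01/app/19.11.0/grid/gridSetup.sh -switchGridHome -applyRU /u01/stage/RU_patch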
4. Follow the steps in the configuration wizard to complete the Oracle Grid
Infrastructure installation.
All Oracle Grid Infrastructure services start running from the new Grid home after
the installation is complete.
11. Click the target name to navigate to the target home page.
12. From the host, database, middleware target, or application menu displayed on the
target home page, select Target Setup, then select Monitoring Configuration.
13. In the Monitoring Configuration page for the listener, specify the host name in the
Machine Name field and the password for the ASMSNMP user in the Password
field.
14. Click OK.
For example:
2. After you change the permissions and ownership of the previous release Grid
home, log in as the Oracle Grid Infrastructure installation owner (grid, in the
preceding example), and use the deinstall command from previous release
Grid home (oldGH) $ORACLE_HOME/deinstall directory.
Caution:
You must use the deinstall command from the same release to remove
Oracle software. Do not run the deinstall command from a later release
to remove Oracle software from an earlier release. For example, do not run
the deinstall command from the 19.0.0.0.0 Oracle home to remove
Oracle software from an existing 18.0.0.0.0 Oracle home.
Related Topics
• About Oracle Deinstallation Options
You can stop and remove Oracle Database software and components in an Oracle
Database home with the deinstall command.
Note:
Your previous IPD/OS repository is deleted when you install Oracle Grid
Infrastructure.
By default, the CHM repository size is a minimum of either 1GB or 3600 seconds (1
hour), regardless of the size of the cluster.
2. To enlarge the CHM repository, use the following command syntax, where
RETENTION_TIME is the size of CHM repository in number of seconds:
The value for RETENTION_TIME must be more than 3600 (one hour) and less than
259200 (three days). If you enlarge the CHM repository size, then you must ensure that
there is local space available for the repository size you select on each node of the
cluster. If you do not have sufficient space available, then you can move the repository to
shared storage.
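The command syntax referred to above is run with the oclumon utility from the Grid home; for example, a sketch that sets the retention time to 86400 seconds (24 hours, a hypothetical value):
$ oclumon manage -repos changeretentiontime 86400   # 24 hours; choose a value between 3600 and 259200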
Note:
• Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), you can
downgrade the cluster nodes in any sequence. You can downgrade all
cluster nodes except one, in parallel. You must downgrade the last node
after you downgrade all other nodes.
• When downgrading after a failed upgrade, if the rootcrs.sh or
rootcrs.bat file does not exist on a node, then instead of executing the
script, use the command perl rootcrs.pl. Use the Perl interpreter located in the Oracle home directory.
When you downgrade to Oracle Grid Infrastructure 11g Release 2 (11.2), you downgrade from an Oracle Flex cluster configuration to a Standard cluster configuration.
Note:
When you downgrade Oracle Grid Infrastructure to an earlier release, for example
from Oracle Grid Infrastructure 19c to Oracle Grid Infrastructure 18c, the later
release RAC databases already registered with Oracle Grid Infrastructure will not
start after the downgrade.
Related Topics
• My Oracle Support Note 2180188.1
2. As root user, use the command syntax rootcrs.sh -downgrade from 19c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/18.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
6. As root user, start the 18c Oracle Clusterware stack on all nodes.
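For example, a sketch run as root on each node from the 18c Grid home:
# /u01/app/18.0.0/grid/bin/crsctl start crs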
7. As grid user, from any Oracle Grid Infrastructure 18c node, remove the MGMTDB
resource as follows:
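The removal command is not shown above; a sketch, assuming the standard srvctl syntax for removing the Management Database resource:
$ $ORACLE_HOME/bin/srvctl remove mgmtdb -f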
8. As grid user, run DBCA in the silent mode from the 18c Grid home and create the
Management Database container database (CDB) as follows:
9. Configure the Management Database by running the Configuration Assistant from the location $ORACLE_HOME/bin/mgmtca -local.
10. As grid user, run the post_gimr_ugdg.pl script from 19c Grid home:
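For example, a sketch modeled on the syntax shown later in this chapter for 12.2 downgrades, with the lower version set to 18.0.0.0.0:
$ $ORACLE_HOME/crs/install/post_gimr_ugdg.pl -downgrade -clusterType SC \
  -destHome /u01/app/19.0.0/grid -lowerVersion 18.0.0.0.0 -oraBase /u01/app/grid2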
Where:
SC is the type of the cluster as Oracle Standalone Cluster. The value for -clusterType
can be SC for Oracle Standalone Cluster, DSC for Oracle Domain Services Cluster, or
MC for Oracle Member Cluster.
/u01/app/19.0.0/grid is the Oracle home for Oracle Grid Infrastructure 19c.
18.0.0.0.0 is the version of Oracle Grid Infrastructure to which you are downgrading.
/u01/app/grid2 is the Oracle base for Oracle Grid Infrastructure 19c.
$ cp $ORACLE_HOME/oracore/zoneinfo/timezlrg_number.dat /u01/app/18.0.0/grid/oracore/zoneinfo/timezlrg_number.dat
$ cp $ORACLE_HOME/oracore/zoneinfo/timezone_number.dat /u01/app/18.0.0/grid/oracore/zoneinfo/timezone_number.dat
b. Downgrade application schema using the following command syntax from 19c
Grid home:
$ cd $ORACLE_HOME/bin
$ ./srvctl disable mgmtdb
$ ./srvctl stop mgmtdb
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> startup downgrade
SQL> alter pluggable database all open downgrade;
SQL> exit
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -d /u01/app/grid2 -e -l /u01/app/grid2/cfgtoollogs/mgmtua \
  -b mgmtdowngrade -r $ORACLE_HOME/rdbms/admin/catdwgrd.sql
iv. Set ORACLE_HOME and ORACLE_SID environment variables for 18c Grid
home:
$ export ORACLE_HOME=/u01/app/18.0.0/grid/
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter pluggable database all open upgrade;
SQL> exit
vi. Run catrelod script for 18c Management Database using the following
command syntax, where /u01/app/grid is the Oracle base for Oracle Grid
Infrastructure 18c:
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d
/u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua -b
mgmtdowngrade
$ORACLE_HOME/rdbms/admin/catrelod.sql
vii. Recompile all invalid objects after downgrade using the following command
syntax from 18c Grid home:
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d
/u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua -b
mgmtdowngrade
$ORACLE_HOME/rdbms/admin/utlrp.sql
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
2. As root user, use the following command syntax rootcrs.sh -downgrade from 19c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all cluster
nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/grid is
the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false
ORACLE_HOME=/u01/app/19.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
-doNotUpdateNodeList
5. As root user, start the 18c Oracle Clusterware stack on all nodes.
6. As grid user, set Oracle Grid Infrastructure 18c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/18.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade CHA models from any node where the Grid
Infrastructure stack is running from 18c Grid home and Management Database
and ochad are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that
you must pass as values.
2. As root user, use the command syntax rootcrs.sh -downgrade from 19c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all cluster
nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/grid is
the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false
ORACLE_HOME=/u01/app/19.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
-doNotUpdateNodeList
Note:
You must start Oracle Clusterware on the last downgraded node first, and then on the other nodes.
6. As grid user, set Oracle Grid Infrastructure 18c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide for
ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware
installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/18.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade CHA models from any node where the Grid
Infrastructure stack is running from 18c Grid home and Management Database
and ochad are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that
you must pass as values.
• From the later release Grid home, run gridSetup.sh in silent mode, to downgrade
Oracle Clusterware:
Note:
You can downgrade the cluster nodes in any sequence.
2. As root user, use the command syntax rootcrs.sh -downgrade from 19c Grid home to
downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all cluster
nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/grid is
the location of the new (upgraded) Grid home:
b. Use the following command to start the installer, where the path you provide
for ORACLE_HOME is the location of the home directory from the earlier Oracle
Clusterware installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/12.2.0/grid
"CLUSTER_NODES=node1,node2,node3"
6. As root user, start the 12c Release 2 (12.2) Oracle Clusterware stack on all
nodes.
7. As grid user, from any Oracle Grid Infrastructure 12c Release 2 (12.2) node,
remove the MGMTDB resource as follows:
8. As grid user, run DBCA in the silent mode from the 12.2.0.1 Grid home and
create the Management Database container database (CDB) as follows:
$ $ORACLE_HOME/crs/install/post_gimr_ugdg.pl -downgrade -clusterType SC \
  -destHome /u01/app/19.0.0/grid -lowerVersion 12.2.0.1.0 -oraBase /u01/app/grid2
Where:
SC is the type of the cluster as Oracle Standalone Cluster. The value for -
clusterType can be SC for Oracle Standalone Cluster, DSC for Oracle Domain
Services Cluster, or MC for Oracle Member Cluster.
/u01/app/19.0.0/grid is the Oracle home for Oracle Grid Infrastructure 19c.
12.2.0.1.0 is the version of Oracle Grid Infrastructure to which you are
downgrading.
/u01/app/grid2 is the Oracle base for Oracle Grid Infrastructure 19c.
1. As grid user, downgrade the Management Database to Oracle Grid Infrastructure 12c
Release 2 (12.2):
a. Manually copy the most recent time zone files from the 19c Grid home to the 12c Release 2 (12.2) Grid home, where timezlrg_number is the name of the most recent timezlrg file and timezone_number is the name of the most recent timezone file:
$ cp $ORACLE_HOME/oracore/zoneinfo/timezlrg_number.dat /u01/app/12.2.0/grid/oracore/zoneinfo/timezlrg_number.dat
$ cp $ORACLE_HOME/oracore/zoneinfo/timezone_number.dat /u01/app/12.2.0/grid/oracore/zoneinfo/timezone_number.dat
b. Downgrade application schema using the following command syntax from 19c Grid
home:
$ cd $ORACLE_HOME/bin
$ ./srvctl disable mgmtdb
$ ./srvctl stop mgmtdb
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> startup downgrade
SQL> alter pluggable database all open downgrade;
SQL> exit
iii. Downgrade 19c Management Database using the following command syntax,
where /u01/app/grid2 is the Oracle base for Oracle Grid Infrastructure 19c:
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -d
/u01/app/grid2 -e -l /u01/app/grid2/cfgtoollogs/mgmtua -b
mgmtdowngrade -r
$ORACLE_HOME/rdbms/admin/catdwgrd.sql
iv. Set ORACLE_HOME and ORACLE_SID environment variables for 12c Release 2 (12.2)
Grid home:
$ export ORACLE_HOME=/u01/app/12.2.0/grid/
$ export ORACLE_SID=-MGMTDB
$ cd $ORACLE_HOME/bin
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter pluggable database all open upgrade;
SQL> exit
vi. Run catrelod script for 12c Release 2 (12.2) Management Database
using the following command syntax, where /u01/app/grid is the Oracle
base for Oracle Grid Infrastructure 12c Release 2 (12.2):
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -d /u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua \
  -b mgmtdowngrade $ORACLE_HOME/rdbms/admin/catrelod.sql
vii. Recompile all invalid objects after downgrade using the following
command syntax from 12c Release 2 (12.2) Grid home:
$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -d /u01/app/grid -e -l /u01/app/grid/cfgtoollogs/mgmtua \
  -b mgmtdowngrade $ORACLE_HOME/rdbms/admin/utlrp.sql
$ ./sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
2. As root user, use the following command syntax rootcrs.sh -downgrade from
19c Grid home to downgrade Oracle Grid Infrastructure on all nodes, in any
sequence. For example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
$ cd $ORACLE_HOME/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false
ORACLE_HOME=/u01/app/19.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
-doNotUpdateNodeList
6. As grid user, set Oracle Grid Infrastructure 12c Release 2 (12.2) Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide for
ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware
installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/12.2.0/grid
"CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade CHA models from any node where the Grid Infrastructure stack
is running from 12c Release 2 (12.2) Grid home and Management Database and ochad
are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that you
must pass as values.
Related Topics
• Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1)
1. As grid user, use the command syntax mgmtua downgrade from 19c Grid home to
downgrade Oracle Member Cluster where oldOracleHome is 12c Release 2 (12.2)
Grid home and version is the five digit release number:
2. As root user, use the command syntax rootcrs.sh -downgrade from 19c Grid
home to downgrade Oracle Grid Infrastructure on all nodes, in any sequence. For
example:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except one.
3. As root user, downgrade the last node after you downgrade all other nodes:
# $ORACLE_HOME/crs/install/rootcrs.sh -downgrade
4. As grid user, remove Oracle Grid Infrastructure 19c Grid home as the active
Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false
ORACLE_HOME=/u01/app/19.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
-doNotUpdateNodeList
Note:
You must start Oracle Clusterware on the last downgraded node first, and then on the other nodes.
6. As grid user, set Oracle Grid Infrastructure 12c Release 2 (12.2) Grid home as
the active Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade script has run
successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where the path you provide for
ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware
installation.
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/12.2.0/grid
"CLUSTER_NODES=node1,node2,node3"
7. As grid user, downgrade CHA models from any node where the Grid Infrastructure stack
is running from 12c Release 2 (12.2) Grid home and Management Database and ochad
are up:
In the example above, DEFAULT_CLUSTER and DEFAULT_DB are function names that you
must pass as values.
Related Topics
• Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2)
Use this procedure to downgrade Oracle Domain Services Cluster to Oracle Grid
Infrastructure 12c Release 2 (12.2) after a successful upgrade.
• Downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1)
Use this procedure to downgrade to Oracle Grid Infrastructure 12c Release 1 (12.1).
Downgrading Oracle Grid Infrastructure to 12c Release 2 (12.2) when Upgrade Fails
If upgrade of Oracle Grid Infrastructure fails before CVU post upgrade checks succeed, then
you can run gridSetup.sh and downgrade Oracle Grid Infrastructure to the earlier release.
Run this procedure to downgrade Oracle Clusterware only when the upgrade fails before
CVU post upgrade checks succeed.
• From the later release Grid home, run gridSetup.sh in silent mode, to downgrade Oracle
Clusterware:
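The exact command is not reproduced here; a sketch, assuming the -silent and -downgrade flags of gridSetup.sh (the full flag set varies by release, so verify it with gridSetup.sh -help before running):
$ /u01/app/19.0.0/grid/gridSetup.sh -silent -downgrade -nodes node1,node2,node3   # node names are hypothetical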
Note:
You can downgrade the cluster nodes in any sequence.
Related Topics
• Downgrading Oracle Domain Services Cluster to 12c Release 2 (12.2)
# /u01/app/19.0.0/grid/crs/install/rootcrs.sh -downgrade
Run this command from a directory that has write permissions for the Oracle Grid
Infrastructure installation user. You can run the downgrade script in parallel on all
cluster nodes except one.
3. Downgrade the last node after you downgrade all other nodes:
# /u01/app/19.0.0/grid/crs/install/rootcrs.sh -downgrade
4. Remove Oracle Grid Infrastructure 19c Grid home as the active Oracle
Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd /u01/app/19.0.0/grid/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=false
ORACLE_HOME=/u01/app/19.0.0/grid
"CLUSTER_NODES=node1,node2,node3"
-doNotUpdateNodeList
b. Use the following command to start the installer, where the path you provide for
ORACLE_HOME is the location of the home directory from the earlier Oracle Clusterware
installation.
$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs
-updateNodeList -silent CRS=true
ORACLE_HOME=/u01/app/12.1.0/grid
"CLUSTER_NODES=node1,node2,node3"
8. If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.2), run the
following commands to configure the Grid Infrastructure Management Database:
a. Run DBCA in the silent mode from the 12.1.0.2 Oracle home and create the
Management Database container database (CDB) as follows:
b. Run DBCA in the silent mode from the 12.1.0.2 Oracle home and create the
Management Database pluggable database (PDB) as follows:
9. If you are downgrading to Oracle Grid Infrastructure 12c Release 1 (12.1.0.1), run DBCA
in the silent mode from the 12.1.0.1 Oracle home and create the Management Database
as follows:
10. Configure the Management Database by running the Configuration Assistant from the
location 121_Grid_home/bin/mgmtca.
# /u01/app/19.0.0/grid/crs/install/rootcrs.sh -downgrade
4. Follow these steps to remove Oracle Grid Infrastructure 19c Grid home as the
active Oracle Clusterware home:
a. On any of the cluster member nodes where the rootupgrade.sh script has
run successfully, log in as the Oracle Grid Infrastructure installation owner.
b. Use the following command to start the installer, where /u01/app/19.0.0/
grid is the location of the new (upgraded) Grid home:
$ cd /u01/app/19.0.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList \
  -silent CRS=false ORACLE_HOME=/u01/app/19.0.0/grid \
  "CLUSTER_NODES=node1,node2,node3" -doNotUpdateNodeList
$ cd /u01/app/11.2.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList \
  -silent CRS=true ORACLE_HOME=/u01/app/11.2.0/grid
2. From any node where the Grid Infrastructure stack from the earlier release is running,
unset the Oracle ASM rolling migration mode as follows:
• Log in as grid user, and run the following command as SYSASM user on the Oracle
ASM instance:
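For example, a sketch using SQL*Plus, assuming the local Oracle ASM SID is +ASM1 (a hypothetical SID):
$ export ORACLE_SID=+ASM1
$ $ORACLE_HOME/bin/sqlplus / as sysasm
SQL> ALTER SYSTEM STOP ROLLING MIGRATION;
SQL> exit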
3. If you are upgrading from 11.2.0.4 or 12.1.0.1, then apply the latest available patches on
all nodes in the cluster. If the pre-upgrade version is 12.1.0.2 or later, then patching is not required.
a. On all other nodes except the first node, where the earlier release Grid Infrastructure
stack is running, apply the latest patch using the opatchauto procedure.
b. On the first node where the earlier release Grid Infrastructure stack is stopped, apply
the latest patch using the opatch apply procedure.
For the list of latest available patches, see My Oracle Support at the following link:
https://support.oracle.com/
i. Unlock the Grid Infrastructure home from the earlier release:
rootcrs.pl -lock
c. From any other node where the Grid Infrastructure stack from the earlier
release is running, unset the Oracle ASM rolling migration mode as explained
in step 2.
4. On any node running Oracle Grid Infrastructure other than the first node, from the
Grid home of the earlier release, run the command:
Note:
You can downgrade the cluster nodes in any sequence.
Related Topics
• Downgrading Oracle Grid Infrastructure to 12c Release 2 (12.2) when Upgrade
Fails
Grid_home/deinstall/deinstall -local
2. As the root user, from a node where Oracle Clusterware is installed, delete the failed
nodes using the delete node command:
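The delete node syntax is not shown above; a sketch, where node_name is the name of a failed node:
# Grid_home/bin/crsctl delete node -n node_name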
Grid_home/gridSetup.sh
[root@node1]# cd /u01/app/19.0.0/grid
[root@node1]# ./rootupgrade.sh
[root@node2]# ./rootupgrade.sh
5. To complete the upgrade, log in as the Grid installation owner, and run
gridSetup.sh, located in the Grid_home, specifying the response file that you
created. For example, where the response file is gridinstall.rsp:
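A sketch, assuming the response file was saved under the Grid home (the path is hypothetical):
$ /u01/app/19.0.0/grid/gridSetup.sh -responseFile /u01/app/19.0.0/grid/install/response/gridinstall.rsp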
Note:
When you reupgrade Oracle Grid Infrastructure, you must use the -all
flag with the executeConfigTools command to run all the configuration
tools.
[root@node6]# cd /u01/app/19.0.0/grid
[root@node6]# ./rootupgrade.sh
1. If the root script failure indicated a need to reboot, through the message
CLSRSC-400, then reboot the first node (the node where the installation was
started). Otherwise, manually fix or clear the error condition, as reported in the
error output.
2. If necessary, log in as root to the first node. Run the orainstRoot.sh script on
that node again. For example:
$ sudo -s
[root@node1]# cd /u01/app/oraInventory
[root@node1]# ./orainstRoot.sh
3. Change directory to the Grid home on the first node, and run the root script on
that node again. For example:
[root@node1]# cd /u01/app/19.0.0/grid
[root@node1]# ./root.sh
Related Topics
• Editing a Response File Template
Oracle provides response file templates for each product and each configuration tool.
list_of_sites is the comma-separated list of sites in the extended cluster, and node_site is
the node containing the site.
For example:
3. Delete the default site after the associated nodes and storage are migrated.
For example:
12
Removing Oracle Database Software
These topics describe how to remove Oracle software and configuration files.
Use the deinstall command that is included in Oracle homes to remove Oracle software.
Oracle does not support the removal of individual products or components.
Caution:
If you have a standalone database on a node in a cluster, and if you have multiple
databases with the same global database name (GDN), then you cannot use the
deinstall command to remove one database only.
• Oracle Database
• Oracle Grid Infrastructure, which includes Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM)
• Oracle Real Application Clusters (Oracle RAC)
• Oracle Database Client
The deinstall command is available in Oracle home directories after installation. It
is located in the $ORACLE_HOME/deinstall directory.
deinstall creates a response file by using information in the Oracle home and the information that you provide. You can use a response file that you generated previously
by running the deinstall command using the -checkonly option. You can also
edit the response file template.
If you run deinstall to remove an Oracle Grid Infrastructure installation, then the
deinstaller prompts you to run the deinstall command as the root user. For Oracle
Grid Infrastructure for a cluster, the script is rootcrs.sh, and for Oracle Grid
Infrastructure for a standalone server (Oracle Restart), the script is roothas.sh.
Note:
• You must run the deinstall command from the same release to
remove Oracle software. Do not run the deinstall command from a
later release to remove Oracle software from an earlier release. For
example, do not run the deinstall command from the 19c Oracle
home to remove Oracle software from an existing 11.2.0.4 Oracle home.
• Starting with Oracle Database 12c Release 1 (12.1.0.2), the
roothas.sh script replaces the roothas.pl script in the Oracle Grid
Infrastructure home for Oracle Restart, and the rootcrs.sh script
replaces the rootcrs.pl script in the Grid home for Oracle Grid
Infrastructure for a cluster.
If the software in the Oracle home is not running (for example, after an unsuccessful
installation), then deinstall cannot determine the configuration, and you must
provide all the configuration details either interactively or in a response file.
In addition, before you run deinstall for Oracle Grid Infrastructure installations:
Caution:
deinstall deletes Oracle Database configuration files, user data, and fast
recovery area (FRA) files even if they are located outside of the Oracle base
directory path.
Purpose
deinstall stops Oracle software, and removes Oracle software and configuration files on
the operating system for a specific Oracle home.
Syntax
The deinstall command uses the following syntax:
Parameters
Parameter Description
-silent Use this flag to run deinstall in
noninteractive mode. This option requires one
of the following:
• A working system that it can access to
determine the installation and
configuration information. The -silent
flag does not work with failed installations.
• A response file that contains the
configuration values for the Oracle home
that is being deinstalled or deconfigured.
You can generate a response file to use or
modify by running deinstall with the -
checkonly flag. deinstall then discovers
information from the Oracle home to deinstall
and deconfigure. It generates the response file
that you can then use with the -silent
option.
You can also modify the template file
deinstall.rsp.tmpl, located in
the $ORACLE_HOME/deinstall/
response directory.
-checkonly Use this flag to check the status of the Oracle
software home configuration. Running
deinstall with the -checkonly flag does
not remove the Oracle configuration. The -
checkonly flag generates a response file that
you can then use with the deinstall
command and -silent option.
-paramfile complete path of input Use this flag to run deinstall with a
response file response file in a location other than the
default. When you use this flag, provide the
complete path where the response file is
located.
The default location of the response file
is $ORACLE_HOME/deinstall/
response.
-params [name1=value name2=value Use this flag with a response file to override
name3=value . . .] one or more values to change in a response
file you have created.
-o complete path of directory for saving Use this flag to provide a path other than the
response files default location where the response file
(deinstall.rsp.tmpl) is saved.
The default location of the response file
is $ORACLE_HOME/deinstall/
response .
-tmpdir complete path of temporary Use this flag to specify a non-default location
directory to use where deinstall writes the temporary files
for the deinstallation.
-logdir complete path of log directory to Use this flag to specify a non-default location
use where deinstall writes the log files for the
deinstallation.
-local Use this flag on a multinode environment to
deinstall Oracle software in a cluster.
When you run deinstall with this flag, it
deconfigures and deinstalls the Oracle
software on the local node (the node where
deinstall is run). On remote nodes, it
deconfigures Oracle software, but does not
deinstall the Oracle software.
-skipLocalHomeDeletion Use this flag in Oracle Grid Infrastructure
installations on a multinode environment to
deconfigure a local Grid home without deleting
the Grid home.
-skipRemoteHomeDeletion Use this flag in Oracle Grid Infrastructure
installations on a multinode environment to
deconfigure a remote Grid home without
deleting the Grid home.
-help Use this option to obtain additional information
about the command option flags.
$ ./deinstall
You can generate a deinstallation response file by running deinstall with the -checkonly
flag. Alternatively, you can use the response file template located at $ORACLE_HOME/
deinstall/response/deinstall.rsp.tmpl. If you have a response file, then use the
optional flag -paramfile to provide a path to the response file.
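For example, to generate a response file from an existing Oracle Database home without removing anything:
$ cd /u01/app/oracle/product/19.0.0/dbhome_1/deinstall
$ ./deinstall -checkonly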
$ cd /u01/app/oracle/product/19.0.0/dbhome_1/deinstall
$ ./deinstall -paramfile /home/usr/oracle/my_db_paramfile.tmpl
To remove the Oracle Grid Infrastructure home, use the deinstall command in the Oracle
Grid Infrastructure home.
$ cd /u01/app/19.0.0/grid/deinstall
$ ./deinstall -paramfile /home/usr/oracle/my_grid_paramfile.tmpl
ASM_DISKSTRING=/dev/rdsk/*,AFD:*
CDATA_QUORUM_GROUPS=
CRS_HOME=true
ODA_CONFIG=
JLIBDIR=/u01/app/jlib
CRFHOME="/u01/app/"
USER_IGNORED_PREREQ=true
MGMTDB_ORACLE_BASE=/u01/app/grid/
DROP_MGMTDB=true
RHP_CONF=false
OCRLOC=
GNS_TYPE=local
CRS_STORAGE_OPTION=1
CDATA_SITES=
GIMR_CONFIG=local
CDATA_BACKUP_SIZE=0
GPNPGCONFIGDIR=$ORACLE_HOME
MGMTDB_IN_HOME=true
CDATA_DISK_GROUP=+DATA2
LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
CDATA_BACKUP_FAILURE_GROUPS=
CRS_NODEVIPS='AUTO/255.255.254.0/net0,AUTO/255.255.254.0/net0'
ORACLE_OWNER=cuser
GNS_ALLOW_NET_LIST=
silent=true
INSTALL_NODE=node1.example.com
ORACLE_HOME_VERSION_VALID=true
inst_group=oinstall
LOGDIR=/tmp/deinstall2016-10-06_09-36-04AM/logs/
EXTENDED_CLUSTER_SITES=
CDATA_REDUNDANCY=EXTERNAL
CDATA_BACKUP_DISK_GROUP=+DATA2
APPLICATION_VIP=
HUB_NODE_LIST=node1,node2
NODE_NAME_LIST=node1,node2
GNS_DENY_ITF_LIST=
ORA_CRS_HOME=/u01/app/12.2.0/grid/
JREDIR=/u01/app/12.2.0/grid/jdk/jre/
ASM_LOCAL_SID=+ASM1
ORACLE_BASE=/u01/app/
GNS_CONF=true
CLUSTER_CLASS=DOMAINSERVICES
ORACLE_BINARY_OK=true
CDATA_BACKUP_REDUNDANCY=EXTERNAL
CDATA_FAILURE_GROUPS=
ASM_CONFIG=near
OCR_LOCATIONS=
ASM_ORACLE_BASE=/u01/app/12.2.0/
OLRLOC=
GIMR_CREDENTIALS=
GPNPCONFIGDIR=$ORACLE_HOME
ORA_ASM_GROUP=asmadmin
GNS_CREDENTIALS=
CDATA_BACKUP_AUSIZE=4
GNS_DENY_NET_LIST=
OLD_CRS_HOME=
NEW_NODE_NAME_LIST=
GNS_DOMAIN_LIST=node1.example.com
ASM_UPGRADE=false
NETCA_LISTENERS_REGISTERED_WITH_CRS=LISTENER
CDATA_BACKUP_DISKS=/dev/rdsk/
ASMCA_ARGS=
CLUSTER_GUID=
CLUSTER_NODES=node1,node2
MGMTDB_NODE=node2
ASM_DIAGNOSTIC_DEST=/u01/app/
NEW_PRIVATE_NAME_LIST=
AFD_LABELS_NO_DG=
AFD_CONFIGURED=true
CLSCFG_MISSCOUNT=
MGMT_DB=true
SCAN_PORT=1521
ASM_DROP_DISKGROUPS=true
OPC_NAT_ADDRESS=
CLUSTER_TYPE=DB
NETWORKS="net0"/IP_Address:public,"net1"/IP_Address:asm,"net1"/IP_Address:cluster_interconnect
OCR_VOTINGDISK_IN_ASM=true
HUB_SIZE=32
CDATA_BACKUP_SITES=
CDATA_SIZE=0
REUSEDG=false
MGMTDB_DATAFILE=
ASM_IN_HOME=true
HOME_TYPE=CRS
MGMTDB_SID="-MGMTDB"
GNS_ADDR_LIST=mycluster-gns.example.com
CLUSTER_NAME=node1-cluster
AFD_CONF=true
MGMTDB_PWDFILE=
OPC_CLUSTER_TYPE=
VOTING_DISKS=
SILENT=false
VNDR_CLUSTER=false
TZ=localtime
GPNP_PA=
DC_HOME=/tmp/deinstall2016-10-06_09-36-04AM/logs/
CSS_LEASEDURATION=400
REMOTE_NODES=node2
ASM_SPFILE=
NEW_NODEVIPS='n1-vip/255.255.252.0/eth0,n2-vip/255.255.252.0/eth0'
SCAN_NAME=node1-cluster-scan.node1-cluster.com
RIM_NODE_LIST=
INVENTORY_LOCATION=/u01/app/oraInventory
Note:
Do not use quotation marks with variables except in the following cases:
• Around addresses in CRS_NODEVIPS:
CRS_NODEVIPS='n1-vip/255.255.252.0/eth0,n2-vip/255.255.252.0/eth0'
• Around interface names in NETWORKS:
NETWORKS="eth0"/192.0.2.1\:public,"eth1"/10.0.0.1\:cluster_interconnect
VIP1_IP=192.0.2.2
# cd /u01/app/19.0.0/grid/crs/install
# roothas.sh -deconfig -force
+ASM:oracle_restart_home:N
b. Proceed to step 7.
Installing in a Different Location than Oracle Restart
a. Set up Oracle Grid Infrastructure software in the new Grid home software
location as described in Installing Only the Oracle Grid Infrastructure Software.
b. Proceed to step 7.
9. Set the environment variables as follows:
13. Mount the Oracle ASM disk group used by Oracle Restart.
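For example, a sketch using srvctl, where DATA is a hypothetical disk group name:
$ srvctl start diskgroup -diskgroup DATA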
16. Add the Oracle Database for support by Oracle Grid Infrastructure for a cluster, using the
configuration information you recorded in step 1. Use the following command syntax,
where db_unique_name is the unique name of the database on the node, and nodename
is the name of the node:
srvctl add database -db db_unique_name -spfile spfile_name -pwfile
pwfile_name -oraclehome $ORACLE_HOME -node nodename
a. For example, first verify that the ORACLE_HOME environment variable is set to the
location of the database home directory.
b. Next, to add the database name mydb, enter the following command:
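A sketch based on the syntax shown above; the spfile, password file, and node name are hypothetical:
$ srvctl add database -db mydb -spfile +DATA/mydb/spfilemydb.ora \
  -pwfile +DATA/mydb/orapwmydb -oraclehome $ORACLE_HOME -node crsnode1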
c. Add each service to the database, using the command srvctl add service. For
example, add myservice as follows:
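A sketch using the names from the steps above:
$ srvctl add service -db mydb -service myservice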
17. Add nodes to your cluster, as required, using the Oracle Grid Infrastructure installer.
See Also:
Oracle Clusterware Administration and Deployment Guide for information about
adding nodes to your cluster.
Caution:
Before relinking executables, you must shut down all executables that run in the
Oracle home directory that you are relinking. In addition, shut down applications
linked with Oracle shared libraries. If present, unmount all Oracle Automatic
Storage Management Cluster File System (Oracle ACFS) filesystems.
As root user:
# cd Grid_home/crs/install
# rootcrs.sh -unlock
# cd Grid_home/rdbms/install/
# ./rootadd_rdbms.sh
# cd Grid_home/crs/install
# rootcrs.sh -lock
You must relink the Oracle Clusterware and Oracle ASM binaries every time you apply
an operating system patch or after you perform an operating system upgrade that
does not replace the root file system. For an operating system upgrade that results in
a new root file system, you must remove the node from the cluster and add it back into
the cluster.
For upgrades from previous releases, if you want to deinstall the prior release Grid
home, then you must first unlock the prior release Grid home. Unlock the previous
release Grid home by running the command rootcrs.sh -unlock from the previous
release home. After the script has completed, you can run the deinstall command.
Note:
Before changing the Grid home, you must shut down all executables that run
in the Grid home directory that you are relinking. In addition, shut down
applications linked with Oracle shared libraries.
$ cd /u01/app/19.0.0/grid/bin
$ ./crsctl stop crs
3. As grid user, detach the existing Grid home by running the following command,
where /u01/app/19.0.0/grid is the existing Grid home location:
$ /u01/app/19.0.0/grid/oui/bin/runInstaller -silent -waitforcompletion \
  -detachHome ORACLE_HOME='/u01/app/19.0.0/grid' -local
4. As root, move the Grid binaries from the old Grid home location to the new Grid home
location. For example, where the old Grid home is /u01/app/19.0.0/grid and the new
Grid home is /u01/app/19c:
# mkdir /u01/app/19c
# cp -pR /u01/app/19.0.0/grid /u01/app/19c
# cd /u01/app/19c/grid/crs/install
# ./rootcrs.sh -unlock -dstcrshome /u01/app/19c/grid
6. As grid, run the gridSetup.sh command from the new Grid home directory, and select
the Configuration Option as Set Up Software Only:
$ /u01/app/19c/grid/gridSetup.sh
This step relinks the Oracle Grid Infrastructure binaries and updates the Oracle Inventory
(oraInventory) with the new Oracle Grid Infrastructure home.
7. As root again, enter the following command to start up in the new home location:
# cd /u01/app/19c/grid/crs/install
# ./rootcrs.sh -move -dstcrshome /u01/app/19c/grid
Note:
While cloning, ensure that you do not change the Oracle home base, otherwise the
move operation fails.
Note:
Stop any databases, services, and listeners that may be installed and
running before deconfiguring Oracle Clusterware. In addition, dismount
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
and disable Oracle Automatic Storage Management Dynamic Volume
Manager (Oracle ADVM) volumes.
Caution:
Commands used in this section remove the Oracle Grid infrastructure
installation for the entire cluster. If you want to remove the installation from
an individual node, then see Oracle Clusterware Administration and
Deployment Guide.
3. Run rootcrs.sh with the -deconfig and -force flags. For example:
# ./rootcrs.sh -deconfig -force
The -lastnode flag completes deconfiguration of the cluster, including the OCR
and voting files.
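For example, a sketch of the last-node invocation, run from the Grid_home/crs/install directory:
# ./rootcrs.sh -deconfig -force -lastnode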
Grid_home/deinstall/deinstall.sh
2. Complete the deinstallation by running the root script on all the nodes when
prompted.
# rootcrs.sh -deconfig
3. Delete the Member Cluster Manifest File that was created for the Oracle Member Cluster and stored on the Oracle Domain Services Cluster:
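The deletion command is not shown above; a sketch, run on the Oracle Domain Services Cluster, where member_cluster_name is the name of the Oracle Member Cluster:
$ crsctl delete member_cluster_configuration member_cluster_name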
Related Topics
• Oracle Clusterware Administration and Deployment Guide
A
Installing and Configuring Oracle Database
Using Response Files
Review the following topics to install and configure Oracle products using response files.
• How Response Files Work
Response files can assist you with installing an Oracle product multiple times on multiple
computers.
• Reasons for Using Silent Mode or Response File Mode
Review this section for use cases for running the installer in silent mode or response file
mode.
• Using Response Files
Review this information to use response files.
• Preparing Response Files
Review this information to prepare response files for use during silent mode or response
file mode installations.
• Running Oracle Universal Installer Using a Response File
After creating the response file, run Oracle Universal Installer at the command line,
specifying the response file you created, to perform the installation.
• Running Configuration Assistants Using Response Files
You can run configuration assistants in response file or silent mode to configure and start
Oracle software after it is installed on the system. To run configuration assistants in
response file or silent mode, you must copy and edit a response file template.
• Postinstallation Configuration Using Response File Created During Installation
Use response files to configure Oracle software after installation. You can use the same
response file created during installation to also complete postinstallation configuration.
• Postinstallation Configuration Using the ConfigToolAllCommands Script
You can create and run a response file configuration after installing Oracle software. The
configToolAllCommands script requires users to create a second response file, of a
different format than the one used for installing the product.
• Silent mode
If you include responses for all of the prompts in the response file and specify the
-silent option when starting the installer, then it runs in silent mode. During a
silent mode installation, the installer does not display any screens. Instead, it
displays progress information in the terminal that you used to start it.
• Response file mode
If you include responses for some or all of the prompts in the response file and
omit the -silent option, then the installer runs in response file mode. During a
response file mode installation, the installer displays all the screens: both the screens for which you specified information in the response file and the screens for which you did not specify the required information in the response file.
You define the settings for a silent or response file installation by entering values for
the variables listed in the response file. For example, to specify the Oracle home
name, provide the Oracle home path for the ORACLE_HOME environment variable:
ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
Mode Uses
Silent Use silent mode for the following installations:
• Complete an unattended installation, which you schedule using
operating system utilities such as at.
• Complete several similar installations on multiple systems without user
interaction.
• Install the software on a system that does not have X Window System
software installed on it.
The installer displays progress information on the terminal that you used to
start it, but it does not display any of the installer screens.
Response file Use response file mode to complete similar Oracle software installations on
more than one system, providing default answers to some, but not all of the
installer prompts.
Note:
You must complete all required preinstallation tasks on a system before
running the installer in silent or response file mode.
Note:
If you copied the software to a hard disk, then the response files are located in
the $ORACLE_HOME/install/response directory.
All response file templates contain comment entries, sample formats, examples, and other
useful instructions. Read the response file instructions to understand how to specify values
for the response file variables, so that you can customize your installation.
The following table lists the response files provided with this software:
Table A-1 Response Files for Oracle Database and Oracle Grid Infrastructure
Caution:
When you modify a response file template and save a file for use, the
response file may contain plain text passwords. Ownership of the response
file should be given to the Oracle software installation owner only, and
permissions on the response file should be changed to 600. Oracle strongly
recommends that database administrators or other administrators delete or
secure response files when they are not in use.
$ cp $ORACLE_HOME/install/response/db_install.rsp local_directory
Note:
The installer or configuration assistant fails if you do not correctly
configure the response file. Also, ensure that your response file name
has the .rsp suffix.
4. Secure the response file by changing the permissions on the file to 600:
$ chmod 600 /local_dir/db_install.rsp
Ensure that only the Oracle software owner user can view or modify response files
or consider deleting them after the installation succeeds.
Note:
A fully-specified response file for an Oracle Database installation
contains the passwords for database administrative accounts and for a
user who is a member of the OSDBA group (required for automated
backups).
Note:
OUI does not save passwords while recording the response file.
Note:
Ensure that your response file name has the .rsp suffix.
5. Before you use the saved response file on another system, edit the file and make any
required changes. Use the instructions in the file as a guide when editing it.
$ $ORACLE_HOME/runInstaller -help
$ /u01/app/19.0.0/grid/gridSetup.sh -help
Note:
You do not have to set the DISPLAY environment variable if you are
completing a silent mode installation.
4. To start the installer in silent or response file mode, enter a command similar to the
following:
• For Oracle Database:
$ $ORACLE_HOME/runInstaller [-silent] \
-responseFile responsefilename
• For Oracle Grid Infrastructure:

$ /u01/app/19.0.0/grid/gridSetup.sh [-silent] \
-responseFile responsefilename
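For example, a fully qualified invocation resembles the following; the response file path shown here is illustrative, so substitute the full path of the response file that you prepared:

$ /u01/app/19.0.0/grid/gridSetup.sh -silent \
-responseFile /u01/app/19.0.0/grid/install/response/gridsetup.rsp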
Note:
Do not specify a relative path to the response file. If you specify a relative path,
then the installer fails.
In this example:
• -silent runs the installer in silent mode.
• responsefilename is the full path and file name of the installation response file that
you configured.
5. If this is the first time you are installing Oracle software on your system, then Oracle
Universal Installer prompts you to run the orainstRoot.sh script.
Log in as the root user and run the orainstRoot.sh script:
$ su root
password:
# /u01/app/oraInventory/orainstRoot.sh
Note:
You do not have to manually create the oraInst.loc file. Running the
orainstRoot.sh script is sufficient as it specifies the location of the Oracle
Inventory directory.
6. When the installation completes, log in as the root user and run the root.sh script. For
example:
$ su root
password:
# $ORACLE_HOME/root.sh
Note:
If you copied the software to a hard disk, then the response file template is located
in the /response directory.
$ cp /directory_path/assistants/dbca/dbca.rsp local_directory
In this example, directory_path is the path of the directory where you have
copied the installation binaries.
As an alternative to editing the response file template, you can also create a
database by specifying all required information as command line options when you
run Oracle DBCA. For information about the list of options supported, enter the
following command:
$ $ORACLE_HOME/bin/dbca -help
$ vi /local_dir/dbca.rsp
Note:
Oracle DBCA fails if you do not correctly configure the response file.
4. Log in as the Oracle software owner user, and set the ORACLE_HOME environment
variable to specify the correct Oracle home directory.
5. To run Oracle DBCA in response file mode, set the DISPLAY environment variable.
6. Use the following command syntax to run Oracle DBCA in silent or response file
mode using a response file:
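The syntax resembles the following; the exact set of supported options depends on your Oracle DBCA release, so run dbca -help to confirm:

$ $ORACLE_HOME/bin/dbca [-silent] -responseFile /local_dir/dbca.rsp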
In this example:
• The -silent option indicates that Oracle DBCA runs in silent mode.
• local_dir is the full path of the directory where you copied the dbca.rsp response
file template.
During configuration, Oracle DBCA displays a window that contains the status messages
and a progress bar.
$ cp /directory_path/assistants/netca/netca.rsp local_directory
In this example, directory_path is the path of the directory where you have copied the
installation binaries.
2. Open the response file in a text editor:
$ vi /local_dir/netca.rsp
Note:
Net Configuration Assistant fails if you do not correctly configure the response
file.
4. Log in as the Oracle software owner user, and set the ORACLE_HOME environment variable
to specify the correct Oracle home directory.
5. Enter a command similar to the following to run Net Configuration Assistant in silent
mode:
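A command similar to the following runs Net Configuration Assistant in silent mode, assuming you copied the response file to /local_dir as described earlier:

$ $ORACLE_HOME/bin/netca /silent /responsefile /local_dir/netca.rsp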
In this command:
• The /silent option indicates to run Net Configuration Assistant in silent mode.
• local_dir is the full path of the directory where you copied the netca.rsp response
file template.
Oracle strongly recommends that you maintain security with a password response file:
• Permissions on the response file should be set to 600.
• The owner of the response file should be the installation owner user, with the
group set to the central inventory (oraInventory) group.
Example A-1 Response File Passwords for Oracle Grid Infrastructure (grid
user)
grid.install.crs.config.ipmi.bmcPassword=password
grid.install.asm.SYSASMPassword=password
grid.install.asm.monitorPassword=password
grid.install.config.emAdminPassword=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the
ipmi.bmcPassword input field blank.
If you do not want to enable Oracle Enterprise Manager for management, then leave
the emAdminPassword password field blank.
Example A-2 Response File Passwords for Oracle Grid Infrastructure for a
Standalone Server (oracle user)
oracle.install.asm.SYSASMPassword=password
oracle.install.asm.monitorPassword=password
oracle.install.config.emAdminPassword=password
If you do not want to enable Oracle Enterprise Manager for management, then leave the
emAdminPassword password field blank.
Example A-3 Response File Passwords for Oracle Database (oracle user)
This example illustrates the passwords to specify for use with the database configuration
assistants.
oracle.install.db.config.starterdb.password.SYS=password
oracle.install.db.config.starterdb.password.SYSTEM=password
oracle.install.db.config.starterdb.password.DBSNMP=password
oracle.install.db.config.starterdb.password.PDBADMIN=password
oracle.install.db.config.starterdb.emAdminPassword=password
oracle.install.db.config.asm.ASMSNMPPassword=password
oracle.install.asm.SYSASMPassword=password
oracle.install.config.emAdminPassword=password
grid.install.asm.SYSASMPassword=password
grid.install.config.emAdminPassword=password
2. Change directory to the Oracle home containing the installation software. For example:
For Oracle Grid Infrastructure:

cd Grid_home

For Oracle Database:

cd $ORACLE_HOME
For Oracle Database, you can also run the response file located in the
directory $ORACLE_HOME/inventory/response/:
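A command similar to the following runs the postinstallation configuration; the response file name is a placeholder for the file that the installer saved during installation:

$ $ORACLE_HOME/runInstaller -executeConfigTools -responseFile \
$ORACLE_HOME/inventory/response/responsefilename.rsp [-silent]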
The postinstallation configuration tool runs the installer in the graphical user
interface mode, displaying the progress of the postinstallation configuration.
Specify the [-silent] option to run the postinstallation configuration in the
silent mode.
For example, for Oracle Grid Infrastructure:
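A command similar to the following; the Grid home path and response file name are placeholders for the values on your system:

$ /u01/app/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile \
/u01/app/19.0.0/grid/install/response/responsefilename.rsp [-silent]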
The configToolAllCommands password response file has the following syntax options:
internal_component_name|variable_name=value
For example:
oracle.crs|S_ASMPASSWORD=PassWord
The database configuration assistants require the SYS, SYSTEM, and DBSNMP
passwords for use with Oracle DBCA. You may need to specify the following additional
passwords, depending on your system configuration:
• If the database is using Oracle Automatic Storage Management (Oracle ASM) for
storage, then you must specify a password for the S_ASMSNMPPASSWORD variable. If
you are not using Oracle ASM, then leave the value for this password variable
blank.
• If you create a multitenant container database (CDB) with one or more pluggable
databases (PDBs), then you must specify a password for the S_PDBADMINPASSWORD
variable. If you are not creating a CDB, then leave the value for this password
variable blank.
Oracle strongly recommends that you maintain security with a password response file:
• Permissions on the response file should be set to 600.
• The owner of the response file should be the installation owner user, with the
group set to the central inventory (oraInventory) group.
$ touch pwdrsp.properties
2. Open the file with a text editor, and cut and paste the sample password file
contents, as shown in the examples, modifying as needed.
3. Change permissions to secure the password response file. For example:
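A minimal sketch of the permissions change, assuming the file is in the current working directory, followed by a listing that confirms the result:

$ chmod 600 pwdrsp.properties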
$ ls -al pwdrsp.properties
-rw------- 1 oracle oinstall 0 Apr 30 17:30 pwdrsp.properties
Example A-4 Password response file for Oracle Grid Infrastructure (grid user)
grid.crs|S_ASMPASSWORD=password
grid.crs|S_OMSPASSWORD=password
grid.crs|S_BMCPASSWORD=password
grid.crs|S_ASMMONITORPASSWORD=password
If you do not have a BMC card, or you do not want to enable IPMI, then leave the
S_BMCPASSWORD input field blank.
Example A-5 Password response file for Oracle Grid Infrastructure for a Standalone
Server (oracle user)
oracle.crs|S_ASMPASSWORD=password
oracle.crs|S_OMSPASSWORD=password
oracle.crs|S_ASMMONITORPASSWORD=password
Example A-6 Password response file for Oracle Database (oracle user)
This example provides a template for a password response file to use with the database
configuration assistants.
oracle.server|S_SYSPASSWORD=password
oracle.server|S_SYSTEMPASSWORD=password
oracle.server|S_EMADMINPASSWORD=password
oracle.server|S_DBSNMPPASSWORD=password
oracle.server|S_ASMSNMPPASSWORD=password
oracle.server|S_PDBADMINPASSWORD=password
If you do not want to enable Oracle Enterprise Manager for management, then leave those
password fields blank.
configToolAllCommands RESPONSE_FILE=/path/name.properties
For example:
$ ./configToolAllCommands RESPONSE_FILE=/home/oracle/pwdrsp.properties
B
Completing Preinstallation Tasks Manually
You can complete the preinstallation configuration tasks manually.
Oracle recommends that you use Oracle Universal Installer and Cluster Verification Utility
fixup scripts to complete minimal configuration settings. If you cannot use fixup scripts, then
complete minimum system settings manually.
• Configuring SSH Manually on All Cluster Nodes
Passwordless SSH configuration is a mandatory installation requirement. SSH is used
during installation to configure cluster member nodes, and SSH is used after installation
by configuration assistants, Oracle Enterprise Manager, Opatch, and other features.
• Configuring Storage Device Path Persistence Using Oracle ASMLIB
To use Oracle ASMLIB to configure Oracle ASM devices, complete the following tasks.
• Configuring Storage Device Path Persistence Manually
You can maintain storage file path persistence by creating a rules file.
• Configuring Kernel Parameters for Linux
These topics explain how to configure kernel parameters manually for Linux if you cannot
complete them using the fixup scripts.
• Configuring Default Thread Limits Value for SUSE Linux
If you are on SUSE Linux Enterprise Server 12 SP2 or later, or SUSE Linux Enterprise
Server 15 or later, then set the DefaultTasksMax parameter value to 65535.
Note:
The supported version of SSH for Linux distributions is OpenSSH.
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID
numbers. In the home directory of the installation software owner (grid, oracle), use
the command ls -al to ensure that the .ssh directory is owned and writable only by
the user.
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH
1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you
can use either RSA or DSA. The instructions that follow are for SSH1. If you have an
SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution
documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
$ id
$ id grid
Ensure that the Oracle user, and the user of the terminal window process that you are
using, have identical group and user IDs.
For example:

$ id
uid=54322(grid) gid=54321(oinstall)
groups=54321(oinstall),54322(grid,asmadmin,asmdba)

$ id grid
uid=54322(grid) gid=54321(oinstall)
groups=54321(oinstall),54322(grid,asmadmin,asmdba)
3. If necessary, create the .ssh directory in the grid user's home directory, and set
permissions on it to ensure that only the grid user (the software installation owner) has
read and write permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Note that the SSH configuration fails if the permissions are not set to 700.
4. Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
Note:
If you have OpenSSH version 7.8 or higher installed on your system, then enter
the following command to create SSH keys on each node:
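A command similar to the following; the -m PEM option, which writes the key in the legacy PEM format, is an assumption based on common OpenSSH 7.8 guidance, so confirm it against your SSH documentation:

$ /usr/bin/ssh-keygen -t dsa -m PEM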
At the prompts, accept the default location for the key file (press Enter).
Never distribute the private key to anyone not authorized to perform Oracle software
installations.
This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private
key to the ~/.ssh/id_dsa file.
5. Repeat steps 1 through 4 on each node that you intend to make a member of the cluster,
using the DSA key.
1. On the local node, change directories to the .ssh directory in the Oracle Grid
Infrastructure owner's home directory (typically, either grid or oracle). Then, add
the DSA key to the authorized_keys file using the following commands:
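Commands similar to the following append the public key you generated to the authorized_keys file; the file names assume the default key location accepted earlier:

$ cd ~/.ssh
$ cat id_dsa.pub >> authorized_keys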
In the .ssh directory, you should see the id_dsa.pub keys that you have created,
and the file authorized_keys.
2. On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the
authorized_keys file to the oracle user .ssh directory on a remote node. The
following example is with SCP, on a node called node2, with the Oracle Grid
Infrastructure owner grid, where the grid user path is /home/grid:
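A command similar to the following, run from the .ssh directory on the local node:

$ scp authorized_keys node2:/home/grid/.ssh/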
a. You are prompted to accept a DSA key. Enter Yes, and you see that the node
you are copying to is added to the known_hosts file.
b. When prompted, provide the password for the grid user, which should be the
same on all nodes in the cluster. The authorized_keys file is copied to the
remote node.
Your output should be similar to the following, where xxx represents parts of a
valid IP address:
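The exchange resembles standard OpenSSH host-key confirmation output; the fingerprint and transfer details shown here are placeholders:

The authenticity of host 'node2 (xxx.xxx.xxx.xxx)' can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:5a:3e:a7:48:ae:f6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,xxx.xxx.xxx.xxx' (DSA) to the list of known hosts.
grid@node2's password:
authorized_keys    100%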
3. Using SSH, log in to the node where you copied the authorized_keys file. Then
change to the .ssh directory, and use the cat command to add the DSA keys for
the second node to the authorized_keys file, pressing Enter when you are
prompted for a password, so that passwordless SSH is set up:
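For example, using the node and user names from the previous steps:

$ ssh node2
$ cd ~/.ssh
$ cat id_dsa.pub >> authorized_keys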
4. Repeat steps 2 and 3 from each node to each other member node in the cluster.
5. When you have added keys from each cluster node member to the
authorized_keys file on the last node you want to have as a cluster node member,
then use scp to copy the authorized_keys file with the keys from all nodes back to
each cluster node member, overwriting the existing version on the other nodes. To
confirm that you have all nodes in the authorized_keys file, enter the command
more authorized_keys, and determine if there is a DSA key for each member
node. The file lists the type of key (ssh-dss), followed by the key, and then followed by
the user and server. For example:
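An entry resembles the following; the key material is truncated here:

ssh-dss AAAAB3NzaC1kc3MAA...= grid@node1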
The grid user's ~/.ssh/authorized_keys file on every node must contain the contents
from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
At the end of this process, the public host name for each member node should be
registered in the known_hosts file for all other cluster nodes. If you are using a remote
client to connect to the local node, and you see a message similar to "Warning: No
xauth data; using fake authentication data for X11 forwarding," then this means
that your authorized keys file is configured correctly, but your SSH configuration has X11
forwarding enabled. To correct this issue, see Setting Remote Display and X11
Forwarding Configuration.
3. Repeat step 2 on each cluster node member.
If you have configured SSH correctly, then you can now use the ssh or scp commands
without being prompted for a password. For example:
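For example, the following command should return the date from the remote node without any password prompt (the output shown is illustrative):

$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2018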
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file on that
node contains the correct public keys, and that you have created an Oracle software owner
with identical group membership and IDs.
Note:
If you configure disks using Oracle ASMLIB, then you must change the disk
discovery string to ORCL:*. If the diskstring is set to ORCL:*, or is left empty
(""), then the installer discovers these disks.
You can use yum to retrieve the most current package for your system and kernel. For
additional information, see the following URL:
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html
To install and configure the Oracle Automatic Storage Management library driver software
manually, perform the following steps:
1. Enter the following command to determine the kernel version and architecture of the
system:
# uname -rm
2. Depending on your operating system version, download the required Oracle Automatic
Storage Management library driver packages and driver:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/
index-101839.html
See Also:
My Oracle Support note 1089399.1 for information about Oracle ASMLIB
support with Red Hat distributions:
https://support.oracle.com/rs?type=doc&id=1089399.1
Method 2: Alternatively, install the following packages in sequence, where version is the
version of the Oracle Automatic Storage Management library driver, arch is the system
architecture, and kernel is the version of the kernel that you are using:
oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm
For example, if you are using the Red Hat Enterprise Linux 5 AS kernel on an AMD64
system, then enter a command similar to the following:
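An illustrative command; substitute the exact package file names and versions that you downloaded:

# rpm -Uvh oracleasm-support-version.x86_64.rpm \
oracleasm-kernel-version.x86_64.rpm \
oracleasmlib-version.x86_64.rpm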
Note:
The oracleasm command in /usr/sbin is the command you should use.
The /etc/init.d path is not deprecated, but the oracleasm binary in
that path is now used typically for internal commands.
6. Enter the following information in response to the prompts that the script displays:
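The configuration prompts are similar to the following; the grid user and asmadmin group shown are assumptions based on the ownership conventions used elsewhere in this guide, and the configuration script (typically /usr/sbin/oracleasm configure -i) is run as root:

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y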
Note:
The Oracle ASMLIB file system is not a regular file system. It is used
only by the Oracle ASM library to communicate with the Oracle
ASMLIB.
To include devices in a disk group, you can specify either whole-drive device names
or partition device names.
Note:
Oracle recommends that you create a single whole-disk partition on
each disk to use.
c. Use either fdisk or parted to create a single whole-disk partition on the disk
devices.
2. Enter a command similar to the following to mark a disk as an Oracle Automatic
Storage Management disk:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
Note:
• The disk names you specify can contain uppercase letters, numbers,
and the underscore character. They must start with an uppercase
letter.
• To create a database during the installation using the Oracle
Automatic Storage Management library driver, you must change the
disk discovery string to ORCL:*.
• If you are using a multi-pathing disk driver with Oracle ASM, then
make sure that you specify the correct logical device name for the
disk.
3. To make the disk available on the other nodes in the cluster, enter the following
command as root on each node:
# /usr/sbin/oracleasm scandisks
This command identifies shared disks attached to the node that are marked as
Oracle ASM disks.
See Also:
My Oracle Support site for updates to supported storage options:
https://support.oracle.com/
In a multipath disk configuration, the same disk can appear three times: the initial path to the
disk, the second path to the disk, and the multipath disk access point.
For example: If you have one local disk, /dev/sda, and one disk attached with external
storage, then your server shows two connections, or paths, to that external storage. The
Linux SCSI driver shows both paths. They appear as /dev/sdb and /dev/sdc. The system
may access either /dev/sdb or /dev/sdc, but the access is to the same disk.
If you enable multipathing, then you have a multipath disk (for example, /dev/multipatha),
which can access both /dev/sdb and /dev/sdc; any I/O to multipatha can use either the sdb
or sdc path. If a system is using the /dev/sdb path, and that cable is unplugged, then the
system shows an error. But the multipath disk will switch from the /dev/sdb path to
the /dev/sdc path.
Most system software is unaware of multipath configurations, and can use any of the paths
(sdb, sdc, or multipatha). ASMLIB also is unaware of multipath configurations.
By default, ASMLIB recognizes the first disk path that Linux reports to it, but because it
imprints an identity on that disk, it recognizes that disk only under one path. Depending on
your storage driver, it may recognize the multipath disk, or it may recognize one of the single
disk paths.
Instead of relying on the default, you should configure Oracle ASM to recognize the multipath
disk.
For Oracle Linux and Red Hat Enterprise Linux, when scanning, the kernel sees the devices
as /dev/mapper/XXX entries. By default, the device file naming scheme udev creates
the /dev/mapper/XXX names for human readability. Any configuration using
ORACLEASM_SCANORDER should use the /dev/mapper/XXX entries.
For example, if the multipath disks use the prefix multipath (/dev/mapper/
multipatha, /dev/mapper/multipathb and so on), and the multipath disks mount
SCSI disks, then provide a prefix path similar to the following:
ORACLEASM_SCANORDER="multipath sd"
multipaths {
multipath {
wwid 3600508b4000156d700012000000b0000
alias multipath
...
}
multipath {
...
alias mympath
...
}
...
}
ORACLEASM_SCANEXCLUDE="sdb sdc"
and /dev/sdc, but you have configured ASMLIB to ignore sdb and sdc, then ASMLIB
ignores these disks and instead marks only the multipath disk as an Oracle ASM disk.
To stop the last Oracle Flex ASM instance on the node, stop the Oracle Clusterware
stack:
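For example, as root, a command similar to the following stops the stack on the local node; the Grid home path shown is illustrative:

# /u01/app/19.0.0/grid/bin/crsctl stop crs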
# /usr/sbin/oracleasm configure -d
# rpm -e oracleasm-support
# rpm -e oracleasmlib
For example, a device named /dev/sdd owned by the user grid may appear as a device
named /dev/sdf owned by root after restarting the server.
If you use Oracle ASMFD, then you do not have to ensure permissions and device
path persistence in udev.
If you do not use Oracle ASMFD, then you must create a custom rules file. Linux
vendors customize their udev configurations and use different orders for reading rules
files. For example, on some Linux distributions when udev is started, it sequentially
carries out rules (configuration directives) defined in rules files. These files are in the
path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules
in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-
ib.rules.
When specifying the device information in the udev rules file, ensure that the OWNER,
GROUP, and MODE are specified before all other characteristics in the order shown.
For example, to include the characteristic ACTION on the UDEV line, specify ACTION
after OWNER, GROUP, and MODE.
Where rules files describe the same devices, on the supported Linux kernel versions,
the last file read is the one that is applied.
• Configuring Device Persistence Manually for Oracle ASM
Complete these tasks to create device path persistence manually for Oracle ASM.
# /sbin/scsi_id -g -s /block/sdb/sdb1
360a98000686f6959684a453333524174
# /sbin/scsi_id -g -s /block/sde/sde1
360a98000686f6959684a453333524179
Record the unique SCSI identifiers, so you can provide them when required.
Note:
The command scsi_id should return the same device identifier value
for a given device, regardless of which node the command is run from.
3. Using a text editor, create a UDEV rules file for the Oracle ASM devices, setting
permissions to 0660 for the installation owner and the operating system group you have
designated the OSASM group, whose members are administrators of the Oracle Grid
Infrastructure software. For example, on Oracle Linux, to create a role-based
configuration rules.d file where the installation owner is grid and the OSASM group
asmadmin, enter commands similar to the following:
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
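A sketch of a rule entry, using one of the SCSI identifiers recorded earlier; the matching keys shown are an assumption, and the exact syntax depends on your distribution and udev release:

KERNEL=="sd?1", OWNER="grid", GROUP="asmadmin", MODE="0660", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="360a98000686f6959684a453333524174"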
4. On clustered systems, copy the rules.d file to all other nodes on the cluster. For
example:
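A command similar to the following, where the remote node name is illustrative:

# scp /etc/udev/rules.d/99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/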
# /sbin/partprobe /dev/sdc1
# /sbin/partprobe /dev/sdd1
# /sbin/partprobe /dev/sde1
# /sbin/partprobe /dev/sdf1
6. Run the command udevtest (/sbin/udevtest) to test the UDEV rules configuration
you have created. The output should indicate that the devices are available and the rules
are applied as expected. For example, for /dev/sdd1:
# udevtest /block/sdd/sdd1
main: looking at device '/block/sdd/sdd1' from subsystem 'block'
udev_rules_get_name: add symlink
'disk/by-id/scsi-360a98000686f6959684a453333524174-part1'
udev_rules_get_name: add symlink
'disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.887085-
part1'
udev_node_mknod: preserve file '/dev/.tmp-8-17', because it has correct
dev_t
run_program: '/lib/udev/vol_id --export /dev/.tmp-8-17'
run_program: '/lib/udev/vol_id' returned with status 4
run_program: '/sbin/scsi_id'
In the example output, note that applying the rules renames OCR device /dev/sdd1
to /dev/data1.
7. Load the rules and restart the UDEV service. For example:
• Oracle Linux and Red Hat Enterprise Linux
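A command similar to the following reloads the rules without a restart; on older releases the equivalent may be udevcontrol reload_rules or a restart of the udev service:

# /sbin/udevadm control --reload-rules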
Verify that the device permissions and ownerships are set correctly.
Note:
• Unless otherwise specified, the kernel parameter and shell limit values shown
in the following table are minimum values only. For production database
systems, Oracle recommends that you tune these values to optimize the
performance of the system. See the operating system documentation for more
information about tuning kernel parameters.
• If the current value for any parameter is greater than the value listed in this
table, then the Fixup scripts do not change the value of that parameter.
Parameter                             Command
semmsl, semmns, semopm, and semmni    # /sbin/sysctl -a | grep sem
                                      This command displays the value of the semaphore parameters in the order listed.
shmall, shmmax, and shmmni            # /sbin/sysctl -a | grep shm
                                      This command displays the details of the shared memory segment sizes.
file-max                              # /sbin/sysctl -a | grep file-max
                                      This command displays the maximum number of file handles.
ip_local_port_range                   # /sbin/sysctl -a | grep ip_local_port_range
                                      This command displays a range of port numbers.
rmem_default                          # /sbin/sysctl -a | grep rmem_default
rmem_max                              # /sbin/sysctl -a | grep rmem_max
wmem_default                          # /sbin/sysctl -a | grep wmem_default
wmem_max                              # /sbin/sysctl -a | grep wmem_max
aio-max-nr                            # /sbin/sysctl -a | grep aio-max-nr
If you used the Oracle Database Preinstallation RPM to complete your preinstallation
configuration tasks, then the Oracle Database Preinstallation RPM sets these kernel
parameters for you. However, if you did not use the Oracle Database Preinstallation
RPM, or if the kernel parameters are different from the minimum recommended values,
then change the kernel parameter values as follows:
1. As the root user, use a text editor to create or edit the /etc/sysctl.d/97-oracle-
database-sysctl.conf file (the file referenced in the later steps), and add or edit lines
similar to the following:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
2. To change the current values of the kernel parameters, run the following command:
# /sbin/sysctl --system
Review the output. If the values are incorrect, edit the /etc/sysctl.d/97-oracle-
database-sysctl.conf file, then enter this command again.
3. Confirm that the values are set correctly:
# /sbin/sysctl -a
4. Restart the computer, or run sysctl --system to make the changes in the /etc/
sysctl.d/97-oracle-database-sysctl.conf file available in the active kernel
memory.
Guidelines for Setting Kernel Parameter Values
• If you used the Oracle Database Preinstallation RPM, then your kernel parameter
settings reside in the /etc/sysctl.d/99-oracle-database-server-19c-
preinstall-sysctl.conf file.
• Include lines only for the kernel parameter values to change. For the semaphore
parameters (kernel.sem), you must specify all four values. If any of the current values
are larger than the minimum value, then specify the larger value.
• The /etc/sysctl.conf file has been deprecated.
• Avoid setting kernel parameter values in multiple files under /etc/sysctl.d/. The file
with a lexically later name under /etc/sysctl.d/ takes precedence, followed
by /etc/sysctl.conf. Oracle recommends that you use the Oracle Database
Preinstallation RPM which, among other preinstallation tasks, also sets the kernel
parameter values for your database installation.
See Also:
sysctl.conf(5) and sysctl.d(5) man pages for more information
1. Enter the following command to enable the boot.sysctl script, so that the system reads
the /etc/sysctl.conf file when it restarts:
# /sbin/chkconfig boot.sysctl on
2. Enter the GID of the oinstall group as the value for the parameter
/proc/sys/vm/hugetlb_shm_group.
For example, where the oinstall group GID is 501:
vm.hugetlb_shm_group=501
Note:
Only one group can be defined as the vm.hugetlb_shm_group.
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
In the preceding example, the lowest port (32768) and the highest port (61000) are set
to the default range.
If necessary, update the UDP and TCP ephemeral port range to a range high enough for
anticipated system workloads, and to ensure that the ephemeral port range starts at 9000
and above. For example:
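One way to apply the change to the running kernel is a command similar to the following:

# /sbin/sysctl -w net.ipv4.ip_local_port_range="9000 65500"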
Oracle recommends that you make these settings permanent. For example, as root, use a
text editor to open /etc/sysctl.conf, and add or change the following setting:

net.ipv4.ip_local_port_range = 9000 65500

Then restart the network:

# /etc/rc.d/init.d/network restart
Refer to your Linux distribution system administration documentation for information about
automating ephemeral port range alteration on system restarts.
Increase the default thread limit, the DefaultTasksMax value, from 512 to 65535 to avoid
service failures.
1. View the contents of /etc/systemd/system.conf to know the current
DefaultTasksMax value. Alternatively, run the following command:
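A command similar to the following reports the current limit:

$ systemctl show --property DefaultTasksMax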
2. If your DefaultTasksMax value is not 65535, then uncomment the line in /etc/
systemd/system.conf and set the value to 65535.
3. To enable the new settings, reboot your system or run the following command:
$ systemctl daemon-reload
C
Optimal Flexible Architecture
Oracle Optimal Flexible Architecture (OFA) rules are a set of configuration guidelines created
to ensure well-organized Oracle installations, which simplifies administration, support and
maintenance.
• About the Optimal Flexible Architecture Standard
Oracle Optimal Flexible Architecture (OFA) rules help you to organize database software
and configure databases to allow multiple databases, of different versions, owned by
different users to coexist.
• About Multiple Oracle Homes Support
Oracle Database supports multiple Oracle homes. You can install this release or earlier
releases of the software more than once on the same system, in different Oracle home
directories.
• About the Oracle Inventory Directory and Installation
The directory that you designate as the Oracle Inventory directory (oraInventory) stores
an inventory of all software installed on the system.
• Oracle Base Directory Naming Convention
The Oracle Base directory is the database home directory for Oracle Database
installation owners, and the log file location for Oracle Grid Infrastructure owners.
• Oracle Home Directory Naming Convention
By default, Oracle Universal Installer configures Oracle home directories using these
Oracle Optimal Flexible Architecture conventions.
• Optimal Flexible Architecture File Path Examples
Review examples of hierarchical file mappings of an Optimal Flexible Architecture-
compliant installation.
Note:
OFA assists in identification of an ORACLE_BASE with its Automatic
Diagnostic Repository (ADR) diagnostic data to properly collect incidents.
• For information about release support timelines, refer to My Oracle Support Doc ID
742060.1
Related Topics
• My Oracle Support Note 742060.1
If you have neither set ORACLE_BASE, nor created an OFA-compliant path, then the
Oracle Inventory directory is placed in the home directory of the user that is performing
the installation, and the Oracle software is installed in the path /app/owner, where
owner is the Oracle software installation owner. For example:
/home/oracle/oraInventory
/home/oracle/app/oracle/product/19.0.0/dbhome_1
Example            Description
/u01/app/oracle    Oracle Database Oracle base, where the Oracle Database software installation owner name is oracle. The Oracle Database binary home is located underneath the Oracle base path.
/u01/app/grid      Oracle Grid Infrastructure Oracle base, where the Oracle Grid Infrastructure software installation owner name is grid.
Caution:
The Oracle Grid Infrastructure Oracle base
should not contain the Oracle Grid
Infrastructure binaries for an Oracle Grid
Infrastructure for a cluster installation.
Permissions for the file path to the Oracle Grid
Infrastructure binary home are changed to
root during installation.
Note:
Oracle home or Oracle base cannot be symlinks, nor can any of their parent
directories, all the way up to the root directory.
Variable Description
pm A mount point name.
s A standard directory name.
u The name of the owner of the directory.
v The version of the software.
type The type of installation. For example: Database (dbhome), Client (client), or
Oracle Grid Infrastructure (grid)
n An optional counter, which enables you to install the same product more than
once in the same Oracle base directory. For example: Database 1 and Database
2 (dbhome_1, dbhome_2)
For example, the following path is typical for the first installation of Oracle Database on this
system:
/u01/app/oracle/product/19.0.0/dbhome_1
Note:
Oracle home or Oracle base cannot be symlinks, nor can any of their parent
directories, all the way up to the root directory.
Note:
• The Grid homes are examples of Grid homes used for an Oracle Grid
Infrastructure for a standalone server deployment (Oracle Restart), or a
Grid home used for an Oracle Grid Infrastructure for a cluster
deployment (Oracle Clusterware). You can have either an Oracle Restart
deployment, or an Oracle Clusterware deployment. You cannot have
both options deployed at the same time.
• Oracle Automatic Storage Management (Oracle ASM) is included as part
of an Oracle Grid Infrastructure installation. Oracle recommends that you
use Oracle ASM to provide greater redundancy and throughput.
Table C-2 Optimal Flexible Architecture Hierarchical File Path Examples

Directory                                   Description
/                                           Root directory
/u01/app/oracle/admin/TAR                   Subtree for support log files
/u01/app/oracle/product/                    Common path for Oracle software products other than Oracle Grid Infrastructure for a cluster
/u01/app/oracle/product/19.0.0/dbhome_2     Oracle home directory for Oracle Database 2, owned by Oracle Database installation owner account oracle
/u01/app/19.0.0/grid                        Oracle home directory for Oracle Grid Infrastructure for a cluster (Grid home), owned by user grid before installation, and owned by root after installation