Oracle RAC Install
Software
Oracle Database 12c Real Application Clusters 12.1.0.2.0
Pre-install
1. Check rpms
2. Disable IPv6 - verify with /sbin/ip -6 addr
3. nslookup not working - is /etc/resolv.conf commented out?
4. Check exec on /tmp
5. Check umask 022 ( OEL 7 defaults to 027)
6. HugePages – ~60% of 384GB ≈ 226GB, i.e. 115712 pages of 2MB, rounded up to 115720
vi /etc/security/limits.conf
# memlock should be 90% of memory when hugepages are enabled
@oinstall soft memlock 355781836
@oinstall hard memlock 355781836
vi /etc/sysctl.conf
vm.nr_hugepages=115720
sysctl -p
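A quick sanity check of the sizing above (a sketch; 2 MB pages assumed, as on x86-64 OEL 7):

```shell
# 115720 pages x 2048 kB per 2 MB page
pages=115720
kb=$((pages * 2048))
echo "vm.nr_hugepages=${pages} reserves ${kb} kB (~$((kb / 1024 / 1024)) GB)"
# After sysctl -p (or a reboot), confirm the kernel actually reserved them:
#   grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```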
7. tuned-adm active – Current active profile: throughput-performance
8. Transparent HugePages
appears disabled in /etc/default/grub, but:
cat /sys/kernel/mm/transparent_hugepage/defrag
always defer [madvise] never
cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
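Since defrag still shows [madvise], THP is only half off. A sketch of forcing both knobs to never at boot (assumes GRUB2 on OEL 7; verify file paths on the actual build):

```shell
# /etc/default/grub – append to the existing GRUB_CMDLINE_LINUX line:
#   GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
# Then regenerate grub.cfg and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
# Non-persistent change for the running kernel (as root):
#   echo never > /sys/kernel/mm/transparent_hugepage/defrag
```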
9. Check NFS mount options
10. Set Timezone for all users - export TZ=/usr/share/zoneinfo/CET
/etc/hosts
# Public Hostnames
10.254.30.188 ebsdbi01ptmp.wincor-nixdorf.com ebsdbi01ptmp ebsdbi01p.wincor-nixdorf.com ebsdbi01p ebsdbi05p.wincor-nixdorf.com ebsdbi05p ebsdbi06p.wincor-nixdorf.com ebsdbi06p
10.254.30.191 ebsdbi02ptmp.wincor-nixdorf.com ebsdbi02ptmp ebsdbi02p.wincor-nixdorf.com ebsdbi02p
10.254.30.194 ebsdbi03ptmp.wincor-nixdorf.com ebsdbi03ptmp ebsdbi03p.wincor-nixdorf.com ebsdbi03p
10.254.30.197 ebsdbi04ptmp.wincor-nixdorf.com ebsdbi04ptmp ebsdbi04p.wincor-nixdorf.com ebsdbi04p
# Virtual Hostnames
10.254.30.189 ebsdbi01ptmp-virt.wincor-nixdorf.com ebsdbi01ptmp-virt ebsdbi01p-virt.wincor-nixdorf.com ebsdbi01p-virt ebsdbi05p-virt.wincor-nixdorf.com ebsdbi05p-virt ebsdbi06p-virt.wincor-nixdorf.com ebsdbi06p-virt
10.254.30.192 ebsdbi02ptmp-virt.wincor-nixdorf.com ebsdbi02ptmp-virt ebsdbi02p-virt.wincor-nixdorf.com ebsdbi02p-virt
10.254.30.195 ebsdbi03ptmp-virt.wincor-nixdorf.com ebsdbi03ptmp-virt ebsdbi03p-virt.wincor-nixdorf.com ebsdbi03p-virt
10.254.30.198 ebsdbi04ptmp-virt.wincor-nixdorf.com ebsdbi04ptmp-virt ebsdbi04p-virt.wincor-nixdorf.com ebsdbi04p-virt
# Private Hostnames
192.168.100.8 ebsdbi01ptmp-priv1.wincor-nixdorf.com ebsdbi01ptmp-priv1
192.168.100.18 ebsdbi01ptmp-priv2.wincor-nixdorf.com ebsdbi01ptmp-priv2
192.168.100.9 ebsdbi02ptmp-priv1.wincor-nixdorf.com ebsdbi02ptmp-priv1
192.168.100.19 ebsdbi02ptmp-priv2.wincor-nixdorf.com ebsdbi02ptmp-priv2
192.168.100.10 ebsdbi03ptmp-priv1.wincor-nixdorf.com ebsdbi03ptmp-priv1
192.168.100.110 ebsdbi03ptmp-priv2.wincor-nixdorf.com ebsdbi03ptmp-priv2
192.168.100.11 ebsdbi04ptmp-priv1.wincor-nixdorf.com ebsdbi04ptmp-priv1
192.168.100.111 ebsdbi04ptmp-priv2.wincor-nixdorf.com ebsdbi04ptmp-priv2
#SCAN IPs
10.254.30.190 ebsdbiptmp-scan.wincor-nixdorf.com ebsdbiptmp-scan ebsdbip-scan.wincor-nixdorf.com ebsdbip-scan
10.254.30.193 ebsdbiptmp-scan.wincor-nixdorf.com ebsdbiptmp-scan ebsdbip-scan.wincor-nixdorf.com ebsdbip-scan
10.254.30.196 ebsdbiptmp-scan.wincor-nixdorf.com ebsdbiptmp-scan ebsdbip-scan.wincor-nixdorf.com ebsdbip-scan
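A quick check that every node resolves the names above (a sketch using getent; run on each node):

```shell
# Loop over the temp public, virtual, and SCAN short names and flag failures
for h in ebsdbi0{1..4}ptmp ebsdbi0{1..4}ptmp-virt ebsdbiptmp-scan; do
  getent hosts "$h" > /dev/null || echo "MISSING: $h"
done
```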
Server Layout
All four servers: Location DC3, Usage CRM Production Database Servers, CPU E5-2667 v4, 16 cores, 384 GB memory, FEX A/B Port N/A, OS OEL 7.4.

Serial#       FI A/B Port   Host        Server IP       Temp Host      Temp IP         Virtual Host     Virtual IP      Temp Virtual Host    Temp Virtual IP
FCH2143V1W9   1/11          ebsdbi01p   10.254.30.132   ebsdbi01ptmp   10.254.30.188   ebsdbi01p-virt   10.254.30.133   ebsdbi01ptmp-virt    10.254.30.189
FCH2143V1V4   1/12          ebsdbi02p   10.254.30.135   ebsdbi02ptmp   10.254.30.191   ebsdbi02p-virt   10.254.30.136   ebsdbi02ptmp-virt    10.254.30.192
FCH2143V1V5   1/13          ebsdbi03p   10.254.30.138   ebsdbi03ptmp   10.254.30.194   ebsdbi03p-virt   10.254.30.139   ebsdbi03ptmp-virt    10.254.30.195
FCH2143V1W8   1/14          ebsdbi04p   10.254.30.141   ebsdbi04ptmp   10.254.30.197   ebsdbi04p-virt   10.254.30.142   ebsdbi04ptmp-virt    10.254.30.198

SCAN: ebsdbip-scan – 10.254.30.134, 10.254.30.137, 10.254.30.140, 10.254.30.143
Temp SCAN: ebsdbiptmp-scan – 10.254.30.190, 10.254.30.193, 10.254.30.196
Storage
See “EBSP Filesystem Layout on Fibre.docx”
Pre-requisites
Confirm BIOS settings
VT enabled
Directed IO
RAS Memory – Maximum performance
Jumbo frames
TCP Settings
The following values can be set on the virtual machines as well as on dom0. Edit /etc/sysctl.conf to make the changes below. (See the Oracle VM 3: 10GbE Network Performance Tuning whitepaper.)
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_moderate_rcvbuf = 1
Turn off IPv6
Issue#2
These are 4k Disks in Emulation mode
cat /sys/block/sdb/queue/physical_block_size
4096
cat /sys/block/sdb/queue/logical_block_size
512
Make sure that ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true, or else the ASM diskgroups will be created with a 4096-byte sector size instead of 512. This can be done with:
/usr/sbin/oracleasm configure -b
oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
/usr/sbin/oracleasm configure -b
cd /etc/sysconfig/
vi oracleasm
Confirm the following:
ORACLEASM_SCANORDER="dm-"
ORACLEASM_SCANEXCLUDE="sd"
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true
Check above on all nodes
On All nodes
kpartx -a /dev/mapper/arch01
kpartx -a /dev/mapper/arch02
kpartx -a /dev/mapper/data01
kpartx -a /dev/mapper/data02
kpartx -a /dev/mapper/data03
kpartx -a /dev/mapper/data04
kpartx -a /dev/mapper/data05
kpartx -a /dev/mapper/data06
kpartx -a /dev/mapper/data07
kpartx -a /dev/mapper/data08
kpartx -a /dev/mapper/data09
kpartx -a /dev/mapper/fra01
kpartx -a /dev/mapper/fra02
kpartx -a /dev/mapper/ocrvote01
kpartx -a /dev/mapper/redo01
kpartx -a /dev/mapper/redo02
kpartx -a /dev/mapper/temp01
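The per-LUN kpartx calls above can also be generated with a loop (a sketch; review the printed commands, then pipe them to sh as root):

```shell
# Emit one kpartx command per multipath device from the list above
for dev in arch0{1,2} data0{1..9} fra0{1,2} ocrvote01 redo0{1,2} temp01; do
  echo "kpartx -a /dev/mapper/${dev}"
done
```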
oracleasm exit
oracleasm init
oracleasm scandisks
oracleasm listdisks
oracleasm listdisks | xargs oracleasm querydisk -d
Grid Install
CRS_HOME = /oracle/app/oracrs/oracrsdb/grid
ORACLE_BASE = /oracle/app/oracrs/base
Unix id = uid=501(oracrs) gid=501(oinstall) groups=501(oinstall),500(dba)
set ssh manually - https://docs.oracle.com/database/121/CWLIN/manpreins.htm#CWLIN515
cd /oracle/app/oracrs/software/grid
./runInstaller
2018/03/20 23:33:31 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/03/20 23:40:16 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/03/20 23:44:25 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2018/03/20 23:45:27 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/03/20 23:49:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
/oracle/app/oracrs/oraInventory/logs/installActions2018-03-20_10-50-08PM.log
INFO: PRVF-7590 : "ntpd" is not running on node "ebsdbi02p"
INFO: PRVF-7590 : "ntpd" is not running on node "ebsdbi01p"
INFO: PRVF-7590 : "ntpd" is not running on node "ebsdbi03p"
INFO: PRVG-1024 : The NTP Daemon or Service was not running on any of the cluster nodes.
INFO: PRVF-5415 : Check to see if NTP daemon or service is running failed
INFO: Clock synchronization check using Network Time Protocol(NTP) failed
INFO: PRVF-9652 : Cluster Time Synchronization Services check failed
We are waiting on firewall rules. Run CVU again after the rules are in place.
You can find the log of this install session at:
/oracle/app/oracrs/oraInventory/logs/installActions2018-03-20_10-50-08PM.log
Grid Patches
Patch 6880880
cd /oracle/app/oracrs/oracrsdb/grid/OPatch
zip -r opatch_old.zip *
cd /oracle/app/oracrs/oracrsdb/grid
unzip -o /dbinst/software/p6880880_122010_Linux-x86-64.zip
cd /dbinst/software/26838953
opatch prereq CheckConflictAgainstOHWithDetail -ph ./
As root:
/oracle/app/oracrs/oracrsdb/grid/crs/install/rootcrs.sh -prepatch
As oracrs:
cd /dbinst/software/26838953
/oracle/app/oracrs/oracrsdb/grid/OPatch/opatch apply
As root: /oracle/app/oracrs/oracrsdb/grid/crs/install/rootcrs.sh -postpatch
Patch 27611612
p27611612_12201180116DBJAN2018RU_Linux-x86-64.zip
cd /dbinst/software/27611612
opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Conflict with Composite Patch 26925263
Conflict with Sub-Patch 26717470
Conflict with Sub-Patch 24732088
Not Applied
ASM Diskgroups
asmca
Disk Group Name   Redundancy   Disks                                 AU Size   ASM Compatibility   Database Compatibility
EBSP_REDO01_DG    External     ORCL:EBSP_REDO01                      1         12.1.0.0            11.2.0.4
EBSP_REDO02_DG    External     ORCL:EBSP_REDO02                      1         12.1.0.0            11.2.0.4
EBSP_ARCH01_DG    External     ORCL:EBSP_ARCH01, ORCL:EBSP_ARCH02   1         12.1.0.0            11.2.0.4
EBSP_FRA_DG       External     ORCL:EBSP_FRA01, ORCL:EBSP_FRA02     1         12.1.0.0            11.2.0.4
EBSP_TEMP_DG      External     ORCL:EBSP_TEMP01                      1         12.1.0.0            11.2.0.4
EBSP_DATA_DG      External     ORCL:EBSP_DATA01, ORCL:EBSP_DATA02,  1         12.1.0.0            11.2.0.4
                               ORCL:EBSP_DATA03, ORCL:EBSP_DATA04,
                               ORCL:EBSP_DATA05, ORCL:EBSP_DATA06,
                               ORCL:EBSP_DATA07, ORCL:EBSP_DATA08,
                               ORCL:EBSP_DATA09
Make sure to set sector_size, compatible=11.2, and au_size=16M, e.g.:
CREATE DISKGROUP data NORMAL REDUNDANCY
FAILGROUP controller1 DISK '/devices/diska1', '/devices/diska2', '/devices/diska3', '/devices/diska4'
FAILGROUP controller2 DISK '/devices/diskb1', '/devices/diskb2', '/devices/diskb3', '/devices/diskb4'
ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2', 'sector_size'='512';
ASM in init.ora
show parameter spfile
+GRID/ebsp-dc3/ASMPARAMETERFILE/registry.253.971307355
Backup the spfile on disk
create pfile='$ORACLE_HOME/dbs/asm_pfile_orig_bkp' from spfile;
alter system set ASM_DISKSTRING='ORCL:*' scope=both sid='*';
alter system set memory_max_target=2G scope=spfile sid='*';
alter system set memory_target=1G scope=spfile sid='*';
create pfile='$ORACLE_HOME/dbs/asm_pfile_bkp' from spfile;
vi ~/rsync_oh_exclude
- admin/CRM*/**
- log/**
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off
chown root oradism
chmod u+s oradism
attachome
Backup the inventory
As root:
cp -pr /oracle/app/oracrs/oraInventory
/oracle/app/oracrs/oraInventory_20180320
As crm:
vi /oracle/app/crm/crmdb/11gRAC/oraInst.loc
cd $ORACLE_HOME/oui/bin
/oracle/app/crm/crmdb/11gRAC/oui/bin/runInstaller -attachHome -noClusterEnabled \
  ORACLE_HOME=/oracle/app/crm/crmdb/11gRAC \
  ORACLE_HOME_NAME=OraDb11g_home1 \
  CLUSTER_NODES=ebsdbi01p,ebsdbi02p,ebsdbi03p \
  "INVENTORY_LOCATION=/oracle/app/oracrs/oraInventory" \
  LOCAL_NODE=ebsdbi01p
Checking swap space: must be greater than 500 MB. Actual 51199 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oracrs/oraInventory
'AttachHome' was successful.
Repeat on nodes 2 and 3. Change the LOCAL_NODE as needed
$ORACLE_HOME/OPatch/opatch lsinventory
Setup APPL_TOP
vi $HOME/rsync_appl_top_exclude
- admin/log/**
- admin/out/**
- *.log
- *.out
- *.req
- *.mgr
- *.tmp
- *.rti
- *.PDF
- *.tar.gz
- *.EXCEL
function tmp {
rsync --delete -auvzpoglH -e ssh --exclude-from=$HOME/rsync_appl_top_exclude ${1} 10.254.30.203:${1}
}
tmp /oracle/shr/crm/share_r12/EBSapps.env
tmp /oracle/shr/crm/share_r12/tools/
tmp /oracle/shr/crm/share_r12/fs1/EBSapps/appl/
tmp /oracle/shr/crm/share_r12/fs1/EBSapps/comn/
tmp /oracle/shr/crm/share_r12/fs1/EBSapps/10.1.2/
tmp /oracle/shr/crm/share_r12/fs1/FMW_Home
tmp /oracle/shr/crm/share_r12/fs1/inst/
tmp /oracle/shr/crm/share_r12/fs2/EBSapps/appl/
tmp /oracle/shr/crm/share_r12/fs2/EBSapps/comn/
tmp /oracle/shr/crm/share_r12/fs2/EBSapps/10.1.2/
tmp /oracle/shr/crm/share_r12/fs2/FMW_Home
tmp /oracle/shr/crm/share_r12/fs2/inst/
tmp /oracle/shr/crm/share_r12/fs_ne/inst/
tmp /oracle/shr/crm/share_r12/fs_ne/EBSapps/appl/
/oracle/shr/crm/share_r12/fs2/EBSapps/
appl – 23GB
comn – 33GB
10.1.2 – 2.1GB
cp -p /oracle/app/admin/CRM/net/admin/db_CRM1_ebsdbi01p_listener_ifile.ora
/oracle/app/admin/CRM/net/admin/db_CRM1_ebsdbi01p_listener_ifile.ora.DC3
vi /oracle/app/admin/CRM/net/admin/db_CRM1_ebsdbi01p_listener_ifile.ora
Change
(SID_DESC = (ORACLE_HOME = /oracle/app/crm/crmdb/11gRAC)(SID_NAME = CRM1)
(GLOBAL_DBNAME=CRM_DC1_DGMGRL))
To
(SID_DESC = (ORACLE_HOME = /oracle/app/crm/crmdb/11gRAC)(SID_NAME = CRM1)
(GLOBAL_DBNAME=CRM_DC3_DGMGRL))
SET SERVEROUTPUT ON
DECLARE
lat INTEGER;
iops INTEGER;
mbps INTEGER;
BEGIN
DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);
DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
dbms_output.put_line('max_mbps = ' || mbps);
end;
/
max_iops = 307483
latency = 0
max_mbps = 3485
cd /oracle/app/oracrs/oracrsdb/grid/crs/install
vi s_crsconfig_ebsdbi02d_env.txt
TZ=Etc/UTC
Change To
TZ=Europe/Berlin
References:
FAQ: Flash Storage with ASM (Doc ID 1626228.1)
How To Align Partitions on Large HDD (Doc ID 1523947.1)
Supporting 4K Sector Disks [Video] (Doc ID 1133713.1)
Supporting ASM on 4K/4096 Sector Size (SECTOR_SIZE) Disks (Doc ID 1630790.1)
https://kb.netapp.com/app/answers/answer_view/a_id/1001578
Business Continuity for Oracle E-Business Suite Release 12.2 using Virtual Hosts with Oracle 12c Physical
Standby Database (Doc ID 2088692.1)
defaults {
polling_interval 10
}
devices {
device {
vendor "NETAPP"
path_grouping_policy multibus
path_checker tur
path_selector "queue-length 0"
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0
}
}
flush_on_last_del      yes
max_fds                max
pg_prio_calc           avg
queue_without_daemon   no
user_friendly_names    no    # will this impact current names?
dev_loss_tmo           infinity
fast_io_fail_tmo       5
failback               immediate
features               "3 queue_if_no_path pg_init_retries 50"
getuid_callout / uid_attribute   "ID_SERIAL"
hardware_handler       "1 alua"
path_checker           tur
path_grouping_policy   group_by_prio
path_selector          "service-time 0"
prio                   alua
product                "LUN.*"
rr_min_io              1000
rr_weight              uniform
vendor                 "NETAPP"
The Netapp doc has recommendations for both Veritas and Oracle VM
Hi Bob,
Here is another recommendation
Current:
tuned-adm active
Current active profile: throughput-performance
Change to:
tuned-adm profile enterprise-storage
Also, for the VMs (like SAP) we have to go through a tuning exercise; tuned-adm should be set to virtual-guest.
Reference: https://library.netapp.com/ecm/ecm_download_file/ECMLP2547958
Thanks,
Aditya
Hi Bob,
From: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_flashstack_oracleRAC12cr2.html
Below are for Pure Storage, but I think the same should apply to NetApp too. I am not sure whether these are meant for physical hosts or VMs:
# Recommended settings for PURE Storage FlashArray
# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"
# Reduce CPU overhead due to entropy collection
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
# Schedule I/O on the core that initiated the process
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
# End of Recommended settings for PURE Storage FlashArray
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
What we have:
cd /sys/block/dm-45/queue
[root@ebsdbi01ptmp queue]# cat iostats
0
[root@ebsdbi01ptmp queue]# cat add_random
0
[root@ebsdbi01ptmp queue]# cat scheduler
none
[root@ebsdbi01ptmp queue]# cat rq_affinity
0
cd /sys/block/sda/queue
cat scheduler
noop [deadline] cfq
So the ASM disks are not picking up the right scheduler. This needs to be changed.
More recommendations:
echo 256 > /sys/block/${block}/queue/nr_requests     # currently 128
echo 0   > /sys/block/${block}/queue/rotational      # currently 1
echo 256 > /sys/block/${block}/queue/read_ahead_kb   # currently 128
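A dry-run sketch that prints the writes needed per ASM device (dm-45 is just the example device from above; extend the list to all ASM LUNs, review the output, then pipe it to sh as root):

```shell
# Print (not apply) the recommended queue settings for each device
for dev in dm-45; do
  q="/sys/block/${dev}/queue"
  echo "echo noop > ${q}/scheduler"
  echo "echo 256 > ${q}/nr_requests"
  echo "echo 0 > ${q}/rotational"
  echo "echo 256 > ${q}/read_ahead_kb"
done
```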