RACSetup v2
Install Node1 with two network adapters.
During package selection, select all options excluding MariaDB and PostgreSQL.
Setting for the second network adapter:
IP: 192.168.0.1
Subnet mask: 255.255.255.0
Gateway: 192.168.0.2
Then click on Done.
Similarly, install Node2.
Setting for the second network adapter:
IP: 192.168.0.2
Subnet mask: 255.255.255.0
Gateway: 192.168.0.1
Then click on Done.
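To confirm the second adapters are wired up, a quick check from Node1 (a minimal sanity test, assuming the addresses above):
# ip addr show → verify both adapters and their IP addresses
# ping -c 2 192.168.0.2 → Node2's second adapter should reply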
Add a 50 GB hard disk to the SAN system (this becomes the shared storage).
Specify the host file entries, then try to ping both nodes and the storage server.
vi /etc/hosts → On both nodes
############## Public IP ##############
192.168.1.11 rac1.dba.com rac1
192.168.1.12 rac2.dba.com rac2
############## Private IP ##############
147.43.1.11 rac1-priv.dba.com rac1-priv
147.43.1.12 rac2-priv.dba.com rac2-priv
############## Virtual IP ##############
192.168.1.21 rac1-vip.dba.com rac1-vip
192.168.1.22 rac2-vip.dba.com rac2-vip
############## SCAN IP ##############
192.168.1.30 node-scan.dba.com node-scan
############## SAN ##############
192.168.1.50 san.dba.com san
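A quick way to test the entries from either node (the VIP and SCAN names will not answer until Grid Infrastructure is running, so only the public, private, and SAN names are checked here):
# for h in rac1 rac2 rac1-priv rac2-priv san; do ping -c 2 $h; done
Run the same loop on the other node as well.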
Now reboot the SAN system.
# xhost + → allow X clients to connect to the local display (needed for the GUI tools)
OpenFiler web console login:
id: openfiler
Password: password
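The web console is reached over HTTPS on OpenFiler's management port (446 by default), using the SAN entry from /etc/hosts:
https://san.dba.com:446/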
Click on System → configure network access.
Click on Volumes → Manage Volumes (on the right side).
Click on Add Volume (on the right side).
Click on Services → enable the iSCSI Target server.
Click on Volumes.
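With the volume and iSCSI target service ready, each node must discover and log in to the target before the disk appears locally. A minimal sketch (the discovery step prints the actual target IQN; nothing here needs to be typed from memory):
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.50 → prints the target IQN
[root@node1 ~]# iscsiadm -m node -l → log in to the discovered target
Repeat on node2, then partition the new disk (e.g. /dev/sdb) with fdisk from node1 only; the warning below appears when the kernel cannot re-read the new partition table.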
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
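The warning only means the kernel kept the old partition table in memory; either reboot, or re-read the table in place (device name assumed to be /dev/sdb as above):
[root@node1 ~]# partprobe /dev/sdb → re-read the partition table without a reboot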
[root@node1 ~]# fdisk -l
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
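These prompts come from oracleasm configure -i. A typical answer sequence and disk-labeling step for this setup follows (grid/oinstall and the DATA label match the rest of these notes; /dev/sdb1 is assumed to be the shared partition created above):
[root@node1 ~]# oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
[root@node1 ~]# oracleasm init
[root@node1 ~]# oracleasm createdisk DATA /dev/sdb1 → node1 only
[root@node2 ~]# oracleasm scandisks → node2 picks up the new label
[root@node2 ~]# oracleasm listdisks
DATA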
Final Setup:: Create the directories on both nodes, but unzip the grid software from node1 only.
Create directories for the grid software unzip:
[grid@rac2 DBSOFT]$ mkdir -p /grid/app/grid/19.0.0/
[grid@rac2 DBSOFT]$ mkdir -p /grid/app/grid_base
unzip LINUX.X64_193000_grid_home.zip -d /grid/app/grid/19.0.0/
/grid/app/grid/19.0.0/jdk/bin/ControlPanel -> jcontrol
/grid/app/grid/19.0.0/javavm/admin/cbp.jar -> ../../javavm/jdk/jdk8/admin/cbp.jar
/grid/app/grid/19.0.0/lib/libclntshcore.so -> libclntshcore.so.19.1
/grid/app/grid/19.0.0/lib/libclntsh.so.12.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.18.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.11.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.10.1 -> libclntsh.so
/grid/app/grid/19.0.0/jdk/jre/bin/ControlPanel -> jcontrol
/grid/app/grid/19.0.0/javavm/admin/libjtcjt.so -> ../../javavm/jdk/jdk8/admin/libjtcjt.so
/grid/app/grid/19.0.0/javavm/admin/classes.bin -> ../../javavm/jdk/jdk8/admin/classes.bin
/grid/app/grid/19.0.0/javavm/admin/lfclasses.bin -> ../../javavm/jdk/jdk8/admin/lfclasses.bin
/grid/app/grid/19.0.0/javavm/lib/security/cacerts -> ../../../javavm/jdk/jdk8/lib/security/cacerts
/grid/app/grid/19.0.0/javavm/lib/sunjce_provider.jar -> ../../javavm/jdk/jdk8/lib/sunjce_provider.jar
/grid/app/grid/19.0.0/javavm/lib/security/README.txt -> ../../../javavm/jdk/jdk8/lib/security/README.txt
/grid/app/grid/19.0.0/javavm/lib/security/java.security -> ../../../javavm/jdk/jdk8/lib/security/java.security
/grid/app/grid/19.0.0/jdk/jre/lib/amd64/server/libjsig.so -> ../libjsig.so
[grid@rac1 DBSOFT]$
Run the cluvfy command to verify that all prerequisites are OK::
[grid@rac1 19.0.0]$ pwd
/grid/app/grid/19.0.0
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose (on rac1)
Execute "/tmp/CVU_19.0.0.0.0_grid/runfixup.sh" as root user on nodes "rac1,rac2" to perform the fix up operations
manually
Press ENTER key to continue after execution of "/tmp/CVU_19.0.0.0.0_grid/runfixup.sh" has completed on nodes
"rac1,rac2"
Oracle Installations::
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
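The instance status below presumably comes from a query like this (an assumption — the original query is not shown in these notes):
SQL> select instance_name, status, host_name from v$instance;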
INSTANCE_NAME    STATUS       HOST_NAME
---------------- ------------ ----------------
prod1            OPEN         rac1.dba.com

SQL>
srvctl config database → lists the databases configured in the cluster
prod
srvctl config database -d prod → shows the cluster configuration details of a particular database
Database unique name: prod
Database name: prod
Oracle home: /oracle/app/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/PROD/PARAMETERFILE/spfile.268.1164600257
Password file: +DATA/PROD/PASSWORD/pwdprod.256.1164599509
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: prod1,prod2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
srvctl status database -d prod → shows the database status instance-wise.
Instance prod1 is running on node rac1
Instance prod2 is running on node rac2
srvctl status database -d prod -v → shows the database status instance-wise along with each instance's open mode.
Instance prod1 is running on node rac1. Instance status: Open.
Instance prod2 is running on node rac2. Instance status: Open.
Shutdown sequence (nodes first, then storage):
Node2: init 0
Node1: init 0
SAN: init 0