RACSetup v2

The document outlines the steps to create two virtual machines (VMs) and install Linux on both with specific network configurations and mount points. It includes instructions for setting up a storage area network (SAN) using Openfiler, configuring iSCSI targets, and managing disk partitions and user accounts on the nodes. Additionally, it details the installation of Oracle ASM and the creation of disks for use in the environment.

Create two VMs and install Linux on both of them with the required mount points and two network adapters.
In the software selection, select all options except MariaDB and PostgreSQL.
Settings for the second network adapter (node1):
IP: 192.168.0.1
Subnet mask: 255.255.255.0
Gateway: 192.168.0.2
Then click Done.
Similarly install Node2.
Settings for the second network adapter (node2):
IP: 192.168.0.2
Subnet mask: 255.255.255.0
Gateway: 192.168.0.1
Then click Done.
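If the second adapter is configured after installation instead of through the installer screens, the same settings can be applied with nmcli. This is only a sketch and assumes the connection is named after the device ens34 (a hypothetical name; check with nmcli device status):

# nmcli device status
# nmcli connection modify ens34 ipv4.method manual ipv4.addresses 192.168.0.1/24 ipv4.gateway 192.168.0.2
# nmcli connection up ens34
(on node2, swap the two addresses)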
Add a 50GB hard disk.
Specify the host file entries, then try to ping both nodes and the storage (a quick ping check is sketched after the entries below).
vi /etc/hosts → On both nodes
##############Public IP ########################
192.168.1.11 rac1.dba.com rac1
192.168.1.12 rac2.dba.com rac2

####################Private IP #####################
147.43.1.11 rac1-priv.dba.com rac1-priv
147.43.1.12 rac2-priv.dba.com rac2-priv
####################Virtual IP #####################
192.168.1.21 rac1-vip.dba.com rac1-vip
192.168.1.22 rac2-vip.dba.com rac2-vip

##################SCAN IP ######################
192.168.1.30 node-scan.dba.com node-scan

##################SAN ##########################
192.168.1.50 san.dba.com san
Now reboot the SAN system.
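With the host file entries in place on both nodes and the SAN back up, connectivity can be verified with ping (a quick sketch; the VIP and SCAN addresses will not respond yet because they are brought up by the clusterware later):

# ping -c 2 rac1
# ping -c 2 rac2
# ping -c 2 rac1-priv
# ping -c 2 rac2-priv
# ping -c 2 san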

From node1, open the Openfiler web UI through Firefox:

]# xhost +

]# firefox https://192.168.1.50:446/ (the URL and port are shown on the SAN machine's console)


User id: openfiler

Password: password

* Click on System
Configure Network Access: add the hostname, IP address and network mask 255.255.255.255 for both nodes, and select Share.

* Click on Volumes
Click on Manage Volumes (on the right side).
Click on Add Volume (on the right side).

* Click on Services
Enable the iSCSI Target server.

* Click on Volumes
Click on iSCSI Targets (on the right side).
Click on Add.
Click on LUN Mapping and map the LUN.

Then log out of Openfiler.
[root@rac1 ~]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes, 167772160 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c32df

Device Boot Start End Blocks Id System


/dev/sda1 2048 62916607 31457280 83 Linux
/dev/sda2 * 62916608 83888127 10485760 83 Linux
/dev/sda3 83888128 94373887 5242880 82 Linux swap / Solaris
/dev/sda4 94373888 167772159 36699136 5 Extended
/dev/sda5 94375936 104861695 5242880 83 Linux
/dev/sda6 104863744 167772159 31454208 83 Linux
[root@rac1 ~]#

[root@rac1 ~]# iscsiadm -m discovery -t st -p 192.168.1.50 → On both nodes


192.168.1.50:3260,1 iqn.2006-01.com.openfiler:tsn.e304ba072f82
[root@rac1 ~]# service iscsi restart → on both nodes
Redirecting to /bin/systemctl restart iscsi.service
[root@rac1 ~]#
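After the iSCSI restart, the Openfiler LUN should show up as a new block device on each node (here it appears as /dev/sdb). A quick check, as a sketch:

[root@rac1 ~]# iscsiadm -m session → should list the openfiler target discovered above
[root@rac1 ~]# lsblk → the new ~50GB iSCSI disk should appear as /dev/sdb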
* Install the ASMLib RPMs
[root@rac2 64Bit]# ls -lrt → On both nodes
total 356
-rwxrwxrwx. 1 root root 136910 Nov 23 2016 oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
-rwxrwxrwx. 1 root root 14176 Nov 23 2016 oracleasmlib-2.0.4-1.el5.x86_64.rpm
-rwxrwxrwx. 1 root root 90444 Nov 23 2016 oracleasm-support-2.1.3-1.el5.x86_64.rpm
-rwxrwxrwx. 1 root root 15224 Nov 23 2016 oracle-validated-1.0.0-18.el5.x86_64.rpm
-rwxrwxrwx. 1 root root 19360 Feb 11 2023 oracleasmlib-2.0.12-1.el7.x86_64.rpm
-rwxrwxrwx. 1 root root 86908 Feb 11 2023 oracleasm-support-2.1.11-2.el7.x86_64.rpm
[root@rac2 64Bit]#
[root@rac2 64Bit]# rpm -ivh * --nodeps --force
warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 1e5e0159: NOKEY
warning: oracleasmlib-2.0.12-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:oracleasm-support-2.1.11-2.el7 ################################# [ 17%]
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/oracleasm.service to
/usr/lib/systemd/system/oracleasm.service.
2:oracleasm-2.6.18-164.el5-2.0.5-1.################################# [ 33%]
depmod: WARNING: -e needs -E or -F
depmod: WARNING: could not open /lib/modules/2.6.18-164.el5/modules.order: No such file or directory
depmod: WARNING: could not open /lib/modules/2.6.18-164.el5/modules.builtin: No such file or directory
3:oracleasmlib-2.0.12-1.el7 ################################# [ 50%]
4:oracleasmlib-2.0.4-1.el5 ################################# [ 67%]
5:oracle-validated-1.0.0-18.el5 ################################# [ 83%]
6:oracleasm-support-2.1.3-1.el5 ################################# [100%]
Note: Forwarding request to 'systemctl enable oracleasm.service'.
[root@rac2 64Bit]#

[root@rac2 64Bit]# rpm -qa|grep -i oracleasm


oracleasm-support-2.1.3-1.el5.x86_64
oracleasmlib-2.0.12-1.el7.x86_64
oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64
oracleasmlib-2.0.4-1.el5.x86_64
oracleasm-support-2.1.11-2.el7.x86_64
[root@rac2 64Bit]#

* Partition the disk (only from node1); do not do this on both nodes


[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 51168.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-51168, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-51168, default 51168): +5g

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (4770-51168, default 4770):
Using default value 4770
Last cylinder or +size or +sizeM or +sizeK (4770-51168, default 51168): +15g

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (19076-51168, default 19076):
Using default value 19076
Last cylinder or +size or +sizeM or +sizeK (19076-51168, default 51168): +15g

Command (m for help): n


Command action
e extended
p primary partition (1-4)
p
Selected partition 4
First cylinder (33382-51168, default 33382):
Using default value 33382
Last cylinder or +size or +sizeM or +sizeK (33382-51168, default 51168):
Using default value 51168

Command (m for help): wq


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@node1 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes


255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1 * 1 1275 10241406 83 Linux
/dev/sda2 1276 1912 5116702+ 83 Linux
/dev/sda3 1913 6016 32965380 83 Linux
/dev/sda4 6017 6527 4104607+ 5 Extended
/dev/sda5 6017 6527 4104576 82 Linux swap / Solaris

Disk /dev/sdb: 53.6 GB, 53653536768 bytes


64 heads, 32 sectors/track, 51168 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System


/dev/sdb1 1 4769 4883440 83 Linux
/dev/sdb2 4770 19075 14649344 83 Linux
/dev/sdb3 19076 33381 14649344 83 Linux
/dev/sdb4 33382 51168 18213888 83 Linux
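The partitions above were created from node1 only. Node2 shares the same LUN, but its kernel may still hold the old (empty) partition table; a hedged check on node2:

[root@rac2 ~]# partprobe /dev/sdb → ask the kernel to reread the partition table
[root@rac2 ~]# fdisk -l /dev/sdb → sdb1-sdb4 should now be listed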

* Create users and groups on both nodes


]# userdel -r oracle
[root@node1 ~]# groupdel oinstall
[root@node1 ~]# groupdel dba
[root@node1 ~]# groupdel asmadmin
groupdel: group asmadmin does not exist
groupadd -g 500 oinstall
groupadd -g 501 dba
groupadd -g 502 asmadmin
useradd -u 500 -m oracle -g oinstall -G dba,asmadmin -d /usr/oracle
useradd -u 501 -m grid -g oinstall -G dba,asmadmin -d /usr/grid
chmod -R 775 /grid
chmod -R 775 /oracle
chown -R oracle:oinstall /grid
chown -R oracle:oinstall /oracle
chown -R grid:oinstall /grid
chown -R grid:oinstall /oracle
passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is too short
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
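The later steps rely on the grid and oracle environments already being set (for example ORACLE_SID is prod1 when sqlplus is run near the end). The profiles are not shown in this document, so the following is only a sketch, with paths taken from the directories used later in this setup; append to each user's ~/.bash_profile on both nodes and adjust the SIDs per node:

# grid user (~/.bash_profile)
export ORACLE_BASE=/grid/app/grid_base
export ORACLE_HOME=/grid/app/grid/19.0.0
export ORACLE_SID=+ASM1          # +ASM2 on rac2
export PATH=$ORACLE_HOME/bin:$PATH

# oracle user (~/.bash_profile)
export ORACLE_BASE=/oracle/app/oracle_base
export ORACLE_HOME=/oracle/app/product/19.0.0/dbhome_1
export ORACLE_SID=prod1          # prod2 on rac2
export PATH=$ORACLE_HOME/bin:$PATH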
[root@rac1 ~]# oracleasm configure -i → on both nodes
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid


Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac1 ~]#
[root@rac1 ~]# oracleasm init → on both nodes
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac1 ~]#

[root@rac1 ~]# partprobe /dev/sdb


[root@rac1 ~]# cd /dev/oracleasm/disks/
[root@rac1 disks]# pwd
/dev/oracleasm/disks
[root@rac1 disks]# ls
[root@rac1 disks]#

[root@rac1 disks]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes, 167772160 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c32df

Device Boot Start End Blocks Id System


/dev/sda1 2048 62916607 31457280 83 Linux
/dev/sda2 * 62916608 83888127 10485760 83 Linux
/dev/sda3 83888128 94373887 5242880 82 Linux swap / Solaris
/dev/sda4 94373888 167772159 36699136 5 Extended
/dev/sda5 94375936 104861695 5242880 83 Linux
/dev/sda6 104863744 167772159 31454208 83 Linux
Disk /dev/sdb: 51.2 GB, 51170508800 bytes, 99942400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x58d1a593

Device Boot Start End Blocks Id System


/dev/sdb1 2048 20973567 10485760 83 Linux
/dev/sdb2 20973568 41945087 10485760 83 Linux
/dev/sdb3 41945088 52430847 5242880 83 Linux
/dev/sdb4 52430848 99942399 23755776 83 Linux
[root@rac1 disks]#

Execute the commands below only from node1


[root@rac1 disks]# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@rac1 disks]# oracleasm createdisk DISK2 /dev/sdb2
Writing disk header: done
Instantiating disk: done
[root@rac1 disks]# oracleasm createdisk DISK3 /dev/sdb3
Writing disk header: done
Instantiating disk: done
[root@rac1 disks]# oracleasm createdisk DISK4 /dev/sdb4
Writing disk header: done
Instantiating disk: done
[root@rac1 disks]# pwd
/dev/oracleasm/disks
[root@rac1 disks]# ls
DISK1 DISK2 DISK3 DISK4
[root@rac1 disks]# ls -lrt
total 0
brw-rw---- 1 grid oinstall 8, 17 Mar 25 03:43 DISK1
brw-rw---- 1 grid oinstall 8, 18 Mar 25 03:43 DISK2
brw-rw---- 1 grid oinstall 8, 19 Mar 25 03:43 DISK3
brw-rw---- 1 grid oinstall 8, 20 Mar 25 03:43 DISK4
[root@rac1 disks]#
(on node2)

[root@rac2 ~]# cd /dev/oracleasm/disks/


[root@rac2 disks]# ls -lrt
total 0
[root@rac2 disks]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK1"
Instantiating disk "DISK2"
Instantiating disk "DISK3"
Instantiating disk "DISK4"
[root@rac2 disks]# ls -lrt
total 0
brw-rw----. 1 grid oinstall 8, 20 Mar 25 03:46 DISK4
brw-rw----. 1 grid oinstall 8, 19 Mar 25 03:46 DISK3
brw-rw----. 1 grid oinstall 8, 18 Mar 25 03:46 DISK2
brw-rw----. 1 grid oinstall 8, 17 Mar 25 03:46 DISK1
[root@rac2 disks]#

Share the grid software and unzip it (on node1).

Create SSH connectivity (passwordless) → do the same on all nodes

[root@rac1 ~]# su - grid


[grid@rac1 ~]$
[grid@rac1 ~]$
[grid@rac1 ~]$
[grid@rac1 ~]$ mkdir .ssh
[grid@rac1 ~]$ cd .ssh/
[grid@rac1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/usr/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /usr/grid/.ssh/id_rsa.
Your public key has been saved in /usr/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:wUeO0sB/UdmCb2diGbBT5K5eBJ+3lacLEQZrjlf3hmU [email protected]
The key's randomart image is:
+---[RSA 2048]----+
| .. +=+o |
| .+ +o*+ . |
| ..= Oo=+. E|
| ..*.B*+o=.|
| S.oo*+o.=|
| . o o =.|
| .oo |
| ....|
| . . |
+----[SHA256]-----+
[grid@rac1 .ssh]$ ls -lrt
total 8
-rw-r--r-- 1 grid oinstall 399 Mar 25 03:53 id_rsa.pub
-rw------- 1 grid oinstall 1675 Mar 25 03:53 id_rsa

[grid@rac1 .ssh]$ vi authorized_keys → on both nodes


[grid@rac1 .ssh]$
[grid@rac1 .ssh]$
[grid@rac1 .ssh]$ cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQClvizWjDLPKkjXnSQbSywcQRtTAAX9TE+ad9IOSoJxz4wxvuMVBg6Graz6A/tfkNLi
66flJCLCvB9K9z49R94LDXmOnKX0Ikgfsb+XBIUk7OltZF9qtOOtjEw808savKPOY9UJyQW5QCDFmXYKe1coEpcdHay35ZU5Z0
MKFls8kCwazl/PLzI/bH1tvdW01oZk5nv5oOKr+2YqSbnlhG7cRcXOj0ozNM+nslaLuemMq28p1Xp32Df4LaRqvdnXgk5nvO5a
HwpDaGNZk9SUalNLQP0auYSXYnrEgMCvV8t+cOZUHYFUy1o3BqgedZ+8zQCgTRIeH3fcHLnvS2jZvuGT [email protected]
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCuCAALRx33pNnE0WmJDcueGW5xzUJWO0oH/qy4svpMUn4ENeeGVIBQiuiuEBI
j6BDVoObwixuzvFbcVsgf0Uni1TxN4FfBpVvovjpBXtu7UDq37B79jjh1HwzdvQcEQCrdT1ZzUEnJGJMFHJso0nDFE3JcB5B6R0
RCPfg1R4WBwzSFH17RD1822S2J18EWYvYTwWOU+LUUb0pTECze33eB/ZpP4dJVrEk4OQuXwVn2QxhYZm7tzN+Mo0J8TI5
BuvykJ05ehL7IB+zPbWBEKUI4PLqrvMZrpBjKVrlqjL87vcdPZkklmGZUdeuotujtHGie484T+jFDgKdYImQR/2SN
[email protected]
[grid@rac1 .ssh]$
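The authorized_keys file above is built by hand (each node's id_rsa.pub pasted into the other node's file). An equivalent shortcut, sketched here, is ssh-copy-id run as the grid user:

[grid@rac1 ~]$ ssh-copy-id grid@rac2 → appends rac1's public key to rac2's authorized_keys
[grid@rac2 ~]$ ssh-copy-id grid@rac1 → and the reverse from rac2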

[grid@rac1 ~]$ ssh rac2


The authenticity of host 'rac2 (192.168.1.12)' can't be established.
ECDSA key fingerprint is SHA256:XJbqguMZy29nAkIaZIrV4qiC3rsvI1VHLF1Bhwad/W8.
ECDSA key fingerprint is MD5:0c:d9:e7:71:5b:d3:d6:7e:5b:22:06:db:c4:c3:12:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.1.12' (ECDSA) to the list of known hosts.
Last login: Mon Mar 25 03:51:46 2024
[grid@rac2 ~]$ exit
logout
Connection to rac2 closed.
[grid@rac1 ~]$ ssh rac1
The authenticity of host 'rac1 (192.168.1.11)' can't be established.
ECDSA key fingerprint is SHA256:tEQJh912vbO5ZktRlJNk5C4efU5iSc7FbJokX/JacIs.
ECDSA key fingerprint is MD5:d3:51:7c:84:ae:09:d1:16:45:64:e0:0e:fb:7d:50:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.1.11' (ECDSA) to the list of known hosts.
Last login: Mon Mar 25 03:53:11 2024
[grid@rac1 ~]$ exit
logout
Connection to rac1 closed.
[grid@rac1 ~]$
[grid@rac2 ~]$ ssh rac1
The authenticity of host 'rac1 (192.168.1.11)' can't be established.
ECDSA key fingerprint is SHA256:tEQJh912vbO5ZktRlJNk5C4efU5iSc7FbJokX/JacIs.
ECDSA key fingerprint is MD5:d3:51:7c:84:ae:09:d1:16:45:64:e0:0e:fb:7d:50:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.1.11' (ECDSA) to the list of known hosts.
Last login: Mon Mar 25 03:56:56 2024 from rac1.dba.com
[grid@rac1 ~]$ exit
logout
Connection to rac1 closed.
[grid@rac2 ~]$ ssh rac2
The authenticity of host 'rac2 (192.168.1.12)' can't be established.
ECDSA key fingerprint is SHA256:XJbqguMZy29nAkIaZIrV4qiC3rsvI1VHLF1Bhwad/W8.
ECDSA key fingerprint is MD5:0c:d9:e7:71:5b:d3:d6:7e:5b:22:06:db:c4:c3:12:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.1.12' (ECDSA) to the list of known hosts.
Last login: Mon Mar 25 03:56:35 2024 from rac1.dba.com
[grid@rac2 ~]$ exit
logout
Connection to rac2 closed.
[grid@rac2 ~]$

Final setup:: Create the directories on both nodes, but unzip the grid software from node1 only.
Create the directory for the grid software unzip.
[grid@rac2 DBSOFT]$ mkdir -p /grid/app/grid/19.0.0/
[grid@rac2 DBSOFT]$ mkdir -p /grid/app/grid_base
unzip LINUX.X64_193000_grid_home.zip -d /grid/app/grid/19.0.0/
/grid/app/grid/19.0.0/jdk/bin/ControlPanel -> jcontrol
/grid/app/grid/19.0.0/javavm/admin/cbp.jar -> ../../javavm/jdk/jdk8/admin/cbp.jar
/grid/app/grid/19.0.0/lib/libclntshcore.so -> libclntshcore.so.19.1
/grid/app/grid/19.0.0/lib/libclntsh.so.12.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.18.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.11.1 -> libclntsh.so
/grid/app/grid/19.0.0/lib/libclntsh.so.10.1 -> libclntsh.so
/grid/app/grid/19.0.0/jdk/jre/bin/ControlPanel -> jcontrol
/grid/app/grid/19.0.0/javavm/admin/libjtcjt.so -> ../../javavm/jdk/jdk8/admin/libjtcjt.so
/grid/app/grid/19.0.0/javavm/admin/classes.bin -> ../../javavm/jdk/jdk8/admin/classes.bin
/grid/app/grid/19.0.0/javavm/admin/lfclasses.bin -> ../../javavm/jdk/jdk8/admin/lfclasses.bin
/grid/app/grid/19.0.0/javavm/lib/security/cacerts -> ../../../javavm/jdk/jdk8/lib/security/cacerts
/grid/app/grid/19.0.0/javavm/lib/sunjce_provider.jar -> ../../javavm/jdk/jdk8/lib/sunjce_provider.jar
/grid/app/grid/19.0.0/javavm/lib/security/README.txt -> ../../../javavm/jdk/jdk8/lib/security/README.txt
/grid/app/grid/19.0.0/javavm/lib/security/java.security -> ../../../javavm/jdk/jdk8/lib/security/java.security
/grid/app/grid/19.0.0/jdk/jre/lib/amd64/server/libjsig.so -> ../libjsig.so
[grid@rac1 DBSOFT]$
Run the cluvfy command to verify that all prerequisites are met::
[grid@rac1 19.0.0]$ pwd
/grid/app/grid/19.0.0
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose (on rac1)

Package: cvuqdisk-1.0.10-1 rac2,rac1 no no


Daemon "avahi-daemon" not rac2,rac1 no no
configured and running
zeroconf check rac2,rac1 no no
Group Existence: asmdba rac2,rac1 no no

Execute "/tmp/CVU_19.0.0.0.0_grid/runfixup.sh" as root user on nodes "rac1,rac2" to perform the fix up operations
manually

Press ENTER key to continue after execution of "/tmp/CVU_19.0.0.0.0_grid/runfixup.sh" has completed on nodes
"rac1,rac2"

Run the grid installer (from node1)


[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]#
[root@rac1 ~]#
[root@rac1 ~]# su - grid
Last login: Mon Mar 25 03:57:33 IST 2024 from rac2.dba.com on pts/2
[grid@rac1 ~]$ export DISPLAY=:0
[grid@rac1 ~]$ cd /grid/app/grid/19.0.0/
addnode/ demo/ jdbc/ oracore/ qos/ tomcat/
assistants/ diagnostics/ jdk/ ord/ racg/ ucp/
bin/ dmu/ jlib/ ords/ rdbms/ usm/
cha/ evm/ ldap/ oss/ relnotes/ utl/
clone/ gpnp/ lib/ oui/ rhp/ wlm/
crs/ has/ md/ owm/ sdk/ wwg/
css/ hs/ network/ .patch_storage/ slax/ xag/
cv/ install/ nls/ perl/ sqlpatch/ xdk/
dbjava/ instantclient/ OPatch/ plsql/ sqlplus/
dbs/ inventory/ .opatchauto_storage/ precomp/ srvm/
deinstall/ javavm/ opmn/ QOpatch/ suptools/
[grid@rac1 ~]$ cd /grid/app/grid/19.0.0/
[grid@rac1 19.0.0]$ ./gridSetup.sh
The SCAN name is resolved from the /etc/hosts file.
Note: here we are not using GIMR, but on a production system it is mandatory and needs a 50GB disk.
Execute on both nodes.

Install the 19c RPMs::


[root@rac1 19C_RPMS]# ls -lrt
total 2783
-rwxrwxrwx 1 root root 902348 Feb 4 2023 ksh-20120801-137.0.1.el7.x86_64.rpm
-rwxrwxrwx 1 root root 903116 Feb 4 2023 ksh-20120801-144.0.1.el7_9.x86_64.rpm
-rwxrwxrwx 1 root root 903156 Feb 4 2023 ksh-20120801-140.0.1.el7_7.x86_64.rpm
-rwxrwxrwx 1 root root 12624 Feb 4 2023 libaio-devel-0.3.109-13.el7.x86_64.rpm
-rwxrwxrwx 1 root root 12672 Feb 4 2023 libaio-devel-0.3.109-13.el7.i686.rpm
-rwxrwxrwx 1 root root 24208 Feb 4 2023 libaio-0.3.109-13.el7.x86_64.rpm
-rwxrwxrwx 1 root root 24356 Feb 4 2023 libaio-0.3.109-13.el7.i686.rpm
-rwxrwxrwx 1 root root 18204 Feb 4 2023 oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
-rwxrwxrwx 1 root root 27152 Feb 4 2023 oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm
-rwxrwxrwx 1 root root 19552 Feb 4 2023 oracle-database-preinstall-19c-1.0-2.el7.x86_64.rpm
[root@rac1 19C_RPMS]# rpm -ivh ksh-20120801-140.0.1.el7_7.x86_64.rpm ksh-20120801-144.0.1.el7_9.x86_64.rpm --nodeps --force
warning: ksh-20120801-140.0.1.el7_7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:ksh-20120801-144.0.1.el7_9 ################################# [ 50%]
2:ksh-20120801-140.0.1.el7_7 ################################# [100%]
[root@rac1 19C_RPMS]#
[root@rac1 19C_RPMS]# rpm -ivh libaio-devel-0.3.109-13.el7.x86_64.rpm --nodeps --force
warning: libaio-devel-0.3.109-13.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:libaio-devel-0.3.109-13.el7 ################################# [100%]
[root@rac1 19C_RPMS]#
Post setup: create the disk group::
First set the DISPLAY variable.
[grid@rac1 ~]$ adrci

ADRCI: Release 19.0.0.0.0 - Production on Tue Mar 26 03:01:18 2024

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

ADR base = "/grid/app/grid_base"


adrci> show homes
ADR Homes:
diag/asm/+asm/+ASM1
diag/crs/rac1/crs
diag/clients/user_grid/host_3611993476_110
diag/clients/user_root/host_3611993476_110
diag/tnslsnr/rac1/asmnet1lsnr_asm
diag/tnslsnr/rac1/listener_scan1
diag/tnslsnr/rac1/listener
diag/asmcmd/user_grid/rac1.dba.com
diag/kfod/rac1/kfod
adrci> exit
[grid@rac1 ~]$ cd /grid/app/grid_base
[grid@rac1 grid_base]$ pwd
/grid/app/grid_base
[grid@rac1 grid_base]$ cd diag/crs/rac1/crs
[grid@rac1 crs]$ pwd
/grid/app/grid_base/diag/crs/rac1/crs
[grid@rac1 crs]$
[grid@rac1 crs]$ cd trace/
[grid@rac1 trace]$ pwd
/grid/app/grid_base/diag/crs/rac1/crs/trace
[grid@rac1 trace]$
[grid@rac1 trace]$ tail -100f alert.log
2024-03-26 03:01:28.077 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:01:43.082 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:01:58.087 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:02:13.092 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:02:28.097 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:02:43.105 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:02:58.115 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:03:13.124 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:03:28.133 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
2024-03-26 03:03:43.142 [OCSSD(15141)]CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;
Details at (:CSSNM00070:) in /grid/app/grid_base/diag/crs/rac1/crs/trace/ocssd.trc
The CRS-1714 messages above mean the ASM disks holding the voting files are not visible yet; rescan them as root:
[root@rac1 ~]# cd /dev/oracleasm/disks/
[root@rac1 disks]# ls
[root@rac1 disks]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK1"
Instantiating disk "DISK2"
Instantiating disk "DISK3"
Instantiating disk "DISK4"
[root@rac1 disks]# ls -lrt
total 0
brw-rw---- 1 grid oinstall 8, 18 Mar 26 03:03 DISK2
brw-rw---- 1 grid oinstall 8, 17 Mar 26 03:03 DISK1
brw-rw---- 1 grid oinstall 8, 20 Mar 26 03:03 DISK4
brw-rw---- 1 grid oinstall 8, 19 Mar 26 03:03 DISK3
[root@rac1 disks]#
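Once the disks are visible again under /dev/oracleasm/disks, CSS should be able to discover the voting files and the CRS-1714 messages should stop. A quick check, as a sketch (grid user):

[grid@rac1 ~]$ crsctl query css votedisk → lists the voting files and the ASM disks they live on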

##Grid user:: (both nodes)


[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac1 ~]$

[grid@rac1 ~]$ ps -ef|grep pmon


grid 23551 1 0 03:06 ? 00:00:00 asm_pmon_+ASM1
grid 28185 19138 0 03:07 pts/0 00:00:00 grep --color=auto pmon
[grid@rac1 ~]$

[grid@rac1 ~]$ crsctl check cluster -all


**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@rac1 ~]$
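Beyond the daemon checks above, the full resource view can also be queried; a sketch of two common checks:

[grid@rac1 ~]$ crsctl stat res -t → tabular view of all cluster resources (ASM, listeners, VIPs, SCAN) and the node each runs on
[grid@rac1 ~]$ olsnodes -n → lists the cluster node names and numbers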

Oracle database software installation::

SSH passwordless connectivity:: (for both nodes)


[root@rac1 disks]# su - oracle
Last login: Tue Mar 26 03:10:35 IST 2024 on pts/0
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ls -la
total 24
drwx------ 8 oracle oinstall 4096 Mar 26 03:16 .
drwxr-xr-x. 15 root root 4096 Mar 25 03:36 ..
-rw------- 1 oracle oinstall 115 Mar 25 03:53 .bash_history
-rw-r--r-- 1 oracle oinstall 18 Aug 24 2018 .bash_logout
-rw-r--r-- 1 oracle oinstall 193 Aug 24 2018 .bash_profile
-rw-r--r-- 1 oracle oinstall 231 Aug 24 2018 .bashrc
drwxr-xr-x 3 oracle oinstall 18 Mar 25 03:48 .cache
drwxr-xr-x 3 oracle oinstall 18 Mar 25 03:48 .config
drwxr-xr-x 3 oracle oinstall 19 Mar 26 03:16 .java
drwxr-xr-x 3 oracle oinstall 19 Mar 25 03:48 .local
drwxr-xr-x 4 oracle oinstall 39 Mar 24 13:01 .mozilla
drwxr-xr-x 2 oracle oinstall 38 Mar 25 03:50 .ssh
[oracle@rac1 ~]$ rm -rf .ssh
[oracle@rac1 ~]$ mkdir .ssh
[oracle@rac1 ~]$ cd .ssh
[oracle@rac1 .ssh]$ pwd
/usr/oracle/.ssh
[oracle@rac1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/usr/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /usr/oracle/.ssh/id_rsa.
Your public key has been saved in /usr/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:i0HNfHp4G4ga2s4mF/Nyu0ZD6FvF4IG8EwN/JBihsgc [email protected]
The key's randomart image is:
+---[RSA 2048]----+
| +*... |
| ...=oo+ |
|E .*o++ . |
|.o +oo.o= |
|. ...oo.S + |
| . o+o+o + o |
| . oB... . |
| .o= + |
| +o+oo |
+----[SHA256]-----+
[oracle@rac1 .ssh]$ ls -lrt
total 8
-rw------- 1 oracle oinstall 1679 Mar 26 03:19 id_rsa
-rw-r--r-- 1 oracle oinstall 401 Mar 26 03:19 id_rsa.pub
[oracle@rac1 .ssh]$
[oracle@rac1 .ssh]$ cat id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDj8UShZj7cglZrkSeZDFj9sV0Y5+0odYMvbZC3JV3MgUME7P1/xMm3/Rqg6FR7m
kLUoq5LvP9bG28dMA0IwQC0vOWU+bsih+EhCcuEqlTX7kio6ht1R/WKs4H0iOH+4QLV8IrrDfhMF8uagVESUWzRekmqVQJK
uVkHYxpGVp355T8SKj1sOjxj4hbyP1/YmOqds68FH9BxakHo+ZswM93IWeFNEvhpydNvq0JR3gPVyRnDch/pBINcJHT1YLB7u
xrGjPeOgAkepQmNRHXDcvXSKcKKiUOqBOcHvLg02B2AbniJqEXf+NRkKAEgHemzvh/iEugFztJH5nfm1elSLdaT
[email protected]
[oracle@rac1 .ssh]$ vi authorized_keys
[oracle@rac1 .ssh]$
[oracle@rac1 .ssh]$
[oracle@rac1 .ssh]$ cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDj8UShZj7cglZrkSeZDFj9sV0Y5+0odYMvbZC3JV3MgUME7P1/xMm3/Rqg6FR7m
kLUoq5LvP9bG28dMA0IwQC0vOWU+bsih+EhCcuEqlTX7kio6ht1R/WKs4H0iOH+4QLV8IrrDfhMF8uagVESUWzRekmqVQJK
uVkHYxpGVp355T8SKj1sOjxj4hbyP1/YmOqds68FH9BxakHo+ZswM93IWeFNEvhpydNvq0JR3gPVyRnDch/pBINcJHT1YLB7u
xrGjPeOgAkepQmNRHXDcvXSKcKKiUOqBOcHvLg02B2AbniJqEXf+NRkKAEgHemzvh/iEugFztJH5nfm1elSLdaT
[email protected]
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDQlfyFYEQI01rQAKd/p7w+oddVIKQXMLoZGfLbHhbvMaRvpnCBTfnAMGN6FNo4
NnbXq3gzYY5HJPmOJg/eN5vxaPxOwaMre8+oKWXOLPqEtkNbU5lJ6D8gFLcTIuAAOafZz61Z+UKZ5H1mWS1zlk36nxb4yzR9
Jr9qOhLcd7SMkWJ2xhzATqY0YYO4Mn3Mx/kC2NBGiUpR2RmBCIAJO4DPLy5QEwZA1ZF0IE6SK2tI90UuGBT3UjBTYtC3YM
AwlQEjqt81udyUcXsl4siGh0gKnkEGVRzAAJlXq8BeY+d7lloj2WW14rxi8cbBM96Y/iq6PWvOnUx9ibW+GVEnKlAB
[email protected]
[oracle@rac1 .ssh]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ mkdir -p /oracle/app/product/19.0.0/dbhome_1
[oracle@rac1 ~]$ mkdir -p /oracle/app/oracle_base
[oracle@rac1 ~]$
[oracle@rac1 ~]$ cd /mnt/hgfs/MEHTAB_SIR_DO_NOT_DELETE_19c_ALL_FILES/
19.15 Patches/ DBSOFT/ Oracle12.2/
19C_RPMS/ ip/ VMWare17/
ASMLBRPM/ openfileresa-2.99.1-x86_64-disc1/
[oracle@rac1 ~]$ cd /mnt/hgfs/MEHTAB_SIR_DO_NOT_DELETE_19c_ALL_FILES/DBSOFT/
[oracle@rac1 DBSOFT]$ ls
19C_RPMS.zip LINUX.X64_193000_db_home.zip OEL76-64bit.iso.zip root.sh runInstaller
ASMLBRPM.zip LINUX.X64_193000_grid_home.zip openfileresa-2.99.1-x86_64-disc1.zip root.sh.old schagent.conf
env.ora OEL76-64bit.iso.iso Oracle19c_Installation_Revised.zip root.sh.old.1 VMWare17.zip
[oracle@rac1 DBSOFT]$ unzip LINUX.X64_193000_db_home.zip -d /oracle/app/product/19.0.0/dbhome_1
/oracle/app/product/19.0.0/dbhome_1/lib/libclntsh.so.10.1 -> libclntsh.so
/oracle/app/product/19.0.0/dbhome_1/lib/libclntsh.so.11.1 -> libclntsh.so
/oracle/app/product/19.0.0/dbhome_1/lib/libclntsh.so.12.1 -> libclntsh.so
/oracle/app/product/19.0.0/dbhome_1/lib/libclntsh.so.18.1 -> libclntsh.so
/oracle/app/product/19.0.0/dbhome_1/precomp/public/SQLCA.H -> sqlca.h
/oracle/app/product/19.0.0/dbhome_1/precomp/public/SQLDA.H -> sqlda.h
/oracle/app/product/19.0.0/dbhome_1/precomp/public/ORACA.H -> oraca.h
/oracle/app/product/19.0.0/dbhome_1/precomp/public/SQLCA.COB -> sqlca.cob
/oracle/app/product/19.0.0/dbhome_1/precomp/public/ORACA.COB -> oraca.cob
/oracle/app/product/19.0.0/dbhome_1/javavm/admin/classes.bin -> ../../javavm/jdk/jdk8/admin/classes.bin
/oracle/app/product/19.0.0/dbhome_1/javavm/admin/libjtcjt.so -> ../../javavm/jdk/jdk8/admin/libjtcjt.so
/oracle/app/product/19.0.0/dbhome_1/jdk/jre/bin/ControlPanel -> jcontrol
/oracle/app/product/19.0.0/dbhome_1/javavm/admin/lfclasses.bin -> ../../javavm/jdk/jdk8/admin/lfclasses.bin
/oracle/app/product/19.0.0/dbhome_1/javavm/lib/security/cacerts -> ../../../javavm/jdk/jdk8/lib/security/cacerts
/oracle/app/product/19.0.0/dbhome_1/javavm/lib/sunjce_provider.jar -> ../../javavm/jdk/jdk8/lib/sunjce_provider.jar
/oracle/app/product/19.0.0/dbhome_1/javavm/lib/security/README.txt ->
../../../javavm/jdk/jdk8/lib/security/README.txt
/oracle/app/product/19.0.0/dbhome_1/javavm/lib/security/java.security ->
../../../javavm/jdk/jdk8/lib/security/java.security
/oracle/app/product/19.0.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so -> ../libjsig.so
[oracle@rac1 DBSOFT]$
[oracle@rac1 DBSOFT]$
[oracle@rac1 DBSOFT]$
[oracle@rac1 DBSOFT]$ export DISPLAY=:0
[oracle@rac1 DBSOFT]$ cd /oracle/app/product/19.0.0/dbhome_1/
[oracle@rac1 dbhome_1]$ ls
addnode crs dbjava dmu hs jdbc md olap ords plsql rdbms runInstaller sqlj ucp
apex css dbs drdaas install jdk mgw OPatch oss precomp relnotes schagent.conf sqlpatch usm
assistants ctx deinstall dv instantclient jlib network opmn oui QOpatch root.sh sdk sqlplus utl
bin cv demo env.ora inventory ldap nls oracore owm R root.sh.old slax srvm wwg
clone data diagnostics has javavm lib odbc ord perl racg root.sh.old.1 sqldeveloper suptools xdk
[oracle@rac1 dbhome_1]$
[oracle@rac1 dbhome_1]$ ./runInstaller
Database creation::
echo $ORACLE_SID
sqlplus / as sysdba
Select instance_name,status,host_name from v$instance; → this will show the local instance details
Select instance_name,status,host_name from gv$instance; → this will show the global (all-instance) details

[oracle@rac1 ~]$ echo $ORACLE_SID


prod1
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Mar 26 04:11:11 2024


Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> Select instance_name,status,host_name from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
HOST_NAME
----------------------------------------------------------------
prod1 OPEN
rac1.dba.com

SQL> set lines 200


SQL> /

INSTANCE_NAME STATUS HOST_NAME


---------------- ------------ ----------------------------------------------------------------
prod1 OPEN rac1.dba.com

SQL> Select instance_name,status,host_name from gv$instance;

INSTANCE_NAME STATUS HOST_NAME


---------------- ------------ ----------------------------------------------------------------
prod1 OPEN rac1.dba.com
prod2 OPEN rac2.dba.com

SQL>

srvctl config database → this lists the databases configured in the cluster
prod
srvctl config database -d prod → this shows the cluster configuration of a particular database
Database unique name: prod
Database name: prod
Oracle home: /oracle/app/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/PROD/PARAMETERFILE/spfile.268.1164600257
Password file: +DATA/PROD/PASSWORD/pwdprod.256.1164599509
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: prod1,prod2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

srvctl status database -d prod → this shows the database status per instance.
Instance prod1 is running on node rac1
Instance prod2 is running on node rac2

srvctl status database -d prod -v → this shows the database status per instance along with the open mode.
Instance prod1 is running on node rac1. Instance status: Open.
Instance prod2 is running on node rac2. Instance status: Open.
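A few related srvctl checks that are commonly run at this point (a sketch using standard srvctl options):

srvctl status asm → ASM instance status on each node
srvctl status nodeapps → VIP, network and ONS status per node
srvctl status scan_listener → status of the SCAN listener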

How to stop cluster and DB.

Node2: init 0
Node1: init 0
SAN : init 0
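init 0 simply powers the machines off and relies on the shutdown scripts to stop the stack. A cleaner, explicit sequence (a sketch using standard srvctl/crsctl commands) would be:

[oracle@rac1 ~]$ srvctl stop database -d prod → stops both prod instances
[root@rac1 ~]# /grid/app/grid/19.0.0/bin/crsctl stop crs → stops the clusterware on rac1 (repeat on rac2)
Then power off the nodes and finally the SAN.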
