Cluster Configuration


GPFI

GEOUAT

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-55 sun4v sparc sun4v
Servers : ch0bl7orcuat11 & ch1bl7orcuat12

[oracle@ch0bl7orcuat11]# oifcfg iflist


vnet0 10.25.20.0
vnet2 192.168.204.0
vnet2 169.254.0.0
vnet3 192.168.204.0
vnet3 169.254.128.0

[oracle@ch0bl7orcuat11]# oifcfg getif


vnet0 10.25.20.0 global public
vnet2 192.168.204.0 global cluster_interconnect
vnet3 192.168.204.0 global cluster_interconnect

[oracle@ch0bl7orcuat11]# srvctl config nodeapps


Network exists: 1/10.25.20.0/255.255.252.0/vnet0, type static
VIP exists: /orcuat11-vip/10.25.23.169/10.25.20.0/255.255.252.0/vnet0, hosting node ch0bl7orcuat11
VIP exists: /orcuat12-vip/10.25.23.170/10.25.20.0/255.255.252.0/vnet0, hosting node ch1bl7orcuat12
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl7orcuat11]# srvctl config scan


SCAN name: orcuat10-scan, Network: 1/10.25.20.0/255.255.252.0/vnet0
SCAN VIP name: scan1, IP: /orcuat10-scan/10.25.23.171
SCAN VIP name: scan2, IP: /orcuat10-scan/10.25.23.178
SCAN VIP name: scan3, IP: /orcuat10-scan/10.25.23.177

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch0bl7orcuat11 -A 10.25.23.169/255.255.252.0/vnet0\|vnet1
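
If the interface change also needs to be reflected on the second node's VIP, the matching command for ch1bl7orcuat12 (VIP address taken from the nodeapps output above) would be:

srvctl modify nodeapps -n ch1bl7orcuat12 -A 10.25.23.170/255.255.252.0/vnet0\|vnet1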

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GEOSIT

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-55 sun4v sparc sun4v
Servers : orcqa11 & orcqa12

[oracle@ch0bl7orcqa11]# oifcfg iflist


vnet0 10.25.20.0
vnet6 192.168.204.0
vnet6 169.254.0.0

[oracle@ch0bl7orcqa11]# oifcfg getif


vnet0 10.25.20.0 global public
vnet5 10.25.20.0 global public
vnet6 192.168.204.0 global cluster_interconnect
vnet7 192.168.204.0 global cluster_interconnect

[oracle@ch0bl7orcqa11]# srvctl config nodeapps


Network exists: 1/10.25.20.0/255.255.252.0/vnet0:vnet5, type static
VIP exists: /orcqa11-vip/10.25.23.139/10.25.20.0/255.255.252.0/vnet0:vnet5, hosting node ch0bl7orcqa11
VIP exists: /orcqa12-vip/10.25.23.140/10.25.20.0/255.255.252.0/vnet0:vnet5, hosting node ch1bl7orcqa12
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl7orcqa11]# srvctl config scan


SCAN name: orcqa10-scan, Network: 1/10.25.20.0/255.255.252.0/vnet0:vnet5
SCAN VIP name: scan1, IP: /orcqa10-scan/10.25.23.176
SCAN VIP name: scan2, IP: /orcqa10-scan/10.25.23.174
SCAN VIP name: scan3, IP: /orcqa10-scan/10.25.23.175
GEOPERF

Database version : 11.2.0.4.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcperf11e & orcperf12e

[oracle@ch5bl0orcperf11e]# oifcfg iflist


net2 192.168.206.0
net2 169.254.0.0
net3 192.168.207.0
net3 169.254.128.0
ipmp0 10.27.71.0

[oracle@ch5bl0orcperf11e]# oifcfg getif


ipmp0 10.27.71.0 global public
net2 192.168.206.0 global cluster_interconnect
net3 192.168.207.0 global cluster_interconnect

[oracle@ch5bl0orcperf11e]# srvctl config nodeapps


Network exists: 1/10.27.71.0/255.255.255.0/ipmp0, type static
VIP exists: /orcperf11e-vip/10.27.71.158/10.27.71.0/255.255.255.0/ipmp0, hosting node ch5bl0orcperf11e
VIP exists: /orcperf12e-vip/10.27.71.159/10.27.71.0/255.255.255.0/ipmp0, hosting node ch5bl1orcperf12e
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch5bl0orcperf11e]# srvctl config scan


SCAN name: orcperf10e-scan, Network: 1/10.27.71.0/255.255.255.0/ipmp0
SCAN VIP name: scan1, IP: /orcperf10e-scan/10.27.71.155
SCAN VIP name: scan2, IP: /orcperf10e-scan/10.27.71.157
SCAN VIP name: scan3, IP: /orcperf10e-scan/10.27.71.156
GEODEMO - Primary

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcdemo11w & orcdemo12w

[oracle@ch2bl1orcdemo11w]# oifcfg iflist


vnet0 10.50.7.0
vnet2 192.168.205.0
vnet2 169.254.0.0
vnet3 192.168.205.0
vnet3 169.254.128.0

[oracle@ch2bl1orcdemo11w]# oifcfg getif


vnet0 10.50.7.0 global public
vnet2 192.168.205.0 global cluster_interconnect
vnet3 192.168.205.0 global cluster_interconnect
vnet1 10.50.7.0 global public

[oracle@ch2bl1orcdemo11w]# srvctl config nodeapps


Network exists: 1/10.50.7.0/255.255.255.0/vnet0, type static
VIP exists: /orcdemo11w-vip/10.50.7.238/10.50.7.0/255.255.255.0/vnet0, hosting node ch2bl1orcdemo11w
VIP exists: /orcdemo12w-vip/10.50.7.239/10.50.7.0/255.255.255.0/vnet0, hosting node ch3bl1orcdemo12w
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch2bl1orcdemo11w]# srvctl config scan


SCAN name: orcdemo10w-scan, Network: 1/10.50.7.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcdemo10w-scan/10.50.7.252
SCAN VIP name: scan2, IP: /orcdemo10w-scan/10.50.7.240
SCAN VIP name: scan3, IP: /orcdemo10w-scan/10.50.7.253

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch2bl1orcdemo11w -A 10.50.7.238/255.255.255.0/vnet0\|vnet1
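
If the second node's VIP should also span both interfaces, the matching command for ch3bl1orcdemo12w (VIP 10.50.7.239 from the nodeapps output above) would be:

srvctl modify nodeapps -n ch3bl1orcdemo12w -A 10.50.7.239/255.255.255.0/vnet0\|vnet1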

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GEODEMO – Standby

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcdemo11e & orcdemo12e

[oracle@ch0bl3orcdemo11e]# oifcfg iflist


vnet0 10.27.2.0
vnet2 192.168.206.0
vnet2 169.254.0.0
vnet3 192.168.206.0
vnet3 169.254.128.0

[oracle@ch0bl3orcdemo11e]# oifcfg getif


vnet0 10.27.2.0 global public
vnet2 192.168.206.0 global cluster_interconnect
vnet3 192.168.206.0 global cluster_interconnect

[oracle@ch0bl3orcdemo11e]# srvctl config nodeapps


Network exists: 1/10.27.2.0/255.255.255.0/vnet0:vnet1, type static
VIP exists: /orcdemo11e-vip/10.27.2.238/10.27.2.0/255.255.255.0/vnet0:vnet1, hosting node ch0bl3orcdemo11e
VIP exists: /orcdemo12e-vip/10.27.2.239/10.27.2.0/255.255.255.0/vnet0:vnet1, hosting node ch1bl3orcdemo12e
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl3orcdemo11e]# srvctl config scan


SCAN name: orcdemo10e-scan, Network: 1/10.27.2.0/255.255.255.0/vnet0:vnet1
SCAN VIP name: scan1, IP: /orcdemo10e-scan/10.27.2.253
SCAN VIP name: scan2, IP: /orcdemo10e-scan/10.27.2.252
SCAN VIP name: scan3, IP: /orcdemo10e-scan/10.27.2.240
GEOPROD - Primary

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : ch2bl1orcprod11w & ch3bl1orcprod12w

[oracle@ch2bl1orcprod11w]# oifcfg iflist


vnet0 10.50.7.0
vnet2 192.168.205.0
vnet2 169.254.0.0
vnet3 192.168.205.0
vnet3 169.254.128.0

[oracle@ch2bl1orcprod11w]# oifcfg getif


vnet0 10.50.7.0 global public
vnet2 192.168.205.0 global cluster_interconnect
vnet3 192.168.205.0 global cluster_interconnect

[oracle@ch2bl1orcprod11w]# srvctl config nodeapps


Network exists: 1/10.50.7.0/255.255.255.0/vnet0, type static
VIP exists: /orcprod11w-vip/10.50.7.241/10.50.7.0/255.255.255.0/vnet0, hosting node ch2bl1orcprod11w
VIP exists: /orcprod12w-vip/10.50.7.242/10.50.7.0/255.255.255.0/vnet0, hosting node ch3bl1orcprod12w
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch2bl1orcprod11w]# srvctl config scan


SCAN name: orcprod10w-scan, Network: 1/10.50.7.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcprod10w-scan/10.50.7.237
SCAN VIP name: scan2, IP: /orcprod10w-scan/10.50.7.243
SCAN VIP name: scan3, IP: /orcprod10w-scan/10.50.7.236

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch2bl1orcprod11w -A 10.50.7.241/255.255.255.0/vnet0\|vnet1
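
The corresponding command for the second node, ch3bl1orcprod12w (VIP 10.50.7.242 per the nodeapps output above), if required:

srvctl modify nodeapps -n ch3bl1orcprod12w -A 10.50.7.242/255.255.255.0/vnet0\|vnet1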

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GEOPROD – Standby

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcprod11e & orcprod12e

[oracle@ch0bl3orcprod11e]# oifcfg iflist


vnet0 10.27.2.0
vnet2 192.168.206.0
vnet2 169.254.0.0
vnet3 192.168.206.0
vnet3 169.254.128.0

[oracle@ch0bl3orcprod11e]# oifcfg getif


vnet0 10.27.2.0 global public
vnet2 192.168.206.0 global cluster_interconnect
vnet3 192.168.206.0 global cluster_interconnect

[oracle@ch0bl3orcprod11e]# srvctl config nodeapps


Network exists: 1/10.27.2.0/255.255.255.0/vnet0, type static
VIP exists: /orcprod11e-vip/10.27.2.241/10.27.2.0/255.255.255.0/vnet0, hosting node ch0bl3orcprod11e
VIP exists: /orcprod12e-vip/10.27.2.242/10.27.2.0/255.255.255.0/vnet0, hosting node ch1bl3orcprod12e
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl3orcprod11e]# srvctl config scan


SCAN name: orcprod10e-scan, Network: 1/10.27.2.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcprod10e-scan/10.27.2.236
SCAN VIP name: scan2, IP: /orcprod10e-scan/10.27.2.237
SCAN VIP name: scan3, IP: /orcprod10e-scan/10.27.2.243

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch0bl3orcprod11e -A 10.27.2.241/255.255.255.0/vnet0\|vnet1
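
If the change is also required for the second node, ch1bl3orcprod12e (VIP 10.27.2.242 per the nodeapps output above):

srvctl modify nodeapps -n ch1bl3orcprod12e -A 10.27.2.242/255.255.255.0/vnet0\|vnet1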

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
RDP and GPA

RDPQA

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-55 sun4v sparc sun4v
Servers : orcqa21 & orcqa22

[oracle@ch0bl7orcqa21]# oifcfg iflist


vnet0 10.25.20.0
vnet2 192.168.204.0
vnet2 169.254.0.0
vnet3 192.168.204.0
vnet3 169.254.128.0

[oracle@ch0bl7orcqa21]# oifcfg getif


vnet0 10.25.20.0 global public
vnet2 192.168.204.0 global cluster_interconnect
vnet3 192.168.204.0 global cluster_interconnect

[oracle@ch0bl7orcqa21]# srvctl config nodeapps


Network exists: 1/10.25.20.0/255.255.252.0/vnet0, type static
VIP exists: /orcqa21-vip/10.25.23.193/10.25.20.0/255.255.252.0/vnet0, hosting node ch0bl7orcqa21
VIP exists: /orcqa22-vip/10.25.23.194/10.25.20.0/255.255.252.0/vnet0, hosting node ch1bl7orcqa22
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl7orcqa21]# srvctl config scan


SCAN name: orcqa20-scan, Network: 1/10.25.20.0/255.255.252.0/vnet0
SCAN VIP name: scan1, IP: /orcqa20-scan/10.25.23.197
SCAN VIP name: scan2, IP: /orcqa20-scan/10.25.23.196
SCAN VIP name: scan3, IP: /orcqa20-scan/10.25.23.195

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch0bl7orcqa21 -A 10.25.23.193/255.255.252.0/vnet0\|vnet1
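
The equivalent command for the second node, ch1bl7orcqa22 (VIP 10.25.23.194 from the nodeapps output above), if its VIP also needs both interfaces:

srvctl modify nodeapps -n ch1bl7orcqa22 -A 10.25.23.194/255.255.252.0/vnet0\|vnet1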

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
RDPSTAGE

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-55 sun4v sparc sun4v
Servers : orcuat21 & orcuat22

[oracle@ch0bl7orcuat21]# oifcfg iflist


vnet0 10.25.20.0
vnet2 192.168.204.0
vnet2 169.254.0.0
vnet3 192.168.204.0
vnet3 169.254.128.0

[oracle@ch0bl7orcuat21]# oifcfg getif


vnet0 10.25.20.0 global public
vnet2 192.168.204.0 global cluster_interconnect
vnet3 192.168.204.0 global cluster_interconnect

[oracle@ch0bl7orcuat21]# srvctl config nodeapps


Network exists: 1/10.25.20.0/255.255.252.0/vnet0, type static
VIP exists: /orcuat21-vip/10.25.23.207/10.25.20.0/255.255.252.0/vnet0, hosting node ch0bl7orcuat21
VIP exists: /orcuat22-vip/10.25.23.208/10.25.20.0/255.255.252.0/vnet0, hosting node ch1bl7orcuat22
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl7orcuat21]# srvctl config scan


SCAN name: orcuat20-scan, Network: 1/10.25.20.0/255.255.252.0/vnet0
SCAN VIP name: scan1, IP: /orcuat20-scan/10.25.23.211
SCAN VIP name: scan2, IP: /orcuat20-scan/10.25.23.209
SCAN VIP name: scan3, IP: /orcuat20-scan/10.25.23.210

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch0bl7orcuat21 -A 10.25.23.207/255.255.252.0/vnet0\|vnet1
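
If needed for the second node as well, the matching command for ch1bl7orcuat22 (VIP 10.25.23.208 per the nodeapps output above):

srvctl modify nodeapps -n ch1bl7orcuat22 -A 10.25.23.208/255.255.252.0/vnet0\|vnet1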

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
RDPPROD – Primary

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcprod21e & orcprod22e

[oracle@ch0bl3orcprod21e]# oifcfg iflist


vnet0 10.27.2.0
vnet2 192.168.206.0
vnet2 169.254.0.0
vnet3 192.168.206.0
vnet3 169.254.128.0

[oracle@ch0bl3orcprod21e]# oifcfg getif


vnet0 10.27.2.0 global public
vnet2 192.168.206.0 global cluster_interconnect
vnet3 192.168.206.0 global cluster_interconnect

[oracle@ch0bl3orcprod21e]# srvctl config nodeapps


Network exists: 1/10.27.2.0/255.255.255.0/vnet0:vnet1, type static
VIP exists: /orcprod21e-vip/10.27.2.101/10.27.2.0/255.255.255.0/vnet0:vnet1, hosting node ch0bl3orcprod21e
VIP exists: /orcprod22e-vip/10.27.2.103/10.27.2.0/255.255.255.0/vnet0:vnet1, hosting node ch1bl3orcprod22e
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch0bl3orcprod21e]# srvctl config scan


SCAN name: orcprod20e-scan, Network: 1/10.27.2.0/255.255.255.0/vnet0:vnet1
SCAN VIP name: scan1, IP: /orcprod20e-scan/10.27.2.106
SCAN VIP name: scan2, IP: /orcprod20e-scan/10.27.2.104
SCAN VIP name: scan3, IP: /orcprod20e-scan/10.27.2.105
RDPPROD – Standby

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcprod21w & orcprod22w

[oracle@ch2bl1orcprod21w]# oifcfg iflist


vnet0 10.50.7.0
vnet2 192.168.205.0
vnet2 169.254.0.0
vnet3 192.168.205.0
vnet3 169.254.128.0

[oracle@ch2bl1orcprod21w]# oifcfg getif


vnet0 10.50.7.0 global public
vnet2 192.168.205.0 global cluster_interconnect
vnet3 192.168.205.0 global cluster_interconnect

[oracle@ch2bl1orcprod21w]# srvctl config nodeapps


Network exists: 1/10.50.7.0/255.255.255.0/vnet0, type static
VIP exists: /orcprod21w-vip/10.50.7.101/10.50.7.0/255.255.255.0/vnet0, hosting node ch2bl1orcprod21w
VIP exists: /orcprod22w-vip/10.50.7.103/10.50.7.0/255.255.255.0/vnet0, hosting node ch3bl1orcprod22w
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch2bl1orcprod21w]# srvctl config scan


SCAN name: orcprod20w-scan, Network: 1/10.50.7.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcprod20w-scan/10.50.7.106
SCAN VIP name: scan2, IP: /orcprod20w-scan/10.50.7.105
SCAN VIP name: scan3, IP: /orcprod20w-scan/10.50.7.104

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch2bl1orcprod21w -A 10.50.7.101/255.255.255.0/vnet0\|vnet1
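
The corresponding command for ch3bl1orcprod22w (VIP 10.50.7.103 from the nodeapps output above), should the second node's VIP also require both interfaces:

srvctl modify nodeapps -n ch3bl1orcprod22w -A 10.50.7.103/255.255.255.0/vnet0\|vnet1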

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GPAPROD - Primary

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcprod51w & orcprod52w

[oracle@ch2bl8orcprod51w]# oifcfg iflist


vnet0 10.50.7.0
vnet2 192.168.205.0
vnet2 169.254.0.0
vnet3 192.168.208.0
vnet3 169.254.128.0

[oracle@ch2bl8orcprod51w]# oifcfg getif


vnet0 10.50.7.0 global public
vnet2 192.168.205.0 global cluster_interconnect
vnet3 192.168.208.0 global cluster_interconnect

[oracle@ch2bl8orcprod51w]# srvctl config nodeapps


Network exists: 1/10.50.7.0/255.255.255.0/vnet0, type static
VIP exists: /orcprod51w-vip/10.50.7.185/10.50.7.0/255.255.255.0/vnet0, hosting node ch2bl8orcprod51w
VIP exists: /orcprod52w-vip/10.50.7.187/10.50.7.0/255.255.255.0/vnet0, hosting node ch3bl8orcprod52w
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch2bl8orcprod51w]# srvctl config scan


SCAN name: orcprod50w-scan, Network: 1/10.50.7.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcprod50w-scan/10.50.7.175
SCAN VIP name: scan2, IP: /orcprod50w-scan/10.50.7.174
SCAN VIP name: scan3, IP: /orcprod50w-scan/10.50.7.164

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch2bl8orcprod51w -A 10.50.7.185/255.255.255.0/vnet0\|vnet1
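
If the second node also needs the change, the matching command for ch3bl8orcprod52w (VIP 10.50.7.187 per the nodeapps output above):

srvctl modify nodeapps -n ch3bl8orcprod52w -A 10.50.7.187/255.255.255.0/vnet0\|vnet1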

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GPAPROD - Standby

Database version : 11.2.0.4.0


OS version : 5.10 Generic_150400-48 sun4v sparc sun4v
Servers : orcprod51e & orcprod52e

[oracle@ch2bl3orcprod51e]# oifcfg iflist


vnet0 10.27.2.0
vnet2 192.168.206.0
vnet2 169.254.0.0
vnet3 192.168.207.0
vnet3 169.254.128.0

[oracle@ch2bl3orcprod51e]# oifcfg getif


vnet0 10.27.2.0 global public
vnet2 192.168.206.0 global cluster_interconnect
vnet3 192.168.207.0 global cluster_interconnect

[oracle@ch2bl3orcprod51e]# srvctl config nodeapps


Network exists: 1/10.27.2.0/255.255.255.0/vnet0, type static
VIP exists: /orcprod51e-vip/10.27.2.60/10.27.2.0/255.255.255.0/vnet0, hosting node ch2bl3orcprod51e
VIP exists: /orcprod52e-vip/10.27.2.64/10.27.2.0/255.255.255.0/vnet0, hosting node ch3bl3orcprod52e
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@ch2bl3orcprod51e]# srvctl config scan


SCAN name: orcprod50e-scan, Network: 1/10.27.2.0/255.255.255.0/vnet0
SCAN VIP name: scan1, IP: /orcprod50e-scan/10.27.2.88
SCAN VIP name: scan2, IP: /orcprod50e-scan/10.27.2.68
SCAN VIP name: scan3, IP: /orcprod50e-scan/10.27.2.89

Solution:

Step 1: Stop Oracle components


$ srvctl stop listener
$ srvctl stop cvu
$ srvctl stop scan_listener
$ srvctl stop scan

Step 2: Modify nodeapps


Log in as the root user and run:
srvctl modify nodeapps -n ch2bl3orcprod51e -A 10.27.2.60/255.255.255.0/vnet0\|vnet1
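
The equivalent command for the second node, ch3bl3orcprod52e (VIP 10.27.2.64 from the nodeapps output above), if required:

srvctl modify nodeapps -n ch3bl3orcprod52e -A 10.27.2.64/255.255.255.0/vnet0\|vnet1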

Step 3: Validate nodeapps


$ srvctl config nodeapps
$ srvctl config scan

Step 4: Start stopped components


$ srvctl start scan
$ srvctl start scan_listener
$ srvctl start cvu
$ srvctl start listener
GPG

GPGUAT & QA

Database version : 12.1.0.2.0


OS version : 5.11 11.2 sun4v sparc sun4v
Servers : orcuat71 & orcuat72

[oracle@ch0bl9orcuat71]# oifcfg iflist


ipmp0 10.25.20.0
net2 192.168.208.0
net2 169.254.0.0
net3 192.168.209.0
net3 169.254.128.0

[oracle@ch0bl9orcuat71]# oifcfg getif


ipmp0 10.25.20.0 global public
net2 192.168.208.0 global cluster_interconnect,asm
net3 192.168.209.0 global cluster_interconnect,asm

[oracle@ch0bl9orcuat71]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.25.20.0/255.255.252.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ch0bl9orcuat71
VIP Name: orcuat71-vip.cctdev.com
VIP IPv4 Address: 10.25.23.25
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ch1bl9orcuat72
VIP Name: orcuat72-vip.cctdev.com
VIP IPv4 Address: 10.25.23.26
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@ch0bl9orcuat71]# srvctl config scan


SCAN name: orcuat70-scan, Network: 1
Subnet IPv4: 10.25.20.0/255.255.252.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.25.23.36
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.25.23.35
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.25.23.38
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
GPG PERF

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcperf71e & orcperf72e

[oracle@ch5bl0orcperf71e]# oifcfg iflist


net2 192.168.206.0
net2 169.254.0.0
net3 192.168.207.0
net3 169.254.128.0
ipmp0 10.27.71.0

[oracle@ch5bl0orcperf71e]# oifcfg getif


ipmp0 10.27.71.0 global public
net2 192.168.206.0 global cluster_interconnect,asm
net3 192.168.207.0 global cluster_interconnect,asm

[oracle@ch5bl0orcperf71e]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.27.71.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl0orcperf71e
VIP Name: orcperf71e-vip.cctdev.com
VIP IPv4 Address: 10.27.71.128
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl1orcperf72e
VIP Name: orcperf72e-vip.cctdev.com
VIP IPv4 Address: 10.27.71.127
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@ch5bl0orcperf71e]# srvctl config scan


SCAN name: orcperf70e-scan, Network: 1
Subnet IPv4: 10.27.71.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.27.71.129
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.27.71.135
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.27.71.134
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
GPGPROD – Primary

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcprod71e & orcprod72e

[oracle@orcprod71e]# oifcfg iflist


ipmp0 10.27.2.0
net701002 192.168.212.0
net701002 169.254.0.0
net702003 192.168.213.0
net702003 169.254.128.0

[oracle@orcprod71e]# oifcfg getif


ipmp0 10.27.2.0 global public
net701002 192.168.212.0 global cluster_interconnect,asm
net702003 192.168.213.0 global cluster_interconnect,asm

[oracle@orcprod71e]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.27.2.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod71e
VIP Name: orcprod71e-vip.cct.ri.com
VIP IPv4 Address: 10.27.2.148
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod72e
VIP Name: orcprod72e-vip.cct.ri.com
VIP IPv4 Address: 10.27.2.177
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@orcprod71e]# srvctl config scan


SCAN name: orcprod70e-scan, Network: 1
Subnet IPv4: 10.27.2.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.27.2.184
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.27.2.183
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.27.2.193
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
GPGPROD – Standby

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcprod71w & orcprod72w

[oracle@orcprod71w]# oifcfg iflist


net701004 192.168.210.0
net701004 169.254.0.0
net702005 192.168.211.0
net702005 169.254.128.0
ipmp0 10.50.7.0

[oracle@orcprod71w]# oifcfg getif


ipmp0 10.50.7.0 global public
net701004 192.168.210.0 global cluster_interconnect,asm
net702005 192.168.211.0 global cluster_interconnect,asm

[oracle@orcprod71w]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.50.7.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod71w
VIP Name: orcprod71w-vip.cct.ri.com
VIP IPv4 Address: 10.50.7.84
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod72w
VIP Name: orcprod72w-vip.cct.ri.com
VIP IPv4 Address: 10.50.7.85
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@orcprod71w]# srvctl config scan


SCAN name: orcprod70w-scan, Network: 1
Subnet IPv4: 10.50.7.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.50.7.86
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.50.7.87
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.50.7.88
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
TIBCO

TIBCO PERF

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcperf61 & orcperf62

[oracle@ch5bl0orcperf61]# oifcfg iflist


ipmp0 10.27.72.0
net2 192.168.206.0
net2 169.254.0.0
net3 192.168.207.0
net3 169.254.128.0

[oracle@ch5bl0orcperf61]# oifcfg getif


ipmp0 10.27.72.0 global public
net2 192.168.206.0 global cluster_interconnect,asm
net3 192.168.207.0 global cluster_interconnect,asm

[oracle@ch5bl0orcperf61]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.27.72.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl0orcperf61
VIP Name: orcperf61-vip.cctdev.com
VIP IPv4 Address: 10.27.72.64
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl1orcperf62
VIP Name: orcperf62-vip.cctdev.com
VIP IPv4 Address: 10.27.72.65
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@ch5bl0orcperf61]# srvctl config scan


SCAN name: orcperf60-scan, Network: 1
Subnet IPv4: 10.27.72.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.27.72.63
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.27.72.61
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.27.72.62
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
TIBCO UAT

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcuat61 & orcuat62

[oracle@ch5bl0orcuat61]# oifcfg iflist


ipmp0 10.27.71.0
net2 192.168.206.0
net2 169.254.0.0
net3 192.168.207.0
net3 169.254.128.0

[oracle@ch5bl0orcuat61]# oifcfg getif


ipmp0 10.27.71.0 global public
net2 192.168.206.0 global cluster_interconnect,asm
net3 192.168.207.0 global cluster_interconnect,asm

[oracle@ch5bl0orcuat61]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.27.71.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl0orcuat61
VIP Name: orcuat61-vip.cctdev.com
VIP IPv4 Address: 10.27.71.64
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node ch5bl1orcuat62
VIP Name: orcuat62-vip.cctdev.com
VIP IPv4 Address: 10.27.71.65
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@ch5bl0orcuat61]# srvctl config scan


SCAN name: orcuat60-scan, Network: 1
Subnet IPv4: 10.27.71.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.27.71.62
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.27.71.63
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.27.71.61
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
TIBCO PROD – Primary

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcprod61e & orcprod62e

[oracle@orcprod61e]# oifcfg iflist


ipmp0 10.27.2.0
net701002 192.168.212.0
net701002 169.254.0.0
net702003 192.168.213.0
net702003 169.254.128.0

[oracle@orcprod61e]# oifcfg getif


ipmp0 10.27.2.0 global public
net701002 192.168.212.0 global cluster_interconnect,asm
net702003 192.168.213.0 global cluster_interconnect,asm

[oracle@orcprod61e]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.27.2.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod61e
VIP Name: orcprod61e-vip.cct.ri.com
VIP IPv4 Address: 10.27.2.66
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod62e
VIP Name: orcprod62e-vip.cct.ri.com
VIP IPv4 Address: 10.27.2.65
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@orcprod61e]# srvctl config scan


SCAN name: orcprod60e-scan, Network: 1
Subnet IPv4: 10.27.2.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.27.2.62
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.27.2.61
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.27.2.69
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
TIBCO PROD – Standby

Database version : 12.1.0.2.0


OS version : 5.11 11.3 sun4v sparc sun4v
Servers : orcprod61w & orcprod62w

[oracle@orcprod61w]# oifcfg iflist


ipmp0 10.26.1.0
net701002 192.168.210.0
net701002 169.254.0.0
net702003 192.168.211.0
net702003 169.254.128.0

[oracle@orcprod61w]# oifcfg getif


ipmp0 10.26.1.0 global public
net701002 192.168.210.0 global cluster_interconnect,asm
net702003 192.168.211.0 global cluster_interconnect,asm

[oracle@orcprod61w]# srvctl config nodeapps


Network 1 exists
Subnet IPv4: 10.26.1.0/255.255.255.0/ipmp0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod61w
VIP Name: orcprod61w-vip.cct.ri.com
VIP IPv4 Address: 10.26.1.154
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node orcprod62w
VIP Name: orcprod62w-vip.cct.ri.com
VIP IPv4 Address: 10.26.1.156
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@orcprod61w]# srvctl config scan


SCAN name: orcprod60w-scan, Network: 1
Subnet IPv4: 10.26.1.0/255.255.255.0/ipmp0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.26.1.163
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: 10.26.1.166
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.26.1.169
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
ACCUITY

APPPROD - Primary

Database version : 10.2.0.5.0


Databases : APPPROD & GPADEMODR
OS version : 5.10 Generic_141445-09 i86pc i386 i86pc
Servers : ch0bl1racdb1e.cct.ri.com & ch1bl1racdb2e.cct.ri.com

[oracle@ch0bl1racdb1e]$ oifcfg iflist


aggr1 10.27.2.0
aggr2 192.168.206.0

[oracle@ch0bl1racdb1e]$ oifcfg getif


aggr1 10.27.2.0 global public
aggr2 192.168.206.0 global cluster_interconnect

GPADEMO – Primary

Database version : 10.2.0.5.0


OS version : 5.10 Generic_147441-27 i86pc i386 i86pc
Databases : GPADEMOW & APPPRODW
Servers : ch2bl3racdb3w.cct.ri.com & ch3bl3racdb3w.cct.ri.com

[oracle@ch2bl3racdb3w]# oifcfg iflist


e1000g0 192.168.205.0
e1000g3 192.168.205.0
ixgbe0 10.50.7.0

[oracle@ch2bl3racdb3w]# oifcfg getif


e1000g0 192.168.205.0 global cluster_interconnect
e1000g3 192.168.205.0 global cluster_interconnect
ixgbe0 10.50.7.0 global public
APPDBQA

Database version : 10.2.0.5.0


OS version : 5.10 Generic_141445-09 i86pc i386 i86pc
Databases : APPDBQA, APPSTAGE, GPAQA
Servers : ch0bl3racdb1qastage.cctdev.com & ch1bl3racdb2qastage.cctdev.com

[oracle@ch0bl3racdb1qastage]$ oifcfg iflist


aggr1 10.25.20.0
aggr2 192.168.204.0

[oracle@ch0bl3racdb1qastage]$ oifcfg getif


aggr1 10.0.0.0 global public
aggr2 192.168.204.0 global cluster_interconnect
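
Note that oifcfg getif reports aggr1 on subnet 10.0.0.0 while oifcfg iflist reports 10.25.20.0. If the 10.0.0.0 registration is in fact stale, it could be re-registered with oifcfg; a sketch, assuming the intended public subnet is 10.25.20.0 as shown by iflist:

$ oifcfg delif -global aggr1/10.0.0.0
$ oifcfg setif -global aggr1/10.25.20.0:public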
