Step by Step Deleting Node in Oracle RAC (12c Release 1) Environment
2 January 2019
Check the status of the running database instances :
[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
Checking swap space: must be greater than 500 MB. Actual 5869 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
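The swap-space check and the "'UpdateNodeList' was successful." message above are typical output of an inventory update with runInstaller -updateNodeList. A hedged sketch of the command at this stage, run from the node being removed, assuming $ORACLE_HOME points at the database home and using the node names of this environment:

[oracle@racpb3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb3}" -local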
Deinstall ORACLE_HOME :
Specify the “-local” flag so that only the local node’s software is removed.
[oracle@racpb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Specify the list of database names that are configured locally on this node for this
Oracle home. Local configurations of the discovered databases will be removed []: orcl11g
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7641.log
Oracle Configuration Manager check END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-
12-28_11-37-08-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7641.log
Oracle Configuration Manager clean END
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The
directory is in use by Oracle Home '/u01/app/12.1.0/grid'.
Clean install operation removing temporary directory '/tmp/deinstall2018-12-28_11-27-37PM'
on node 'racpb3'
Checking swap space: must be greater than 500 MB. Actual 5999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
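This second inventory update is normally run from one of the surviving nodes after the database home has been deinstalled on racpb3, so that the inventory there lists only the remaining nodes. A hedged sketch, assuming $ORACLE_HOME points at the database home on that node and using this environment's node names:

[oracle@racpb1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb1,racpb2}"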
Remove GRID_HOME :
If the node is pinned, run crsctl unpin css to unpin it before de-configuring the Grid Infrastructure, as shown below.
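A hedged sketch of checking the pin state and unpinning the node (olsnodes -s -t reports the Pinned/Unpinned state per node; crsctl unpin css must be run as root from a node where the clusterware stack is up):

[oracle@racpb1 ~]$ olsnodes -s -t
[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl unpin css -n racpb3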
[root@racpb3 ~]# /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.12.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node racpb1
VIP Name: racvr1
VIP IPv4 Address: 192.168.12.130
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node racpb2
VIP Name: racvr2
VIP IPv4 Address: 192.168.12.131
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node racpb3
VIP Name: racvr3
VIP IPv4 Address: 192.168.12.132
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on
'racpb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racpb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racpb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racpb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racpb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racpb3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racpb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.storage' on 'racpb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.storage' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.ctssd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racpb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racpb3'
CRS-2677: Stop of 'ora.cssd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racpb3'
CRS-2677: Stop of 'ora.gipcd' on 'racpb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racpb3' has
completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/12/29 00:13:32 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2018/12/29 00:14:03 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA)
Collector.
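With the clusterware de-configured on racpb3, the node is then deleted from the cluster definition from one of the remaining nodes as root. A hedged sketch, assuming the Grid home path used in this article:

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl delete node -n racpb3

After that, crsctl check cluster -all from a surviving node should report only racpb1 and racpb2: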
[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Update Inventory :
Checking swap space: must be greater than 500 MB. Actual 5980 MB Passed
The inventory pointer is located at /etc/oraInst.loc
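This inventory update for the Grid home is typically run from the node being removed, with CRS=TRUE and -local so that only its local inventory is changed. A hedged sketch, assuming the Grid home path shown in this article:

[oracle@racpb3 ~]$ /u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={racpb3}" CRS=TRUE -local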
Deinstall GRID_HOME :
[oracle@racpb3 ~]$ cd /u01/app/12.1.0/grid/deinstall
[oracle@racpb3 deinstall]$ ./deinstall -local
12-28_08-36-48-PM.log
De-configuring Oracle Restart enabled listener(s):
ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: ASMNET1LSNR_ASM
Stopping listener: ASMNET1LSNR_ASM
Deleting listener: ASMNET1LSNR_ASM
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: MGMTLSNR
Stopping listener: MGMTLSNR
Deleting listener: MGMTLSNR
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER
Stopping listener: LISTENER
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
Stopping listener: LISTENER_SCAN3
Deleting listener: LISTENER_SCAN3
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
Stopping listener: LISTENER_SCAN2
Deleting listener: LISTENER_SCAN2
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Deleting listener: LISTENER_SCAN1
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-15_28-33-16PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node :
Done
The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The
directory is not empty.
Checking swap space: must be greater than 500 MB. Actual 5997 MB Passed
The inventory pointer is located at /etc/oraInst.loc
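As with the database home, the Grid home inventory on the surviving nodes is updated once the deinstall on racpb3 completes; the swap-space check above is characteristic of that run. A hedged sketch from a remaining node, assuming the same Grid home path:

[oracle@racpb1 ~]$ /u01/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NODES={racpb1,racpb2}" CRS=TRUE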
Verify the integrity of the cluster after the node has been removed :
[oracle@racpb1 ~]$ cluvfy stage -post nodedel -n racpb3