
Copyright (c) 2019, Oracle. All rights reserved. Oracle Confidential.

Managing Raw Disks in AIX to use with Oracle ASM ( lkdev, rendev ) (Doc ID 1445870.1)

In this Document

Purpose
Scope
Details

APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.1 and later


Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
IBM AIX on POWER Systems (64-bit)
Checked for currency 05-Sep-2013

PURPOSE

This document describes an effective way to manage raw disks in AIX for use by Oracle ASM. The steps listed in this document also help prevent data corruption from accidental overwrites caused by user error related to the OS disk naming convention.

SCOPE

DETAILS

Introduction:

Traditionally, AIX storage devices were made available for use by assigning disk devices to Volume Groups (VGs) and then defining Logical
Volumes (LVs) in the VGs. When a disk is assigned to a VG, the Logical Volume Manager (LVM) writes information to the disk and to the AIX
Object Data Manager (ODM). Information on the disk identifies that the disk was assigned to the LVM; other information in the ODM
identifies which VG the disk belongs to. System components other than the LVM can use the same convention; for example, GPFS uses the
same area of the disk to identify disks assigned to it. AIX commands can use the identifying information on the disk or in the ODM to help
prevent a disk already in use from being reassigned to another use. The information can also be used to display details that help identify
disk usage. For example, the lspv command will display the VG name of disks that are assigned to a VG.
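
For instance, on a disk assigned to a VG, lspv shows the VG name in the third column (hypothetical output for a rootvg disk):

# lspv
hdisk0          00c1894c05e93592          rootvg          active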

When using Oracle ASM, which does its own disk management, the disk devices are typically assigned directly to the Oracle application and
are not managed by the LVM. The Oracle OCR and Voting disks are also commonly placed directly on storage devices that are not
managed by the LVM. In these cases the identifying information associated with LVM-managed disks is not present. AIX has
special functionality to help manage these disks and to help prevent an AIX administrator from inadvertently reassigning a disk already in
use by Oracle and corrupting the Oracle data.

Where possible, AIX commands that write to the LVM information block include special checking to determine whether the disk is already in use
by Oracle, to prevent these disks from being assigned to the LVM, which would corrupt the Oracle data. These
commands, whether checking is performed, and the AIX levels at which checking was added are listed in Table 1.

AIX Command              AIX 5.3          AIX 6.1   AIX 7.1   Description
mkvg                     TL07 or newer    All       All       Prevents reassigning a disk used by Oracle
extendvg                 TL07 or newer    All       All       Prevents reassigning a disk used by Oracle
chdev ... -a pv=yes      -                -         -         No checking
chdev ... -a pv=clear    -                -         -         No checking


Table 1 – AIX commands that write control information to the disk, and whether and when checking for Oracle disk signatures was added

Note in Table 1 that the chdev command with attribute pv=yes or pv=clear, which writes to the VGDA information block on the disk, does
not check for the Oracle disk signature. It is therefore extremely important that this command not be used on a disk which is
already being used by Oracle, as doing so could corrupt the Oracle data. To help prevent this, additional functionality was added to
AIX.

AIX 6.1 and AIX 7.1 LVM commands contain new functionality that can be used to better manage AIX devices used by Oracle. This new
functionality includes commands to better identify shared disks across multiple nodes, the ability to assign a meaningful name to a device,
and a locking mechanism that the system administrator can use when the disk is assigned to Oracle to help prevent the accidental reuse of a
disk at a later time. This new functionality is listed in Table 2 along with the minimum AIX level providing that functionality.

AIX Command   AIX 5.3   AIX 6.1   AIX 7.1   Description
lspv          N/A       TL07      TL01      New lspv command option "-u" provides additional identification and state information
lkdev         N/A       TL07      TL01      New command that locks a device so that any attempt to modify the device characteristics will fail
rendev        N/A       TL06      TL00      New command to rename a device

Table 2 – New AIX Commands Useful for Managing AIX Devices Used by Oracle ASM

The use of each of the commands in Table 2 is described in the sections below.

AIX lkdev Command:

The AIX lkdev command should be used by the system administrator when a disk is assigned to Oracle, to lock the disk device and prevent the
device from inadvertently being altered by a system administrator at a later time. The lkdev command locks the specified device so that any
attempt to modify the device attributes (chdev, chpath) or remove the device or one of its paths (rmdev, rmpath) will be denied. This is
intended to get the attention of the administrator and warn that the device is already in use. The "-d" option of the lkdev command can
be used to remove the lock if the disk is no longer being used by Oracle. The lspv command with the "-u" option indicates if the disk device is
locked. The example section of this note shows how to use lkdev and the related lspv output.
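
For example, a device can be locked and later unlocked as follows (a minimal sketch, assuming a hypothetical device diskASM001):

# lkdev -l diskASM001 -a
diskASM001 locked
# lkdev -l diskASM001 -d
(remove the lock only when the disk is no longer used by Oracle)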

AIX rendev Command:

The AIX rendev command can be used to assign meaningful names to disks used by the Oracle Database, Cluster Ready Services (CRS) and
ASM. This is useful for identifying disk usage because nothing in the output of AIX disk commands indicates that a disk is being
used by Oracle; for example, the lspv command gives no indication that a disk is used by Oracle. The command can be used to assign a
meaningful name to the Oracle CRS OCR and Voting disks, whether they are accessed as raw devices (prior to 11gR2) or through ASM
(11gR2 and later).

The rendev command allows a disk to be renamed dynamically if the disk is not currently open. For example, disk11 could be renamed to
diskASM001. Once the device is renamed it can no longer be accessed by the old name, so any administrator who later runs a command
against that disk must use the new meaningful name and will therefore clearly see that the disk belongs
to ASM. Also, when a system administrator runs lspv, the meaningful name will be listed and it will be clear which disks are being used
by Oracle.
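
For example, a single disk could be renamed as follows (a sketch; rendev echoes the new name, as shown in Example 2 later in this note):

# rendev -l disk11 -n diskASM001
diskASM001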

For non-RAC installations a system administrator should identify the disks that will be managed by ASM and assign meaningful names. Any
name that is 15 characters or less and not already used in the system can be used, but it is recommended that you keep the "disk" prefix on
the device name, as this will allow the default ASM discovery string to find the disks, and make it obvious it is a disk device. The ASM disk
discovery process will find the disks even though the names have changed as long as the new names match the ASM discovery string.
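
If you choose names that the default ASM discovery string would not find, the discovery string can be checked and changed in the ASM instance; a minimal sketch (the '/dev/diskASM*' pattern is an assumption matching the naming used later in this note):

SQL> show parameter asm_diskstring
SQL> alter system set asm_diskstring = '/dev/diskASM*';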

For RAC installations the disks are shared across nodes, but the names of the shared disk devices are not necessarily the same on all of
the nodes in the cluster. ASM can identify storage devices even if the device names don't match across the nodes in the
cluster; however, it is useful to make the name for each shared disk device consistent across the cluster. The new "-u" option of the lspv
command is useful for identifying disks across the nodes of a cluster.

When rendev is used to rename a disk, both the block and character mode devices are renamed. If the device being renamed is in the
Available state, the rendev command must unconfigure the device before renaming it. If the unconfigure operation fails, the renaming will
also fail. If the unconfigure succeeds, the rendev command will configure the device after renaming it, to restore it to the Available state. In
the process of unconfiguring and reconfiguring the device, the ownership and permissions will be reset to the default values. So after
renaming the disk device, the ownership and permissions should be checked and, if necessary, changed to the values required by your Oracle RAC
installation. Device settings stored in the AIX ODM, for example reserve_policy, will not be changed by the renaming process.
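
For example, after renaming, the ownership and permissions of both the block and character devices can be checked and restored (a sketch; the oracle:dba ownership matches the examples in this note, while the 660 mode is an assumption to be adapted to your installation):

# ls -l /dev/diskASM001 /dev/rdiskASM001
# chown oracle:dba /dev/diskASM001 /dev/rdiskASM001
# chmod 660 /dev/diskASM001 /dev/rdiskASM001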

Some disk multipathing solutions may have problems with device renaming. At least some versions of EMC PowerPath and some IBM
SDDPCM tools (IBM storage MPIO tools) have dependencies on disk names, which can lead to problems when disks are renamed. For this
reason, device renaming should only be used with the AIX native MPIO device driver, unless the storage vendor confirms that your
storage solution is compatible with renaming the disks.

AIX lspv Command:

The "-u" option of the AIX lspv command provides additional device identification information: the UDID and the UUID. The lspv command will
also indicate if the device is locked. These IDs are unique to a disk and can be used to identify a shared disk across the nodes of a cluster, for
example when you want to rename a device so that it has the same meaningful name on all nodes. Which identification
information is available depends on the storage and device driver being used; newer device driver and system versions may add
identification information that was not previously available. When present, either of these IDs can be used to identify a disk across nodes.

The Unique Device Identifier (UDID) is the unique_id attribute from the ODM CuAt object class. The UDID is present for PowerPath, AIX
MPIO, and some other devices. The Universally Unique Identifier (UUID) is present for AIX MPIO devices.
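
For example, the UDID can be displayed directly from the device attributes; a minimal sketch, assuming a hypothetical MPIO device disk4:

# lsattr -El disk4 -a unique_id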

Using the UDID or UUID to identify the disks instead of the PVID is useful because when Oracle uses a disk it overwrites the area on disk
where the PVID is stored. The PVID is also saved in the ODM, so if the PVID was on the disk when AIX first configured the device then lspv
will still display the PVID. However, if a new node is added to the cluster after Oracle is using the disk, the new node will not see a PVID on
the disk and therefore cannot set the PVID in the ODM. This has sometimes led to situations where the system administrator incorrectly uses
the 'chdev ... -a pv=yes' command to set the PVID, which corrupts the Oracle data on that disk.

Because ASM uses the same area of the disk where AIX stores the PVID, a PVID should never be written to a disk after the disk has
been assigned to ASM, as this would corrupt the ASM disk header. If the PVID was written to the disk before the disk was assigned to
ASM, and all the nodes in the cluster discovered the disk while the PVID was still on the disk, then the PVID will remain in the AIX ODM and
will show up in the output of the lspv command. However, because this method cannot be used when adding nodes after the disks are
assigned to ASM, and because of the risk of overwriting the ASM disk header if someone inadvertently adds a PVID after the disk is
assigned to ASM, the UDID or UUID should be used to identify shared disks.

Examples:

The following examples show how to use the AIX rendev and lkdev commands with Oracle RAC and ASM.

Example 1:
This example uses the rendev and lkdev commands to rename and lock ASM disk devices on an existing Oracle RAC cluster. In this example
the disks are already owned by ASM, so we use an ASM view to match up the disk names across the cluster instead of 'lspv -u'.

Example 2:
This example shows how to add a new node to an existing RAC cluster, using ‘lspv –u’ to identify the disks on the new node.

Example 1:

In this example rendev and lkdev are used to rename and lock the ASM disks on an existing Oracle RAC cluster. The cluster
has four nodes, and the Oracle data, OCR, and Voting files are all in ASM. The ASM disks can be locked while the cluster is active, but the
instances and clusterware need to be shut down to rename the ASM disks. ASM uses the discovery string and control information on the ASM
disks to identify them, so the names of the ASM disk devices can change without confusing ASM; no ASM commands need to be issued.
In the example, meaningful disk device names that start with "disk" are used, so there is no need to change the ASM discovery string.
Because the ASM disk names don't need to match across the cluster, the changes can be made one node at a time, while the Oracle
database and clusterware remain active on the other nodes. The following commands show the renaming and locking on the first node. The
other nodes can be changed in a similar way.

The following commands and output show that the OCR and Voting files are all located in ASM. The SQL output from ASM shows the disks
available to ASM. All of the available disks except disk19-disk21 are currently in use.

# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +OCR

# crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 995e7ba027194fe0bfa6b09c7187ab17 (/dev/disk14) [VOTE]
2. ONLINE 60c6827b050c4ff4bf4d1a25cacfa4ef (/dev/disk15) [VOTE]
3. ONLINE 7ce12429d5a14fe6bfb67046eb1e40fd (/dev/disk16) [VOTE]


SQL> select name, path, mode_status, state from v$asm_disk order by name;

NAME PATH MODE_STATUS STATE
---------------- -------------------- ---------------- ----------------
DATA_0000 /dev/disk4 ONLINE NORMAL
DATA_0001 /dev/disk5 ONLINE NORMAL
DATA_0002 /dev/disk6 ONLINE NORMAL
DATA_0003 /dev/disk7 ONLINE NORMAL
DATA_0004 /dev/disk8 ONLINE NORMAL
DATA_0005 /dev/disk9 ONLINE NORMAL
DATA_0006 /dev/disk10 ONLINE NORMAL
DATA_0007 /dev/disk11 ONLINE NORMAL
DATA_0008 /dev/disk12 ONLINE NORMAL
DATA_0009 /dev/disk13 ONLINE NORMAL
OCR_0000 /dev/disk17 ONLINE NORMAL
OCR_0001 /dev/disk18 ONLINE NORMAL
VOTE_0000 /dev/disk14 ONLINE NORMAL
VOTE_0001 /dev/disk15 ONLINE NORMAL
VOTE_0002 /dev/disk16 ONLINE NORMAL
/dev/disk21 ONLINE NORMAL
/dev/disk20 ONLINE NORMAL
/dev/disk19 ONLINE NORMAL

The following commands and output show that the Oracle RAC cluster is active on all nodes, so the cluster and CRS are stopped on node1,
where we will make the changes.

# crsctl check cluster -all


**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node4:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
# crsctl stop cluster
:

# crsctl stop crs


:

With Oracle RAC and CRS stopped, all of the ASM disk devices can be renamed. For the new meaningful names we keep "disk" at the
start of the name, so it is clear these are disk devices and the default ASM discovery string will still find the correct disks. We include
"ASM" in the name to indicate the disks are owned by ASM, followed by a number for uniqueness (a scheme could also distinguish, for
example, "d" for datafiles and "c" for clusterware files). The name is arbitrary, so it can be adapted to whatever is meaningful in the context
of your configuration. Then the ownership of the devices is set back to the original values required by Oracle, oracle:dba in this example.


# rendev -n diskASM001 -l disk4


# rendev -n diskASM002 -l disk5
# rendev -n diskASM003 -l disk6
# rendev -n diskASM004 -l disk7
# rendev -n diskASM005 -l disk8
# rendev -n diskASM006 -l disk9
# rendev -n diskASM007 -l disk10
# rendev -n diskASM008 -l disk11
# rendev -n diskASM009 -l disk12
# rendev -n diskASM010 -l disk13
# rendev -n diskASM011 -l disk14
# rendev -n diskASM012 -l disk15
# rendev -n diskASM013 -l disk16
# rendev -n diskASM014 -l disk17
# rendev -n diskASM015 -l disk18
# rendev -n diskASM016 -l disk19
# rendev -n diskASM017 -l disk20
# rendev -n diskASM018 -l disk21

# chown oracle:dba /dev/diskASM*

Now the CRS and Oracle RAC can be restarted.

# crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

We check that CRS is online and verify the OCR and voting files.

# crsctl check crs


CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 4220
Available space (kbytes) : 257900
ID : 133061953
Device/File Name : +OCR
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

# crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 995e7ba027194fe0bfa6b09c7187ab17 (/dev/diskASM011) [VOTE]
2. ONLINE 60c6827b050c4ff4bf4d1a25cacfa4ef (/dev/diskASM012) [VOTE]
3. ONLINE 7ce12429d5a14fe6bfb67046eb1e40fd (/dev/diskASM013) [VOTE]

We can now verify the names and status of the disks in ASM.

SQL> select name, path, mode_status, state from v$asm_disk order by name;

NAME PATH MODE_STATUS STATE
---------------- -------------------- ---------------- ----------------
DATA_0000 /dev/diskASM001 ONLINE NORMAL
DATA_0001 /dev/diskASM002 ONLINE NORMAL
DATA_0002 /dev/diskASM003 ONLINE NORMAL
DATA_0003 /dev/diskASM004 ONLINE NORMAL
DATA_0004 /dev/diskASM005 ONLINE NORMAL
DATA_0005 /dev/diskASM006 ONLINE NORMAL
DATA_0006 /dev/diskASM007 ONLINE NORMAL
DATA_0007 /dev/diskASM008 ONLINE NORMAL
DATA_0008 /dev/diskASM009 ONLINE NORMAL
DATA_0009 /dev/diskASM010 ONLINE NORMAL
OCR_0000 /dev/diskASM014 ONLINE NORMAL
OCR_0001 /dev/diskASM015 ONLINE NORMAL
VOTE_0000 /dev/diskASM011 ONLINE NORMAL
VOTE_0001 /dev/diskASM012 ONLINE NORMAL
VOTE_0002 /dev/diskASM013 ONLINE NORMAL
/dev/diskASM016 ONLINE NORMAL
/dev/diskASM017 ONLINE NORMAL
/dev/diskASM018 ONLINE NORMAL

18 rows selected.

We confirm the cluster is still ONLINE on all nodes.

# crsctl check cluster -all


**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node4:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Now we lock the ASM disk devices. This can be done while Oracle RAC is active on the cluster. For each of the ASM disk devices, lock the
device as shown below for the first device. Then use the lspv command to check the status of the disks.

# lkdev -l diskASM001 -a
diskASM001 locked
(repeat for each of the ASM disks)
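
Rather than locking each device by hand, the step can be scripted; a minimal sketch, assuming all ASM devices share the diskASM prefix:

# for d in $(lsdev -Cc disk -F name | grep diskASM); do lkdev -l $d -a; done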

# lspv
disk0 00c1894ca63631c2 None
disk1 00c1892cab6286eb None
disk2 00c1892cab72bef3 None
disk3 00c1892cac105522 None
diskASM001 00c1892c11ce6578 None locked
diskASM002 00c1892c11ce6871 None locked
diskASM003 00c1892c11ce6b66 None locked
diskASM004 00c1892c11ce6e39 None locked
diskASM005 00c1892c11ce7129 None locked
diskASM006 00c1892c11ce73fb None locked
diskASM007 00c1892c11ce76d4 None locked
diskASM008 00c1892c11ce79a3 None locked
diskASM009 00c1892c11ce7d63 None locked
diskASM010 00c1892c11ce8044 None locked
diskASM011 00c1894c211cc685 None locked
diskASM012 00c1894c211cc7f2 None locked
diskASM013 00c1894c211cc94c None locked
diskASM014 00c1894c211ccaa8 None locked
diskASM015 00c1894c211ccbfb None locked
diskASM016 none None locked
diskASM017 none None locked
diskASM018 00c1894c211cd008 None locked
disk22 00c1894c992baec7 None
disk23 00c1894c992bb220 None
disk24 00c1894c992bb5a3 None
disk25 00c1894c992bb8c7 None
disk26 00c1894c992bbc0f None
disk27 00c1894c992bbfdc None
disk28 00c1894cbfa5952b oinstall active
disk29 00c1894c05f04253 root active
disk30 00c1894c4a3c24bc oinstall active
disk31 00c1894cce049074 oinstall active
disk32 00c1894c4a3bf8b0 None
disk33 00c1894c5567ae8d None
disk34 00c1894c4a3c1347 None

Note the indication of which disks are locked in the last column of the lspv output listed above. Also, note that on this system lspv shows
a PVID (second column) for some of the ASM disks but not for two of them (diskASM016/017). The ASM disks do not have a PVID on
the physical disk, because that area is reused by ASM; the values shown by lspv come from the AIX ODM. When AIX first
identifies a new disk device it saves information about that device, including the PVID, in the ODM. So the state of the disk
when AIX first configured it determines whether or not a PVID is in the AIX ODM.
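
The ODM copy of the PVID can be inspected directly; a minimal sketch, assuming the diskASM001 device from this example:

# odmget -q "name=diskASM001 and attribute=pvid" CuAt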

Example 2:

This example shows how to add a new node to an existing RAC cluster, using 'lspv -u' to identify the disks on the new node, and then
rename and lock them. When a new node is added to an existing cluster using ASM, the PVIDs will not be present on the ASM disks: since
the disks are already in use by ASM on the other nodes, there is no PVID on the disk when AIX on the new node configures the disk
devices. In this case 'lspv -u' is used to identify the disks owned by ASM. The example only shows how to rename the disks with names
matching the other cluster nodes and lock the disks in preparation for installing Oracle RAC.

In this example the shared disks (LUNs) have been mapped to the new node (node2), and the AIX command cfgmgr was run on node2 to
configure the new disks. The lspv output before and after running cfgmgr is shown below.

Before running cfgmgr:


# lspv
disk0 00c1894c05e93592 root active
disk1 00c1894c4a3bdd1d oinstall active
disk2 00c1894c4a3bf1dc oinstall active
disk3 00c1894ca63631c2 None
disk4 00c1892cab6286eb None
disk5 00c1892cab72bef3 None
disk6 00c1892cac105522 None

# cfgmgr

After running cfgmgr


# lspv
disk0 00c1894c05e93592 root active
disk1 00c1894c4a3bdd1d oinstall active
disk2 00c1894c4a3bf1dc oinstall active
disk3 00c1894ca63631c2 None
disk4 00c1892cab6286eb None
disk5 00c1892cab72bef3 None
disk6 00c1892cac105522 None
disk7 none None
disk8 none None
disk9 none None
disk10 none None
disk11 none None
disk12 none None
disk13 none None
disk14 none None
disk15 none None
disk16 none None
disk17 none None
disk18 none None
disk19 none None
disk20 none None
disk21 none None
disk22 none None
disk23 none None
disk24 none None

In the above lspv output, note that disk7 through disk24 were configured by running cfgmgr. These are the ASM shared disks that were
mapped to the new Oracle RAC node node2. Also note that the PVID field is "none" for these disks, because there was no PVID on the
disk when cfgmgr was run. At this point we could simply set the correct permissions for Oracle RAC on these disks and install the Oracle RAC
clusterware, and ASM would correctly identify the disks and how they should be used. However, we want to assign meaningful names
so it is clear they are ASM disks and so they match the other nodes in the cluster. To do this we will use the UUID field from the 'lspv
-u' command to match each disk with the disk name used on an existing node in the cluster. Do not try to add a PVID to these disks with
'chdev ... -a pv=yes', as that would overwrite the ASM data and corrupt the disk.


Show the current disk name and UUID on node2

# lspv -u|awk '/none/{printf("%-12s %s\n",$1,$5)}'


disk7 10a04b5f-befd-05b1-b142-88054ee86886
disk8 efed151e-73a6-77dd-c685-dc4bfb6d42bd
disk9 6183f585-d9c2-0938-cef9-10daa59b5d21
disk10 9812adda-fbe9-e424-8299-b8274d6da44d
disk11 c3a8eb24-b547-edad-5179-2a31985ca5d2
disk12 23647cf1-cf55-1a8b-e96f-f4339454b8cc
disk13 4a6096eb-42fd-40d0-44d8-09faee2822f4
disk14 082d65b5-6c95-23fb-8660-f8de93acc204
disk15 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
disk16 7e08f927-7507-a381-c42b-3dd1df5a5a78
disk17 ee197d2e-a3eb-58f0-18c8-617293053ae8
disk18 39405380-77b2-3c05-fbf2-e9cbc661accb
disk19 a3295eaa-a8a1-b341-6b02-4a9015a99f23
disk20 3986170d-077a-b6a6-b0e7-99022ffa098a
disk21 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
disk22 9315a8b3-6b0e-1a8b-642c-83187dee282c
disk23 346319af-0190-a46e-8d51-6f139e19e5d9
disk24 67fafb19-0464-6199-e8be-a48e177d71be

Show the disk name and UUID on existing cluster node node1

# lspv -u|awk '/ASM/{printf("%-12s %s\n",$1,$5)}'


diskASM001 10a04b5f-befd-05b1-b142-88054ee86886
diskASM002 efed151e-73a6-77dd-c685-dc4bfb6d42bd
diskASM003 6183f585-d9c2-0938-cef9-10daa59b5d21
diskASM004 9812adda-fbe9-e424-8299-b8274d6da44d
diskASM005 c3a8eb24-b547-edad-5179-2a31985ca5d2
diskASM006 23647cf1-cf55-1a8b-e96f-f4339454b8cc
diskASM007 4a6096eb-42fd-40d0-44d8-09faee2822f4
diskASM008 082d65b5-6c95-23fb-8660-f8de93acc204
diskASM009 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
diskASM010 7e08f927-7507-a381-c42b-3dd1df5a5a78
diskASM011 ee197d2e-a3eb-58f0-18c8-617293053ae8
diskASM012 39405380-77b2-3c05-fbf2-e9cbc661accb
diskASM013 a3295eaa-a8a1-b341-6b02-4a9015a99f23
diskASM014 3986170d-077a-b6a6-b0e7-99022ffa098a
diskASM015 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
diskASM016 9315a8b3-6b0e-1a8b-642c-83187dee282c
diskASM017 346319af-0190-a46e-8d51-6f139e19e5d9
diskASM018 67fafb19-0464-6199-e8be-a48e177d71be

Now the UUID is used to join the node2 disk name with the node1 name for the same disk.

# lspv -u|awk '/none/{printf("%s %s\n",$1,$5)}' > /tmp/node2_name_uuid


# rsh node1 lspv -u|awk '/ASM/{printf("%s %s\n",$1,$5)}' > /tmp/node1_name_uuid
# cat /tmp/node2_name_uuid /tmp/node1_name_uuid|sort -k2|awk '{printf("%-12s %s\n",$1,$2)}'
disk14 082d65b5-6c95-23fb-8660-f8de93acc204
diskASM008 082d65b5-6c95-23fb-8660-f8de93acc204
disk7 10a04b5f-befd-05b1-b142-88054ee86886
diskASM001 10a04b5f-befd-05b1-b142-88054ee86886
disk12 23647cf1-cf55-1a8b-e96f-f4339454b8cc
diskASM006 23647cf1-cf55-1a8b-e96f-f4339454b8cc
disk23 346319af-0190-a46e-8d51-6f139e19e5d9
diskASM017 346319af-0190-a46e-8d51-6f139e19e5d9
disk18 39405380-77b2-3c05-fbf2-e9cbc661accb
diskASM012 39405380-77b2-3c05-fbf2-e9cbc661accb
disk20 3986170d-077a-b6a6-b0e7-99022ffa098a
diskASM014 3986170d-077a-b6a6-b0e7-99022ffa098a
disk13 4a6096eb-42fd-40d0-44d8-09faee2822f4
diskASM007 4a6096eb-42fd-40d0-44d8-09faee2822f4
disk9 6183f585-d9c2-0938-cef9-10daa59b5d21
diskASM003 6183f585-d9c2-0938-cef9-10daa59b5d21
disk15 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
diskASM009 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
disk24 67fafb19-0464-6199-e8be-a48e177d71be
diskASM018 67fafb19-0464-6199-e8be-a48e177d71be
disk16 7e08f927-7507-a381-c42b-3dd1df5a5a78
diskASM010 7e08f927-7507-a381-c42b-3dd1df5a5a78
disk22 9315a8b3-6b0e-1a8b-642c-83187dee282c
diskASM016 9315a8b3-6b0e-1a8b-642c-83187dee282c
disk10 9812adda-fbe9-e424-8299-b8274d6da44d
diskASM004 9812adda-fbe9-e424-8299-b8274d6da44d
disk19 a3295eaa-a8a1-b341-6b02-4a9015a99f23
diskASM013 a3295eaa-a8a1-b341-6b02-4a9015a99f23
disk11 c3a8eb24-b547-edad-5179-2a31985ca5d2
diskASM005 c3a8eb24-b547-edad-5179-2a31985ca5d2
disk21 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
diskASM015 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
disk17 ee197d2e-a3eb-58f0-18c8-617293053ae8
diskASM011 ee197d2e-a3eb-58f0-18c8-617293053ae8
disk8 efed151e-73a6-77dd-c685-dc4bfb6d42bd
diskASM002 efed151e-73a6-77dd-c685-dc4bfb6d42bd

Based on the above pairing, the disks on node2 are renamed to match the node1 names.

# rendev -l disk14 -n diskASM008


diskASM008
# rendev -l disk7 -n diskASM001
diskASM001
# rendev -l disk12 -n diskASM006
diskASM006
# rendev -l disk23 -n diskASM017
diskASM017
# rendev -l disk18 -n diskASM012
diskASM012
# rendev -l disk20 -n diskASM014
diskASM014
# rendev -l disk13 -n diskASM007
diskASM007
# rendev -l disk9 -n diskASM003
diskASM003
# rendev -l disk15 -n diskASM009
diskASM009
# rendev -l disk24 -n diskASM018
diskASM018
# rendev -l disk16 -n diskASM010
diskASM010
# rendev -l disk22 -n diskASM016
diskASM016
# rendev -l disk10 -n diskASM004
diskASM004
# rendev -l disk19 -n diskASM013
diskASM013
# rendev -l disk11 -n diskASM005
diskASM005
# rendev -l disk21 -n diskASM015
diskASM015
# rendev -l disk17 -n diskASM011
diskASM011
# rendev -l disk8 -n diskASM002
diskASM002
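
With many disks, the pairing and renaming can be scripted rather than typed by hand; a minimal sketch using the two files created above (assuming every UUID appears exactly once in each file):

# sort -k2 /tmp/node2_name_uuid > /tmp/n2.sorted
# sort -k2 /tmp/node1_name_uuid > /tmp/n1.sorted
# join -1 2 -2 2 /tmp/n2.sorted /tmp/n1.sorted | awk '{printf("rendev -l %s -n %s\n",$2,$3)}'
rendev -l disk14 -n diskASM008
:
(review the generated commands before running them)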

Now we set the disk permissions, and any disk device attributes that may be required, for example the reserve_policy. Then the ASM
devices can be locked.

Set the ownership:


# chown oracle:dba /dev/diskASM*

Set any required disk attributes on all ASM disks, for example the reserve_policy:

chdev -l diskASM001 -a reserve_policy=no_reserve


diskASM001 changed


(repeat for other disks)

Lock each of the ASM disk devices, for example:

lkdev -l diskASM001 -a
diskASM001 locked
(repeat for other disks)
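
As in Example 1, the attribute change and the lock can be applied to all ASM disks in one loop; a minimal sketch, assuming the diskASM naming used above:

# for d in $(lsdev -Cc disk -F name | grep diskASM); do chdev -l $d -a reserve_policy=no_reserve; lkdev -l $d -a; done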

Check lspv:

# lspv|awk '{printf("%-12s %-16s %-6s %s\n",$1,$2,$3,$4,$5,$6)}'


disk0 00c1894c05e93592 root active
disk1 00c1894c4a3bdd1d oinstall active
disk2 00c1894c4a3bf1dc oinstall active
disk3 00c1894ca63631c2 None
disk4 00c1892cab6286eb None
disk5 00c1892cab72bef3 None
disk6 00c1892cac105522 None
diskASM001 none None locked
diskASM002 none None locked
diskASM003 none None locked
diskASM004 none None locked
diskASM005 none None locked
diskASM006 none None locked
diskASM007 none None locked
diskASM008 none None locked
diskASM009 none None locked
diskASM010 none None locked
diskASM011 none None locked
diskASM012 none None locked
diskASM013 none None locked
diskASM014 none None locked
diskASM015 none None locked
diskASM016 none None locked
diskASM017 none None locked
diskASM018 none None locked

Now the Oracle RAC software can be installed.

References to Related AIX Documentation:

AIX 7.1 Pubs:


http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp

AIX 6.1 Pubs:


http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp

Oracle Architecture and Tuning on AIX:


http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
