
SLVM Online Volume Reconfiguration

Contents

Abstract
Multi Node Online Reconfiguration
  Prerequisites
  LVM Command Changes
  lvmerge and lvsplit Behavior Change
  On which node do you issue the LVM commands?
  Volume naming
  Which nodes in an HP Serviceguard cluster receive the changes?
  /etc/lvmtab and /etc/lvmtab_p
  Activating a shared volume group
  Alternate PV links
  Physical volume groups and the lvmpvg file
  Online disk replacement
  Troubleshooting
    Example 1
    Example 2
SNOR
  Application and availability considerations
  Commands
    vgchange(1M)
  Procedures
    Making a configuration change in an active shared volume group
  Determining whether SLVM SNOR is available on the system
  SLVM SNOR Messages
    vgchange(1M)
Required software and patch list
Abstract
SLVM (Shared Logical Volume Manager) is a mechanism that permits multiple systems in an HP
Serviceguard cluster to share (read/write) disk resources in the form of volume groups.
Starting with the HP-UX 11i v3 September 2009 release, you can change the configuration of a
shared volume group while it remains activated on all nodes; there is no need to deactivate the
volume group anywhere. This method is called Multi Node Online Reconfiguration (MORE). It is
available only on Version 2.1 volume groups.
Alternatively, you can still use the Single Node Online Reconfiguration (SNOR) method that was
available before the HP-UX 11i v3 September 2009 release. SNOR works on all volume group
versions, but it requires deactivating the volume group on all but one node in the cluster.
This white paper describes both methods.

Multi Node Online Reconfiguration

Multi Node Online Reconfiguration (MORE) was introduced in the HP-UX 11i v3 September 2009
release. This feature enables you to change the configuration of LVM Version 2.1 volume groups
while the volume groups are activated in shared mode on any number of cluster nodes. Applications
can continue to use these volume groups without interruption while the volume groups are
reconfigured.

Prerequisites
In addition to the configuration necessary to share a volume group, online reconfiguration requires
the services of an LVM daemon, lvmpud. At most one instance of this daemon runs on a system at
any time. The daemon is not started by default.
A startup script (/sbin/init.d/lvm) is delivered as part of the HP-UX 11i v3 September 2009
release to start and stop the daemon, and a configuration file (/etc/rc.config.d/lvmconf) is
provided to configure automatic startup.
To have lvmpud started automatically at each boot, include the following line in the
/etc/rc.config.d/lvmconf file:
START_LVMPUD=1
To stop the daemon manually, use the following command:
/sbin/init.d/lvm stop
To restart the daemon manually, use the following command:
/sbin/init.d/lvm start
To start the lvmpud daemon even if START_LVMPUD is not set to 1, enter:
lvmpud
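Before attempting an online reconfiguration, you can verify that the daemon is running on each
node sharing the volume group. A minimal check, assuming the standard ps command:
# Prints a process entry on each node where lvmpud is running
# (the [l] bracket keeps grep from matching its own command line)
ps -ef | grep '[l]vmpud'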

LVM Command Changes

The LVM command-line interface is unchanged: there are no new options or new commands for MORE.
The only difference is that commands that previously failed with the message "The volume group
vg_name is active in Shared Mode. Cannot perform configuration change." now reconfigure the
volume group (Version 2.1 or higher volume groups only).
Commands that already worked on activated shared volume groups continue to work as before; for
example, the display commands and lvsync.
The following commands are now supported on a shared volume group of Version 2.1 or higher
(see the sketch after this list). Before MORE (introduced in the HP-UX 11i v3 September 2009
release), these commands failed on a shared volume group.
• lvchange
• lvcreate
• lvmerge
• lvreduce
• lvremove
• lvsplit
• pvmove
• vgextend
• vgmodify
• vgmove
• vgreduce
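
For instance, with MORE you can add a disk and create a new logical volume while every node keeps
the volume group activated in shared mode. A minimal sketch, assuming a hypothetical shared volume
group vg_shared served by the local node and a hypothetical disk /dev/disk/disk10; both names are
illustrative:
# On the server node; vg_shared stays activated shared on all nodes throughout
vgextend vg_shared /dev/disk/disk10
lvcreate -L 1024 -n lv_data vg_shared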

lvmerge and lvsplit Behavior Change

After a mirrored logical volume is split (lvsplit) on a shared volume group, a subsequent merge
(lvmerge) resynchronizes the whole logical volume instead of only the changed areas. In other
words, the "fast merge" available for standalone and exclusive activation is not available for
shared activation.

On which node do you issue the LVM commands?

When used on a shared Version 2.1 or higher volume group, the LVM configuration commands listed
previously must be issued on the server node. If they are issued on a client node, the following
error message is displayed:
The volume group vg_name is active as client in Shared Mode. Cannot
perform configuration change.
To determine which node is the server for the volume group, use the vgdisplay -v command; the
output lists the names of the nodes sharing the volume group, each qualified with "server" or
"client".
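A quick way to pick out the server node, assuming a hypothetical volume group vg_shared (the grep
pattern is illustrative; adjust it to the exact wording of your vgdisplay output):
vgdisplay -v vg_shared | grep -i -e server -e client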

Volume naming
Online configuration of a shared volume group requires the volume group name and logical volume
names to be the same across all the nodes sharing the volume group. Do not rename the LVM volume
device special files on one node without also renaming them on the other nodes. When you create a
logical volume on a shared volume group, LVM distributes the name you chose to the other nodes,
ensuring that all the nodes see the same logical volume name.

Which nodes in an HP Serviceguard cluster receive the changes?

A configuration change initiated by an LVM command on a shared volume group is distributed only
to the nodes sharing the volume group; other nodes in the cluster do not receive the change. For
example, in a 10-node cluster, if the volume group is shared by only 4 nodes, only those 4 nodes
receive the LVM configuration change.
This behavior has the following effects:
• When vgextend or vgreduce is used, the /etc/lvmtab_p file is automatically updated only on the
nodes sharing the volume group.
• When lvcreate, lvmerge, lvremove, or lvsplit is used, the logical volume device special files are
automatically added or removed only on the nodes sharing the volume group.

/etc/lvmtab and /etc/lvmtab_p

The /etc/lvmtab file is not changed during online configuration of shared volume groups, because
/etc/lvmtab applies only to Version 1.0 volume groups, and online configuration of shared volume
groups applies only to Version 2.1 volume groups.
The /etc/lvmtab_p file is changed on all the nodes sharing a Version 2.1 volume group when the
volume group is extended (vgextend) or reduced (vgreduce). Adding or removing an alternate link
only changes the local /etc/lvmtab_p file.
When the shared volume group is extended (a physical volume is added), the /etc/lvmtab_p file
of the node on which the vgextend command is issued (the server node) is augmented with the
physical volume path given on the command line (which might be a legacy or a persistent device
special file name). However, on the client nodes, /etc/lvmtab_p is always augmented with the
persistent physical volume device special file name.
When the shared volume group is reduced (a physical volume is removed), all the links to this
physical volume are removed from the /etc/lvmtab_p file of the nodes sharing the volume group.

Activating a shared volume group

If the volume group was extended or reduced, or if logical volumes were created or removed, from
node B while the volume group was not activated on node A, follow these steps before activating
the volume group on node A:
1. Issue the following command on node B:
vgexport -s -p -m map_name vg_name
2. Copy the map_name map file from node B to node A.
3. Issue the following command on node A:
vgexport vg_name
4. Import the volume group on node A using the map file:
vgimport [-N] -m map_name -s vg_name
5. Activate the volume group in shared mode on node A.
You must use vgexport and vgimport to update the /etc/lvmtab_p file and the volume group
directory (to add or remove logical volume device special files) on node A, because these are
updated automatically only on the nodes sharing the volume group, and a node on which the volume
group is deactivated is not sharing it.
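Put together, with a hypothetical volume group vg_shared and rcp as the copy mechanism (any remote
copy method works), the sequence looks like this:
# On node B, which shared vg_shared while the changes were made:
vgexport -s -p -m /tmp/vg_shared.map vg_shared
rcp /tmp/vg_shared.map nodeA:/tmp/vg_shared.map
# On node A, where vg_shared was deactivated during the changes:
vgexport vg_shared
vgimport -m /tmp/vg_shared.map -s vg_shared
vgchange -a s vg_shared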

Alternate PV links
An alternate PV link is a local entity; it has meaning only in the context of one node, because the
paths to a device can differ from one node to another. As a result, when an alternate path is added
to a volume group (vgextend), the effect is local only: the path is added to the local
/etc/lvmtab_p file and nothing is added to the remote /etc/lvmtab_p files.
While the volume group is activated shared, you can add alternate links (vgextend) on the server
node only.
Physical volume groups and the lvmpvg file
Physical volume groups work exactly the same way regardless of the activation mode of the
volume group. In particular, you can use physical volume groups for configuration operations
(for example, lvextend) on shared volume groups.
The /etc/lvmpvg file is not distributed across nodes automatically. The vgextend command acting
on a shared volume group updates only the local /etc/lvmpvg file, exactly as it does under
standalone activation; the /etc/lvmpvg file is not automatically updated on the client nodes
sharing the volume group.
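If the client nodes need the same physical volume group definitions, you must propagate them
yourself. A minimal sketch, assuming rcp and a hypothetical client node node2; this is valid only
if the device special file names referenced in /etc/lvmpvg are identical on both nodes:
# Copy the PVG definitions to a client node (verify device names match first)
rcp /etc/lvmpvg node2:/etc/lvmpvg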

Online disk replacement


The method to replace disks is unchanged. See the LVM Online Disk Replacement (LVM OLR)
white paper at http://docs.hp.com.

Troubleshooting
When a configuration command issued on a shared volume group fails, the failure might have been
caused by any node sharing the volume group, or by several nodes at once.
The message displayed by the LVM configuration command (issued on the server node) might
correspond to a failure on another node. To determine exactly which nodes failed, see the system
log file: the first line indicates the volume group on which the configuration failed, followed by
one line per node reporting an error. Nodes are identified by their hostname followed by their
node ID in the HP Serviceguard cluster.
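For example, to look at the most recent entries in the system log (the path shown is the HP-UX
default):
tail -50 /var/adm/syslog/syslog.log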

Example 1
In this example, the failure occurs because the lvmpud daemon is not started on one of the nodes
sharing the volume group.
On node2:
# lvcreate -M y -m 1 -l 10 vg_bigmir
lvcreate: The logical volume "/dev/vg_bigmir/lvol1" could not be created
as a special file in the file-system:
No such process
In /var/adm/syslog/syslog.log:
Jun 29 17:04:02 node2 vmunix: LVM: VG 128 0x005000: the distributed
configuration failed 13 0 0 204.
Jun 29 17:04:02 node2 vmunix: LVM: lvmpud timed out on node node3 2
If the lvmpud daemon is not running on one of the nodes sharing the volume group and the
configuration change needs the services of the daemon, the configuration command fails with a
"No such process" message and an "lvmpud timed out" message is written to the system log file.
In this example, lvmpud was not started on the node node3.

Example 2
To generate an error for this example, the volume group directory was intentionally corrupted on
the node node4 before the test. The test attempts to create the logical volume lvol1 in the volume
group vg_bigmir; normally, no lvol1 file exists in /dev/vg_bigmir before that logical volume is
created, so a plain text file /dev/vg_bigmir/lvol1 was created manually on node4. When lvcreate
then tries to create the device special file /dev/vg_bigmir/lvol1, it fails because it finds the
unexpected file already present on node4.
On node2:
# lvcreate -M y -m 1 -l 10 vg_bigmir
lvcreate: The logical volume "/dev/vg_bigmir/lvol1" could not be created
as a special file in the file-system:
File exists
In /var/adm/syslog/syslog.log:
LVM: VG 128 0x005000: the distributed configuration failed 13 0 0 204.
LVM: node node4.corp.com 3 failed with 17
Here node4 (node ID 3) failed with error 17, the errno value for EEXIST, matching the "File
exists" message reported by lvcreate.

SNOR
Using the vgchange -x option, SLVM SNOR enables you to change the configuration of a shared
volume group, and of logical and physical volumes in that volume group, while keeping it active on
a single node. With this procedure, applications remain available on at least one node during the
volume group reconfiguration.
To see what SNOR adds, suppose you have an HP Serviceguard cluster of three nodes with data
shared through common storage, and you use SLVM to create and manage volume groups shared across
all nodes of the cluster. Applications access the volume group for data read and write operations
from disk. Prior to SLVM SNOR, to change a shared volume group configuration, such as creating or
modifying logical volumes and physical volumes, or modifying the volume group (VG) attributes, you
had to do the following:

1. Shut down the applications using the VG.
2. Deactivate the VG on all nodes.
3. Activate the VG in exclusive mode on one of the nodes.
4. Change the configuration on the VG.
5. Deactivate the VG.
6. If required, generate the map file with the changed configuration, copy it over, and re-import it to
all the other nodes of the cluster.
7. Activate the VG in shared mode on all the cluster nodes.
8. Restart the applications using the VG.

With the SNOR functionality installed, to change the volume group configuration, follow these steps
(a condensed command sketch appears after this list):
1. Scale the applications using the volume group down to one node. These applications continue
running on that one node.
2. Deactivate the VG on all but one node of the cluster.
3. Change the activation mode of the VG to exclusive on that single node.
4. Change the configuration on the VG.
5. If required, generate the map file with the changed configuration, copy it over, and re-import it to
all the other nodes of the cluster.
6. Activate the VG in shared mode on the single node and then on all the other cluster nodes.
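
As a condensed sketch, assuming a hypothetical volume group vg_shared that stays active on node1
(the detailed procedure, including the map file steps, follows later in this paper):
# On every node except node1:
vgchange -a n vg_shared
# On node1, flip the activation mode from shared to exclusive:
vgchange -a e -x vg_shared
# ... make the configuration changes on node1 ...
# On node1, flip the mode back to shared:
vgchange -a s -x vg_shared
# On the other nodes, re-activate in shared mode:
vgchange -a s vg_shared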

Application and availability considerations

On the cluster node where the volume group stays active, user I/Os remain uninterrupted during and
after the activation mode changes from shared to exclusive and back to shared.
Because the volume group is deactivated on the other cluster nodes, you must scale the application
down to a single cluster node, causing a loss of redundancy, and possibly performance degradation,
for that period.
Commands
vgchange(1M)
The vgchange -x option enables you to change the activation mode of a shared volume group.
Changing the mode to exclusive from shared
vgchange -a e -x vg_name
The vgchange command provides an -x option that must be used together with the -a e option to
change the activation mode of a shared volume group to exclusive mode. Upon success, the volume
group is in the same state as if it had been activated in exclusive mode from inactive.
The operation fails if the volume group is activated on any node other than the local node, or if
current operations on the volume group do not allow an activation change; for example, loss of
runtime quorum (more than 50% of the last active set of physical volumes lost).

Changing the mode to shared from exclusive

vgchange -a s -x vg_name
The vgchange command provides an -x option that must be used together with the -a s option to
change the activation mode of an active volume group from exclusive to shared mode. Upon success,
the volume group is in the same state as if it had been activated in shared mode from inactive.
The operation fails if the volume group is currently inactive or already activated in shared mode
on the local node, or if current operations on the volume group do not allow an activation change;
for example, loss of runtime quorum.

Procedures
Making a configuration change in an active shared volume group
1. Identify the shared volume group on which a configuration change is required; for example,
vg_shared.
2. Identify one node of the cluster that is running an application (for example, SGeRAC) using the
shared volume group. Call it node1. The applications using the vg_shared volume group on this
node remain unaffected during the procedure. You must scale the SGeRAC cluster application
down to the single cluster node, node1.
3. Deactivate the volume group on all other nodes of the cluster, except node1, using the -a n
option to the vgchange command:
vgchange -a n vg_shared
Ensure that the vg_shared volume group is now active only on the single cluster node, node1, by
running the vgdisplay command on all cluster nodes; the status shows available on that single
node only.
4. Change the activation mode to exclusive on node1:
vgchange -a e -x vg_shared
Note: Ensure that none of the mirrored logical volumes in this volume group has Consistency
Recovery set to MWC (see lvdisplay(1M)). Changing the activation mode back to shared is not
allowed in that case, because Mirror Write Cache (MWC) consistency recovery is not valid in
volume groups activated in shared mode.

5. Make the desired configuration change on the volume group.
For example, on node1, run the required command to change the configuration, as follows:
lvextend -m 2 /dev/vg_shared/lvol1
Warning: If you run lvsplit on the logical volumes of a volume group in exclusive mode,
changing the mode to shared discards the bitmap used for the subsequent fast lvmerge
operation. In the absence of the bitmap, an lvmerge operation causes the entire logical
volume to be resynchronized (see lvsplit(1M) and lvmerge(1M)).

6. Export the changes to the other cluster nodes, if required.
If the configuration change created or deleted a logical or physical volume (that is, any of the
following commands were used: lvcreate(1M), lvreduce(1M), vgextend(1M), vgreduce(1M),
lvsplit(1M), lvmerge(1M)), the following sequence of steps is required.
Warning: If you used one of the preceding commands to change the configuration of the volume
group and the map file reflecting the configuration change is not imported on all the other
nodes, unexpected errors might occur. For example, errors are reported if:
– A vgchange tries to reattach PVs that were removed using vgreduce.
– A logical volume that was reduced with lvreduce is opened.
a) From node1, export the map file for vg_shared as follows:
vgexport -s -p -m /tmp/vg_shared.map vg_shared
b) Copy the /tmp/vg_shared.map map file to all the other nodes of the cluster.
c) On the other cluster nodes, export vg_shared and re-import it using the new map file:
ls -l /dev/vg_shared/group
crw-rw-rw- 1 root sys 64 0x050000 Nov 16 15:27 /dev/vg_shared/group
Note the minor number (0x050000 in the previous output), then run:
vgexport vg_shared
mkdir /dev/vg_shared
mknod /dev/vg_shared/group c 64 0x050000
vgimport -m /tmp/vg_shared.map -s vg_shared
Note: The vgimport(1M) and vgexport(1M) sequence does not preserve the order of physical
volumes in the /etc/lvmtab file. If the ordering is significant, because active/passive
devices are present or because the volume group was configured to maximize throughput by
ordering the paths accordingly, the ordering must be restored. In this case, it might be better
to use vgexport and vgimport with the -f option instead, and manually edit the resulting file
if device paths differ between nodes (see the sketch below).
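A hedged sketch of that -f variant, reusing the hypothetical map file names from the steps above:
# On node1, also write the ordered list of PV paths to a file:
vgexport -p -f /tmp/vg_shared.pvs -m /tmp/vg_shared.map vg_shared
# Copy both files to the other nodes; edit /tmp/vg_shared.pvs there if device paths differ.
# On each other node, after exporting vg_shared and recreating the group file as in step c:
vgimport -f /tmp/vg_shared.pvs -m /tmp/vg_shared.map vg_shared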
7. Change the activation mode back to shared on all the cluster nodes.
On node1, change the mode back to shared as follows:
vgchange -a s -x vg_shared
On the other cluster nodes, activate vg_shared in shared mode as follows:
vgchange -a s vg_shared
8. HP recommends running vgcfgbackup on all nodes to back up the changes made to the volume
group, as follows:
vgcfgbackup vg_shared

Determining whether SLVM SNOR is available on the system

SLVM SNOR is available in patch form for HP-UX 11i v1 and v2, and as a standard feature in HP-UX
11i v3 releases. To enable the feature, you must install the patches for the LVM commands and
kernel components, and a supported HP Serviceguard version, on the system. The feature is not
available if any of these three components is missing from the system. For the list of patches and
components, see Required software and patch list.
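One common way to check whether the patches are installed, assuming HP-UX 11i v1 and the patch IDs
from the list at the end of this paper:
# Any missing patch ID in the output indicates the feature is incomplete
swlist -l patch | grep -e PHKL_33390 -e PHCO_33310 -e PHSS_33834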
SLVM SNOR Messages
vgchange(1M)
• Usage: vgchange
{-a Availability -q Quorum [-l] [-p] [-s] [VolumeGroupName... ]}
{[-S shareable] -c cluster VolumeGroupName}
{[-P parallel_nomwc_resync_count]}
"x": Illegal option
The -x option to vgchange was used on a system that does not have the SNOR LVM commands
patch.
Install the appropriate LVM command patch (see Required software and patch list).

• Activation of volume group "/dev/vg_shared" denied by another node in
the cluster. Request on this system conflicts with Activation Mode on
remote system.
The -x option to vgchange was used on a system that does not have the SNOR HP Serviceguard
component installed.
Install the Serviceguard patch as specified in the Required software and patch list.

• The HP-UX kernel running on this system does not provide this feature.
The -x option to vgchange was used on a system that does not have the SLVM SNOR LVM kernel
component installed.
Install the appropriate LVM kernel patch (see Required software and patch list).

• Activation mode of volume group "vg_shared" cannot be changed at this
time. Please try later.
The -x option to vgchange was used on a volume group that has either lost quorum or is
undergoing server recovery or sparing activity.

• Provide names of active volume group(s) for which reactivation in a
different mode is required.
The -x option to vgchange was used without specifying a volume group name.

• Volume group "/dev/vg_shared" is already active in requested mode.
The -x option to vgchange was used on a volume group that is already active in the requested
mode.

• To change activation mode, use lvchange(1M) to disable the Mirror Write
Cache flag for all Logical Volumes in "vg_shared".
The -x option to vgchange was used to request shared activation mode while some mirrored logical
volumes have mirror consistency recovery enabled with the Mirror Write Cache flag on (see
lvchange(1M)). The Mirror Write Cache is never used when a volume group is activated shared.
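To clear this condition, turn the Mirror Write Cache flag off on each affected logical volume
before retrying; a minimal sketch, assuming a hypothetical lvol1 in vg_shared (the -M option of
lvchange controls the MWC flag; see lvchange(1M)):
# Disable the Mirror Write Cache flag on the mirrored logical volume
lvchange -M n /dev/vg_shared/lvol1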

• The volume group "/dev/vg_shared" is not active on this system.
Cannot perform requested change.
The -x option to vgchange was used on a volume group that is not currently active.

• Activation mode requested for the volume group "/dev/vg_shared"
conflicts with configured mode.
The -x option to vgchange was used on a volume group that is not configured in shared mode.
Required software and patch list
• For HP-UX 11i Version 1:
LVM patches PHKL_33390 and PHCO_33310 (or superseding patches)
HP Serviceguard patch PHSS_33834 on HP Serviceguard 11.16
• For HP-UX 11i Version 2:
LVM patches PHKL_33312 and PHCO_33309 (or superseding patches)
HP Serviceguard 11.17 release, or HP Serviceguard patch PHSS_33835 on HP Serviceguard
11.16

© 2009 Hewlett-Packard Development Company, L.P. The information contained
herein is subject to change without notice. The only warranties for HP products and
services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an
additional warranty. HP shall not be liable for technical or editorial errors or
omissions contained herein.
Itanium is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and other countries.
September 2009
