Dynamic Root Disk (DRD) allows cloning of an HP-UX system image to an inactive disk. This allows performing system maintenance on the clone while the original HP-UX system remains online, minimizing planned downtime. DRD commands like drd clone, drd runcmd, and drd activate can be used to clone the system, run commands on the clone, and activate the clone respectively. DRD provides the ability to perform tasks like system updates, testing, and recovery with minimal disruption to production systems.


HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos

Dusan Baljevic
Sydney, Australia

© 2009 Dusan Baljevic


Cloning in Major Unix and Linux Releases

AIX: Alternate Root and Multibos (AIX 5.3 and above)
HP-UX: Dynamic Root Disk (DRD)
Linux: Mondo Rescue, Clonezilla
Solaris: Live Upgrade

August 7, 2009 2
HP-UX Dynamic Root Disk Features
Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk.
Supported on HP PA-RISC and Itanium-based systems.
Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting):
o HP-UX 11i Version 2 (11.23) September 2004 or later
o HP-UX 11i Version 3 (11.31)
o LVM (all O/S releases supported by DRD)
o VxVM 4.1
o VxVM 5.0
HP-UX DRD Benefit: Minimizing Planned Downtime
Without DRD: Software management may require extended downtime.
With DRD: Install/remove software on the clone while applications continue to run.

[Diagram] Install patches on the clone; applications remain running:
the boot disk and boot mirror hold the original vg00 (active), while the clone and clone mirror disks hold the cloned vg00 (inactive/patched). Each disk carries lvol1, lvol2, and lvol3.

[Diagram] Activate the clone to make changes take effect:
the boot disk and boot mirror now hold the original vg00 (inactive), while the clone and clone mirror disks hold the cloned vg00 (active/patched).
HP-UX Dynamic Root Disk Features - continued
Product: DynRootDisk
Version: A.3.3.1.221 (B.11.xx.A.3.4.x will be the current version number as of September 2009)
The target disk must be a single physical disk or SAN LUN.
The target disk must be large enough to hold all of the root volume file systems. DRD allows the cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation).
On Itanium servers, all partitions are created; EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition.
The copy of lvmtab on the cloned image is modified by DRD to reflect the cloned volume group.
HP-UX Dynamic Root Disk Features -
continued
Only the contents of vg00 are copied.
Due to system calls DRD depends on, DRD expects
legacy Device Special Files (DSFs) to be present and
the legacy naming model to be enabled on HP-UX
11i v3 servers. HP recommends only partial
migration to persistent DSFs be performed.
If the disk is currently in use by another volume
group that is visible on the system, the disk will not
be used.
If the disk contains LVM, VxVM, or boot records but is
not in use, one must use the -x overwrite option to
tell DRD to overwrite the disk. Already-created
clones will contain boot records; the drd status
command will show the disk that is currently in use
as an inactive system image.
HP-UX Dynamic Root Disk Features -
continued
All DRD processes, including drd clone and drd runcmd, can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP). This action causes DRD to abort processing. Do not interrupt DRD using the kill -9 <pid> command (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the Known Issues list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted.
The Ignite server will only be aware of the clone if it
is mounted during a make_*_recovery operation.

HP-UX Dynamic Root Disk Features -
continued
DRD does not provide a mechanism for resizing file
systems during a clone operation.
After the clone is created, one can manually change file
system sizes on the inactive system without an
immediate reboot:
1. The whitepaper, Dynamic Root Disk: Quick Start & Best
Practices describes resizing file systems other than
/stand. *
2. The whitepaper Dynamic Root Disk: Quick Start & Best
Practices describes resizing the boot (/stand) file system
on an inactive system image.
One can avoid multiple mounts and unmounts by using
drd mount to mount the inactive system image before
the first runcmd operation and drd umount to unmount
the inactive system image after the last runcmd
operation. **
Supports root volume groups with any name (prior to
version A.3.0, only vg00 was possible).
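The mount-once pattern above can be sketched as a call sequence. This is purely illustrative: drd below is a stub shell function that just logs its arguments, standing in for /opt/drd/bin/drd, and the depot path and patch names are invented.

```shell
# Stub standing in for the real /opt/drd/bin/drd command; it only logs calls
# so the batching pattern can be demonstrated without an HP-UX system.
LOG=$(mktemp)
drd() { echo "drd $*" >> "$LOG"; }

drd mount                                   # mount the inactive image once
drd runcmd swinstall -s /var/depot PHCO_1   # run as many runcmd operations
drd runcmd swinstall -s /var/depot PHCO_2   # as needed against the image
drd umount                                  # unmount once, after the last one

echo "mount operations: $(grep -c '^drd mount$' "$LOG")"
```

The point of the pattern is that the inactive image is mounted and unmounted exactly once, no matter how many runcmd operations run in between.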
HP-UX Dynamic Root Disk
Commands
The basic DRD commands are:

drd clone
drd runcmd
drd activate
drd deactivate
drd mount
drd umount
drd status
drd rehost
drd unrehost
HP-UX Dynamic Root Disk
Commands - continued
drd runcmd can run specific Software Distributor (SD)
commands on the inactive system image only:
swinstall
swremove
swlist
swmodify
swverify
swjob
Three other commands can be executed by the drd runcmd command:
view - used to view logs produced by commands that were executed by drd runcmd.
kctune - used to modify kernel parameters.
update-ux - performs v3 to v3 OE updates.
HP-UX Dynamic Root Disk Features - Dry Run
A simple mechanism for determining if a chosen target disk is sufficiently large is to run a preview:

# drd clone -p -v -t <blockDSF>

blockDSF is of the form:
* HP-UX 11i v2: /dev/dsk/cXtXdX
* HP-UX 11i v3: /dev/disk/diskX

The preview operation includes the disk space analysis needed to see if the target disk is sufficiently large.
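In essence, the space analysis reduces to summing the root file-system sizes and comparing the total against the target disk's capacity. A minimal sketch with made-up sizes in KB (the real preview does considerably more checking than this):

```shell
# Illustrative only: sum the sizes of the root volume's logical volumes and
# compare against the target disk capacity, as a dry-run space check would.
lvol_sizes="1048576 505392 3395584 4636672"   # e.g. /, /stand, /var, /usr (KB)
target_kb=16777216                            # assumed capacity of <blockDSF>

total=0
for s in $lvol_sizes; do
  total=$((total + s))
done

if [ "$total" -le "$target_kb" ]; then
  echo "target is sufficiently large: need ${total} KB, have ${target_kb} KB"
else
  echo "target is too small: need ${total} KB, have ${target_kb} KB"
fi
```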
HP-UX Dynamic Root Disk versus
Ignite-UX
DRD has several advantages over Ignite-UX net and
tape images:
* No tape drive is needed,
* No impact on network performance will occur,
* No security issues of transferring data across the
network.
Mirror Disk/UX keeps an "always up-to-date" image of the booted system. DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario. DRD is not available for HP-UX 11.11, which limits options on those systems.
HP-UX Dynamic Root Disk Features - continued
Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then:
* Perform system maintenance on the clone while the HP-UX 11i system is online.
* Reboot during off-hours - significantly reducing system downtime.
* Utilize the clone for system recovery, if needed.
* Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect with HP-UX 11i v3 LVM only; on VMs with HP-UX 11i v2 LVM only).
* Perform an OE Update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
HP-UX Dynamic Root Disk and /stand/bootconf

Errors in /stand/bootconf can cause the drd deactivate command to fail. * (This is no longer true in the current release.)
The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors - not the clone target.
The /stand/bootconf file that is created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target).
HP-UX Dynamic Root Disk
Rehosting
The initial implementation of drd rehost only
supports rehosting of an LVM-managed root volume
group on an Integrity virtual machine to another
Integrity virtual machine, or an LVM-managed root
volume group on a Blade with Virtual Connect I/O to
another such Blade.
The rehost command does not enforce the
restriction to blades and VMs, but other use of this
command is not officially supported.
As of version A.3.3, rehosting support for HP-UX 11i
v2 has been added.

HP-UX Dynamic Root Disk
Rehosting on HP-UX 11.31
After the clone and system information file have been
created, the drd rehost command can be used to check
the syntax of the system information file and copy it to
/EFI/HPUX/SYSINFO.TXT in preparation for processing by
auto_parms(1M) during the boot of the image. The
following example uses the /var/opt/drd/tmp/newhost.txt
system information file:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
HP-UX Dynamic Root Disk
Rehosting on HP-UX 11.31 - continued
To check the syntax of the system information file without copying it to the /EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command:

# drd rehost -p -f /var/opt/drd/tmp/newhost.txt

To copy it to the /EFI/HPUX/SYSINFO.TXT file, use the following command:

# drd rehost -f /var/opt/drd/tmp/newhost.txt

HP-UX Dynamic Root Disk
Examples
# drd clone -t /dev/disk/disk8 -x overwrite=true
======= 07/02/08 13:09:41 EST BEGIN Clone System Image
(user=root) (jobid=syd59)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
======= 07/02/08 13:42:57 EST END Clone System Image
succeeded. (user=root) (jobid=syd59)
HP-UX Dynamic Root Disk
Examples - continued
# drd status
======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone
Image Information (user=root) (jobid=syd59)
* Clone Disk: /dev/disk/disk8
* Clone EFI Partition: Boot loader and AUTO file present
* Clone Creation Date: 07/02/08 13:09:46 EST
* Clone Mirror Disk: None
* Mirror EFI Partition: None
* Original Disk: /dev/disk/disk7
* Original EFI Partition: Boot loader and AUTO file present
* Booted Disk: Original Disk (/dev/disk/disk7)
* Activated Disk: Original Disk (/dev/disk/disk7)
======= 07/02/08 13:45:51 EST END Displaying DRD Clone
Image Information succeeded. (user=root) (jobid=syd59)
HP-UX Dynamic Root Disk
Examples - continued
# drd activate
======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System
Image (user=root) (jobid=syd59)
* Checking for Valid Inactive System Image
* Reading Current System Information
* Locating Inactive System Image
* Determining Bootpath Status
* Primary bootpath : 0/1/1/0.0x1.0x0 before activate.
* Primary bootpath : 0/1/1/1.0x2.0x0 after activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 before activate.
* Alternate bootpath : 0/1/1/1.0x2.0x0 after activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate.
* HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate.
* Activating Inactive System Image
======= 07/02/08 13:48:15 EST END Activate Inactive System Image
succeeded. (user=root) (jobid=syd59)
HP-UX Dynamic Root Disk
Examples - continued
# drd_register_mirror /dev/dsk/c1t2d0 *

# drd_unregister_mirror /dev/dsk/c2t3d0 **

# drd runcmd view /var/adm/sw/swagent.log

# diff /var/spool/crontab/crontab.root \
/var/opt/drd/mnts/sysimage_001/var/spool/crontab/crontab.root
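The crontab diff above generalizes to any file: prefix the clone mount point to the path as seen on the booted system. A sketch of that pattern, using throwaway temporary trees instead of a real clone (the file name and its contents are invented for the demonstration):

```shell
# Two throwaway trees stand in for / and the DRD clone mount point
# (/var/opt/drd/mnts/sysimage_001 on a real system).
ROOT=$(mktemp -d)
CLONE_MNT=$(mktemp -d)
mkdir -p "$ROOT/etc" "$CLONE_MNT/etc"
echo "hostname=syd59" > "$ROOT/etc/example.conf"
echo "hostname=syd60" > "$CLONE_MNT/etc/example.conf"

# Compare each file of interest between the booted system and the clone;
# report the ones that differ.
changed=""
for f in /etc/example.conf; do
  if ! diff "$ROOT$f" "$CLONE_MNT$f" >/dev/null 2>&1; then
    changed="$changed $f"
    echo "differs: $f"
  fi
done
```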

HP-UX Dynamic Root Disk
Examples - continued
# /opt/drd/bin/drd mount
# /usr/bin/bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1048576 320456 722432 31% /
/dev/vg00/lvol1 505392 43560 411288 10% /stand
/dev/vg00/lvol8 3395584 797064 2580088 24% /var
/dev/vg00/lvol7 4636672 1990752 2625264 43% /usr
/dev/vg00/lvol4 204800 8656 194680 4% /tmp
/dev/vg00/lvol6 3067904 1961048 1098264 64% /opt
/dev/vg00/lvol5 262144 9320 250912 4% /home
/dev/drd00/lvol3 1048576 320504 722392 31% /var/opt/drd/mnts/sysimage_001
/dev/drd00/lvol1 505392 43560 411288 10%
/var/opt/drd/mnts/sysimage_001/stand
/dev/drd00/lvol4 204800 8592 194680 4% /var/opt/drd/mnts/sysimage_001/tmp
/dev/drd00/lvol5 262144 9320 250912 4%
/var/opt/drd/mnts/sysimage_001/home
/dev/drd00/lvol6 3067904 1962912 1096416 64% /var/opt/drd/mnts/sysimage_001/opt
/dev/drd00/lvol7 4636672 1991336 2624680 43% /var/opt/drd/mnts/sysimage_001/usr
/dev/drd00/lvol8 3395584 788256 2586968 23% /var/opt/drd/mnts/sysimage_001/var

HP-UX Dynamic Root Disk Serial
Patch Installation Example
# swcopy -s /tmp/PHCO_38159.depot \* @ /var/opt/mx/depot11/PHCO_38159.dir

# drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159

HP-UX Dynamic Root Disk update-ux Issue *
When executing drd runcmd update-ux on the inactive DRD system image, the command errors:

ERROR: The expected depot does not exist at "<depot_name>"

In order to use a directory depot on the active system image, you will need to create a loopback mount to access the depot.
HP-UX Dynamic Root Disk update-ux Issue - continued
Issue Resolution
The following steps should be followed in order to update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order:
1) Mount the clone using drd mount.
2) Make the directory on the clone and loopback mount the depot. The directory on the clone and the source depot must have the same name, in this case /var/depots/0909_DCOE; however, the name can be whatever you choose:
# mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# mount -F lofs /var/depots/0909_DCOE \
/var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd runcmd update-ux -s /var/depots/0909_DCOE
HP-UX Dynamic Root Disk update-ux Issue - continued
3) Once your update has completed, unmount the loopback mount and then unmount the clone:
# umount -F lofs /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
# drd umount

Updates from multiple-DVD media
Updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot either on a remote server (easiest method) or to a directory on the active system. If it must be on the active system image, you must first copy the media's contents to a directory depot and then create the clone. If you already have a clone, you can copy the depot and then loopback mount that depot to the clone (see instructions above).
HP-UX Dynamic Root Disk update-ux Issue - continued
To copy the software from the DVDs, make a directory on a remote system or the active system image; mount the DVD media and swcopy its contents into the newly created directory. Unmount the first disk and insert the second DVD to copy its contents into the directory.
# mkdir -p /var/software_depot/DCOE-DVD
# mount /dev/disk/diskX /cdrom
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD
# umount /cdrom
# mount /dev/disk/diskX /cdrom    (this is DVD 2)
# swcopy -s /cdrom -x enforce_dependencies=false \* @ /var/software_depot/DCOE-DVD
HP-UX Dynamic Root Disk update-ux Issue - continued
If the depot resides on a remote server (a system other than the one to be updated), proceed with the drd runcmd update-ux command and specify the location as the argument of the -s parameter:

# drd runcmd update-ux -s <server_name>:/var/software_depot/DCOE-DVD <OE>

If the depot resides in the root group of the system to be cloned, and the clone has not yet been created, create the clone and issue the drd runcmd update-ux command, specifying the location of the depot as it appears on the booted system:

# drd runcmd update-ux -s /var/software_depot/DCOE-DVD <OE>
Solaris Live Upgrade Features
Live upgrade is a feature of Solaris (since version
2.6) that allows the operating system to be cloned
to an offline partition (or partitions), which can
then be upgraded with new O/S patches, software,
or even a new version of the operating system.
The system administrator can then reboot the
system on the newly upgraded partition. In case of
problems, it is easy to revert back to the original
partition/version via a single live upgrade
command followed by a reboot.
Live upgrade is especially useful because Sun does not officially support installing O/S patches to active partitions - only patching in single-user mode or patching a non-active live upgrade partition is supported.
Solaris Live Upgrade Features - continued
Live Upgrade requires multiple partitions on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs).
A slice where the root (/) file system is to be copied must be selected. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:
* Must be a slice from which the system can boot.
* Must meet the recommended minimum size.
* Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice.
* Can be on different physical disks or the same disk as the active root file system.
Solaris Live Upgrade Features -
continued
The swap slice cannot be in use by any boot
environment except the current boot environment or if
the -s option is used, the source boot environment.
The boot environment creation fails if the swap slice is
being used by any other boot environment whether
the slice contains a swap, UFS, or any other file
system.
Typically, each boot environment requires a minimum
of 350 to 800 MB of disk space, depending on the
system software configuration.
When viewing the character interface remotely, such
as over a tip line, set the TERM environment variable
to VT220. Also, when using the Common Desktop
Environment, set the value of the TERM variable to
dtterm, rather than xterm.
Solaris Live Upgrade Features - continued
The lucreate command allows you to include or exclude specific files and directories when creating a new BE.
Include files and directories with:
-y include option
-Y include_list_file option
items with a leading + in the file used with the -z filter_list option
Exclude files and directories with:
-x exclude option
-f exclude_list_file option
items with a leading - in the file used with the -z filter_list option
Solaris Live Upgrade and Special Files
Files can change in the original boot environment (BE) after the BE is created but NOT YET activated.
On the first boot of a BE, data is copied from the source BE.
The list to copy is in /etc/lu/synclist. Example:

/etc/default/passwd OVERWRITE
/etc/dfs OVERWRITE
/var/log/syslog APPEND
/var/adm/messages APPEND
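The synclist semantics can be illustrated with a small sketch: OVERWRITE replaces the destination file with the source BE's copy, while APPEND concatenates the source content onto the destination. The paths and contents below are invented stand-ins, not the real sync mechanism:

```shell
SRC=$(mktemp -d)   # stands in for the source BE's root
DST=$(mktemp -d)   # stands in for the newly booted BE's root
SYNCLIST=$(mktemp)

mkdir -p "$SRC/etc" "$DST/etc" "$SRC/var" "$DST/var"
echo "new policy"    > "$SRC/etc/default.passwd"
echo "old policy"    > "$DST/etc/default.passwd"
echo "late message"  > "$SRC/var/messages"
echo "early message" > "$DST/var/messages"

cat > "$SYNCLIST" <<'EOF'
/etc/default.passwd OVERWRITE
/var/messages APPEND
EOF

# OVERWRITE: replace the destination copy; APPEND: add source content to it.
while read -r path action; do
  case "$action" in
    OVERWRITE) cp "$SRC$path" "$DST$path" ;;
    APPEND)    cat "$SRC$path" >> "$DST$path" ;;
  esac
done < "$SYNCLIST"
```

After the loop, the destination's password file holds only the source copy, while its messages file holds both the old and the new lines.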
Solaris Live Upgrade Examples
The upgrade process of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are done the same way, except that in each case you specify a different path to the image through the -s flag. Examples:
Local file:
# luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image
Net:
# luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
CD-ROM:
# luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image
Flash:
# luupgrade -u -n solenv2 -s /path/to/flash.flar
Solaris Live Upgrade Examples
# lucompare BE2
Determining the configuration of BE2 ...
< BE1
> BE2
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
Solaris Live Upgrade Examples
# lucreate -c "solenv1" -m /:/dev/dsk/c0d0s3:ufs -n "solenv2" *

# lucreate -m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \
-n nextBE **

# lucreate -m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,d1:attach \
-m /:/dev/dsk/c0t1d0s0,d2:attach -n myserv2 ***
Solaris Live Upgrade Examples
# lucurr
BE1

# ludesc -n BE1 "Dusan BootEnvironment"

# ludesc -n BE1
Dusan BootEnvironment
Solaris Live Upgrade Examples
# lufslist BE1
boot environment name: BE1
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem                     fstype  device size  Mounted on    Mount Options
------------------------------ ------- ------------ ------------- -------------
/dev/zvol/dsk/rpool/swap       swap    1073741824   -             -
rpool/ROOT/s10s_u6wos_07b      zfs     5119809024   /             -
rpool/ROOT/s10s_u6wos_07b/var  zfs     86450688     /var          -
rpool                          zfs     7493079552   /rpool        -
rpool/export                   zfs     95149568     /export       -
hppool                         zfs     ?            /hppool       -
rpool/export/home              zfs     95129088     /export/home  -
Clone Commands Compared

Task                          HP-UX DRD                   Solaris Live Upgrade
Create BE                     drd clone                   lucreate
Activate BE                   drd activate                luactivate
Check status                  drd status                  lustatus
Compare BEs                   Indirect method: diff, cmp  lucompare
Cancel scheduled copy/create  Indirect method: remove     lucancel
                              from crontab
Clone Commands Compared

Task                           HP-UX DRD    Solaris Live Upgrade
Display BE/System Image        drd status   lucurr
Delete BE                      N/A *        ludelete
Add or resync data in BE       N/A **       lumake
Set or display BE description  N/A          ludesc
Mount BE file systems          drd mount    lumount
Unmount BE file systems        drd umount   luumount
Clone Commands Compared

Task                     HP-UX DRD             Solaris Live Upgrade
Rename BE                N/A                   lurename
Install software and     drd runcmd swinstall  luupgrade
patches into BE          drd runcmd update-ux
List BE configuration    N/A                   lufslist
TUI                      N/A                   lu
Clone Commands Compared

Task                    HP-UX DRD          Solaris Live Upgrade
Rehosting               drd rehost         N/A
Modify kernel tunables  drd runcmd kctune  N/A
AIX Alt_disk_install
The AIX alt_disk_install command allows a root
sysadmin to create an alternate rootvg on another
set of disk drives. The alternate rootvg can be
configured by restoring a mksysb image to it while
AIX continues to run from the primary rootvg, or
the primary rootvg can be "cloned" to the alternate
rootvg and updates and fixes can then be installed
on the alternate rootvg while AIX continues to run.
When the system admin is ready, AIX can be
rebooted from the alternate rootvg disks. Changes
can be backed out by rebooting AIX from the
original primary rootvg.
In AIX 5.3, alt_disk_install has been replaced by:
alt_disk_copy
alt_disk_mksysb
alt_rootvg_op
The alt_disk_install command will continue to ship as a wrapper to these commands.
AIX Alt_disk_install Examples
Copy the current rootvg to an alternate disk. The
following example shows how to clone the rootvg
to hdisk1:
# alt_disk_copy -d hdisk1

Copy rootvg (hdisk1) to hdisk0, and then apply the


updates to hdisk0:
# alt_disk_copy -d hdisk0 -b update_all -l
AIX Alt_disk_install Examples
Copy the current rootvg to two alternate disks:
# alt_disk_copy -d hdisk2 hdisk3 -O
assuming that hdisk2 and hdisk3 are the targets on which the
copy should be placed.
Note that the -O flag is required when "cloning" (when planning
to boot the rootvg copy on another LPAR or server), but can be
detrimental when making a copy which will be booted on the
same LPAR or server.
Before taking the target disks away from the existing AIX image,
run command:
# alt_rootvg_op -X
If a rootvg copy has been made for use on the same LPAR/server
as the original rootvg (without the -O flag on alt_disk_copy),
System Management Services can be used to switch between
the primary and backup AIX rootvgs by shutting AIX down,
booting to SMS mode, and selecting the disks from which to
boot.
AIX Multibos Features
The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing, or updating the other.

This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs. The multibos capability allows both O/S images to reside on the same disk/rootvg.
[Diagram] MultiBOS (rootvg): two BOS instances share one rootvg; switching between them requires a reboot.
AIX Multibos Features - continued
The multibos command allows the root level
administrator to create multiple instances of AIX on
the same rootvg.
The multibos setup operation creates a standby
Base Operating System (BOS) that boots from a
distinct boot logical volume (BLV). This creates two
bootable sets of BOS on a given rootvg. The
administrator can boot from either instance of BOS
by specifying the respective BLV as an argument to
the bootlist command or using system firmware
boot operations.
Two bootable instances of BOS can be simultaneously maintained. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS.
AIX Multibos Features - continued
The multibos command allows the administrator to
access, install maintenance and technology levels
for, update, and customize the standby BOS either
during setup or in subsequent customization
operations.
Installing maintenance and technology updates to
the standby BOS does not change system files on
the active BOS. This allows for concurrent update
of the standby BOS, while the active BOS remains
in production.

AIX Multibos Features - continued
The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag).
All other file systems and logical volumes are
shared between instances of BOS. Separate log
device logical volumes (for example, those that are
not contained within the file system) are not
supported for copy and will be shared.
The current rootvg must have enough space for
each BOS object copy. BOS object copies are
placed on the same disk or disks as the original.
AIX Multibos Features - continued
The total number of copied logical volumes cannot
exceed 128.
The total number of copied logical volumes and
shared logical volumes are subject to volume group
limits.
/etc/multibos contains multibos data and logs.
The only supported method of backup and recovery with multibos is mksysb via CD, NIM, or tape. If the standby BOS was mounted during the creation of the mksysb, it is restored and synchronized on the first boot from the restored mksysb. However, if the standby BOS wasn't mounted during the creation of the mksysb backup, the synchronization on reboot will remove the unusable standby BOS.
AIX Multibos Examples
Standby BOS setup operation preview:
# multibos -Xsp

Set up standby BOS:


# multibos -Xs

Set up standby BOS with optional image.data file


/tmp/image.dat and exclude list /tmp/exclude.lst:
# multibos -Xs -i /tmp/image.dat -e \
/tmp/exclude.lst

AIX Multibos Examples - continued
To set up standby BOS and install additional
software listed as bundle file /tmp/bundle and
located in the images source /images:
# multibos -Xs -b /tmp/bundle -l /images

To execute a customization operation on standby


BOS with the update_all install option:
# multibos -Xac -l /images

AIX Multibos Examples - continued
To mount all standby BOS file systems, type:
# multibos -Xm

To perform a standby BOS remove operation preview:
# multibos -RXp

To remove standby BOS:
# multibos -RX
AIX Multibos Examples - continued
Apply TL6 to the standby BOS. The TL6 lppsource
is mounted from our Network Installation Manager
(NIM) master. Perform a preview operation and
then execute the actual update to the standby
instance. Check the log file for any issues:
# mount nimsrv:/export/lpp_source/lpp_sourceaix530603 /mnt
# multibos -Xacp -l /mnt
# multibos -Xac -l /mnt
AIX Multibos Examples - continued
Back out of the update and return to the previous
TL. Set the bootlist and verify that the BLV is set to
the previous BOS instance (hd5):
# bootlist -m normal hdisk0 blv=hd5 \
hdisk0 blv=bos_hd5
# bootlist -m normal -o
hdisk0 blv=hd5
hdisk0 blv=bos_hd5

Now reboot the system and confirm that it's running at the previous TL.
AIX Multibos Examples - continued *
# multibos -S
MULTIBOS> df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 1966080 1198800 40% 3364 1% /
/dev/hd2 3670016 299344 92% 42697 10% /usr
...
/dev/hd3 262144 250776 5% 64 1% /tmp
/dev/bos_hd4 1966080 1198800 40% 3364 1% /bos_inst
/dev/bos_hd2 3670016 299344 92% 42697 10% /bos_inst/usr
/dev/bos_hd9var 655360 594456 10% 674 1% /bos_inst/var
/dev/bos_hd10opt 393216 123592 69% 2545 6% /bos_inst/opt
# to exit from multibos shell
MULTIBOS> exit

AIX Multibos Examples - continued *
# cat /root/hosts.txt
host1
host2
host3
# export WCOLL=/root/hosts.txt
# dsh multibos -R
# dsh rm /etc/multibos/logs/op.alog
# dsh multibos -sXp
# dsh alog -of /etc/multibos/logs/op.alog
# dsh multibos -sX
# dsh mount nimmast:/export/lpp_source/lpp_sourceaix530603 /mnt
# dsh multibos -Xacp -l /mnt
# dsh multibos -Xac -l /mnt
# dsh alog -of /etc/multibos/logs/op.alog
# dsh umount /mnt
# dsh bootlist -m normal -o
# dsh shutdown -Fr
AIX Check Boot Environment
After the reboot, confirm the TL level:
# oslevel -r

Verify which BLV the system booted from with:
# bootinfo -v
Features Compared

Feature          HP-UX DRD             Solaris Live Upgrade  AIX Multibos
Licensing        N/A                   N/A                   N/A
Supported        PA-RISC               SPARC                 32-bit POWER
platforms        IA-64                 x86-32                64-bit POWER *
                                       x86-64                PowerPC
Supported O/S    HP-UX 11.23           Solaris 2.6           AIX 5L Version 5.3
                 HP-UX 11.31           Solaris 7             with the 5300-03
                                       Solaris 8             Recommended
                                       Solaris 9             Maintenance
                                       Solaris 10            package and later
Current product  DynRootDisk           Live Upgrade 2.0      Part of AIX 6.1
                 B.11.xx.A.3.4.y
                 where xx is 23 or 31
TUI              Not supported         Supported             Not supported
GUI              Not supported         Not supported         Not supported
CLI              Supported             Supported             Supported
Features Compared - continued

Feature          HP-UX                    Solaris                    AIX Multibos
Add mirror disk  Supported directly       Not supported directly!    N/A
to a clone       via command:             Supported via SVM, ZFS,
                 drd clone -x             and VxVM RAID-1 setup
                 mirror_disk=...          only
Reboot           drd activate -x          Never use reboot(1) or     bootlist -m normal
commands         reboot=true, or          halt(1) commands.          hdisk0 blv=bos_hd5,
                 standard Unix            Instead, use init 6 or     then shutdown -Fr
                 commands                 shutdown(1)                or reboot -q
Automated        Mostly manual process,   lucompare(1)               Mostly manual process,
comparison of    based on:                                           based on:
primary and      drd mount                                           multibos -S
alternate boot   cmp ...                                             cmp ...
environments     diff ...                                            diff ...
Features Compared - continued

Feature            HP-UX                            Solaris                  AIX Multibos
Mounting inactive  a) drd mount does not support    a) lumount(1) supports   multibos -S
images             mounting on different            mounting on different    mounts file
                   directories                      directories              systems as
                   b) drd mount mounts file         b) lumount mounts file   /bos_inst/...
                   systems as:                      systems as:
                   /var/opt/drd/mnts/sysimage_00X   /.alt.configX
Change size of     Not supported                    Supported                Supported **
any file systems
during cloning
File system split  Not supported                    Supported *              Not supported
Features Compared - continued

Feature             HP-UX                   Solaris               AIX Multibos
Simple listing of   drd mount, then bdf     Supported via the     Not directly
clone file systems                          lufslist(1) command   supported **
Clone updates       Supported via full      Supported via the     Supported via
(re-sync)           clone recreation:       lumake(1) command     flag -c *
                    drd clone -t ... -x
                    overwrite=true
Merge file systems  Not supported yet       Supported             Not supported
during cloning
Features Compared - continued

Feature             HP-UX                  Solaris                   AIX Multibos
Change file system  Not supported          Supported. For example,   Not supported
type during                                SVM to ZFS migration
cloning
Supported Volume    LVM                    Solstice DiskSuite *      AIX LVM
Manager             VxVM                   VxVM
                                           ZFS **
Virtualization      nPar                   Solaris Zones ***         LPAR
support             vPar                   Logical Domain            Dynamic LPAR
                    Integrity VM                                     Live Partition
                                                                     Mobility on POWER6
                                                                     WPAR
Full-disk copy      On Itanium servers,    Supported                 Not supported
during cloning      all partitions are
                    created and EFI and
                    HP-UX partitions are
                    copied. This release
                    of DRD does not copy
                    the HPSP partition.
Features Compared - continued

Feature            HP-UX                  Solaris                  AIX Multibos
Multiple target    Not supported          Supported                Not supported
disks for cloning
Dry-run (preview)  Supported              Supported                Supported
cloning
Swap shared        Primary swap is not    Yes, by default          Yes, by default
                   shared; secondary
                   swap can be shared
On-line cloning    Yes                    Sun recommends halting   Yes
                                          all zones during
                                          lucreate or lumount
                                          operations! That means
                                          Solaris zones cloning
                                          is not truly an
                                          on-line process
Features Compared - continued

Feature            HP-UX                   Solaris          AIX Multibos
Exclude files      Not supported yet *     Supported **     Supported *****
from cloning
Include files      Not supported yet       Supported **     Supported *****
during cloning
Simple method to   Not supported yet ***   Supported ****   Supported ******
remove clone
Clone on the same  Not supported           Supported        Supported
physical disk
(multiple BEs on
the same disk)
