AIX Migration Steps


How to perform a storage migration from IBM to Hitachi storage?

Please follow the host-based storage migration steps below for a dual-VIOS setup.
Take a backup of the LPAR (mksysb for rootvg and TSM for datavg) and take a backup
of each VIOS (viosbr).
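A minimal sketch of the backup commands (file names and paths are placeholders):
  # On the LPAR (rootvg image backup)
  mksysb -i /backup/lpar1_rootvg.mksysb
  # On each VIOS, as padmin (configuration backup)
  viosbr -backup -file vios1_cfg
The datavg backup itself is assumed to be handled by the existing TSM schedule.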
Share the current configuration (IBM disk information: number of disks and disk
sizes) with the storage team, and ask them to provide the new Hitachi disks based
on the current configuration.
Execute "cfgmgr"on the vios servers to make sure the newly added Hitachi dis
ks are available with the correct size and note the LUN info.
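For example, on each VIOS (hdisk numbers are placeholders; run the last two
commands from the root shell via oem_setup_env):
  cfgdev                      # padmin equivalent of cfgmgr
  lsdev -type disk            # the new Hitachi LUNs should appear here
  bootinfo -s hdisk10         # size of the disk in MB
  lscfg -vpl hdisk10          # note the LUN / serial information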
Change the disk attributes such as algorithm and reserve_policy.
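A sketch of the attribute change, assuming the default AIX MPIO path-control
module (the exact values depend on the multipath driver/ODM in use):
  chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin
On the VIOS padmin shell the equivalent would be:
  chdev -dev hdisk10 -attr reserve_policy=no_reserve algorithm=round_robin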
Map the disks to the LPAR with the mkvdev command (mkvdev -vdev hdisk# -vadapter
vhost# -dev vtd_name).
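For example (vhost0, hdisk10 and the VTD name are placeholders for your
environment):
  mkvdev -vdev hdisk10 -vadapter vhost0 -dev lpar1_hit_d1
  lsmap -vadapter vhost0      # verify the new VTD mapping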

Go to the LPAR, execute cfgmgr, and confirm the number of disks, the disk sizes,
and that each disk has two paths.
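For example, on the LPAR (hdisk numbers are placeholders):
  cfgmgr
  lspv                        # the new Hitachi disks should show up
  bootinfo -s hdisk4          # confirm the size (MB)
  lspath -l hdisk4            # should show one Enabled path per VIOS (two in total)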
Change the hcheck_interval attribute of the new disks.
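A sketch of the change, assuming AIX MPIO (60 is only an example value; the
attribute name can differ with vendor multipath software):
  chdev -l hdisk4 -a hcheck_interval=60 -P
  # -P updates the ODM only, so it takes effect at the next reboot
  # (needed if the disk is already open)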
Extend the volume group onto the new Hitachi disks (extendvg) and then execute the
migratepv command (nohup migratepv IBM_disk Hitachi_disk &).
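For example (disk names are placeholders):
  extendvg datavg hdisk4              # add the new Hitachi disk to the VG
  nohup migratepv hdisk2 hdisk4 &     # move all PPs off the IBM disk
  lspv hdisk2                         # USED PPs should drop to 0 when it finishes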
Once the migration is completed, take the old IBM disks out of the volume group by
executing the reducevg command (reducevg datavg IBM_disk).
Remove the IBM disks from the LPAR by executing rmdev.
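For example (disk names are placeholders):
  reducevg datavg hdisk2              # remove the emptied IBM disk from the VG
  lspv                                # hdisk2 should now show "None" for the VG
  rmdev -dl hdisk2                    # delete the IBM disk definition from the LPAR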
Now go to the VIOS servers, delete the VTD mappings of the IBM disks, and then
delete the IBM disks.
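For example, on each VIOS as padmin (VTD and hdisk names are placeholders):
  rmvdev -vtd lpar1_ibm_d1            # remove the virtual target device mapping
  rmdev -dev hdisk2                   # delete the IBM disk from the VIOS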
Once you have deleted the IBM disks, ask the storage team to unmap the IBM disks
from their end. Once they have done so, execute cfgmgr and make sure the IBM disks
do not come back.
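For example, after the storage team has unmapped the LUNs:
  cfgmgr                              # cfgdev on the VIOS padmin shell
  lsdev -Cc disk                      # only Hitachi disks should be listed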
Note:
Ask the application team to shut down the app/DB. This is OPTIONAL, for cases
where your management wants to avoid any risk; we can perform the rootvg and
datavg migration on the fly. However, if we perform the storage migration in a
cluster (HACMP) environment, we need to bring the application down (on the cluster
side, bring the resource group offline) to avoid risks.
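For example, on PowerHA/HACMP the resource group can be brought offline through
the C-SPOC Resource Group and Application Management menus in smitty, or, as a
sketch (syntax varies by PowerHA version; resource group and node names are
placeholders):
  /usr/es/sbin/cluster/utilities/clRGmove -g app_rg -n nodeA -d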
We can also perform the storage migration SAN-based; those steps differ slightly
from the host-based method.
We can perform the migration by using the methods below (a rootvg sketch using the
mirrorvg method follows the list).
1. migratepv
2. mirrorvg
3. Alternate disk installation (alt_disk_copy)
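As an illustration of the mirrorvg approach for rootvg (disk names are
placeholders):
  extendvg rootvg hdisk5              # add the new Hitachi disk to rootvg
  mirrorvg rootvg hdisk5              # mirror rootvg onto the new disk
  bosboot -ad /dev/hdisk5             # create a boot image on the new disk
  bootlist -m normal hdisk5 hdisk0    # update the boot list
  unmirrorvg rootvg hdisk0            # after verification, drop the old IBM copy
  reducevg rootvg hdisk0
  bootlist -m normal hdisk5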
Reference codes:
61541,004,000
92632,004,000 ----- FSM error related to inventory
94433,004,000 ---- IMM2
a08vcbr ------ IMM2 hardware ticket
HMC update commands:
updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/updates/HMC_Update_V8R820_SP2.iso -r
updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/recovery_images/HMC_Recovery_V8R830_1.iso -r
updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/updates/HMC_Update_V8R830_SP1.iso -r
Starting with AIX 6100-06 and 7100-00, IBM introduced the AIX Event Infrastructure
for monitoring pre-defined and user-defined system events, such as modification of
a file's content, utilization of a filesystem exceeding a user-defined threshold,
death of a process, or a change in the value of a kernel tunable parameter,
without the high overhead of polling. This infrastructure can automatically notify
registered users or processes instantly about the occurrences of such events, with
information useful for maintaining and improving the health and security of the
running AIX instance. The information provided in the notification includes what
happened, when, who and where, and it can include the whole function call-chain
that triggered the event.
At the core of the AIX Event Infrastructure is a pseudo-filesystem, the Autonomic
Health Advisor FileSystem (AHAFS), which is implemented as a kernel extension.
AHAFS mainly acts as a mediator: it takes requests for event registration,
monitoring and unregistering from the processes interested in monitoring events,
forwards them to the corresponding event producers (the code responsible for
triggering the occurrence of an event) in kernel space, processes the callback
functions when an event occurs, and notifies the registered users or processes
with useful information.
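A minimal sketch of getting the infrastructure in place (assuming AIX 6.1 TL6 or
later with the bos.ahafs fileset installed):
  mkdir /aha
  mount -v ahafs /aha /aha            # mount the AHAFS pseudo-filesystem
  ls /aha/fs                          # monitor factories, e.g. modFile.monFactory, utilFs.monFactory
  # to monitor an event, create <object>.mon under the matching factory, write the
  # wait specification to it, and read/select on the same file descriptor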
3PARmpio.64                   3.1.1.0   APPLIED    3PAR Multipath I/O ODM for IBM
devices.common.IBM.mpio.rte   7.1.3.15  COMMITTED  MPIO Disk Path Control Module
devices.common.IBM.mpio.rte   7.1.3.15  COMMITTED  MPIO Disk Path Control Module
