
Copyright (c) 2024, Oracle. All rights reserved. Oracle Confidential.

How to Replace a Disk in a ZFS rpool for a SPARC Solaris System (Doc ID 2328288.1)

In this Document

Goal
Solution
References

APPLIES TO:

SPARC S7-2L - Version All Versions to All Versions [Release All Releases]
Sun SPARC Enterprise T5220 Server - Version All Versions to All Versions [Release All Releases]
Sun SPARC Enterprise M5000 Server - Version All Versions to All Versions [Release All Releases]
Sun SPARC Enterprise M9000-64 Server - Version All Versions and later
Fujitsu M10-1 - Version All Versions and later
Oracle Solaris on SPARC (64-bit)
This document does not apply to the SuperCluster product. The servers this document applies to should have locally attached disk drives that are not part of a ZFS array.

GOAL

Assume c0t0d0s0 and c0t1d0s0 are mirrored in the ZFS rpool and c0t1d0s0 needs to be replaced in a SPARC system.

SOLUTION

NOTE:

Solaris 10

For the general procedure, please also read the Oracle Solaris ZFS Administration Guide at http://docs.oracle.com:
Chapter 5: Installing and Booting an Oracle Solaris ZFS Root File System
Recovering the ZFS Root Pool or Root Pool Snapshots
How to Replace a Disk in the ZFS Root Pool

Solaris 11

For the general procedure, please also read Managing ZFS File Systems in Oracle Solaris 11.4:
Chapter 6: Managing the ZFS Root Pool
Replacing Disks in a ZFS Root Pool
How to Replace a Disk in a ZFS Root Pool

1. With the above configuration:

# iostat -En
<---snip--->
c0t1d0 Soft Errors: 0 Hard Errors: 35 Transport Errors: 137630
Vendor: SEAGATE Product: ST914602SSUN146G Revision: 0400 Serial No: <SERIAL_NUMBER>
Size: 146.81GB <146810535936 bytes>
<---snip--->

# zpool status -v
<---snip--->
c0t1d0s0 FAULTED 3 6.71K 0 too many errors
<---snip--->

2. Remove disk from ZFS control:

# zpool detach rpool c0t1d0s0   <<< on SPARC Solaris it is always required to specify a slice if the label is SMI

3. Perform physical replacement as follows:

Use the cfgadm command to find the Ap_Id of the disk. The second example shows the output when the disk has a WWN name.
# cfgadm -alv
Example 1:
<---snip--->
c0::dsk/c0t1d0 connected configured unknown SEAGATE ST914602SSUN146G

Example 2:
<---snip--->
c8::w5000cca07d3cb185,0 connected configured unknown Client Device:
/dev/dsk/c0t5000CCA07D3CB184d0s0(sd7).

# cfgadm -c unconfigure c0::dsk/c0t1d0


CAUTION: observe any hardware service requirements. Physically replace the drive, then:
# cfgadm -c configure c0::dsk/c0t1d0

NOTE: possible issue when "unconfigure" does not work.

# cfgadm -c unconfigure c0::dsk/c0t1d0

Returns error -> Device Busy

# fuser /dev/rdsk/c0t1d0s0
# kill -9 <PID>
# cfgadm -c unconfigure c0::dsk/c0t1d0

Still returns error -> Device Busy

In this case, physically remove the drive (do not insert the replacement disk in this step; "cfgadm -c configure c0::dsk/c0t1d0"
would return an I/O error):

# cfgadm -c unconfigure c0::dsk/c0t1d0

Check whether any device links remain:

# ls -l /dev/dsk/c0t1d0s0
If so, clean the device tree with the following:

# devfsadm -Cv

Insert the replacement drive. Solaris should configure the disk automatically; verify with:

# cfgadm -al
# echo | format
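When the unconfigure troubleshooting above stalls on a busy device, the PIDs printed by fuser can be collected for inspection before resorting to kill -9. A minimal sketch, assuming Solaris fuser's documented behavior of writing bare PIDs to standard output (the `pids_from_fuser` helper name is our own, not a Solaris command):

```shell
# Sketch only: pids_from_fuser is our own helper, not a Solaris command.
# Solaris fuser writes bare PIDs to stdout (the file name and code letters
# go to stderr); this splits that output into one PID per line so each
# holder can be examined before any `kill -9`.
pids_from_fuser() {
    printf '%s\n' "$1" | tr -s ' ' '\n' | grep -E '^[0-9]+$'
}

# Live usage (sketch):
#   pids_from_fuser "$(fuser /dev/rdsk/c0t1d0s0 2>/dev/null)"
```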

4. Verify the replaced drive is visible to Solaris (via format, prtvtoc, iostat -En, etc.).
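The visibility check can also be scripted. A minimal sketch, assuming iostat -En output in the shape shown in step 1 (`check_disk_visible` is a hypothetical helper, not a Solaris command):

```shell
# Sketch only: check_disk_visible is our own helper, not a Solaris command.
# It scans captured `iostat -En` text for the named disk and succeeds when
# the device is listed with zero hard errors, as a fresh drive should be.
check_disk_visible() {
    disk="$1"       # e.g. c0t1d0
    output="$2"     # e.g. "$(iostat -En)"
    printf '%s\n' "$output" | grep "^${disk} " | grep -q 'Hard Errors: 0 '
}

# Live usage (sketch):
#   check_disk_visible c0t1d0 "$(iostat -En)" && echo "replacement visible"
```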

5. Label the new disk with format -e (choose "SMI Label" at the "label" prompt):

# format -e c0t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0

Then, if the label type needs to be changed later, rerun format -e and choose the other entry at the same prompt:

# format -e c0t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]:

6. Create partitioning:

# prtvtoc /dev/dsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

The above command applies when the source and target disks are the same size. If the target disk is larger, please
use format to partition it instead.
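As a rough guard before copying the VTOC, the two disks' geometries can be compared from their prtvtoc output. A minimal sketch, assuming SMI-labeled disks whose prtvtoc output includes the "accessible cylinders" dimension line (`same_geometry` is our own helper, not an Oracle-supplied tool):

```shell
# Sketch only: same_geometry is our own helper, not an Oracle tool.
# It compares the "accessible cylinders" counts of two captured
# prtvtoc outputs; the VTOC copy should only proceed when they match.
same_geometry() {
    cyl1=$(printf '%s\n' "$1" | awk '/accessible cylinders/ {print $2}')
    cyl2=$(printf '%s\n' "$2" | awk '/accessible cylinders/ {print $2}')
    [ -n "$cyl1" ] && [ "$cyl1" = "$cyl2" ]
}

# Live usage (sketch):
#   same_geometry "$(prtvtoc /dev/dsk/c0t0d0s2)" "$(prtvtoc /dev/dsk/c0t1d0s2)" \
#       && prtvtoc /dev/dsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
```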

7. Re-attach the sub-mirror:

# zpool attach rpool c0t0d0s0 c0t1d0s0   <<< if you do not specify a slice you will get an EFI warning

8. Let ZFS resilver the newly attached mirror:

# zpool status -v
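Resilver completion can also be polled from the zpool status text. A minimal sketch, keyed on the "resilver in progress" line Solaris prints while syncing (`resilver_done` is our own helper, not a Solaris command):

```shell
# Sketch only: resilver_done is our own helper, not a Solaris command.
# It inspects captured `zpool status` text and succeeds once no
# "resilver in progress" line remains.
resilver_done() {
    printf '%s\n' "$1" | grep -q 'resilver in progress' && return 1
    return 0
}

# Live usage (sketch): poll until the mirror is back in sync.
#   while ! resilver_done "$(zpool status rpool)"; do sleep 60; done
```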

9. If the server is running Solaris 11, ZFS installs the boot loader during the resilver. Should a boot block need to be installed
manually, bootadm must be used.

If server is running Solaris 10 you must install the boot loader in the replaced disk:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

REFERENCES
NOTE:2328288.1 - How to Replace a Disk in a ZFS rpool for a SPARC Solaris System
NOTE:1010753.1 - Solaris Volume Manager (SVM) How to Replace Internal System FC-AL Disks in 280R, V480, V490, V880, V890, and E3500 Servers
NOTE:1362952.1 - How to Replace a Disk in a rpool for an x86 System