Solaris 10 ZFS Essentials
Copyright
Preface
Acknowledgments
About the Author
Chapter 1. Introducing ZFS File Systems
Section 1.1. Overview of ZFS
Section 1.2. Fast and Simple Storage
Section 1.3. ZFS Commands
Chapter 2. Managing Storage Pools
Section 2.1. ZFS Pool Concepts
Section 2.2. Creating a Dynamic Stripe
Section 2.3. Creating a Pool with Mirrored Devices
Section 2.4. Creating a Pool with RAID-Z Devices
Section 2.5. Creating a Spare in a Storage Pool
Section 2.6. Adding a Spare Vdev to a Second Storage Pool
Section 2.7. Replacing Bad Devices Automatically
Section 2.8. Locating Disks for Replacement
Section 2.9. Example of a Misconfigured Pool
Chapter 3. Installing and Booting a ZFS Root File System
Section 3.1. Simplifying (Systems) Administration Using ZFS
Section 3.2. Installing a ZFS Root File System
Section 3.3. Creating a Mirrored ZFS Root Configuration
Section 3.4. Testing a Mirrored ZFS Root Configuration
Section 3.5. Creating a Snapshot and Recovering a ZFS Root File System
Section 3.6. Managing ZFS Boot Environments with Solaris Live Upgrade
Section 3.7. Managing ZFS Boot Environments (beadm)
Section 3.8. Upgrading a ZFS Boot Environment (beadm)
Section 3.9. Upgrading a ZFS Boot Environment (pkg)
Section 3.10. References
Chapter 4. Managing ZFS Home Directories
Section 4.1. Managing Quotas and Reservations on ZFS File Systems
Section 4.2. Enabling Compression on a ZFS File System
Section 4.3. Working with ZFS Snapshots
Section 4.4. Sharing ZFS Home Directories
Section 4.5. References
Chapter 5. Exploring Zpool Advanced Concepts
Section 5.1. X4500 RAID-Z2 Configuration Example
Section 5.2. X4500 Mirror Configuration Example
Section 5.3. X4500 Boot Mirror Alternative Example
Section 5.4. ZFS and Array Storage
Chapter 6. Managing Solaris CIFS Server and Client
Section 6.1. Installing the CIFS Server Packages
Section 6.2. Configuring the SMB Server in Workgroup Mode
Section 6.3. Sharing Home Directories
Chapter 7. Using Time Slider
Section 7.1. Enabling Time Slider Snapshots
Section 7.2. Enabling Nautilus Time Slider
Section 7.3. Modifying the Snapshot Schedule
Section 7.4. Setting the Snapshot Schedule per File System
Chapter 8. Creating a ZFS Lab in a Box
Section 8.1. Creating Virtual Disks with Virtual Media Manager
Section 8.2. Registering a CD Image with Virtual Media Manager
Section 8.3. Creating a New Virtual Machine
Section 8.4. Modifying the New Virtual Machine
Section 8.5. Installing an OS on a Virtual Machine
Section 8.6. Installing Virtual Box Tools
Index
Scott Watanabe
systems, snapshots, directory entries, devices, and more. And ZFS implements an
improvement on RAID-5 by introducing RAID-Z, which uses parity, striping, and
atomic operations to ensure reconstruction of corrupted data. It is ideally suited
for managing industry-standard storage servers.
The following sections talk more about the other two books in the series.
Intended Audience
The three books in the Solaris System Administration Series can benefit anyone
who wants to learn more about the Solaris 10 operating system. They are written
to be particularly accessible to system administrators who are new to Solaris, as
OpenSolaris
Any good book always results from more than just the effort of the attributed author,
and this book is far from being an exception. I have benefited from the tremendous
support of Cindy Swearingen and Jeff Ramsey. Cindy provided insight and guidance
without which this book simply would not have been possible. She critiqued my
ideas and writing diplomatically, and she offered great ideas of her own.
Jeff took my rough drawings and created detailed illustrations, upon which the
figures in this book are based. If they are not clear in any way, that is no fault of
Jeff’s. As always, he was a pleasure to work with.
I would like to thank Todd Lowry, Uzzal Dewan, and the rest of the staff at
Sun’s RLDC for allowing me access to equipment for testing and showing me how
they use ZFS in their operation. Thanks to Dean Kemp for being my technical
sounding board and giving me the opportunity to use my knowledge of ZFS to resolve
some real-world problems.
A big thank you to Jim Siwila for jumping in with editorial support at critical
points during challenging times at Sun. Thanks to Judy Hall at Sun for her sup-
port and presenting me with the opportunity to write this book.
To my family and friends, thank you for your understanding and support.
Scott Watanabe is a freelance consultant with more than twenty-five years in the
computer/IT industry. Scott has worked for Sun Microsystems for more than
eleven years and has worked with Sun technology since the mid-1980s. Scott has
also worked as a systems administrator for more than ten years and as a systems
manager for a few of those years. While at Sun, Scott worked as a chief architect
for Utility Computing, a backline engineer in Network Technical Support, and a
lead course developer for Internal Technical Training.
This chapter introduces ZFS, a new kind of file system from Sun Microsystems that
is elegant in its simplicity, particularly in the way that you can easily manage data
and storage. The ZFS file system is available in the following Solaris releases:
Solaris 10 6/06 release (ZFS root and install features available in Solaris 10
10/08 release)
OpenSolaris 2008.11 release (includes ZFS root and install features)
ZFS supports the following redundancy configurations:
Mirror
RAID-Z (single parity), also called RAID-Z1
RAID-Z2 (double parity)
ZFS makes it easy to set up file system quotas or reservations, turn compres-
sion on or off, or set other characteristics per file system.
ZFS provides constant-time snapshots so that an administrator or users can
roll back copies of the data.
ZFS makes it possible to delegate administration directly to users. Using the
RBAC facility, specific ZFS administration tasks can be handed off to nonroot users.
ZFS provides automation of mundane tasks. Once a ZFS file system is cre-
ated, Solaris will remember to mount the file system on reboot.
ZFS provides a history of commands used on a pool or file system.
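For example, the history of an existing pool can be reviewed with the zpool history subcommand; the pool name and timestamp below are illustrative:

$ zpool history mpool
History for 'mpool':
2009-04-24.01:15:32 zpool create mpool mirror c5t2d0 c5t3d0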
To get a better understanding of ZFS, it helps to see the whole before talking about
the parts of ZFS. ZFS can be split into two major parts: the ZFS pool and the ZFS
datasets. The zpool command manages the ZFS pool, and the zfs command man-
ages the ZFS datasets. Each of these commands has subcommands that manage
each part from cradle to grave.
The fundamental base of ZFS is the ZFS pool. A pool is the primary storage ele-
ment that supports the ZFS datasets, as illustrated in Figure 1.1. The pool acts
much like the memory of a computer system. When a dataset needs another
chunk, ZFS will allocate more storage from the pool. When a file is deleted, the free
storage returns to the pool for all datasets in the pool to use.
A ZFS pool can contain many ZFS datasets, as shown in Figure 1.2. Each
dataset can grow and shrink as needed. Using snapshots, quotas, and reservations
can affect how much each dataset can consume of the ZFS pool.
ZFS supports three types of datasets:
ZFS file system: A file system that is mounted for normal usage (such as
home directories or data storage)
Volume: Raw volumes that can be used for swap and dump partitions in a
ZFS boot configuration
Clone: A copy of a ZFS file system or volume
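As a rough sketch of the three dataset types (the dataset and pool names here are hypothetical), a file system, a volume, and a clone could be created as follows:

$ pfexec zfs create mpool/home                  # file system, mounted automatically
$ pfexec zfs create -V 1g mpool/vol1            # 1GB volume, exposed under /dev/zvol
$ pfexec zfs snapshot mpool/home@now            # a clone is always made from a snapshot
$ pfexec zfs clone mpool/home@now mpool/home2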
At a minimum, a single virtual device (vdev) is needed for a ZFS pool (see
Figure 1.3). A vdev can comprise a file, a slice of a disk, the whole disk, or even a
logical disk from a hardware array or another volume product such as Solaris Vol-
ume Manager. A vdev supports three redundancy configurations: mirror, RAID-Z,
and RAID-Z2.
In Figure 1.4, the vdev contains a mirrored array made up of two whole disks.
ZFS can handle disks of similar size but different disk geometries. This feature is
good for a home system, where if a disk fails, the replacement disk doesn’t have to
be of the same make and model. The only requirement of a replacement disk is
that it must be the same capacity or larger.
Increasing the storage of a ZFS pool is simple. You just add another vdev. In
Figure 1.5, a second mirrored vdev is concatenated to the pool. You can increase a
ZFS pool at any time, and you don’t need to adjust any of the datasets contained
within the pool.
Once a ZFS pool is expanded, the datasets in the pool can take advantage of the
increased size. ZFS distributes new writes across all the disk spindles (see Figure 1.6).
ZFS was designed to be fast and simple with only two commands to remember:
zpool and zfs. You can create a ZFS pool and have a writable file system in one
step. To create the configuration in Figure 1.4, you just execute the following
command:
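A sketch of that command line follows, assuming the two mirrored disks are c5t2d0 and c5t3d0 (the same devices used for mpool in Chapter 2); the timing output is illustrative:

$ time pfexec zpool create mpool mirror c5t2d0 c5t3d0

real    0m6.069s
user    0m0.042s
sys     0m0.351s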
By using the time command, you can see it took six seconds to create a new
ZFS pool. In this example you used the pfexec command to execute the zpool
command as root. You can then view the status of the new ZFS pool by running the
following command:
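The status check would look something like this sketch of zpool status output for a two-disk mirror:

$ zpool status mpool
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors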
With the previous code, you have created a new pool called mpool with a new
virtual device that consists of two disks in mirror configuration. At this point, you
can write to the newly mounted file system:
$ df -h | grep mpool
mpool 85M 18K 85M 1% /mpool
$ pfexec touch /mpool/j
$ ls -l /mpool/j
-rw-r--r-- 1 root root 0 2009-04-24 01:20 /mpool/j
Just by using one command line, you created a mirrored pool and mounted it.
ZFS has only two commands. The zpool command creates, modifies, and destroys
ZFS pools. The zfs command creates, modifies, and destroys ZFS file systems. In
this book, Chapters 2 and 5 are dedicated to administration of ZFS pools, and the
other chapters use the zfs command as part of the administrative tasks.
ZFS storage pools are the basis of the ZFS system. In this chapter, I cover the basic
concepts, configurations, and administrative tasks of ZFS pools. In Chapter 5, I
cover advanced configuration topics in ZFS pool administration.
The zpool command manages ZFS storage pools. The zpool command creates,
modifies, and destroys ZFS pools.
Redundant configurations supported in ZFS are mirror (RAID-1), RAID-Z (simi-
lar to RAID-5), and RAID-Z2 with double parity (similar to RAID-6). All tradi-
tional RAID-5-like algorithms (RAID-4, RAID-6, RDP, and EVEN-ODD, for
example) suffer from a problem known as the RAID-5 write hole. If only part of a
RAID-5 stripe is written and power is lost before all blocks have made it to disk, the
parity will remain out of sync with the data (and is therefore useless) forever, unless
a subsequent full-stripe write overwrites it. In RAID-Z, ZFS uses variable-width
RAID stripes so that all writes are full-stripe writes. This design is possible only
because ZFS integrates file system and device management in such a way that the
file system’s metadata has enough information about the underlying data redun-
dancy model to handle variable-width RAID stripes. RAID-Z is the world’s first
software-only solution to the RAID-5 write hole.
Log and cache vdevs are used with solid-state disks (SSDs) in the Sun Storage
7000 series in hybrid storage pools (HSPs; see http://blogs.sun.com/ahl/
entry/fishworks_launch).
Best-practice guidelines for ZFS pools include the following:
Creating/adding new vdevs to a ZFS pool is the most unforgiving part about ZFS.
Once committed, some operations cannot be undone. The zpool command will warn
you, however, if the operation is not what’s expected. There is a force option in
zpool to bypass any of the warnings, but it is not recommended that you use the
force option unless you are sure you will not need to reverse the operation.
These are the rules for ZFS pools:
A dynamic stripe is the most basic pool that can be created. There is no redun-
dancy in this configuration. If any disk fails, then the whole pool is lost. Any pool
created with multiple vdevs will dynamically stripe across each vdev or physical
device.
You can use the zpool command with the create subcommand to create a dynamic
stripe. After the create subcommand is the name of the new ZFS pool, dstripe,
and the disks that will comprise the pool.
# zpool create dstripe c5t0d0 c5t1d0
The following listing presents the results of creating the ZFS pool dstripe. On
line 2, zpool list is executed to list all the ZFS pools on the system. Line 3
starts a list of the available pools. The command gives you general information
about the ZFS pools.
Next on line 6, zpool status is issued to inquire about the status of the ZFS
pools. Starting at line 7, the status of the ZFS pool dstripe is displayed, with a nor-
mal status. Reading the config: section of the output starting at line 10, the pool
dstripe is shown as two concatenated disks. Lines 14 and 15 list the vdevs (c5t0d0
and c5t1d0) that belong to the pool dstripe. The second pool listed is made of a sin-
gle disk called rpool, configured as a dynamic stripe with only a single vdev
(c3d0s0). It was created as part of the OS installation process.
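The first lines of that listing would look something like the following sketch; the capacity figures are borrowed from the similar zpool list output shown later in this chapter:

1 # zpool create dstripe c5t0d0 c5t1d0
2 # zpool list
3 NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
4 dstripe   234M    75K   234M     0%  ONLINE  -
5 rpool    15.9G  3.21G  12.7G    20%  ONLINE  -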
6 # zpool status
7 pool: dstripe
8 state: ONLINE
9 scrub: none requested
10 config:
11
12 NAME STATE READ WRITE CKSUM
13 dstripe ONLINE 0 0 0
14 c5t0d0 ONLINE 0 0 0
15 c5t1d0 ONLINE 0 0 0
16
17 errors: No known data errors
18
19 pool: rpool
20 state: ONLINE
21 scrub: none requested
22 config:
23
24 NAME STATE READ WRITE CKSUM
25 rpool ONLINE 0 0 0
26 c3d0s0 ONLINE 0 0 0
27
28 errors: No known data errors
Figure 2.1 illustrates the resulting dynamic pool with its two vdevs of single
nonredundant disks. Any problems with the disks (sector errors or disk failure)
may result in the loss of the whole pool or data.
The following command creates a ZFS mirrored pool called mpool with a mirrored
vdev. As expected, the new pool mpool is about half the capacity of dstripe. The
pool dstripe is a concatenation of two disks of the same capacity, whereas mpool
mirrors two disks of that capacity, so only one disk's worth of space is usable.
The following command line shows how to use the zpool command with the
subcommand create to create a pool with mirrored vdevs. After the create sub-
command is the name of the new ZFS pool, mpool. The mirror keyword will
create a mirrored vdev with the disks c5t2d0 and c5t3d0.
# zpool create mpool mirror c5t2d0 c5t3d0
The following output is the creation of a mirrored ZFS pool called mpool. Using
the zpool list command, you now can see the capacity of the newly created pool
on line 5. Notice it is half the capacity of the dstripe pool.
1 # zpool create mpool mirror c5t2d0 c5t3d0
2 # zpool list
3 NAME SIZE USED AVAIL CAP HEALTH ALTROOT
4 dstripe 234M 75K 234M 0% ONLINE -
5 mpool 117M 73.5K 117M 0% ONLINE -
6 rpool 15.9G 3.21G 12.7G 20% ONLINE -
Starting at line 20 (in the following part of the listing) is the status information of
mpool. The disks c5t2d0 and c5t3d0 are configured as a mirror vdev, and the mirror
is part of mpool. This is an important concept in reading the status information of
ZFS pools. Notice the indentations on the pool name notations. The first is the name
of the ZFS pool. Then at the first indentation of the name are the vdevs that are part
of the pool. In the case of a dynamic stripe, this is the physical device. If the pool is
created with redundant vdev(s), the first indentation will be mirror, raidz1, or
raidz2. Then the next indentation will be the physical devices that comprise the
redundant vdev. On lines 13 and 25 are the names dstripe and mpool, respectively.
On lines 14 and 15 are the disks that belong to dstripe on the first indentation. On
line 25 is the mirror configuration, and the next indented line lists the disks belong-
ing to the mirror configuration. Compare Figures 2.1 and 2.2 for a graphical repre-
sentation of each pool’s configuration.
7 # zpool status
8 pool: dstripe
9 state: ONLINE
10 scrub: none requested
11 config:
12
13 NAME STATE READ WRITE CKSUM
14 dstripe ONLINE 0 0 0
15 c5t0d0 ONLINE 0 0 0
16 c5t1d0 ONLINE 0 0 0
17
18 errors: No known data errors
19
20 pool: mpool
21 state: ONLINE
22 scrub: none requested
23 config:
24 NAME STATE READ WRITE CKSUM
25 mpool ONLINE 0 0 0
26 mirror ONLINE 0 0 0
27 c5t2d0 ONLINE 0 0 0
28 c5t3d0 ONLINE 0 0 0
29
30 errors: No known data errors
31
32 pool: rpool
33 state: ONLINE
34 scrub: none requested
35 config:
36
37 NAME STATE READ WRITE CKSUM
38 rpool ONLINE 0 0 0
39 c3d0s0 ONLINE 0 0 0
40 errors: No known data errors
Figure 2.2 illustrates the results of creating the ZFS pool mpool with one mir-
rored vdev.
The following sequence adds a second mirror vdev to the pool, doubling the size
of pool mpool. Notice that after the addition of the second mirror vdev, the dstripe
and mpool pools are the same capacity. Line 30 is the mirrored vdev with the phys-
ical disks c5t4d0 and c5t5d0 indented.
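The commands behind that sequence would look something like the following sketch; the status excerpt shows only the mpool portion, and the numbered lines of the original listing are omitted:

# zpool add mpool mirror c5t4d0 c5t5d0
# zpool status mpool
  pool: mpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0

errors: No known data errors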
Figure 2.3 shows the results of adding a second mirrored vdev to ZFS pool
mpool.
In ZFS you can also create redundant vdevs similar to RAID-5, called RAID-Z. To
create a pool with double parity, you would use RAID-Z2 instead.
The following command line will create a ZFS pool named rzpool with two
RAID-Z1 vdevs, each with four disks:
# zpool create rzpool raidz1 c5t6d0 c5t7d0 c5t8d0 c5t9d0 raidz1
c5t10d0 c5t11d0 c5t12d0 c5t13d0
The first RAID-Z1 vdev consists of disks c5t6d0, c5t7d0, c5t8d0, and c5t9d0, and
the second RAID-Z1 vdev has c5t10d0, c5t11d0, c5t12d0, and c5t13d0 as members.
The zpool list command shows the summary status of the new pool rzpool.
The output of zpool status, on lines 16 and 21, shows the RAID-Z1 virtual
devices and physical disk devices that make up each RAID-Z1 virtual device.
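A sketch of that status output follows (without the original line numbering; in the numbered listing, the two raidz1 rows fall on lines 16 and 21):

# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        rzpool       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t6d0   ONLINE       0     0     0
            c5t7d0   ONLINE       0     0     0
            c5t8d0   ONLINE       0     0     0
            c5t9d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c5t10d0  ONLINE       0     0     0
            c5t11d0  ONLINE       0     0     0
            c5t12d0  ONLINE       0     0     0
            c5t13d0  ONLINE       0     0     0

errors: No known data errors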
Figure 2.4 has two RAID-Z1 vdevs with four physical disks to each vdev. The
vdevs are concatenated to form the ZFS pool rzpool.
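A spare is added to an existing pool with the zpool add subcommand. The listing that follows, which picks up at line 12, reflects a command along these lines (the disk name matches the spare shown in the listing):

# zpool add mpool spare c5t14d0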
12 config:
13
14 NAME STATE READ WRITE CKSUM
15 mpool ONLINE 0 0 0
16 mirror ONLINE 0 0 0
17 c5t2d0 ONLINE 0 0 0
18 c5t3d0 ONLINE 0 0 0
19 mirror ONLINE 0 0 0
20 c5t4d0 ONLINE 0 0 0
21 c5t5d0 ONLINE 0 0 0
22 spares
23 c5t14d0 AVAIL
24
25 errors: No known data errors
In Figure 2.5, an additional vdev is added to the ZFS pool mpool. A spare vdev
can be shared with multiple ZFS pools. The spares must have at least the same
capacity as the smallest disk in the other vdevs.
As indicated earlier in the chapter, you can share a spare vdev with other pools on
the same system. In the following example, the spare vdev (c5t14d0) disk is shared
with ZFS pool rzpool. After you add the spare to the rzpool, the spare disk now
appears in ZFS pools mpool and rzpool, namely, on lines 16 and 17 and lines 38
and 39.
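The command to share the existing spare would look something like this sketch:

# zpool add rzpool spare c5t14d0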
In Figure 2.6, the spare vdev is shared between the mpool and rzpool ZFS pools.
Each pool can use the spare when needed.
This allows the administrator to replace the failed drive at a later time. The man-
ual disk replacement procedure is covered later in this chapter.
If you list the properties of the ZFS pool, you can see that the autoreplace fea-
ture is turned off using the get subcommand.
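A sketch of checking and enabling that property (the pool name follows the earlier examples):

$ zpool get autoreplace mpool
NAME   PROPERTY     VALUE  SOURCE
mpool  autoreplace  off    default
$ pfexec zpool set autoreplace=on mpool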
On line 3, the state of the pool has been degraded, and line 4 tells you that the
pool can continue in this state. On lines 6 and 7, ZFS tells you what actions you will
need to take, and by going to the Web site, a more detailed message tells you how to
correct the problem. Line 19 tells you that the spare disk has been resilvered with
disk c5t5d0. Line 22 now gives you the new status of the spare disk c5t14d0.
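The degraded status being described would look something along these lines; this is an illustrative sketch, and line numbers and message text vary by release:

# zpool status mpool
  pool: mpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist
        for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors
config:

        NAME            STATE     READ WRITE CKSUM
        mpool           DEGRADED     0     0     0
          mirror        ONLINE       0     0     0
            c5t2d0      ONLINE       0     0     0
            c5t3d0      ONLINE       0     0     0
          mirror        DEGRADED     0     0     0
            spare       DEGRADED     0     0     0
              c5t4d0    UNAVAIL      0     0     0  cannot open
              c5t14d0   ONLINE       0     0     0
            c5t5d0      ONLINE       0     0     0
        spares
          c5t14d0       INUSE     currently in use

errors: No known data errors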
The original definition of resilver is the process of restoring a glass mirror with
a new silver backing. In ZFS, it is a re-creation of data by copying from one disk to
another. In other volume management systems, the process is called resynchroni-
zation. Continuing the example, you can shut down the system and attach a new
disk in the same location of the missing disk. The new disk at location c5t4d0 is
automatically resilvered to the mirrored vdev, and the spare disk is put back to an
available state.
If the original disk is reattached to the system, ZFS does not handle this case
with the same grace. Once the system is booted, the original disk must be detached
from the ZFS pool. Next, the spare disk (c5t14d0) must be replaced with the origi-
nal disk. The last step is to place the spare disk back into the spares group.
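One possible command sequence for that cleanup, sketched with the device names from this example (check zpool status after each step, because the exact behavior differs slightly between releases):

# zpool detach mpool c5t4d0            # detach the reattached original disk
# zpool replace mpool c5t14d0 c5t4d0   # swap the in-use spare out for the original
# zpool add mpool spare c5t14d0        # return the disk to the spares group if needed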
1. Start the format command, and select the disk you want to locate by select-
ing the number before the drive.
2. Type analyze, and hit the Enter/Return key.
3. Select read test by typing read, and then hit Enter/Return.
4. Look at the array, and find the disk LED with constant access.
5. Once the disk is located, stop the read test by pressing Ctrl+C.
6. To exit the format utility, type quit and hit Return, and then type quit and
hit Return again.
7. Replace the disk according to manufacturer’s instructions.
If the damaged disk is not seen by format, try to light up the LEDs of the disks
above and below the target number of the damaged disk. For example, if you were
looking to find c5t6d0 and it was not seen by the format command, you would first
locate c5t5d0 by using the format read test and then locate c5t7d0 by using the
same method. The drive in between should be c5t6d0.
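A condensed sketch of such a format session (disk numbers, drive descriptions, and prompts vary by system):

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c5t6d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
          ...
Specify disk (enter its number): 0
format> analyze
analyze> read
Ready to analyze (won't harm SunOS). This takes a long time,
but is interruptable with CTRL-C. Continue? y
^C
analyze> quit
format> quit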
The following is an example of a ZFS pool that started as a RAID-Z pool and
needed more space. The administrator just added the new disks to the pool. The
status output follows.
$ zpool status -v
pool: mypool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire pool
from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver completed with 0 errors on Tue Nov 18 17:05:10 2008
config:
mypool/dataset/u01:<0x2a07a>
mypool/dataset/u01:<0x2da66>
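Under the config: heading of that same listing, a pool put together this way would show a layout something like the following sketch. Only c2t9d0 through c2t13d0 are named in the text; the RAID-Z member and spare names here are hypothetical, and the two object IDs above would normally appear under an errors: section listing permanent errors.

        NAME         STATE     READ WRITE CKSUM
        mypool       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0
          c2t9d0     ONLINE       0     0     0
          c2t10d0    ONLINE       0     0     0
          c2t11d0    ONLINE       0     0     0
          c2t12d0    ONLINE       0     0     0
          c2t13d0    ONLINE       0     0     0
        spares
          c2t14d0    AVAIL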
c2t9d0, c2t10d0, c2t11d0, c2t12d0, and c2t13d0 are single-disk vdevs. They have
no redundancy. The spare vdev will not help because there is nothing to resilver
with when one of the single-disk vdevs fails. In this case, ZFS has detected perma-
nent errors in mypool/dataset/u01 and has suggested an action of restoring the
files from backup or restoring the whole pool from backup.
The ZFS pool is in danger: a single disk failure may destroy the whole pool.
The quickest way to make this pool redundant is to mirror all the single-disk vdevs.
Figure 2.7 represents the ZFS pool mypool. The figure shows five single-disk
vdevs with a single RAID-Z vdev. Any damage to the single disks will cause ZFS to
lose data, or, worse, the pool will fail.
You can correct this configuration for pool redundancy in two ways:
If there are enough disks, you can mirror the single-disk vdevs.
You can back up all the file systems and re-create the pool. Then do a full
restore of the file systems.
Figure 2.7 The ZFS pool mypool has a single RAID-Z vdev and five single-disk vdevs
This chapter describes how to install and boot a ZFS root file system and how to
recover from a root pool disk failure. Upgrading and patching ZFS boot environ-
ments are also covered.
The capability to install and boot a ZFS root file system is available in the fol-
lowing Solaris releases:
ZFS installation and boot features simplify the administration process. ZFS file
systems no longer need a partition layout to be planned prior to the installation of the Solaris OS.
With traditional file systems, the following questions typically need to be
answered:
How large does the root (/) file system need to be?
How large does /usr need to be?
How large does the swap partition need to be?
How large does the /var partition need to be?
How much disk space needs to be allocated to use Solaris Live Upgrade?
Will the current requirements change in 6 to 12 months?
Quick rollbacks
Less downtime
Quick cloning
Safer patching
Safer upgrades
Simple administration
The Solaris 10 installer supports two modes: a normal install and a binary image install called Flash. Currently, Flash
installation is not supported for ZFS. An unsupported workaround is provided at
http://blogs.sun.com/scottdickson/entry/a_much_better_way_to.
A supported but limited ZFS and Flash solution should be available in an
upcoming Solaris 10 release.
By default, the OpenSolaris release 2008.11 installs a ZFS root file system. No
option exists for installing a UFS root file system.
Memory requirements
768MB RAM minimum
1GB RAM recommended for better overall ZFS performance
Disk requirements
16GB is the minimum required to install a ZFS boot OS.
Maximum swap and dump sizes are based on physical memory and kernel
needs at boot time but are generally sized no more than 2GB by the instal-
lation program.
The ZFS storage pool must be a nonredundant disk slice configuration or a
mirrored disk slice configuration with the following characteristics:
• There are no RAID-Z or RAID-Z2 vdevs and no separate log devices.
• Disks must be labeled SMI.
• Disks must be less than 1TB.
• Disks on x86 systems must have an fdisk partition.
Compression can be enabled; the default algorithm is lzjb. The gzip
algorithm is also available, and its compression level is selectable.
Consider using one root pool and one nonroot pool for data management on
larger systems where the performance of faster hardware is a factor in deter-
mining the best configuration.
Use Solaris Live Upgrade to upgrade and patch ZFS boot environments.
Keep root pool snapshots for recovery purposes.
3. Select the Solaris Interactive Text installation option from the menu, either
selection 3 or 4 (see Figure 3.2).
4. Continue answering questions according to your environment.
5. Select ZFS as the root file system to be installed (see Figure 3.3).
6. In the form shown in Figure 3.4, adjust the size of the swap and dump
devices, if necessary.
7. Select the option to place /var on a separate dataset.
Figure 3.4 Form to adjust swap and dump devices and to select a
separate /var partition
You can create a mirrored ZFS root pool in three basic steps after installation
is complete. Alternatively, a mirrored root pool can be installed from a JumpStart
installation or during the initial installation process by selecting the disks to be
components of the mirrored ZFS root pool.
1. Copy the partition information from the source disk to the target disk.
a. Label the disk as SMI. EFI labels are not supported for boot disks.
# fdisk –B /dev/rdsk/c3d1p0
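The remaining sub-steps would look something like the following sketch. The device names follow the fdisk example above, rpool is assumed to be the root pool name, and the installgrub step applies to x86 systems:

# prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2
# zpool attach rpool c3d0s0 c3d1s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0

The attach starts a resilver; wait for zpool status to report it complete before relying on the second disk.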
rpool/export/home/watanabe@backup 258K - 22.4M -
rpool/swap@backup 0 - 16K -
rzpool/export/home/watanabe@tx 95.8K - 11.0M -
3.5.2 Sending the ZFS Root Pool Snapshots to Storage
Sending root pool snapshots to storage can take some time to complete. Depending
on disk, server, or network speeds, this process can consume a few hours. The fol-
lowing example shows how to send the snapshot to a different ZFS storage pool on
the same system:
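A sketch of such a transfer, assuming the recursive @backup snapshots shown above and a hypothetical destination dataset rzpool/snaps:

# zfs send -Rv rpool@backup | zfs receive -Fd rzpool/snaps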
You can use Solaris Live Upgrade to manage boot environments in the Solaris 10
release. Solaris Live Upgrade copies the current operating environment, and then
you can apply an OS upgrade or a patch cluster to this boot environment. Once the
upgrade process is completed, the new boot environment is activated and booted
with minimal downtime. You can roll back to the previous environment by activat-
ing the previous boot environment and rebooting. Starting in the Solaris 10 10/08
release, you can use Solaris Live Upgrade to migrate a UFS root file system to a
ZFS root file system.
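The typical upgrade cycle, sketched with the boot environment name used later in this chapter and a hypothetical install image path:

# lucreate -n 10u6be-1
# luupgrade -u -n 10u6be-1 -s /net/install/export/solaris_10u6
# luactivate 10u6be-1
# init 6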
3.6.1 Migrating a UFS Root File System to a ZFS Root File System
The following procedure describes how to migrate to a ZFS root file system by
using Solaris Live Upgrade. The one-way migration is supported for the UFS
root file system components only. To migrate nonroot UFS file systems, use the
ufsdump and ufsrestore commands to copy UFS data and restore the data
into a ZFS file system.
The ZFS root file system migration involves three major steps:
The following example uses a UFS root file system on an x86-based system. The
primary disk is c0d0s0, and the target update disk is c0d1.
1. With the disk labeled as SMI, create a new s0 partition and a new ZFS pool:
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 2085 + 2 (reserved cylinders)
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
partition> label
Ready to label disk, continue? y
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> l
Ready to label disk, continue? y
format> q
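With the slice labeled, the new pool and boot environment would be created and activated with commands along these lines; rpool is the assumed name of the new ZFS root pool, and the boot environment names match the lustatus output shown later:

# zpool create rpool c0d1s0
# lucreate -c c0d0s0 -n 10u6be-zfs -p rpool
# luactivate 10u6be-zfs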
************************************************************
The target boot environment has been activated. It will be used when you reboot.
NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the
init or the shutdown command when you reboot. If you do not use either init or shutdown,
the system will not boot using the target BE.
************************************************************
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like /mnt).
3. Run <luactivate> utility without any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
************************************************************
Notice the difference in the lustatus output. After the luactivate operation,
the Active On Reboot flag changed for boot environment 10u6be-zfs.
Done!
...Deleted Output…
************************************************************
The target boot environment has been activated. It will be used when you reboot.
NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the
init or the shutdown command when you reboot. If you do not use either init or shutdown,
the system will not boot using the target BE.
************************************************************
In case of a failure while booting to the target BE, the following process needs
to be followed to fallback to the currently working boot environment:
1. Boot from Solaris failsafe or boot in single user mode from the Solaris Install
CD or Network.
2. Mount the Parent boot environment root slice to some directory (like /mnt).
You can use the following command to mount:
3. Run <luactivate> utility without any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
************************************************************
5. Check the status of the Active On Reboot flag for the new ZFS boot
environment:
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
------------------- -------- ------ --------- ------ -------
c0d0s0 yes no no yes -
10u6be-zfs yes yes no no -
10u6be-1 yes no yes no -
bash-3.00# init 0
updating /platform/i86pc/boot_archive
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level
dataset for BE
<10u6be-1> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
Creating a new boot environment before an upgrade reduces risk by enabling you to revert to the prior boot environment.
Upgrade a ZFS boot environment using beadm by following these steps:
1. Create a new boot environment:
$ pfexec beadm create opensolaris1
2. Mount the new boot environment. For example, mount the new boot environ-
ment at /mnt:
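A sketch of that mount, plus the image update that would follow it on OpenSolaris (the boot environment name matches step 1):

$ pfexec beadm mount opensolaris1 /mnt
$ pfexec pkg -R /mnt image-update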
1 $ beadm list
2 BE Active Mountpoint Space Policy Created
3 -- ------ ---------- ----- ------ -------
4 opensolaris NR / 2.46G static 2009-01-29 23:32
5 opensolaris1 - /mnt 1.41G static 2009-01-30 01:43
6 watanabe@opensolaris:~$
7 $ pfexec beadm activate opensolaris1
8 $ beadm list
9 BE Active Mountpoint Space Policy Created
10 -- ------ ---------- ----- ------ -------
11 opensolaris N / 68.50M static 2009-01-29 23:32
12 opensolaris1 R /mnt 3.80G static 2009-01-30 01:43
$ cat /etc/release
OpenSolaris 2009.04 snv_105 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 22 December 2008
$ beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
opensolaris - - 122.34M static 2009-01-29 23:32
opensolaris1 NR / 3.95G static 2009-01-30 01:43
3.10 References
You can find more information at the following locations:
This chapter describes how to use ZFS file systems for users’ home directories. Man-
aging home directories with ZFS is very easy:
Quotas and reservations are easier to manage than in other file systems.
Individual users can recover their own files from snapshots, leaving the
administrator to do other work.
Compression on file systems enables better disk management.
File system properties can be assigned depending on users’ needs.
Another good management practice for a server is to separate the system root
pool data from any data pools or file systems. Then, even if a mistake is made with the
root pool, the data can be recovered easily.
On line 3, the availability is 572MB. After setting the quota on line 4 to 20MB,
the available storage to user layne drops to 11.8MB.
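Those first lines would look something like this sketch; the dataset name comes from the later listings, and the sizes are reconstructed from the numbers quoted above:

1 $ zfs list homepool/export/home/layne
2 NAME                          USED  AVAIL  REFER  MOUNTPOINT
3 homepool/export/home/layne   8.2M   572M   8.2M  /export/home/layne
4 $ pfexec zfs set quota=20m homepool/export/home/layne
5 $ zfs list homepool/export/home/layne
6 NAME                          USED  AVAIL  REFER  MOUNTPOINT
7 homepool/export/home/layne   8.2M  11.8M   8.2M  /export/home/layne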
Next, user layne adds files to the home directory until the quota limit is hit:
The write fails on line 4 with only 1,835,008 bytes written to disk. The ls out-
put shows that testfile6 is the same size as the rest of the 2MB files written.
The size difference is because of the way that ls calculates the size of the file. The
ls command determines file size by using the difference between the first and last
blocks of the file. To get a more accurate size, the du command is used.
On line 20, the touch fails because of the copy-on-write (COW) feature of ZFS.
Because ZFS would need to copy the file first and no available space exists, the
write fails. Deleting the file testfile6 is no problem, because ZFS does not have
to keep a copy.
ZFS does not write in place like most other file systems. Instead, it will write a
new copy of the data and move the pointers to the data once the operation is com-
plete via COW.
Next, the following example re-creates testfile6 and takes a snapshot of the
file system that is full:
Again, the touch command on line 5 fails. On line 7, the file cannot be removed
because the snapshot needs more room to rewrite the blocks to point to the snap-
shot, but the quota is a hard limit.
You can correct this problem in four ways:
Create a couple of 10MB files until the refquota limit is reached and an error mes-
sage is generated by the system. The output of the ls command leads you to believe
that both files are the same size, but the output of the du command tells a different
story. The du command counts blocks used and is accurate concerning the size of the file.
On line 3, 0 bytes are available for use in layne’s home directory. After remov-
ing testfile2, 1.82MB is available. Notice also that the used storage has not
changed because the file testfile2 is still in the snapshot.
Using a refquota property, the file system continues to grow until the refquota
limit is reached. A file system with a refquota set means that space that is con-
sumed by descendents can grow until a parent file system quota is reached or the
pool’s available capacity is reached.
1 $ zfs list
2 NAME USED AVAIL REFER MOUNTPOINT
3 homepool 108M 560M 26.9K /homepool
4 homepool/export 107M 560M 28.4K /export
5 homepool/export/home 107M 560M 50.9K /export/home
6 homepool/export/home/ken 8.17M 560M 8.17M /export/home/ken
7 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
8 homepool/export/home/watanabe 78.8M 560M 71.2M /export/home/watanabe
...Output delete...
9 $ pfexec zfs set quota=250m homepool/export/home
10 $ zfs get quota homepool/export/home
11 NAME PROPERTY VALUE SOURCE
12 homepool/export/home quota 250M local
13 $ zfs list
14 NAME USED AVAIL REFER MOUNTPOINT
15 homepool 108M 560M 26.9K /homepool
16 homepool/export 107M 560M 28.4K /export
17 homepool/export/home 107M 143M 50.9K /export/home
18 homepool/export/home/ken 8.17M 143M 8.17M /export/home/ken
19 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
20 homepool/export/home/watanabe 78.8M 143M 71.2M /export/home/watanabe
...Output deleted...
The reservation property sets the minimum amount of space guaranteed to a file
system and its descendents; the refreservation property guarantees space to the
file system alone, not including descendents, snapshots, and clones. You can use these
properties to guarantee a minimum amount of storage to a file system.
1 $ zfs list
2 NAME USED AVAIL REFER MOUNTPOINT
3 homepool 108M 560M 26.9K /homepool
4 homepool/export 107M 560M 28.4K /export
5 homepool/export/home 107M 143M 50.9K /export/home
6 homepool/export/home/ken 8.17M 143M 8.17M /export/home/ken
7 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
8 homepool/export/home/watanabe 78.8M 143M 71.2M /export/home/watanabe
...Output deleted...
9 $ pfexec zfs set reservation=100m homepool/export/home/ken
10 $ zfs get reservation,refreservation homepool/export/home/ken
11 NAME PROPERTY VALUE SOURCE
12 homepool/export/home/ken reservation 100M local
13 homepool/export/home/ken refreservation none default
14 watanabe@opensolaris:~$
15 watanabe@opensolaris:~$ zfs list
16 NAME USED AVAIL REFER MOUNTPOINT
17 homepool 200M 468M 26.9K /homepool
18 homepool/export 199M 468M 28.4K /export
19 homepool/export/home 199M 51.1M 50.9K /export/home
20 homepool/export/home/ken 8.17M 143M 8.17M /export/home/ken
21 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
22 homepool/export/home/watanabe 78.8M 51.1M 71.2M /export/home/watanabe
...Output deleted...
First, clear out the previous reservation setting, and set refreservation to
50MB:
10 homepool/export/home 107M 143M 50.9K /export/home
11 homepool/export/home/ken 8.17M 143M 8.17M /export/home/ken
12 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
13 homepool/export/home/watanabe 78.8M 143M 71.2M /export/home/watanabe
...Output deleted...
14 $ pfexec zfs set refreservation=50m homepool/export/home/ken
15 $ zfs get reservation,refreservation homepool/export/home/ken
16 NAME PROPERTY VALUE SOURCE
17 homepool/export/home/ken reservation none default
18 homepool/export/home/ken refreservation 50M local
19 $ zfs list
20 NAME USED AVAIL REFER MOUNTPOINT
21 homepool 150M 518M 26.9K /homepool
22 homepool/export 149M 518M 28.4K /export
23 homepool/export/home 149M 101M 50.9K /export/home
24 homepool/export/home/ken 50M 143M 8.17M /export/home/ken
25 homepool/export/home/layne 20.1M 1.82M 18.2M /export/home/layne
26 homepool/export/home/watanabe 78.8M 101M 71.2M /export/home/watanabe
...Output deleted...
After setting the refreservation property to 50MB, on line 31, the used space
charged to homepool/export/home/ken grows by 42MB to 50MB. The parent
directory’s used number grows by 42MB to 149MB from 107MB. The file system
homepool/export/home/layne again stays the same on line 32. On line 33, the
next file system, homepool/export/home/watanabe, has the available storage
reduced by 42MB to 101MB.
You can specify the level of compression used by the gzip algorithm, from gzip-1 (fastest) to gzip-9
(best compression). By default, gzip-6 is used when gzip is set.
Enabling compression helps manage available space for a file system. When
compression is enabled on a file system, the space usage numbers will differ
depending on the command used to calculate size.
The following example illustrates how to enable compression using the lzjb
and gzip-9 algorithms. The files in Table 4.1 were copied after enabling the com-
pression property on the file system.
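Turning compression on would look something like this sketch; the dataset name follows the home-directory examples in this chapter:

$ pfexec zfs set compression=lzjb homepool/export/home/cindy
$ pfexec zfs set compression=gzip-9 homepool/export/home/cindy
$ zfs get compression homepool/export/home/cindy
NAME                        PROPERTY     VALUE     SOURCE
homepool/export/home/cindy  compression  gzip-9    local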
The listing output is the same in all cases, but the disk usage is less with com-
pression turned on. The best compression is gzip-9.
Another statistic you can use is the compressratio property status of the ZFS
file system. This will show the compression efficiency of the data on the file sys-
tem. Table 4.2 provides an idea of the kind of compression ratios available from
lzjb and gzip-9.
$ zfs get compressratio zfs_filesystem
You need to be careful when moving or copying data from a compressed file sys-
tem. The compression could cause backup or archive programs to underestimate
the size of the media and the time that is needed to execute the job. Using the
compressratio property and disk usage will give an approximate size of the uncom-
pressed data as it is moved to an archive. Take the output of du -s of the file system
or directory, and multiply by the compression ratio to get an approximation of the
size of the data that needs to be copied.
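As a quick illustration with made-up numbers:

$ du -s /export/home/cindy
2460    /export/home/cindy
$ zfs get -H -o value compressratio homepool/export/home/cindy
2.50x

Here 2,460 blocks of 512 bytes is roughly 1.2MB on disk, so the archive should be planned for about 1.2MB x 2.50, or roughly 3MB of uncompressed data.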
Users can manage their own deleted, destroyed, and modified data by implement-
ing snapshots for home directories. A snapshot is a read-only copy of a ZFS file
system and is created using the zfs snapshot command.
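Creating the snapshot used in the examples that follow would look something like this; the snapshot name sunday matches the listings below:

$ pfexec zfs snapshot homepool/export/home/cindy@sunday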
cindy@opensolaris:~$ ls -a
. .bash_history Books local.login .profile
.. .bashrc local.cshrc local.profile
cindy@opensolaris:~$ ls .zfs
snapshot
In the example, specifying the .zfs directory with the ls command shows the
snapshot directory.
You can make the .zfs directory visible by changing the snapdir property
from hidden to visible:
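A sketch of that property change for the same home directory:

$ pfexec zfs set snapdir=visible homepool/export/home/cindy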
When the snapdir property is set to visible, the ls command displays the
.zfs directory.
The following example is the output from a tar archive of a home directory with
the snap directory hidden:
The following example is the same tar command with the snap directory visi-
ble. The archive now copies the .zfs directory, doubling the size of the archive.
This file system has only one snapshot. If the file system had additional snap-
shots, the archive size could grow larger.
./.zfs/snapshot/sunday/.profile
./.zfs/snapshot/sunday/.bashrc
./.zfs/snapshot/sunday/Books/
./.zfs/snapshot/sunday/Books/Anna_Karenina_T.pdf
./.zfs/snapshot/sunday/Books/zfs_lc_preso.pdf
./.zfs/snapshot/sunday/Books/Shakespeare William Romeo and Juliet.pdf
./.zfs/snapshot/sunday/Books/zfs_last.pdf
./.zfs/snapshot/sunday/Books/War_and_Peace_NT.pdf
./.zfs/snapshot/sunday/Books/1.Millsap2000.01.03-RAID5.pdf
./.zfs/snapshot/sunday/Books/Crime_and_Punishment_T.pdf
./.zfs/snapshot/sunday/Books/Moby_Dick_NT.pdf
./.zfs/snapshot/sunday/Books/Emma_T.pdf
./.zfs/snapshot/sunday/.bash_history
./.zfs/snapshot/sunday/local.login
./.zfs/snapshot/sunday/local.profile
./local.cshrc
./.profile
./.bashrc
./Books/
./Books/Anna_Karenina_T.pdf
./Books/zfs_lc_preso.pdf
./Books/Shakespeare William Romeo and Juliet.pdf
./Books/zfs_last.pdf
./Books/War_and_Peace_NT.pdf
./Books/1.Millsap2000.01.03-RAID5.pdf
./Books/Crime_and_Punishment_T.pdf
./Books/Moby_Dick_NT.pdf
./Books/Emma_T.pdf
./.bash_history
./local.login
./local.profile
cindy@opensolaris:~$
cindy@opensolaris:~$ rm -rf *
cindy@opensolaris:~$ rm -rf .*
rm: cannot remove directory `.'
rm: cannot remove directory `..'
$ ls -Fa
./ ../
Now you, or the home directory owner, must find the most recent snapshot and
select it for a rollback. You can do this with the zfs list command:
To identify the snapshots by date and time, use zfs get creation to retrieve
the creation date and time of the snapshot:
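A sketch of those steps, ending with the rollback itself; the snapshot name and timestamp are illustrative:

cindy@opensolaris:~$ zfs list -t snapshot -r homepool/export/home/cindy
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
homepool/export/home/cindy@sunday     18K      -  8.04M  -
cindy@opensolaris:~$ zfs get creation homepool/export/home/cindy@sunday
NAME                                PROPERTY  VALUE                  SOURCE
homepool/export/home/cindy@sunday   creation  Sun Apr 19  9:30 2009  -
cindy@opensolaris:~$ pfexec zfs rollback homepool/export/home/cindy@sunday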
cindy@opensolaris:~$ ls -Fa
./ .bash_history Books/ local.login .profile
../ .bashrc local.cshrc local.profile
ZFS home directories are shared by using Network File System (NFS) services, as
is done with Unix File System (UFS). You can set up the automounter so that the
home directories appear the same on the server as they do on client systems. When
users log in to the server, they can find the home directory using the same pass-
word entry used on the client. Using the /home directory to mount user directo-
ries, the automounter maps server:/export/home/user_name to /home/
user_name. In this example, only local files (not a naming service) are used for the maps.
1. Edit the /etc/passwd file on the server to change all the user home directo-
ries from /export/home/user_name to /home/user_name. For more infor-
mation on the passwd file, use man –s 4 passwd. Distribute the new
password file to the other client systems.
watanabe:x:101:10:scott watanabe:/home/watanabe:/bin/bash
layne:x:65535:1:Layne :/home/layne:/bin/bash
judy:x:65536:1:Judy:/home/judy:/bin/bash
cindy:x:65537:1:Cindy:/home/cindy:/bin/bash
ken:x:65538:1:Ken:/home/ken:/bin/bash
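Step 2, explained in the next paragraph, adds a wildcard entry to the /etc/auto_home map; the entry would look something like this, with server standing in for the real host name:

* server:/export/home/&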
The star (*) is a wildcard key, and the ampersand (&) takes the key
(user_name) and makes a replacement. Copy this file to the other clients.
3. Set the sharenfs property to each home directory file system:
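Because the sharenfs property is inherited, one way to do this is to set it once on the parent file system (a sketch; the dataset name follows the earlier listings):

# zfs set sharenfs=on homepool/export/home
# zfs get -r sharenfs homepool/export/home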
4.5 References
In this chapter, you will explore techniques for creating the best possible pool configu-
rations. The examples shown in this chapter are from the ZFS Configuration Guide at
www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide. I
will walk you through the commands that create the configuration and explain why it
works.
There are many ways to configure ZFS pools, some better than others, but it is my
hope that this chapter will give you a better idea of how to set up a ZFS pool that
suits your needs.
In Figure 5.2, the layout is redrawn to line up the disks with their controllers.
There are six controllers: c0, c1, c4, c5, c6, and c7. Each controller has eight
target disks, starting from t0 to t7. The configuration has 48 disks total.
The boot disks are described as Solaris Volume Manager (SVM) mirrored; today they
could just as easily be mirrored ZFS boot disks in Solaris 10. Disks c4t0d0 and c5t0d0 are
mirrored as the boot/system disks in Figure 5.3.
The first zpool command will create the new pool called rpool and create four
RAID-Z2 vdevs:
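Reconstructed from the zpool status output that follows, that command would look something like this sketch, with one raidz2 keyword per group of six disks:

# zpool create rpool \
    raidz2 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz2 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
    raidz2 c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz2 c0t4d0 c1t4d0 c4t4d0 c5t4d0 c6t4d0 c7t4d0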
Figure 5.2 X4500 disk controllers and eight target disks per controller
Figure 5.3 The boot disks c4t0d0 and c5t0d0 are circled as a mirror created by SVM
The command line can be parsed to see how the vdevs are created for the ZFS pool.
Notice the four raidz2 keywords with six disks per vdev. Each raidz2 group
uses disk targets in the same column (see Figure 5.4).
The first vdev uses all the target 1 (t1) disks across the controllers. The remaining three vdevs use the target 2, 3, and 4 disks, respectively.
1 # zpool status
2 pool: rpool
3 state: ONLINE
4 scrub: none requested
5 config:
6
7 NAME STATE READ WRITE CKSUM
8 rpool ONLINE 0 0 0
9 raidz2 ONLINE 0 0 0
10 c0t1d0 ONLINE 0 0 0
11 c1t1d0 ONLINE 0 0 0
12 c4t1d0 ONLINE 0 0 0
13 c5t1d0 ONLINE 0 0 0
14 c6t1d0 ONLINE 0 0 0
15 c7t1d0 ONLINE 0 0 0
16 raidz2 ONLINE 0 0 0
17 c0t2d0 ONLINE 0 0 0
18 c1t2d0 ONLINE 0 0 0
19 c4t2d0 ONLINE 0 0 0
20 c5t2d0 ONLINE 0 0 0
21 c6t2d0 ONLINE 0 0 0
22 c7t2d0 ONLINE 0 0 0
23 raidz2 ONLINE 0 0 0
24 c0t3d0 ONLINE 0 0 0
25 c1t3d0 ONLINE 0 0 0
26 c4t3d0 ONLINE 0 0 0
27 c5t3d0 ONLINE 0 0 0
28 c6t3d0 ONLINE 0 0 0
29 c7t3d0 ONLINE 0 0 0
30 raidz2 ONLINE 0 0 0
31 c0t4d0 ONLINE 0 0 0
32 c1t4d0 ONLINE 0 0 0
33 c4t4d0 ONLINE 0 0 0
34 c5t4d0 ONLINE 0 0 0
35 c6t4d0 ONLINE 0 0 0
36 c7t4d0 ONLINE 0 0 0
37
38 errors: No known data errors
In the output of zpool status, on lines 9, 16, 23, and 30, the RAID-Z2 vdevs correspond to the RAID-Z2 vdevs in Figure 5.4. Each vdev's disks share the same target number but sit on different controllers. In this ZFS pool, up to two disks in each vdev can be damaged before any data is lost. Also, if a controller is damaged, the pool can still function.
In the next four zpool commands, you add three more RAID-Z2 vdevs across all six controllers and four disks to the spare pool:
1 # zpool add rpool raidz2 c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0
2 # zpool add rpool raidz2 c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0
3 # zpool add rpool raidz2 c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0
4 # zpool add rpool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0
In the following figures, the ZFS pool configuration is shown after each com-
mand line is executed.
In Figure 5.5, a new RAID-Z2 vdev is added to the ZFS pool rpool after execut-
ing line 1. All the disks in this vdev are the target 5 (t5) disks across all six SATA
controllers.
In Figure 5.6, a new RAID-Z2 vdev is added to ZFS pool rpool after executing line 2.
All the disks in this vdev are the target 6 (t6) disks across all six SATA controllers.
In Figure 5.7, a new RAID-Z2 vdev is added to ZFS pool rpool after executing line 3. All the disks in this vdev are the target 7 (t7) disks across all six SATA controllers.
Figure 5.5 After executing line 1, a RAID-Z2 vdev using the target 5 (t5) disks is added to the ZFS pool
Figure 5.6 After executing line 2, a RAID-Z2 vdev using the target 6 (t6) disks is added to the ZFS pool
Figure 5.7 After executing line 3, a RAID-Z2 vdev using the target 7 (t7) disks is added to the ZFS pool
Figure 5.8 After executing line 4, four spares are added to the ZFS pool
After executing lines 1, 2, and 3, the ZFS pool is almost complete. Next, executing line 4 will add four spare disks to the pool rpool. Figure 5.8 shows the configuration after the spares are added.
Running a status command on the ZFS pool, you can now see the completed rpool. The four spares are listed starting at line 59.
1 # zpool status
2 pool: rpool
3 state: ONLINE
4 scrub: none requested
5 config:
6
7 NAME STATE READ WRITE CKSUM
8 rpool ONLINE 0 0 0
9 raidz2 ONLINE 0 0 0
10 c0t1d0 ONLINE 0 0 0
11 c1t1d0 ONLINE 0 0 0
12 c4t1d0 ONLINE 0 0 0
13 c5t1d0 ONLINE 0 0 0
14 c6t1d0 ONLINE 0 0 0
15 c7t1d0 ONLINE 0 0 0
16 raidz2 ONLINE 0 0 0
17 c0t2d0 ONLINE 0 0 0
18 c1t2d0 ONLINE 0 0 0
19 c4t2d0 ONLINE 0 0 0
20 c5t2d0 ONLINE 0 0 0
21 c6t2d0 ONLINE 0 0 0
22 c7t2d0 ONLINE 0 0 0
23 raidz2 ONLINE 0 0 0
24 c0t3d0 ONLINE 0 0 0
25 c1t3d0 ONLINE 0 0 0
26 c4t3d0 ONLINE 0 0 0
27 c5t3d0 ONLINE 0 0 0
28 c6t3d0 ONLINE 0 0 0
29 c7t3d0 ONLINE 0 0 0
30 raidz2 ONLINE 0 0 0
31 c0t4d0 ONLINE 0 0 0
32 c1t4d0 ONLINE 0 0 0
33 c4t4d0 ONLINE 0 0 0
34 c5t4d0 ONLINE 0 0 0
35 c6t4d0 ONLINE 0 0 0
36 c7t4d0 ONLINE 0 0 0
37 raidz2 ONLINE 0 0 0
38 c0t5d0 ONLINE 0 0 0
39 c1t5d0 ONLINE 0 0 0
40 c4t5d0 ONLINE 0 0 0
41 c5t5d0 ONLINE 0 0 0
42 c6t5d0 ONLINE 0 0 0
43 c7t5d0 ONLINE 0 0 0
44 raidz2 ONLINE 0 0 0
45 c0t6d0 ONLINE 0 0 0
46 c1t6d0 ONLINE 0 0 0
47 c4t6d0 ONLINE 0 0 0
48 c5t6d0 ONLINE 0 0 0
49 c6t6d0 ONLINE 0 0 0
50 c7t6d0 ONLINE 0 0 0
51 raidz2 ONLINE 0 0 0
52 c0t7d0 ONLINE 0 0 0
53 c1t7d0 ONLINE 0 0 0
54 c4t7d0 ONLINE 0 0 0
55 c5t7d0 ONLINE 0 0 0
56 c6t7d0 ONLINE 0 0 0
57 c7t7d0 ONLINE 0 0 0
58 spares
59 c0t0d0 AVAIL
60 c1t0d0 AVAIL
61 c6t0d0 AVAIL
62 c7t0d0 AVAIL
63
64 errors: No known data errors
Creating the RAID-Z2 vdevs across the target disk columns also has the advantage of spreading the I/O across all the SATA controllers, so no single controller becomes a bottleneck.
Again, in this example, the single point of failure for the storage is the motherboard. The pool's disks are spread over multiple controllers, and the data I/O is spread across all six controllers for maximum performance. This configuration is designed for speed and maximum redundancy. The next example creates a single mirror for the boot/system disk and a ZFS pool with 14 three-way mirrored vdevs and 4 disk spares.
As in the previous example, this example starts with boot disks mirrored by Solaris Volume Manager, shown in Figure 5.9.
Figure 5.9 The boot disks c4t0d0 and c5t0d0 are circled as a mirror created by SVM
Next you create a ZFS pool named mpool with seven three-way mirrors using
the following command:
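Here is a sketch of the command; it assumes the first seven three-way mirrors pair targets t1 through t7 across controllers c0, c1, and c4, which is consistent with Figure 5.10 and with the disks used later for the remaining vdevs and spares:
# zpool create mpool \
     mirror c0t1d0 c1t1d0 c4t1d0 \
     mirror c0t2d0 c1t2d0 c4t2d0 \
     mirror c0t3d0 c1t3d0 c4t3d0 \
     mirror c0t4d0 c1t4d0 c4t4d0 \
     mirror c0t5d0 c1t5d0 c4t5d0 \
     mirror c0t6d0 c1t6d0 c4t6d0 \
     mirror c0t7d0 c1t7d0 c4t7d0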
The command line was formatted to make each three-way mirrored vdev clear.
The zpool status command shows the newly created ZFS pool mpool with the
seven vdevs:
# zpool status
pool: mpool
state: ONLINE
scrub: none requested
config:
Notice the indentation in the output. ZFS pool mpool has seven mirror vdevs,
and each vdev has three disks. Figure 5.10 shows the vdevs in the diagram of the
X4500.
In Figure 5.10, it is easy to see that each mirrored vdev uses three different SATA II controllers. A mirrored vdev could lose up to two of its three disks, or an entire controller, without losing any data in the ZFS pool.
Figure 5.10 The ZFS pool with seven three-way mirrored vdevs
The next eight commands fill out the rest of the configuration of the X4500 with
another seven three-way mirrored vdevs and four disks in a spare pool:
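Here is a sketch of the eight commands; the last two mirrors and the spare disks match the zpool status output below, while the target assignments for the earlier mirrors are assumptions:
# zpool add mpool mirror c5t1d0 c6t1d0 c7t1d0
# zpool add mpool mirror c5t2d0 c6t2d0 c7t2d0
# zpool add mpool mirror c5t3d0 c6t3d0 c7t3d0
# zpool add mpool mirror c5t4d0 c6t4d0 c7t4d0
# zpool add mpool mirror c5t5d0 c6t5d0 c7t5d0
# zpool add mpool mirror c5t6d0 c6t6d0 c7t6d0
# zpool add mpool mirror c5t7d0 c6t7d0 c7t7d0
# zpool add mpool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0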
The zpool status command shows the ZFS pool mpool now with seven addi-
tional mirrored vdevs and a spare pool with four disks:
# zpool status
pool: mpool
state: ONLINE
scrub: none requested
config:
mirror ONLINE 0 0 0
c5t6d0 ONLINE 0 0 0
c6t6d0 ONLINE 0 0 0
c7t6d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c5t7d0 ONLINE 0 0 0
c6t7d0 ONLINE 0 0 0
c7t7d0 ONLINE 0 0 0
spares
c0t0d0 AVAIL
c1t0d0 AVAIL
c6t0d0 AVAIL
c7t0d0 AVAIL
errors: No known data errors
Figure 5.11 The ZFS pool with 14 three-way mirrored vdevs and a spare pool with 4 disks
ZFS also works well with your current storage arrays. Any logical unit number
(LUN) presented to ZFS is used like any other disk. These are some consider-
ations when using storage arrays with ZFS:
ZFS is not a shared file system. The LUNs presented are exclusive to the
ZFS host.
ZFS is oblivious to disk failures if hardware RAID is used. The LUN pre-
sented will look normal to ZFS, and any disk replacement will be handled at
the array level. The array is responsible for rebuilding the LUN.
A striped LUN is just a bigger disk. A disk failure in a striped LUN will be a
failure of the whole LUN. The failed disk needs to be replaced and the LUN
rebuilt before ZFS resilvering can take place.
ZFS still needs its own redundancy. Even if the LUN is redundant (mirror or
RAID-5), ZFS still needs to be configured in a mirror or a RAID-Z for best
results.
ZFS is MPxIO safe. In Solaris 10, MPxIO is integrated in the operating sys-
tem. MPxIO provides a multipathing solution for storage devices accessible
through multiple physical paths. See www.sun.com/bigadmin/xperts/
sessions/23_fibre/#7.
OpenSolaris has a built-in native CIFS client and server. The client is installed by default, and the server packages can be installed after the initial installation. OpenSolaris can manage CIFS from the command line or through a GUI via the GNOME file manager. There is no anonymous user in the built-in server; if an anonymous user is required, then Samba would be the better option to configure.
The CIFS client and server are not available in Solaris 10.
OpenSolaris uses Image Packaging System (IPS). This is the next generation of
package management for Solaris. You need two packages in order to install the
SMB server: SUNWsmbs and SUNWsmbskr. The latter is a kernel package and
will require a reboot before the server can be activated.
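The same packages can also be installed from the command line with IPS; here is a minimal sketch (the Package Manager steps below accomplish the same thing):
$ pfexec pkg install SUNWsmbs SUNWsmbskr
A reboot is still required afterward to activate the kernel module.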
3. Click the Install/Update button. Package Manager will contact the repository
at OpenSolaris.org and retrieve the packages’ information (see Figure 6.2).
4. Click the Next button. Then wait for the packages to install.
5. Reboot the system.
6. Select System > Shutdown.
7. Click the Restart button to activate SMB in the kernel.
Figure 6.2 Package Manager with the server packages ready to be installed
Now reboot the system to activate the SMB server in the kernel.
Figure 6.3 GNOME file manager with network and server OPENSOLARIS
1. To share all home directories, create and edit the /etc/smbautohome file. In
the example OpenSolaris installation, all home directories are in /export/
home/username file systems.
2. Edit /etc/smbautohome, and add the following line:
* /export/home/&
The star (*) and the ampersand (&) compose a shortcut to match a username
with its home directory. Otherwise, each user would need their own entry in
the /etc/smbautohome file.
3. Test the new home directory using the GNOME file manager. On the Go To
line, type smb://hostname/username, which is smb://opensolaris/judy
in the following example.
After entering the SMB URL in the Go To box, you are presented with a login
window, as illustrated in Figure 6.5.
4. Log in. In Figure 6.6, user judy is logging in to the workgroup domain
WORKGROUP.
Figure 6.5 The GNOME file manager logging into an OpenSolaris SMB server
As shown in Figure 6.7, user judy is presented with her home directory. She
can see only her own directory and cannot browse other users’ directories.
This makes for a cleaner interface for a user.
Another way to activate SMB shares is by setting the sharesmb property to on,
using the zfs set command:
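A minimal sketch, assuming the home directories are ZFS datasets under rpool/export/home (the dataset name is an assumption); child file systems inherit the property from the parent:
# zfs set sharesmb=on rpool/export/home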
Figure 6.8 shows what the file manager will see if all the ZFS file systems are
shared using the sharesmb setting.
Figure 6.8 The file manager showing individual home directory shares
In Figure 6.8, the names listed are the ZFS file system names and may confuse some users. You may prefer to share only common directories via the sharesmb property.
Time Slider is a tool that lets users retrieve deleted, damaged, or previous versions of a file or directory. Time Slider has two parts: a front-end GUI in the Nautilus file manager and a back end managed by the Service Management Facility (SMF). Time Slider is currently available on OpenSolaris.
You can enable and configure Time Slider snapshots using the Time Slider Setup
GUI. In this chapter, I will show how to configure Time Slider only for home direc-
tories. The default configuration takes snapshots of all ZFS file systems.
The Time Slider schedule has five snapshot periods (see Table 7.1), and each period keeps a different number of snapshots.
1. Start the Time Slider Setup GUI as the administrative user of the system.
From the GNOME desktop, select System > Administration > Time Slider
Setup from the top menu bar (see Figure 7.1).
2. Next, select the Enable Time Slider check box, as shown in Figure 7.2, and
click OK.
3. Click the Advanced Options button. Here you’ll use Time Slider on home
directories, so deselect everything but user home directories (see Figure 7.3).
Then click the OK button.
1. Start the Nautilus file manager, and set the mode to List View. From the
Nautilus file manager menu, select View > List to set the List View mode.
The List View mode will display information about the file or directory when
the Time Slider box is selected.
Figure 7.4 shows the Nautilus file manager in the List View mode.
2. Click the Time Slider icon. As shown in Figure 7.5, with the Time Slider icon
selected, the slider appears.
Figure 7.4 Start the Nautilus file manager, and set the view mode to List View
Figure 7.5 The Nautilus file manager with the Time Slider icon selected
3. In the "Restore information" column, note that the file testfile has older versions available for restoration. Drag the slider to the desired time frame.
When the slider is dragged to an older snapshot, a difference is noted from
the latest version of the file or directory. For example, in Figure 7.6, the file
testfile is compared with the current version of the file. The date time-
stamp and size differences are noted in the “Restore information” column.
Figure 7.6 The Nautilus file manager with Time Slider and the slider moved to the
desired time frame
4. Select the file testfile for restore. Then select Edit > Restore to Desktop, as
shown in Figure 7.7.
5. In Figure 7.8, you see the file testfile restored to the desktop. It appears
right below the Start Here icon.
Figure 7.7 The Nautilus file manager with file testfile selected for restoring
to the desktop
Each snapshot is executed from cron as user zfssnap. To check the cron sched-
ule, use the crontab command. Do not edit the crontab for zfssnap. Each SMF
instance manages the cron schedule. See the man page on crontab for more
information on the entry format.
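A sketch of checking the schedule; viewing another user's crontab requires extra privileges, hence pfexec:
$ pfexec crontab -l zfssnap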
The zfs/period property on line 13 is currently set to 15, and the zfs/
interval property on line 7 is set to minutes. The following command line will set
the period to ten minutes:
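Here is a sketch of one way to make the change with svccfg and svcadm; the zfs/period property name comes from the listing the text refers to, and the exact command form is an assumption:
# svccfg -s svc:/system/filesystem/zfs/auto-snapshot:frequent setprop zfs/period = 10
# svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm restart svc:/system/filesystem/zfs/auto-snapshot:frequent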
4 0 0 1,8,15,22,29 * * /lib/svc/method/zfs-auto-snapshot \
svc:/system/filesystem/zfs/auto-snapshot:weekly
5 0 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * * \
* /lib/svc/method/zfs-auto-snapshot \
svc:/system/filesystem/zfs/auto-snapshot:hourly
6 0,10,20,30,40,50 * * * * /lib/svc/method/zfs-auto-snapshot \
svc:/system/filesystem/zfs/auto-snapshot:frequent
On line 6, the frequency of the snaps is now changed to every 10 minutes start-
ing at the top of the hour.
You can set snapshot schedules independently on each file system. In the following example, the new file system jan has inherited an auto-snapshot property of off, and it needs only hourly and weekly scheduled snapshots. Use the zfs command to set the auto-snapshot properties, and use the svcadm command to restart the hourly and weekly auto-snapshot services so that the changes to the user jan file system take effect.
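A sketch, assuming the jan file system is rpool/export/home/jan and that the scheduler uses the standard com.sun:auto-snapshot user properties (the dataset name is an assumption):
# zfs set com.sun:auto-snapshot=false rpool/export/home/jan
# zfs set com.sun:auto-snapshot:hourly=true rpool/export/home/jan
# zfs set com.sun:auto-snapshot:weekly=true rpool/export/home/jan
# svcadm restart svc:/system/filesystem/zfs/auto-snapshot:hourly
# svcadm restart svc:/system/filesystem/zfs/auto-snapshot:weekly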
In this section, you will see how to create two IDE 16GB disks for boot disks and six SATA 130MB disks for data. For a Solaris installation, a minimum disk size of 16GB is recommended. You can do the installation on a much smaller disk, but this chapter will use 16GB. ZFS requires disks to be a minimum of 128MB. The VMM can create two kinds of virtual disks: dynamically expanding images and fixed-size images.
Use the following steps to select the dynamically expanding disks to save on the
physical disk space:
1. Start Virtual Box, and then select File > Virtual Media Manager (see Figure 8.1).
Figure 8.1 Starting Virtual Media Manager from the File menu
2. Once the VMM panel appears, click the New button. Figure 8.2 includes pre-
viously created virtual disk images.
3. The Create New Virtual Disk Wizard appears. Click the Next button to get to
the Hard Disk Storage Type panel (see Figure 8.3).
“Dynamically expanding storage” should be selected by default. If it’s not,
select it before clicking the Next button. The Virtual Disk Location and Size
panel is next (see Figure 8.4).
4. Accept the default location of the new virtual disk. The default location
depends on the OS type of the host system. Enter ZFStest.vdi for the name
of the new disk, and make the size 16GB (see Figure 8.4 (2)).
5. Click the Next button to get to the Summary panel (see Figure 8.5).
6. Click Finish to create the new boot disks for the VM. Repeat the sequence,
and create another 16GB disk named ZFStest2.vdi.
Now that the boot disks are created, you need to create six 130MB data disks
named z0, z1, z2, z3, z4, and z5.
Figure 8.3 Hard Disk Storage Type panel with “Dynamically expanding storage” selected
Figure 8.7 lists all the virtual disks registered to Virtual Box. Notice the actual size of the new data disks, 1.5KB, versus their virtual size of 130MB.
1. Click the CD/DVD Images button to get to the registered CD/DVD ISO. Figure 8.8
shows previously registered ISO images.
2. Click the Add button, and locate the installation ISO image. In this example,
the OpenSolaris image is osol-0811.iso. Figure 8.9 shows the new regis-
tered ISO image.
Figure 8.9 Registered ISO images in the VMM with the new OpenSolaris ISO
1. The main Virtual Box panel (see Figure 8.10) contains a New button at the
top left of the panel. The left subpanel is a list of defined systems and status,
and the right subpanel has the configuration of each system.
In the main Virtual Box panel, click the New button to open the Create New
Virtual Machine Wizard (see Figure 8.11).
2. Click the Next button to get to the VM Name and OS Type panel (see Figure 8.12).
For the name of the system, enter ZFStest (see Figure 8.12 (1)). For the
operating system, select Solaris (see Figure 8.12 (2)). For the version,
select OpenSolaris (see Figure 8.12 (3)). Then click the Next button.
3. The next panel configures the memory for the system. Type 1024 in the box for
1GB of physical memory in the VM. Figure 8.13 shows the completed panel.
4. Click the Next button to get to the Virtual Hard Disk panel. Because you
created the boot disks previously, select “Use existing hard disk.” Then select
ZFStest.vdi from the menu. Figure 8.14 shows the completed panel.
5. Click the Next button to get to the Summary panel. Check the selection. Then
click the Finish button to create the new VM (see Figure 8.15).
After you click the Finish button, the new VM is created and listed on the left
subpanel (see Figure 8.16).
Figure 8.14 Completed Virtual Hard Disk panel with ZFStest.vdi selected
Figure 8.15 The Summary panel before creating the new VM
1. First, select ZFStest on the left panel, and then click the Settings button to
open the settings panel for ZFStest. Figure 8.17 shows the General settings
for the VM.
2. Click the Storage icon to configure more storage for the VM. Figure 8.18
shows the VM with one disk configured.
3. Add the remaining disks that were previously created to the VM. First, click
Enable Additional Controller to add a SATA controller to the VM, and then
click the Add Disk icon on the right side of the panel. Match the Slot/Hard
Disk pairs for each disk added (see Table 8.1).
Figure 8.19 shows the completed disk addition and assignments.
Figure 8.16 The Virtual Box main panel with new ZFStest VM
4. Once the disks have been added to the VM and the SATA assignments have
been done, click the OK button. The SATA port assignments can be random,
but assigning a numbered disk name to a SATA port number will make it eas-
ier to see which disk belongs to which disk target from the OS point of view.
5. Click the CD/DVD-ROM icon. Select Mount CD/DVD Drive. Select ISO Image
File. Then select osol-0811.iso from the drop-down menu. Only registered ISO
images will be listed. See Figure 8.20 for the completed panel. Click the OK
button to complete the configuration.
The ZFStest VM is now completed and ready to boot from the ISO CD image.
Figure 8.18 The Storage settings panel with one disk configured for VM
Figure 8.20 The CD/DVD-ROM panel with the ISO boot image mounted on the VM
When the VM starts, a new window displays the VM running. At this point, the VM will grab the mouse and keyboard. To release both the mouse pointer and keyboard, press the key indicated at the bottom right of the screen. On a MacBook Pro, it is the left Cmd key; on a Solaris host, it is the right Ctrl key.
Figure 8.21 shows the icons at the bottom of the Virtual Box VM window. The
buttons from left to right are hard disk activity, CD-ROM activity and control, net-
work activity and control, USB control, shared folder, virtualization hardware
acceleration status, mouse capture status, and mouse capture key.
1. A GRUB menu will appear. Select the default GRUB selection. Figure 8.22
shows the GRUB menu from the installation CD.
Figure 8.22 The default GRUB menu from the OpenSolaris LiveCD
2. Select your keyboard and language. In this example, just hit the Enter/
Return key twice. Now the LiveCD will drop into a GNOME session. This is a
working OpenSolaris system.
3. Double-click Install OpenSolaris to start the install process, as shown in
Figure 8.23.
4. The first screen is the Welcome screen shown in Figure 8.24. Click the Next
button to continue.
5. The installer asks a few questions before the actual install. The first question
is about the disk. By default the first 16GB disk will be selected. Select “Use
the whole disk.” The completed screen should look like Figure 8.25.
6. Click the Next button to get the Time Zone screen. Select your local time zone
either by using the drop-down menus or by locating a city in your time zone,
in this case Denver, Colorado (see Figure 8.26).
7. Click the Next button to move on to the Locale screen. In Figure 8.27, English
is the default language.
8. Click the Next button to get to the Users screen. Select a password for the
root user. Create a username for yourself, and set the password. At this point,
you can change the name of the system or leave it as opensolaris. Figure 8.28
shows the completed Users screen.
9. Click the Next button to move to the Installation review screen. This is the
last chance to review the installation parameters before the installation pro-
cedure begins. See Figure 8.29 for an example.
The installation will take a few minutes to complete, depending on the host
system. You can watch the splash screen for information on the features of
OpenSolaris. The completion bar at the bottom will indicate what percent of
the installation has completed (see Figure 8.30).
10. Once the install is complete, the Finished screen appears, as shown in
Figure 8.31. Click the Reboot button to restart the VM with the newly
installed OpenSolaris OS.
11. After clicking the Reboot button, the system will reboot from the CD-ROM
ISO image. The GRUB menu will appear. Use the down arrow key to scroll
down to the Boot from Hard Disk item, and press the Return/Enter key to
start the OS from the hard disk. In Figure 8.32 the Boot from Hard Disk item
is selected.
12. When you press Enter/Return to boot the OS, another GRUB screen appears.
This GRUB menu comes from the hard disk installation. Let the default
GRUB selection boot by pressing Enter/Return, or let the timer run down to
boot the OS.
Figure 8.25 The installer at the Disk screen with “Use the whole disk” selected
Installing the Virtual Box tools will complete the installation of the OS. There is a
different VB tool installation for each of the various OSs that VB supports. The VB
tools will enable the VM to seamlessly share the mouse and keyboard and install a
video device driver. The video device driver will enable you to resize the VM screen
on your host system much like any other application.
1. Before you can mount the Virtual Box Guest Additions CD image, unmount the OpenSolaris CD. First, log in to OpenSolaris as the user defined during the installation. Next, right-click (on a Mac, Cmd+click) the OpenSolaris CD icon on the desktop, and select Unmount Volume to unmount the CD image. In Figure 8.33, Unmount Volume is selected from the drop-down menu.
Figure 8.26 The installer at the Time Zone screen with Denver, Colorado, selected
2. You need to mount the Virtual Box Guest Additions CD image. Once the CD
image is unmounted, free your mouse from the VM using the mouse capture
key indicated at the bottom-right corner of the window. Now move the mouse
pointer over the CD-ROM icon at the bottom of the screen, and click the icon.
Select CD/DVD-ROM Image, as shown in Figure 8.34.
3. The Virtual Media Manager window is now displayed. Select the
VirtualBoxAdditions.iso image, and click the Select button. In
Figure 8.35, the VMM window is open with the Virtual Box Guest Additions
image selected.
If the image does not mount, go to the CD-ROM icon at the bottom of the VM
window, select the physical CD-ROM, and then repeat the selection of the
image file.
Figure 8.27 The installer at the Locale screen with English selected
4. When the image is mounted, a warning message box is displayed. Click the
Run button to continue with the mount.
5. The Virtual Box Guest Additions CD image must be installed from the com-
mand line. Open a terminal window by clicking the Terminal icon on the top
menu bar.
6. Change directories to /media/VBOXADDITIONS_2.2.0_45846. The VBOX
directory may be different depending on the build you download from
http://virtualbox.org.
$ cd /media/VBOX*
7. Install the guest package VBoxSolarisAdditions.pkg using the following
command line. Because this install needs root privileges, the command
needs to start with the command pfexec. The user defined in the install will have root privileges defined by the Solaris Role-Based Access Control (RBAC) facility. See the man page on RBAC, or go to http://docs.sun.com for more information on RBAC.
Figure 8.28 The installer at the Users screen with the fields completed
$ pfexec pkgadd -d VBoxSolarisAdditions.pkg
8. Press Enter/Return, and answer y for yes to install the package. Log out of
the system by clicking System on the top menu bar and selecting Log Out.
See Figure 8.36 for the menu.
9. Log in again to activate the VB Guest Additions image. Now your
OpenSolaris system is ready to be your ZFS lab in a box.
Figure 8.31 The installer Finished screen after the installation is completed
Figure 8.32 The GRUB menu with the Boot from Hard Disk item selected
Figure 8.33 The mouse right-click menu with Unmount Volume selected
Figure 8.35 The VMM window with Virtual Box Guest Additions image selected
to be mounted
Figure 8.36 The Log Out action selected on the System menu