Solaris Boot from ZFS (zfsboot)
Boot capability
Robustness characteristics (such as mirroring)
Installation support
Swap and dump support
Ongoing management capabilities (upgrade, patching, snapshots, etc.)
There is a benefit to having only one file system type to understand and manage (assuming ZFS is already in use for data). ZFS's features make it an excellent root file system with many management advantages. At least for Solaris, it's the coming thing. New installation and management features will depend on it.
Pooled storage: no need to preallocate volumes; file systems use only as much space as they need.
Built-in redundancy capabilities (such as mirroring) at the pool level.
Unparalleled data integrity features.
On-disk consistency is always maintained; no fsck.
Snapshots and clones (writable snapshots): instantaneous, nearly free, persistent, and unlimited in size and number (except by the size of the pool).
ZFS volumes (zvols) can be used for in-pool swap and dump areas (no need for a swap/dump slice).
One pool does it all.
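As a sketch of how cheap snapshots and clones are in practice (the pool and dataset names here are hypothetical examples, not from the talk):

```
# Snapshot a root file system instantly; the snapshot initially uses no extra space.
zfs snapshot rpool/ROOT/myBE@before-patch

# Create a writable clone of that snapshot, e.g. as an alternate boot environment.
# Only blocks later modified in the clone consume new space (copy-on-write).
zfs clone rpool/ROOT/myBE@before-patch rpool/ROOT/myBE-clone
```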
[Diagram: a single mirrored root pool across Disk 1 and Disk 2, containing root file system datasets (e.g., Root B), a Swap/Dump area, and /export]
Boot stages: PROM → BOOTER → KERNEL
The booter selects a root file system. The booter loads one or more files from the root file system into memory and executes one of them. The executable file is either part of the Solaris kernel, or a program that knows how to load the Solaris kernel.
At the PROM stage, booting ZFS is essentially the same as booting any other file system type, except that the boot device identifies a storage pool, not a root file system. At this time, the booter that gets loaded is GRUB 0.95 on x86 platforms, and a standalone ZFS reader on SPARC platforms.
With ZFS, there is no one-to-one correspondence between boot device and root file system. A boot device identifies a storage pool, not a file system. Storage pools can contain multiple root file systems. Thus, the booter phase must have a way to select among the available root file systems in the pool. The booter must have a way of identifying the default root file system to be booted, and also must provide a way for a user to override the default.
Root pools have a bootfs property that identifies the default root file system. We also need a control file that lists all of the available root file systems, but in which file system should it be stored? We don't want to keep it in any particular root file system. Answer: keep it in the pool dataset, the dataset at the root of the dataset hierarchy. There is exactly one per pool and it is guaranteed to exist.
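A minimal sketch of setting the default (pool and BE names are hypothetical):

```
# Designate the default root file system for this root pool.
zpool set bootfs=rpool/ROOT/s10u6 rpool

# Verify the setting.
zpool get bootfs rpool
```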
Booting from ZFS Booter phase, Root File System Selection - x86
On x86 platforms, the GRUB menu provides a way to list alternate root file systems. One of the GRUB menu entries is designated as the default. This default entry (or any other, for that matter) can be set up to mount the pool's default root file system (indicated by the pool's bootfs property).
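A sketch of what such a GRUB menu entry can look like (the title, pool, and dataset names are examples, and the exact kernel and boot-archive paths vary by Solaris release):

```
title Solaris ZFS (default BE)
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/s10u6
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
```

If the `bootfs` line is omitted, the entry boots whichever dataset the pool's bootfs property names.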
Booting from ZFS Booter phase, Root File System Selection - SPARC
On SPARC platforms, a control file (/<rootpool>/boot/menu.lst) will list the available root file systems. A simple boot or boot disk command at the OBP prompt will boot whatever root file system is identified by the bootfs pool property. The booter has a -L option which lists the bootable datasets on the disk being booted.
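A sketch of the SPARC boot commands at the OBP prompt (the BE name is a hypothetical example):

```
ok boot disk -L                     \ list the bootable datasets on the boot disk
ok boot disk -Z rpool/ROOT/altBE    \ boot a specific root file system
ok boot                             \ boot the default named by the bootfs property
```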
Once the root file system is identified, the paths to the files needed for booting are resolved in that root file system's name space. The booter loads the kernel's initial executable file (and other files, as necessary) and executes the kernel.
The booter passes (1) the device identifier of the boot device and (2) the name and type of the root file system as arguments to the kernel. Because the root file system is ZFS, the ZFS file system module is loaded and its mountroot function is called. The ZFS mountroot function reads the pool metadata from the boot device, initializes the pool, and mounts the designated dataset as root.
Boot Environments
A boot environment is a root file system plus all of its subordinate file systems (i.e., the file systems that are mounted under it). There is a one-to-one correspondence between boot environments and root file systems. A boot environment (sometimes abbreviated as a BE) is a fundamental object in Solaris system software management.
There can be multiple boot environments on a system, varying by version, patch level, or configuration. Boot environments can be related (for example, one BE might be a modified copy of another BE). Multiple BEs allow for safe application and testing of configuration changes.
Solaris supports a set of tools called Live Upgrade, which clone boot environments for the purpose of safe upgrades and patching. New install technology under development will support this as well. ZFS is ideally suited to making clone-and-modify fast, easy, and space-efficient. Both the clone and modify tools work much better if your root file system is ZFS. (The new install tool will require it for some features.)
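A hedged sketch of the clone-and-modify cycle with Live Upgrade on a ZFS root (BE names and the install image path are hypothetical):

```
# Clone the current boot environment; on a ZFS root this is a fast zfs clone.
lucreate -n patchedBE

# Upgrade the inactive clone while the system stays live on the original BE.
luupgrade -u -n patchedBE -s /path/to/install/image

# Activate the new BE and reboot into it; the old BE remains as a fallback.
luactivate patchedBE
init 6
```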
Boot environments can be composed of multiple datasets, with exactly one root file system. Regardless of how many datasets compose the boot environment, the clone and modify tools will treat the boot environment as a single manageable object.
The system remains live (still running the original root) during the upgrade of the clone. The upgrade gradually increases the amount of disk space used as copy-on-write takes place. New space is required only for files that are modified by the upgrade.
Boot environments can be composed of multiple datasets. By default, all of Solaris is installed into one dataset. Any optional directories placed under root (such as a /zoneroots directory, for example) will typically be in their own datasets. The /var directory can optionally be placed in its own dataset (for prevention of denial-of-service attacks by filling up root).
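As a rough sketch of such a layout, normally set up at install time rather than by hand (the dataset names and mountpoints below are assumptions for illustration):

```
# A separate /var dataset, so a full /var cannot fill the root file system.
zfs create -o mountpoint=/var rpool/ROOT/s10u6/var

# A dataset outside the BE's root dataset to hold zone roots.
zfs create -o mountpoint=/zoneroots rpool/zoneroots
```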
Swap and dump areas are (by default) zvols in the root pool. It is still possible to set up swap and dump areas on disk slices; some environments (such as those where the root pool is stored on compact flash) might need this. Swap and dump require two separate zvols (they can't share the same space as they can with slices).
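A sketch of setting up in-pool swap and dump zvols (the pool name and sizes are examples):

```
# Create a 2 GB zvol and add it as a swap device.
zfs create -V 2g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

# Create a separate zvol and make it the dump device
# (swap and dump cannot share one zvol).
zfs create -V 1g rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump
```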
Currently, root pools can be n-way mirrors only (no striping or RAID-Z). We hope to relax this restriction in the next release. On Solaris, root pools cannot have EFI labels (the boot firmware doesn't support booting from them).
The system must be running a version of Solaris that supports ZFS root (S10U6 or Nevada build 90 or later). Create a pool (mirrored only) in some available storage. Use lucreate to clone one of the UFS boot environments into the ZFS root pool.
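The migration steps above can be sketched as follows (device and BE names are hypothetical):

```
# Create a mirrored root pool on two available slices.
zpool create rpool mirror c1t0d0s0 c1t1d0s0

# Clone the running UFS boot environment into the new ZFS pool.
lucreate -n zfsBE -p rpool

# Make the ZFS BE the one to boot, then reboot into it.
luactivate zfsBE
init 6
```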
The existing Solaris install software is being adapted to set up a root pool and a root dataset and install Solaris into the root dataset. This will work with both the interactive install and the profile-driven install (Jumpstart). There are new keywords defined for use in Jumpstart profiles for setting up root pools and boot environments in those pools. Interactive install has a screen for selecting between UFS and ZFS root. Customization features will be limited.
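A sketch of what a Jumpstart profile using the new keywords can look like (disk names are examples, and "auto" lets the installer size the pool, swap, and dump):

```
install_type  initial_install
pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv       installbe bename s10u6BE
```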
Installation Future
New installation software is currently under development which will leverage ZFS's capabilities from the outset. Installation will be much easier with ZFS: no need to slice up a disk into separate volumes for root, swap, /export, and so on. The new packaging mechanism (IPS) also leverages ZFS features. An early version is available with the latest version of OpenSolaris. See: http://opensolaris.org/os/project/caiman
Further Information