Oracle Linux 9: Managing Storage Devices
F58078-06
June 2024
4 Working With Logical Volume Manager
Initializing and Managing Physical Volumes
Creating and Managing Volume Groups
Creating and Managing Logical Volumes
Creating and Managing Grouping With Tags
Managing Activation and Automatic Activation
Creating Logical Volume Snapshots
Using Thinly-Provisioned Logical Volumes
Configuring and Managing Thinly-Provisioned Logical Volumes
Using snapper With Thinly-Provisioned Logical Volumes
Preface
Oracle Linux 9: Managing Storage Devices provides information about storage device
management, as well as instructions on how to configure and manage disk partitions, swap
space, logical volumes, software RAID, block device encryption, iSCSI storage, and
multipathing.
Documentation License
The content in this document is licensed under the Creative Commons Attribution–Share Alike
4.0 (CC-BY-SA) license. In accordance with CC-BY-SA, if you distribute this content or an
adaptation of it, you must provide attribution to Oracle and retain the original copyright notices.
Conventions
The following text conventions are used in this document:
boldface
Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic
Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace
Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at https://www.oracle.com/corporate/accessibility/.
We are working to remove insensitive terms from our products and documentation. We are also mindful of the necessity to maintain compatibility with our customers' existing technologies and the need to ensure continuity of service as Oracle's offerings and industry standards evolve. Because of these technical constraints, our effort to remove insensitive terms is ongoing and will take time and external cooperation.
1 Using Disk Partitions
All storage devices, from hard disks to solid state drives to SD cards, must be partitioned to
become usable. A device must have at least one partition, although you can create several
partitions on any device.
Partitioning divides a disk drive into one or more reserved areas called partitions. Information about these partitions is stored in the partition table on the disk drive. The OS treats each partition as a separate disk that can contain a file system.
You create more partitions to simplify backups, enhance system security, and meet other
needs, such as setting up development sandboxes and test areas. You can add partitions to
store data that frequently changes, such as user home directories, databases, and log file
directories.
Note:
When you partition a block storage device, align the primary and logical partitions on one-megabyte (1048576 bytes) boundaries. If partitions, file system blocks, or RAID stripes are incorrectly aligned and overlap the boundaries of the underlying storage's sectors or pages, the device controller must modify twice as many sectors or pages as it would if the alignment were correct. This recommendation applies to most block storage devices, including hard disk drives, solid state drives (SSDs), LUNs on storage arrays, and host RAID adapters.
Partitioning Disks by Using fdisk
Note:
The two modes can differ in the options they support to perform specific actions. To
list supported options while in interactive mode, enter m at the mode's prompt. For
supported options in the command line mode, type:
fdisk -h
To run the fdisk command interactively, specify only the name of the disk device as an
argument, for example:
sudo fdisk /dev/sda
p
Displays the current partition table.
n
Initiates the process for creating new partitions.
t
Changes the partition type.
Tip:
To list all the supported partition types, enter l.
w
Commits changes you made to the partition table, then exits the interactive session.
q
Disregards any configuration changes you made and exits the session.
m
Displays all the supported commands in the interactive mode.
For more information, see the cfdisk(8) and fdisk(8) manual pages.
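For example, in command-line mode you can print a device's partition table without entering an interactive session; the device name here is only an assumption:
sudo fdisk -l /dev/sda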
The output contains device information summary such as disk size, disklabel type, and
partition details. The partition details are specified under the following field names:
Device
Lists the current partitions on the device.
Boot
Identifies the boot partition with an asterisk (*). This partition contains the files that the GRUB
bootloader needs to boot the system. Only one partition can be bootable.
Sectors
Displays sector sizes.
Size
Displays partition sizes.
Id and Type
Indicates the partition type ID (a hexadecimal code) and its corresponding partition type name.
Oracle Linux typically supports the following types:
5 Extended
An extended partition that can contain up to four logical partitions.
82 Linux swap
Swap space partition.
83 Linux
Linux partition for a file system that's not managed by LVM. This is the default partition
type.
8e Linux LVM
Linux partition that's managed by LVM.
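As an illustration of the t command, changing a partition's type to Linux LVM might look like the following sketch; the exact prompts vary between fdisk versions:
Command (m for help): t
Partition number (1,2, default 2): 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.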
Creating Partitions
The following example shows how to use the different fdisk interactive commands to partition
a disk. 2 partitions are created on /dev/sdb. The first partition is assigned 2 GB while the
second partition uses all the remaining disk space.
sudo fdisk /dev/sdb
The command runs a menu-based system where you must select the appropriate responses to
configure the partition. Example inputs are displayed in the following interactive session:
...
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-32767999, default 2048): <Enter>
Last sector, +sectors or +size{K,M,G,T,P} (2048-32767999, default 32767999): +2G
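The remainder of the session is not reproduced above. An illustrative continuation that creates the second partition from the remaining space and then writes the partition table might look as follows; prompts vary between fdisk versions:
Command (m for help): n
Select (default p): p
Partition number (2-4, default 2): <Enter>
First sector: <Enter>
Last sector, +sectors or +size{K,M,G,T,P}: <Enter>
Command (m for help): w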
Partitioning Disks by Using parted
The parted utility is considered more advanced than fdisk because it supports a larger set of commands and more disk label types, including GPT disks.
In interactive mode, parted supports the following commonly used commands:
print
Displays the current partition table.
mklabel
Creates a partition type according to the label you choose.
mkpart
Starts the process for creating new partitions.
quit
Exits the session.
Note:
In interactive sessions, changes are committed to disk immediately. Unlike fdisk,
the parted utility doesn't have an option for quitting without saving changes.
help
Displays all the supported commands in the interactive mode.
Creating Partitions
The following example shows how to use the different parted commands to create 2 disk
partitions. The first partition is assigned 2 GB while the second partition uses all the remaining
disk space.
sudo parted /dev/sdb
The command runs a menu-based system where you must select the appropriate responses to
configure the partition. Example inputs are displayed in the following interactive session:
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? <Enter>
Start? 1
End? 2GB
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 16.8GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? <Enter>
Start? 2001
End? -0
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 16.8GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
(parted) quit
Note:
Unless you specify otherwise, the size for the Start and End offsets is in megabytes.
To use another unit of measure, type the value and the unit together, for example,
2GB. To assign all remaining disk space to the partition, enter -0 for the End offset as
shown in the example.
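parted can also be scripted with the -s option instead of being run interactively. For example, the two partitions created above might be produced as follows; this is a sketch, and 100% is used here instead of -0 to take the remaining space:
sudo parted -s /dev/sdb mkpart primary ext2 1MB 2GB
sudo parted -s /dev/sdb mkpart primary ext2 2GB 100%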
Customizing Labels
By default, parted creates msdos-labeled partitions. When partitioning with this label, you're
also prompted for the partition type. Partition types can be primary, extended, or logical.
To use a different label, you would need to specify that label first with the mklabel command
before creating the partition. Depending on the label, you would be prompted during the
partitioning process for information, such as the partition name, as shown in the following
example:
sudo parted /dev/sdb
The command runs a menu-based system where you must select the appropriate responses to
configure the partition. Example inputs are displayed in the following interactive session:
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 16.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
(parted) mkpart
Partition name? []? Example
File system type? [ext2]? linux-swap
Start? 1
End? 2GB
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 16.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
(parted) quit
To know which types of file systems and labels are supported by parted, consult the GNU
Parted User Manual at https://www.gnu.org/software/parted/manual/, or enter info parted
to view the online user manual. For more information, see the parted(8) manual page.
Automatic Device Mappings for Partitions and File Systems
Udev is a subsystem that works with the kernel to monitor hardware or device changes and manages events related to those changes. Storage devices, partitions, and file systems are all allocated unique identifiers that the udev subsystem can read and use to automatically configure device mappings that you can use to identify the device or partition that you intend to work with. Device mappings for storage devices managed by udev are stored in /dev/disk, and you can identify a device by various identifiers, including the unique partition UUID, file system UUID, or the partition label.
When configuring mount points in /etc/fstab, use the file system UUID or, if set, the partition
label, for example:
UUID=8980b45b-a2ce-4df6-93d8-d1e72f3664a0 /boot xfs defaults 0 0
LABEL=home /home xfs defaults 0 0
Note that because file systems can span multiple devices, defining mount points against the file system UUID is preferable to using a partition UUID or label. File system UUIDs are assigned when the file system is created and are stored as part of the file system itself. If you copy the file system to a different device, the file system retains the same file system UUID. However, if you reformat a device by using the mkfs command, the device loses the file system UUID and a new UUID is generated.
Tip:
Use the lsblk command with the -o +UUID option to display the UUID for each device and partition listed, or use the -f option to display important file system information:
lsblk -f
You can also use the udevadm info command to obtain information about any udev
mappings on the system. For example:
sudo udevadm info /dev/sda3
Output might appear as follows, listing all the device links that udev has created for the device
and any other information that udev has on the device:
P: /devices/pci0000:00/0000:00:04.0/virtio1/host2/target2:0:0/2:0:0:1/block/sda/sda3
N: sda3
L: 0
S: disk/by-partuuid/18b918a1-16ca-4a07-91c6-455c6dc59fac
S: oracleoci/oraclevda3
S: disk/by-id/wwn-0x60170f5736a64bd7accb6a5e66fe70ee-part3
S: disk/by-path/pci-0000:00:04.0-scsi-0:0:0:1-part3
S: disk/by-id/scsi-360170f5736a64bd7accb6a5e66fe70ee-part3
S: disk/by-id/lvm-pv-uuid-LzsEPR-Mnbk-kQZY-eV3n-9u1T-lFXZ-x7ANgD
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/virtio1/host2/target2:0:0/2:0:0:1/block/sda/
sda3
E: DEVNAME=/dev/sda3
E: DEVTYPE=partition
E: DISKSEQ=9
E: PARTN=3
E: MAJOR=8
E: MINOR=3
E: SUBSYSTEM=block
E: USEC_INITIALIZED=20248855
E: ID_SCSI=1
E: ID_VENDOR=ORACLE
E: ID_VENDOR_ENC=ORACLE\x20\x20
E: ID_MODEL=BlockVolume
E: ID_MODEL_ENC=BlockVolume\x20\x20\x20\x20\x20
E: ID_REVISION=1.0
E: ID_TYPE=disk
E: ID_SERIAL=360170f5736a64bd7accb6a5e66fe70ee
E: ID_SERIAL_SHORT=60170f5736a64bd7accb6a5e66fe70ee
E: ID_WWN=0x60170f5736a64bd7
E: ID_WWN_VENDOR_EXTENSION=0xaccb6a5e66fe70ee
E: ID_WWN_WITH_EXTENSION=0x60170f5736a64bd7accb6a5e66fe70ee
E: ID_BUS=scsi
E: ID_PATH=pci-0000:00:04.0-scsi-0:0:0:1
E: ID_PATH_TAG=pci-0000_00_04_0-scsi-0_0_0_1
E: ID_PART_TABLE_UUID=a0b1f7d8-e84b-461f-a016-c3fcfed369c3
E: ID_PART_TABLE_TYPE=gpt
E: ID_SCSI_INQUIRY=1
E: ID_FS_UUID=LzsEPR-Mnbk-kQZY-eV3n-9u1T-lFXZ-x7ANgD
E: ID_FS_UUID_ENC=LzsEPR-Mnbk-kQZY-eV3n-9u1T-lFXZ-x7ANgD
E: ID_FS_VERSION=LVM2 001
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: ID_PART_ENTRY_SCHEME=gpt
E: ID_PART_ENTRY_UUID=18b918a1-16ca-4a07-91c6-455c6dc59fac
E: ID_PART_ENTRY_TYPE=e6d6d379-f507-44c2-a23c-238f2a3df928
E: ID_PART_ENTRY_NUMBER=3
E: ID_PART_ENTRY_OFFSET=4401152
E: ID_PART_ENTRY_SIZE=100456415
E: ID_PART_ENTRY_DISK=8:0
E: SCSI_TPGS=0
E: SCSI_TYPE=disk
E: SCSI_VENDOR=ORACLE
E: SCSI_VENDOR_ENC=ORACLE\x20\x20
E: SCSI_MODEL=BlockVolume
E: SCSI_MODEL_ENC=BlockVolume\x20\x20\x20\x20\x20
E: SCSI_REVISION=1.0
E: SCSI_IDENT_LUN_NAA_REGEXT=60170f5736a64bd7accb6a5e66fe70ee
E: UDISKS_IGNORE=1
E: DEVLINKS=/dev/disk/by-partuuid/18b918a1-16ca-4a07-91c6-455c6dc59fac
/dev/oracleoci/oraclevda3
/dev/disk/by-id/wwn-0x60170f5736a64bd7accb6a5e66fe70ee-part3
/dev/disk/by-path/pci-0000:00:04.0-scsi-0:0:0:1-part3
/dev/disk/by-id/scsi-360170f5736a64bd7accb6a5e66fe70ee-part3
/dev/disk/by-id/lvm-pv-uuid-LzsEPR-Mnbk-kQZY-eV3n-9u1T-lFXZ-x7ANgD
E: TAGS=:systemd:
E: CURRENT_TAGS=:systemd:
...
Manually Mapping Partition Tables to Devices
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 3907583 3905536 1.9G 83 Linux
/dev/sdb2 3907584 32767999 28860416 13.8G 83 Linux
In the following example, the first column of the output identifies the device files in /dev/
mapper.
sudo kpartx -l /dev/sdb
The kpartx command also works with image files such as an installation image. For example,
for an image file system.img, you can do the following:
sudo kpartx -a system.img
The output of the previous command shows that the drive image contains four partitions.
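The mapped partitions appear as device files under /dev/mapper, and the mappings can be removed when you finish working with the image. The following is a sketch; the loop device names depend on the system:
ls /dev/mapper/
sudo kpartx -d system.img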
2 Implementing Swap Spaces
Swap space is an area on disk that the operating system uses to supplement physical memory and keep the system performing efficiently.
Oracle Linux uses swap space if your system does not have enough physical memory for ongoing processes. When available memory is low, the operating system writes inactive pages to swap space on the disk, thus freeing up physical memory.
However, swap space is not an effective solution to memory shortage. Swap space is located
on disk drives, which have much slower access times than physical memory. Writing to swap
space effectively degrades system performance. If your system often resorts to swapping, you
should add more physical memory, not more swap space.
Swap space can be either in a swap file or on a separate swap partition. A dedicated swap
partition is faster, but changing the size of a swap file is easier. If you know how much swap
space your system requires, configure a swap partition. Otherwise, start with a swap file and
create a swap partition later when you know what your system requires.
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 6.10298 s, 168 MB/s
4. Add an entry to the /etc/fstab file so that the system uses the swap file at system
reboots, for example:
/swapfile swap swap defaults 0 0
5. Regenerate the mount units and register the new configuration in /etc/fstab.
sudo systemctl daemon-reload
7. (Optional) Test whether the new swap file was successfully created by inspecting the
active swap space:
cat /proc/swaps
sudo free -h
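For reference, the complete sequence for creating and enabling a swap file typically resembles the following sketch. The file name /swapfile and the 1 GB size mirror the output shown earlier; adjust them as needed:
# create the backing file, restrict its permissions, format it, and enable it
sudo dd if=/dev/zero of=/swapfile bs=1024 count=1000000
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile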
Creating a Swap Partition
4. Add an entry to /etc/fstab for the swap partition so that the system uses it following the next reboot, for example:
/dev/sda2 swap swap defaults 0 0
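The earlier steps of this procedure are not reproduced here; initializing and enabling the swap partition typically looks like the following sketch, where the device name matches the example entry above:
sudo mkswap /dev/sda2
sudo swapon /dev/sda2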
In this example, the system is using both a 4 GB swap partition on /dev/sda2 and a 1 GB swap file, /swapfile. The Priority column in the output of cat /proc/swaps shows that the operating system writes to the swap partition in preference to the swap file.
You can also view /proc/meminfo or use utilities such as free, top, and vmstat to view
swap space usage, for example:
grep Swap /proc/meminfo
SwapCached: 248 kB
SwapTotal: 5128752 kB
SwapFree: 5128364 kB
Removing a Swap File or Swap Partition
1. Disable swapping on the swap file or swap partition by using the swapoff command.
2. Remove the entry for the swap file or swap partition from /etc/fstab.
3. Optionally, remove the swap file or swap partition if you no longer need it.
3 Recommendations for Solid State Drives
Similar to other storage devices, solid state drives (SSDs) require their partitions to be on 1 MB
boundaries.
For btrfs and ext4 file systems on SSDs, specifying the discard option with mount sends
discard (TRIM) commands to an underlying SSD whenever blocks are freed. This option can
extend the working life of the device. However, the option also has a negative impact on
performance, even for SSDs that support queued discards.
Instead, use the fstrim command to discard empty and unused blocks, especially before
reinstalling the operating system or before creating a new file system on an SSD. Schedule
fstrim to run when impact on system performance is minimal. You can also apply fstrim to
a specific range of blocks rather than the whole file system.
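For example, assuming the file system of interest is mounted at /, a one-off trim and the packaged weekly timer might be used as follows:
sudo fstrim -v /
sudo systemctl enable --now fstrim.timer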
Note:
Using a minimal journal size of 1024 file-system blocks for ext4 on an SSD improves
performance. However, journaling also improves the robustness of the file system,
and therefore should not be disabled completely.
Btrfs automatically enables SSD optimization for a device if the value of /sys/block/device/
queue/rotational is 0, such as in the case of Xen Virtual Devices (XVD). If btrfs doesn't
detect a device as being an SSD, enable SSD optimization by specifying the ssd option to
mount. Note, however, that setting the ssd option doesn't imply that discard is also set.
If you configure swap files or partitions on an SSD, reduce the tendency of the kernel to
perform anticipatory writes to swap, which is controlled by the value of the vm.swappiness
kernel parameter and displayed as /proc/sys/vm/swappiness. The value of vm.swappiness
can be in the range 0 to 100, where a higher value implies a greater propensity to write to
swap. The default value is 60. The suggested value when swap has been configured on SSD
is 1. Use the following commands to change the value:
echo "vm.swappiness = 1" >> /etc/sysctl.conf
sudo sysctl -p
...
vm.swappiness = 1
For additional swap-related information in connection with the btrfs file system, see Creating
Swap Files on a Btrfs File System in Oracle Linux 9: Managing Local File Systems.
4 Working With Logical Volume Manager
Logical Volume Manager (LVM) enables you to manage multiple physical volumes and configure mirroring and striping of logical volumes. Through its use of the device mapper (DM) to create an abstraction layer, LVM provides the capability by which you can configure physical and logical volumes. With LVM, you obtain data redundancy as well as increased I/O performance.
In LVM, you first create volume groups from physical volumes. Physical volumes are storage
devices such as disk array LUNs, software or hardware RAID devices, hard drives, and disk
partitions. Over these physical volumes, you create volume groups. In turn, you configure
logical volumes in a volume group. Logical volumes become the foundation for configuring
software RAID, encryption, and other storage features.
You create file systems on logical volumes and mount the logical volume devices in the same
way as you would a physical device. If a file system on a logical volume becomes full with data,
you can increase the volume's capacity by using free space in the volume group. You can then
grow the file system, if the file system supports that capability. Physical storage devices can be
added to a volume group to further increase its capacity.
LVM is nondisruptive and transparent to users. Thus, management tasks such as increasing logical volume sizes, changing their layouts dynamically, or reconfiguring physical volumes don't require any system downtime.
Before setting up logical volumes on the system, complete the following requirements:
• Back up the data on the devices assigned for the physical volume.
• Unmount those devices. Creating physical volumes fails on mounted devices.
Configuring logical volumes with LVM involves the following tasks which you perform
sequentially.
1. Creating physical volumes from selected storage devices.
2. Creating a volume group from physical volumes.
3. Configuring logical volumes over the volume group.
4. As needed, creating snapshots of logical volumes.
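For example, assuming /dev/sdb and /dev/sdc are unused, unmounted devices reserved for LVM, the overall sequence might look like the following sketch; the names myvg and mylv, the sizes, and the XFS file system are placeholders:
# initialize physical volumes, build a volume group, and carve out a logical volume
sudo pvcreate /dev/sdb /dev/sdc
sudo vgcreate myvg /dev/sdb /dev/sdc
sudo lvcreate --size 10g --name mylv myvg
sudo mkfs.xfs /dev/myvg/mylv
sudo mount /dev/myvg/mylv /mnt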
Initializing and Managing Physical Volumes
To display information about physical volumes, use the pvdisplay, pvs, and pvscan commands.
To remove a physical volume from the control of LVM, use the pvremove command:
sudo pvremove device
Other commands that are available for managing physical volumes include pvchange, pvck,
pvmove, and pvresize.
For more information, see the lvm(8), pvcreate(8), and other LVM manual pages.
Creating and Managing Volume Groups
LVM divides the storage space within a volume group into physical extents. An extent, with a default size of 4 MB, is the smallest unit that LVM uses when allocating storage to logical volumes.
The allocation policy determines how LVM allocates extents from either a volume group or a
logical volume. The default allocation policy for a volume group is normal, whose rules include,
for example, not placing parallel stripes on the same physical volume. For a logical volume, the
default allocation policy is inherit, which means that the logical volume uses the same policy
as the volume group. Other allocation policies are anywhere, contiguous, cling, and cling_by_tags.
To display information about volume groups, use the vgdisplay, vgs, and vgscan
commands.
To remove a volume group from LVM, use the vgremove command:
sudo vgremove vol_group
The command warns you if logical volumes exist in the group and prompts for confirmation.
Other commands that are available for managing volume groups include vgchange, vgck,
vgexport, vgimport, vgmerge, vgrename, and vgsplit.
For more information, see the lvm(8), vgcreate(8), and other LVM manual pages.
Creating and Managing Logical Volumes
lvcreate uses the device mapper to create a block device file entry under /dev for each logical volume. The command also uses udev to set up symbolic links to this device file from /dev/mapper and /dev/volume_group. For example, the device that corresponds to the logical volume mylv in the volume group myvg might be /dev/dm-3, to which /dev/mapper/myvg-mylv and /dev/myvg/mylv are symbolically linked.
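For example, assuming those names, you could confirm the links as follows:
ls -l /dev/mapper/myvg-mylv /dev/myvg/mylv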
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvmsadc, lvmsar, lvrename, and lvresize.
For more information, see the lvm(8), lvcreate(8), and other LVM manual pages.
Creating and Managing Grouping With Tags
1. To add a tag to one or more existing objects, use the --addtag option of the pvchange, vgchange, or lvchange command, as shown in the sketch after these steps.
In this command, tag is the name of the tag and should always be prefixed with the "@" character. PV is one or more physical volumes, VG is one or more volume groups, and LV is one or more logical volumes. The --addtag option can be repeated to add multiple tags with one command.
Note:
Valid tag characters are A-Z a-z 0-9 _ + . - / = ! : # & and can be up to 1024
characters. Tags cannot start with a hyphen.
3. To delete a tag from one or more existing objects, use the --deltag option of the same commands.
The --deltag option can be repeated to remove multiple tags with one command.
4. To display tags, use the following commands:
pvs -o tags PV
vgs -o tags VG
lvs -o tags LV
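The commands referenced in the preceding steps are not reproduced in this extract. Based on the descriptions, they typically take forms such as the following sketch; the tag and object names are placeholders:
sudo pvchange --addtag @database /dev/sdb
sudo vgchange --addtag @database myvg
sudo lvchange --addtag @database myvg/mylv
sudo lvchange --deltag @database myvg/mylv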
For more information about tags, see the lvm(8), pvcreate(8), and other LVM manual
pages.
Managing Activation and Automatic Activation
You can also control which logical volumes are automatically activated by listing them in the /etc/lvm/lvm.conf auto_activation_volume_list parameter.
To manage the activation status of a logical volume, do the following:
1. To deactivate a logical volume, use the following command:
lvchange -an VG/LV
In the previous command, VG is the name of the volume group and LV is the name of the logical volume to be deactivated.
2. To deactivate all logical volumes in a volume group, use the following command:
lvchange -an VG
3. To activate a logical volume, use the following command:
lvchange -ay VG/LV
Note:
systemd automatically mounts LVM volumes using the mount points specified in
the /etc/fstab file.
4. To activate all deactivated logical volumes in a volume group, use the following command:
lvchange -ay VG
5. To control which logical volumes can be activated using the lvchange commands described above, you can also use the /etc/lvm/lvm.conf configuration file, with the activation/volume_list configuration option. The volume list can specify an entire volume group or just one logical volume within a volume group, and you can also use one or more tags to specify which logical volumes can be activated, for example:
volume_list = [ "myvg", "myvg2/mylv", "@mytag" ]
If the volume_list brackets are left empty, no logical volumes can be activated:
volume_list = []
2. To enable or disable automatic activation for a logical volume, use the following command:
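The command itself is not shown in this extract. Recent LVM releases support setting the autoactivation property directly, for example; the volume names are placeholders:
sudo lvchange --setautoactivation n myvg/mylv
sudo lvchange --setautoactivation y myvg/mylv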
If you leave the brackets empty in the auto_activation_volume_list parameter described earlier, the automatic activation functionality is completely disabled:
auto_activation_volume_list = []
Creating Logical Volume Snapshots
You can mount and modify the contents of the snapshot independently of the original volume. Or, you can preserve the snapshot as a record of the state of the original volume at the time that the snapshot was taken.
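A snapshot of an existing logical volume is typically created with the lvcreate --snapshot option, for example; the names and the snapshot size here are placeholders:
sudo lvcreate --size 500m --snapshot --name mylv-snapshot myvg/mylv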
The snapshot usually occupies less space than the original volume, depending on how much the contents of the volumes diverge over time. For example, a snapshot might require only one quarter of the space of the original volume. To calculate how much data is allocated to the snapshot, do the following:
1. Issue the lvs command.
2. From the command output, check the value under the Snap% column.
A value approaching 100% indicates that the snapshot is low on storage space.
3. Use lvresize to either grow the snapshot or reduce its size to save storage space.
To merge a snapshot with its original volume, use the lvconvert --merge command.
To remove a logical volume snapshot from a volume group, use the lvremove command as
you would for a logical volume, for example:
sudo lvremove myvg/mylv-snapshot
For more information, see the lvcreate(8) and lvremove(8) manual pages.
Using Thinly-Provisioned Logical Volumes
Running out of space in the thin pool can disrupt applications that access the volume. Use the lvs command to monitor the usage of the thin pool so that you can increase its size if its available storage is in danger of being exhausted.
In the following example, the thin pool mytp of size 1 GB is first created from the volume group
myvg:
sudo lvcreate --size 1g --thin myvg/mytp
Logical volume "mytp" created
Then, the thinly provisioned logical volume mytv is created with a virtual size of 2 GB:
sudo lvcreate --virtualsize 2g --thin myvg/mytp --name mytv
Logical volume "mytv" created
To create a snapshot of mytv, don't specify the size of the snapshot. Otherwise, its storage
would not be provisioned from mytp, for example:
sudo lvcreate --snapshot --name mytv-snapshot myvg/mytv
Logical volume "mytv-snapshot" created
If the volume group has sufficient space, use the lvresize command as needed to increase
the size of a thin pool, for example:
sudo lvresize -L+1G myvg/mytp
Extending logical volume mytp to 2 GiB
Logical volume mytp successfully resized
For more information, see the lvcreate(8) and lvresize(8) manual pages.
Using snapper With Thinly-Provisioned Logical Volumes
You can use the snapper utility to create and manage snapshots of thinly provisioned logical volumes. Setting up a snapper configuration uses the following parameters:
config_name
Name of the configuration
fs_type
File system type (ext4 or xfs)
fs_name
Path of the file system.
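The command that these parameters belong to is not reproduced in this extract; creating a snapper configuration for a thin volume typically resembles the following sketch, where the configuration name and mount point are assumptions:
sudo snapper -c config_name create-config -f "lvm(xfs)" /data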
post
A post snapshot records the state of a volume after a modification. A post snapshot should
always be paired with a pre snapshot that you take immediately before you make the
modification.
pre
A pre snapshot records the state of a volume immediately before a modification. A pre
snapshot should always be paired with a post snapshot that you take immediately after you
have completed the modification.
single
A single snapshot records the state of a volume but does not have any association with other
snapshots of the volume.
For example, the following command creates a pre snapshot of a volume and prints its number, N:
sudo snapper -c config_name create -t pre -p
N
The -p option causes snapper to display the number of the snapshot so that you can
reference it when you create the post snapshot or when you compare the contents of the pre
and post snapshots.
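The matching post snapshot command is not shown in this extract; the post snapshot is typically created after the modification by referencing the pre snapshot's number, for example:
sudo snapper -c config_name create -t post --pre-number N -p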
To display the files and directories that have been added, removed, or modified between the pre and post snapshots, use the status subcommand:
sudo snapper -c config_name status N..N'
To display the differences between the contents of the files in the pre and post snapshots, use the diff subcommand:
sudo snapper -c config_name diff N..N'
To undo the changes in the volume from post snapshot N' to pre snapshot N:
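The command for this step is not reproduced in this extract; snapper provides the undochange subcommand for this purpose, as in the following sketch:
sudo snapper -c config_name undochange N..N'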
5 Working With Software RAID
The Redundant Array of Independent Disks (RAID) feature provides the capability to spread
data across multiple drives to increase capacity, implement data redundancy, and increase
performance. RAID is implemented either in hardware through intelligent disk storage that
exports the RAID volumes as LUNs, or in software by the operating system. The Oracle Linux
kernel uses the multiple device (MD) driver to support software RAID to create virtual devices
from two or more physical storage devices. MD enables you to organize disk drives into RAID
devices and implement different RAID levels.
You can create RAID devices using mdadm or Logical Volume Manager (LVM).
RAID-0 (striping)
Increases performance but doesn't provide data redundancy. Data is broken down into units
(stripes) and written to all the drives in the array. Resilience decreases because the failure of a
single drive renders the array unusable.
RAID-1 (mirroring)
Provides data redundancy and resilience by writing identical data to each drive in the array. If
one drive fails, a mirror can satisfy I/O requests. Mirroring is an expensive solution because
the same information is written to all of the disks in the array.
RAID-5 (striping with distributed parity)
Provides data redundancy by striping data and parity information across the drives in the array. If a drive in the array fails, the parity information is used to reconstruct data to satisfy I/O requests. In this mode, read performance and resilience are degraded until you replace the failed drive and repopulate the new drive with data and parity information. RAID-5 is intermediate in expense between RAID-0 and RAID-1.
Creating Software RAID Devices using mdadm
To create a software RAID device, use the mdadm --create command, which takes the following arguments:
md_device
Name of the RAID device, for example, /dev/md0.
RAID_level
Level number of the RAID to create, for example, 5 for a RAID-5 configuration.
--raid-devices=N
Number of devices to become part of the RAID configuration.
devices
Devices to be configured as RAID, for example, /dev/sd[bcd] for 3 devices for the RAID
configuration.
The devices you list must total to the number you specified for --raid-devices.
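Assembled from the parameters above, the general form of the command is roughly the following sketch:
sudo mdadm --create md_device --level=RAID_level --raid-devices=N devices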
This example creates a RAID-5 device /dev/md1 from /dev/sdb, /dev/sdc, and /dev/sdd:
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[bcd]
The previous example creates a RAID-5 device /dev/md1 out of 3 devices. We can use 4
devices where one device is configured as a spare for expansion, reconfiguration, or
replacement of failed drives:
sudo mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]
Based on the configuration file, mdadm assembles the arrays at boot time.
For example, the following entries define the devices and arrays that correspond
to /dev/md0 and /dev/md1:
DEVICE /dev/sd[c-g]
ARRAY /dev/md0 devices=/dev/sdf,/dev/sdg
ARRAY /dev/md1 spares=1 devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
You can check the status of the assembled arrays by viewing /proc/mdstat:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdg[1] sdf[0]
To display a summary or detailed information about MD RAID devices, use the --query or --
detail option, respectively, with mdadm.
For more information, see the md(4), mdadm(8), and mdadm.conf(5) manual pages.
Creating and Managing Software RAID Devices using LVM
5. Consider persisting the logical volume by editing your /etc/fstab file. For example, adding a line that includes the UUID created in the previous step ensures that the logical volume is remounted after a reboot.
For more information about using UUIDs with the /etc/fstab file, see Automatic Device Mappings for Partitions and File Systems.
6. In the event that a device failure occurs for LVM RAID levels 5, 6, and 10, ensure that you
have a replacement physical volume attached to the volume group that contains the failed
RAID device, and do one of the following:
• Switch to a random spare physical volume present in the volume group (the command form is sketched after this list).
In this form, volume_group is the volume group and logical_volume is the LVM RAID logical volume.
• Switch to a specific physical volume present in the volume group by also naming the replacement.
Here, physical_volume is the specific volume that you want to replace the failed physical volume with, for example, /dev/sdb1.
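The repair commands referenced in the preceding list are not reproduced in this extract; lvconvert provides a --repair option that typically takes the following forms:
sudo lvconvert --repair volume_group/logical_volume
sudo lvconvert --repair volume_group/logical_volume physical_volume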
The logical volume contains three stripes, which is the number of devices to use in the myvg volume group. The stripesize of four kilobytes is the size of data that can be written to one device before moving to the next device.
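The creation command is not shown in this extract. A form consistent with the three stripes, 4 KB stripe size, and 2 GB size described above is sketched here:
sudo lvcreate --type raid0 --stripes 3 --stripesize 4 --size 2g --name mylvraid0 myvg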
The following output is displayed:
Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB (513 extents).
Logical volume "mylvraid0" created.
The lsblk command shows that three out of the four physical volumes are now running the myvg-mylvraid0 RAID 0 logical volume. Additionally, each instance of myvg-mylvraid0 is a subvolume included in another subvolume containing the data for the RAID logical volume. The data subvolumes are labeled myvg-mylvraid0_rimage_0, myvg-mylvraid0_rimage_1, and myvg-mylvraid0_rimage_2.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
└─myvg-mylvraid0_rimage_0 252:2 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdc 8:32 0 50G 0 disk
└─myvg-mylvraid0_rimage_1 252:3 0 684M 0 lvm
└─myvg-mylvraid0 252:5 0 2G 0 lvm
sdd 8:48 0 50G 0 disk
To display information about logical volumes, use the lvdisplay, lvs, and lvscan
commands.
To remove a RAID 0 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
The -m option specifies that you want 1 mirror device in the myvg volume group, where identical data is written to the first device and the second mirror device. You can specify additional mirror devices if you want. For example, -m 2 would create two mirrors of the first device. If one device fails, the other mirrors can continue to process requests.
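The creation command is not shown in this extract; based on the description and the 1 GB volume in the output below, it might resemble the following sketch:
sudo lvcreate --type raid1 -m 1 --size 1g --name mylvraid1 myvg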
The lsblk command shows that two out of the four available physical volumes are now part of the myvg-mylvraid1 RAID 1 logical volume. Additionally, each instance of myvg-mylvraid1 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid1_rimage_0 and myvg-mylvraid1_rimage_1, and the metadata subvolumes are labeled myvg-mylvraid1_rmeta_0 and myvg-mylvraid1_rmeta_1.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid1_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_0 252:3 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid1_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid1 252:6 0 1G 0 lvm
└─myvg-mylvraid1_rimage_1 252:5 0 1G 0 lvm
└─myvg-mylvraid1 252:6 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
sde 8:64 0 50G 0 disk
To display information about logical volumes, use the lvdisplay, lvs, and lvscan
commands. For example, you can use the following command to show the synchronization
rate between devices in myvg:
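The command itself is omitted in this extract; a form consistent with the outputs shown later in this chapter is the following sketch:
sudo lvs -a -o name,sync_percent,devices myvg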
To remove a RAID 1 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
For example, you can enable the data integrity feature when creating a RAID 1 logical volume
using the --raidintegrity y option. This creates subvolumes used to detect and correct data
corruption in your RAID images. You can also add or remove this subvolume after creating the
logical volume using the following lvconvert command:
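The lvconvert command referenced here is not reproduced in this extract; it typically takes the following form, with n instead of y to remove the integrity subvolumes again:
sudo lvconvert --raidintegrity y myvg/mylvraid1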
You can also use lvconvert to split a mirror into individual linear logical volumes. For example,
the following command splits the mirror:
If you had a three instance mirror, the same command would create a two way mirror and a
linear logical volume.
You can also add or remove mirrors to an existing mirror. For example, the following command
increases a two way mirror to a three way mirror by adding a third mirror:
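The lvconvert commands referenced in the preceding paragraphs are not reproduced in this extract; they typically resemble the following sketch, where the new volume name is a placeholder:
sudo lvconvert --splitmirrors 1 --name mylvsplit myvg/mylvraid1
sudo lvconvert -m 2 myvg/mylvraid1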
Running the same command again with -m 1 instead reduces the three-way mirror back to a two-way mirror, and you can also specify which drive to remove.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
The logical volume contains two stripes, which is the number of devices to use in the myvg volume group. However, the total number of devices required includes an additional device to account for the parity information. And so, a stripe count of two requires three available drives such that striping and parity information is spread across all three, even though the total usable device space available for striping is only equivalent to two devices. The parity information across all three devices is sufficient to deal with the loss of one of the devices.
The stripesize is not specified in the creation command, so the default of 64 kilobytes is used.
This is the size of data that can be written to one device before moving to the next device.
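The creation command is not shown in this extract; a form consistent with the two stripes and default stripe size described above is sketched here:
sudo lvcreate --type raid5 --stripes 2 --size 1g --name mylvraid5 myvg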
The lsblk command shows that three out of the four available physical volumes are now part of the myvg-mylvraid5 RAID 5 logical volume. Additionally, each instance of myvg-mylvraid5 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid5_rimage_0, myvg-mylvraid5_rimage_1, and myvg-mylvraid5_rimage_2, and the metadata subvolumes are labeled myvg-mylvraid5_rmeta_0, myvg-mylvraid5_rmeta_1, and myvg-mylvraid5_rmeta_2.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid5_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_0 252:3 0 512M 0 lvm
└─myvg-mylvraid5 252:8 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid5_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid5 252:8 0 1G 0 lvm
└─myvg-mylvraid5_rimage_1 252:5 0 512M 0 lvm
To display information about logical volumes, use the lvdisplay, lvs, and lvscan commands. For example, you can show the synchronization rate between devices in myvg by using the lvs command with the sync_percent field, as sketched earlier in this chapter.
To remove a RAID 5 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
For example, you can enable the data integrity feature when creating a RAID 5 logical volume by using the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the lvconvert command, as sketched earlier in this chapter.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
The logical volume contains three stripes, which is the number of devices to use in the myvg volume group. However, the total number of devices required includes an additional two devices to account for the double parity information. And so, a stripe count of three requires five available drives such that striping and double parity information is spread across all five, even though the total usable device space available for striping is only equivalent to three devices. The parity information across all five devices is sufficient to deal with the loss of two of the devices.
The stripesize is not specified in the creation command, so the default of 64 kilobytes is used.
This is the size of data that can be written to one device before moving to the next device.
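The creation command is not shown in this extract; a form consistent with the three stripes and default stripe size described above is sketched here:
sudo lvcreate --type raid6 --stripes 3 --size 1g --name mylvraid6 myvg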
The lsblk command shows that all five of the available physical volumes are now part of the myvg-mylvraid6 RAID 6 logical volume. Additionally, each instance of myvg-mylvraid6 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid6_rimage_0, myvg-mylvraid6_rimage_1, myvg-mylvraid6_rimage_2, myvg-mylvraid6_rimage_3, and myvg-mylvraid6_rimage_4, and the metadata subvolumes are labeled myvg-mylvraid6_rmeta_0, myvg-mylvraid6_rmeta_1, myvg-mylvraid6_rmeta_2, myvg-mylvraid6_rmeta_3, and myvg-mylvraid6_rmeta_4.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid6_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_0 252:3 0 344M 0 lvm
  └─myvg-mylvraid6 252:12 0 1G 0 lvm
sdc 8:32 0 50G 0 disk
├─myvg-mylvraid6_rmeta_1 252:4 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_1 252:5 0 344M 0 lvm
  └─myvg-mylvraid6 252:12 0 1G 0 lvm
sdd 8:48 0 50G 0 disk
├─myvg-mylvraid6_rmeta_2 252:6 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_2 252:7 0 344M 0 lvm
  └─myvg-mylvraid6 252:12 0 1G 0 lvm
sde 8:64 0 50G 0 disk
├─myvg-mylvraid6_rmeta_3 252:8 0 4M 0 lvm
│ └─myvg-mylvraid6 252:12 0 1G 0 lvm
└─myvg-mylvraid6_rimage_3 252:9 0 344M 0 lvm
To display information about logical volumes, use the lvdisplay, lvs, and lvscan commands. For example, showing the synchronization rate between devices in myvg (using the lvs form sketched earlier in this chapter) produces output similar to the following:
mylvraid6 31.26 mylvraid6_rimage_0(0),mylvraid6_rimage_1(0),mylvraid6_rimage_2(0),mylvraid6_rimage_3(0),mylvraid6_rimage_4(0)
[mylvraid6_rimage_0] /dev/sdf(1)
[mylvraid6_rimage_1] /dev/sdg(1)
[mylvraid6_rimage_2] /dev/sdh(1)
[mylvraid6_rimage_3] /dev/sdi(1)
[mylvraid6_rimage_4] /dev/sdj(1)
[mylvraid6_rmeta_0] /dev/sdf(0)
[mylvraid6_rmeta_1] /dev/sdg(0)
[mylvraid6_rmeta_2] /dev/sdh(0)
[mylvraid6_rmeta_3] /dev/sdi(0)
[mylvraid6_rmeta_4] /dev/sdj(0)
To remove a RAID 6 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
For example, you can enable the data integrity feature when creating a RAID 6 logical volume by using the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the lvconvert command, as sketched earlier in this chapter.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
The -m option specifies that you want 1 mirror device in the myvg volume group, where identical data is written to pairs of mirrored device sets that also use striping across the sets. Logical volume data remains available if at least one device in each mirrored set remains functional.
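The creation command is not shown in this extract; based on the description and the 10 GB volume in the output below, it might resemble the following sketch:
sudo lvcreate --type raid10 --stripes 2 -m 1 --size 10g --name mylvraid10 myvg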
The lsblk command shows that four out of the five available physical volumes are now part of the myvg-mylvraid10 RAID 10 logical volume. Additionally, each instance of myvg-mylvraid10 includes subvolume pairs for data and metadata. The data subvolumes are labeled myvg-mylvraid10_rimage_0, myvg-mylvraid10_rimage_1, myvg-mylvraid10_rimage_2, and myvg-mylvraid10_rimage_3, and the metadata subvolumes are labeled myvg-mylvraid10_rmeta_0, myvg-mylvraid10_rmeta_1, myvg-mylvraid10_rmeta_2, and myvg-mylvraid10_rmeta_3.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 50G 0 disk
├─myvg-mylvraid10_rmeta_0 252:2 0 4M 0 lvm
│ └─myvg-mylvraid10 252:10 0 10G 0 lvm
└─myvg-mylvraid10_rimage_0 252:3 0 5G 0 lvm
└─myvg-mylvraid10 252:10 0 10G 0 lvm
sdc 8:32 0 50G 0 disk
To display information about logical volumes, use the lvdisplay, lvs, and lvscan commands. For example, showing the synchronization rate between devices in myvg (using the lvs form sketched earlier in this chapter) produces output similar to the following:
mylvraid10 68.82 mylvraid10_rimage_0(0),mylvraid10_rimage_1(0),mylvraid10_rimage_2(0),mylvraid10_rimage_3(0)
[mylvraid10_rimage_0] /dev/sdf(1)
[mylvraid10_rimage_1] /dev/sdg(1)
[mylvraid10_rimage_2] /dev/sdh(1)
[mylvraid10_rimage_3] /dev/sdi(1)
[mylvraid10_rmeta_0] /dev/sdf(0)
[mylvraid10_rmeta_1] /dev/sdg(0)
[mylvraid10_rmeta_2] /dev/sdh(0)
[mylvraid10_rmeta_3] /dev/sdi(0)
To remove a RAID 10 logical volume from a volume group, use the lvremove command:
sudo lvremove vol_group/logical_vol
Other commands that are available for managing logical volumes include lvchange,
lvconvert, lvmdiskscan, lvrename, lvextend, lvreduce, and lvresize.
For example, you can enable the data integrity feature when creating a RAID 10 logical volume by using the --raidintegrity y option. This creates subvolumes used to detect and correct data corruption in your RAID images. You can also add or remove this subvolume after creating the logical volume by using the lvconvert command, as sketched earlier in this chapter.
For more information, see the lvmraid, lvcreate, and lvconvert manual pages.
6 Using Encrypted Block Devices
When you install Oracle Linux, you have the option to configure encryption on system volumes
except the boot partition. To protect the bootable partition itself, consider using any password
protection mechanism that's built into the BIOS or setting up a GRUB password.
Creating Encrypted Volumes
1. Initialize a LUKS partition on the device and set up an initial passphrase, for example:
sudo cryptsetup luksFormat /dev/sdd
2. Open the device and create the device mapping, for example:
sudo cryptsetup luksOpen /dev/sdd cryptfs
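The original step that records the mapping in /etc/crypttab is not reproduced in this extract; a typical entry for the mapping created above might be the following, where the third field of none means that the passphrase is requested interactively:
cryptfs /dev/sdd none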
This entry causes the operating system to prompt you for the passphrase at boot time.
You use an encrypted volume in the same way as you would a physical storage device, for example, as an LVM physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or raw device. For example, to mount the encrypted volume automatically, you would create an entry in /etc/fstab to mount the mapped device (/dev/mapper/cryptfs), not the physical device (/dev/sdd).
For more information, see the cryptsetup(8) and crypttab(5) manual pages.
7 Working With Linux I-O Storage
Oracle Linux uses the Linux-IO Target (LIO) to provide the block-storage SCSI target for FCoE,
iSCSI, and Mellanox InfiniBand (iSER and SRP). You manage LIO by using the targetcli
shell provided in the targetcli package. Note that Mellanox InfiniBand is only supported with
UEK. You can install the targetcli package by running:
sudo dnf install -y targetcli
Fibre Channel over Ethernet (FCoE) encapsulates Fibre Channel packets in Ethernet frames,
which enables them to be sent over Ethernet networks. To configure FCoE storage, you need
to install the fcoe-utils package that includes both the fcoemon service and the fcoeadm
command. You can install the fcoe-utils package by running:
sudo dnf install -y fcoe-utils
Figure 7-1 iSCSI Initiators and an iSCSI Target Connected via an IP-based Network
A hardware-based iSCSI initiator uses a dedicated iSCSI HBA. Oracle Linux supports iSCSI
initiator functionality in software. The kernel-resident device driver uses the existing network
interface card (NIC) and network stack to emulate a hardware iSCSI initiator. The iSCSI
initiator functionality isn't available at the level of the system BIOS. Thus, you can't boot an
Oracle Linux system from iSCSI storage.
To improve performance, some network cards implement TCP/IP Offload Engines (TOE) that
can create a TCP frame for the iSCSI packet in hardware. Oracle Linux doesn't support TOE,
although suitable drivers might be available directly from some card vendors.
For more information about LIO, see http://linux-iscsi.org/wiki/Main_Page.
Configuring an iSCSI Target
1. Run the targetcli interactive shell:
sudo targetcli
2. (Optional) Use the ls command to list the object hierarchy, which is initially empty:
ls
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- block .............................................. [Storage Objects: 0]
| o- fileio ............................................. [Storage Objects: 0]
| o- pscsi .............................................. [Storage Objects: 0]
| o- ramdisk ............................................ [Storage Objects: 0]
o- iscsi ........................................................ [Targets: 0]
o- loopback ..................................................... [Targets: 0]
3. Change to the /backstores/block directory and create a block storage object for the disk
partitions that you want to provide as LUNs, for example:
cd /backstores/block
/backstores/block> create name=LUN_0 dev=/dev/sdb
Created block storage object LUN_0 using /dev/sdb.
/backstores/block> create name=LUN_1 dev=/dev/sdc
Created block storage object LUN_1 using /dev/sdc.
The names that you assign to the storage objects are arbitrary.
Note:
The device path varies based on the Oracle Linux instance's disk configuration.
4. Change to the /iscsi directory and create an iSCSI target:
cd /iscsi
/iscsi> create
Created target iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344.
Created TPG 1.
5. (Optional): List the target portal group (TPG) hierarchy, which is initially empty:
/iscsi> ls
6. Change to the luns subdirectory of the TPG directory hierarchy and add the LUNs to the
target portal group:
/iscsi> cd iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344/tpg1/luns
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_0
Created LUN 0.
/iscsi/iqn.20...344/tpg1/luns> create /backstores/block/LUN_1
Created LUN 1.
7. Change to the portals subdirectory of the TPG directory hierarchy and specify the IP
address and TCP port of the iSCSI endpoint:
/iscsi/iqn.20...344/tpg1/luns> cd ../portals
/iscsi/iqn.20.../tpg1/portals> create 10.150.30.72 3260
Using default IP port 3260
Created network portal 10.150.30.72:3260.
Note:
An existing default portal would cause the portal creation to fail and a message
similar to the following is generated:
Could not create NetworkPortal in configFS
To resolve the issue, delete the default portal, then create the new portal again,
for example:
/iscsi/iqn.20.../tpg1/portals> delete 0.0.0.0 ip_port=3260
8. Enable TCP port 3260 either by adding the port or adding the iSCSI target:
• Add the port:
sudo firewall-cmd --permanent --add-port=3260/tcp
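The alternative of adding the iSCSI target service, and the reload that applies either change, are not shown above; they typically look like the following sketch:
sudo firewall-cmd --permanent --add-service=iscsi-target
sudo firewall-cmd --reload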
9. List the object hierarchy, which now shows the configured block storage objects and TPG:
/iscsi/iqn.20.../tpg1/portals> ls /
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- block .............................................. [Storage Objects: 1]
| | o- LUN_0 ....................... [/dev/sdb (10.0GiB) write-thru activated]
| | o- LUN_1 ....................... [/dev/sdc (10.0GiB) write-thru activated]
| o- fileio ............................................. [Storage Objects: 0]
| o- pscsi .............................................. [Storage Objects: 0]
| o- ramdisk ............................................ [Storage Objects: 0]
o- iscsi ........................................................ [Targets: 1]
| o- iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 ............ [TPGs: 1]
| o- tpg1 ........................................... [no-gen-acls, no-auth]
| o- acls ...................................................... [ACLs: 0]
| o- luns ...................................................... [LUNs: 1]
| | o- lun0 ..................................... [block/LUN_0 (/dev/sdb)]
| | o- lun1 ..................................... [block/LUN_1 (/dev/sdc)]
| o- portals ................................................ [Portals: 1]
| o- 10.150.30.72:3260 ............................................ [OK]
o- loopback ..................................................... [Targets: 0]
10. Configure access rights for the target portal group. For example, to configure a demonstration mode that doesn't require authentication, change to the TPG directory and set the attributes as shown in the following example:
/iscsi/iqn.20.../tpg1/portals> cd ..
/iscsi/iqn.20...14f87344/tpg1> set attribute authentication=0
demo_mode_write_protect=0
generate_node_acls=1 cache_dynamic_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
Caution:
The demonstration mode is inherently insecure. For information about
configuring secure authentication modes, see http://linux-iscsi.org/wiki/
ISCSI#Define_access_rights.
11. Change to the root (/) directory and save the configuration.
This step ensures that the changes persist across system reboots. Omitting the step might
result in an empty configuration.
/iscsi/iqn.20...14f87344/tpg1> cd /
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
Restoring a Saved Configuration for an iSCSI Target
As an alternative, run the following command to restore saved configurations from previous versions:
/> restoreconfig /etc/target/backup/saveconfig-20180516-18:53:29.json
Configuring an iSCSI Initiator
1. Install the iSCSI initiator utilities if they aren't already present:
sudo dnf install iscsi-initiator-utils
2. Use a discovery method, such as SendTargets or the Internet Storage Name Service (iSNS), to discover the iSCSI targets at the specified IP address.
For example, you would use SendTargets as follows:
sudo iscsiadm -m discovery -t sendtargets -p 10.150.30.72
This command also starts the iscsid service if it is not already running.
Note:
Before running the discovery process, ensure that the firewall is configured to
accept communication with an iSCSI target and that ICMP traffic is allowed.
3. Display information about the targets that are now stored in the discovery database.
sudo iscsiadm -m discoverydb -t st -p 10.150.30.72
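The step that logs the initiator in to the target is not reproduced in this extract; it typically uses node mode, as in the following sketch, where the IQN matches the target created earlier:
sudo iscsiadm -m node -T iqn.2013-01.com.mydom.host01.x8664:sn.ef8e14f87344 -p 10.150.30.72 -l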
5. Verify that the session is active and display the available LUNs:
sudo iscsiadm -m session -P 3
The LUNs are represented as SCSI block devices (sd*) in the local /dev directory, for example:
sudo fdisk -l | grep "/dev/sd[bc]"
To distinguish between target LUNs, examine the paths under /dev/disk/by-path, which is
displayed by using the following command:
ls -l /dev/disk/by-path/
You can view the initialization messages for the LUNs in the /var/log/messages file, for
example:
...
May 18 14:19:36 localhost kernel: [12079.963376] sd 8:0:0:0: [sdb] Attached SCSI disk
...
You configure and use a LUN in the same way that you would any other physical storage
device, for example, as an LVM physical volume, a file system, a swap partition, an Automatic
Storage Management (ASM) disk, or a raw device.
When creating mount entries for iSCSI LUNs in /etc/fstab, specify the _netdev option, for
example:
UUID=084591f8-6b8b-c857-f002-ecf8a3b387f3 /iscsi_mount_point ext4 _netdev
0 0
This option indicates that the file system resides on a device that requires network access, and
prevents the system from mounting the file system until the network has been enabled.
Note:
When adding iSCSI LUN entries to /etc/fstab, specify the LUN by using UUID=UUID rather than the device path. A device path can change after reconnecting the storage or rebooting the system. To display the UUID of a block device, use the blkid command.
Any discovered LUNs remain available across reboots provided that the target continues to serve those LUNs and you don't log the system off the target.
For more information, see the iscsiadm(8) and iscsid(8) manual pages.
Updating the Discovery Database
To delete records from the database that are no longer supported by the target:
sudo iscsiadm -m discoverydb -t st -p 10.150.30.72 -o delete --discover
8 Using Multipathing for Efficient Storage
Multiple paths to storage devices provide connection redundancy, failover capability, load
balancing, and improved performance. Device-Mapper Multipath (DM-Multipath) is a
multipathing tool that enables you to represent multiple I/O paths between a server and a
storage device as a single path.
Device Multipathing Sample Setup
Without DM-Multipath, the system treats each path as being separate even though both paths
connect to the same storage device. DM-Multipath creates a single multipath device, /dev/
mapper/mpathN , that subsumes the underlying devices, /dev/sdc and /dev/sdf.
The multipathing service (multipathd) handles I/O from and to a multipathed device in one of
the following ways:
Active/Active
I/O is distributed across all available paths, either by round-robin assignment or dynamic load-balancing.
Active/Passive (standby failover)
I/O is sent through only one active path. If that path fails, multipathd switches I/O to a standby path.
Note:
DM-Multipath can provide failover in the case of path failure, such as in a SAN fabric.
Disk media failure must be handled by using either a software or hardware RAID
solution.
If the user_friendly_names configuration property is set to yes, the devices are mapped as /dev/mapper/mpathN, where N is the multipath group number. In addition, you can use the alias attribute to assign meaningful names to the devices. See Working With the Multipathing Configuration File.
To check the status of user_friendly_names and other DM-multipath settings, issue the
mpathconf command, for example:
sudo mpathconf
You can use the multipath device in /dev/mapper to reference the storage in the same way as
you would any other physical storage device. For example, you can configure it as an LVM
physical volume, file system, swap partition, Automatic Storage Management (ASM) disk, or
raw device.
Configuring Multipathing
1. Install the device-mapper-multipath package.
sudo dnf install device-mapper-multipath
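The remaining steps, including the command whose output is described below, are not reproduced in this extract; enabling multipathing and inspecting the resulting topology typically involves the following sketch:
sudo mpathconf --enable --with_multipathd y
sudo multipath -ll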
The multipath -ll command displays output similar to the following when multipathing is configured properly:
The sample output shows that /dev/mapper/mpath1 subsumes two paths (/dev/sdb and /dev/
sdc) to 20 GB of storage in an active/active configuration using round-robin I/O path selection.
The WWID that identifies the storage is 360000970000292602744533030303730 and the name
of the multipath device under sysfs is dm-0.
Working With the Multipathing Configuration File
Multipathing is configured in the /etc/multipath.conf file, which is divided into the following main sections:
defaults
Defines default multipath settings, which can be overridden by settings in the devices section. In turn, definitions in the devices section can be overridden by settings in the multipaths section.
blacklist
Defines devices that are excluded from multipath topology discovery. Excluded devices cannot
be subsumed by a multipath device.
Devices can be excluded in different ways: by WWID (wwid) or by device name (devnode).
blacklist_exceptions
Defines devices that are included in multipath topology discovery, even if the devices are
implicitly or explicitly listed in the blacklist section.
multipaths
Defines settings for a multipath device that's identified by its WWID.
The alias attribute specifies the name of the multipath device as it will appear in /dev/mapper
instead of a name based on either the WWID or the multipath group number.
devices
Defines settings for individual types of storage controller. Each controller type is identified by
the vendor, product, and optional revision settings, which must match the information in
sysfs for the device.
To add a storage device that DM-Multipath doesn't list as being supported, obtain the vendor,
product, and revision information from the vendor, model, and rev files under /sys/block/
device_name/device.
uid_attribute ID_SERIAL
}
multipaths {
multipath {
wwid 360000970000292602744533030303730
}
}
In this standby failover configuration, I/O continues through a remaining active network
interface if a network interface fails on the iSCSI initiator.
Note:
If you edit /etc/multipath.conf, restart the multipathd service to make it re-read
the file:
sudo systemctl restart multipathd