
Chapter 3

File Systems and Management of Data Storages


3.1. File system Administration
A file system is an abstraction that supports the creation, deletion, and modification of files, and
organization of files into directories. It also supports control of access to files and directories and
manages the disk space accorded to it. We tend to use the phrase file system to refer to a hierarchical,
tree-like structure whose internal nodes are directories and whose external nodes are non-directory-files
(or perhaps empty directories), but a file system is actually the flat structure sitting on a linear storage
device such as a disk partition.
This flat structure is completely hidden from the user, but not entirely from the programmer. The user
sees this as a hierarchically organized collection of files and directories, which is more properly called
the directory hierarchy or file hierarchy. Generally speaking, the term file system refers to how the
operating system manages files. There are two main types of file system: a disk file system (such as
ufs or hfs), which lives on a physical disk, and a network file system such as NFS.
3.1.1. Partitioning Disks with fdisk and parted
In the early versions of UNIX, the disk was configured as a single partition with a single file system. As
disks grew in size it became advantageous in operating system design to partition them into multiple
logical devices that were actually distinct physical portions of the same disk. Partitioning a disk allowed
for:
 More control of security: different user groups could be placed into different partitions, and
different mounting options could be used on separate partitions, so that some might be read only,
and others might have different security options
 More efficient use of the disk: different partitions could use different block sizes and file size
limits
 More efficient operation: shorter seek distances would improve disk access times
 Improved back-up procedures: backups could be done on partitions, not disks, thereby making it
possible to back-up different file systems at different intervals
 Improved reliability: damage could be restricted to a single partition rather than the entire disk,
and redundancy could be built in
The biggest disadvantage of partitioning a disk is that partitions cannot be increased in size, so
when they are created, if they are too small and fill up, the entire disk would need to be
reorganized. There are other disadvantages of partitioning a disk, but they tend to be outweighed
by the advantages.
Modern UNIX systems allow disks to be partitioned into two or more separate physical entities,
each containing a distinct file system. The file systems in separate physical partitions are

connected to each other by virtue of their being mounted on directories of the one and only
directory hierarchy rooted at "/", but they are otherwise unrelated. Each separate file system has
its own pool of free blocks, its own set of files, and its own i-node table.
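Section 3.1.1 names the fdisk and parted tools; as a minimal, illustrative sketch (the device name /dev/sdb and the sizes are assumptions, not from this text), a disk can be labeled and partitioned with parted as follows:
# parted /dev/sdb print
# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 1MiB 10GiB
Alternatively, running fdisk /dev/sdb starts an interactive session in which n creates a new partition and w writes the new partition table to the disk.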
3.1.2. Creating, Mounting and Maintaining File systems
Partitioning a disk divides the disk into logically distinct regions, often named with letters from
the beginning of the alphabet, i.e., a, b, c, and so on. In UNIX, partitions are not necessarily
disjoint. The "c" partition is almost always the entire disk, and typically does not have a file
system. It is used by utilities that access the disk block by block. The "b" partition traditionally
was reserved as the swapping store, i.e., the partition used for swapping; it did not have a file
system written onto it. The innermost partition, "a", is where the kernel is installed and it is
typically very small, since little else should be put in it. If a disk has a 100 GB storage capacity,
you might make the first 1 GB partition a, the next 10 GB, b, the next 50 GB, d, the remainder, e,
and the whole disk, c. In order to create files in a partition, a file system must be created in that
partition.

Figure 3.1: Partitions, block groups, and their structures in Ext2


Creating a file system includes doing the following:
 Dividing the partition into equal size logical blocks, typically anywhere from 1024 to
4096 bytes each, depending upon expected file size. The block size is fixed at file system
creation time; it cannot be changed after that without rebuilding the file system. Block
size is always a power of two. Larger block sizes result in more wasted disk space in
internal fragmentation, whereas smaller sizes result in more disk activity and more disk
waits. Larger blocks are appropriate for file systems expecting large files. In the file
systems on my personal Linux host, the root file system uses 1024-byte blocks and the
second partition, used for user data, uses 4096-byte blocks.

 Deciding how many alternate blocks are needed in each cylinder. (A cylinder is the set of
all tracks that are accessible from one position of the disk head assembly. In other words,
a cylinder is the set of tracks that are vertically aligned one on top of the other.) When a
block becomes bad, it has to be removed from the file system. Alternate blocks are
reserved to replace bad blocks.
 Deciding how many cylinders are in each cylinder group. Grouping cylinders is done for
performance and reliability. A cylinder group is a collection of adjacent cylinders that are
grouped together to localize information. The file system tries to allocate all data blocks
for a given file from the same cylinder group if the file is small enough.
 Deciding how many bytes to allocate to each i-node. They can be as small as 64 bytes.
The i-nodes on the Linux Ext file systems are usually 128 bytes. Larger i-nodes mean
fewer i-nodes, which means fewer files; smaller i-nodes allow more files.
 Dividing the cylinder group or partition, depending on the system, into three physical
regions:
 The superblock. This stores the map of how the disk is used as well as the file
system parameters. In the Linux file system, the superblock contains
information such as the block size in bits and bytes, the identifier of the physical
device on which the superblock resides, various flags indicating whether it is
read-only or locked, or how it is mounted, and queues of mounts waiting to be
performed.
 The i-node area. This is where used and free i-nodes are stored. The used and
free i-nodes were traditionally arranged into two lists, the i-list and the free-list,
with the start of each list stored in the superblock. That method of storage
management is obsolete. Later versions of UNIX used a more efficient method,
in which, when the file system is created, a fixed number of i-nodes was
allocated within each cylinder group. This puts i-nodes closer to their data
blocks, reducing the overall number of seeks.
 The data area. This is where the data blocks are stored.
Many other things need to be done to the partition to make a file system. For example, because
the disk rotates while it is reading data, there need to be gaps between blocks. How big should
the gaps be? Because the disk head has to advance to a new cylinder to read the next block
sometimes, and the disk is rotating while it advances, the next block should not be in the same

sector. Which sector should be read next? Also, the superblock is usually replicated on the
disk for reliability. How many times? Where should it be placed?
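Most of these decisions are fixed when the file system is built. As an illustrative sketch using the Linux ext4 tools (the device name and the values are assumptions), the block size and i-node density discussed above can be chosen explicitly at creation time:
# mkfs.ext4 -b 4096 -i 16384 /dev/sdb1
Here -b sets the block size to 4096 bytes and -i requests roughly one i-node per 16384 bytes of file system space; neither value can be changed later without rebuilding the file system.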

File System Mounting


Multiple storage devices are usually attached to a modern computer. Some operating systems
treat the file systems on these devices as independent entities. In Microsoft's DOS, and
systems derived from it, for example, each separate disk partition has a drive letter, and the
file hierarchy on each separate drive or partition is separate from all others attached to the
computer. In effect, DOS has multiple trees whose roots are drive letters. For example, a
typical Windows machine may have a directory E:\users on the "E:" drive and a directory
"C:\Temp" on the "C:" drive but these directories are in two separate trees, not a single tree.
In UNIX there is a single file hierarchy. It is a tree if you think of the leaf nodes as filenames,
but it is not a tree if you think of the leaf nodes as actual files, since a single file can have
more than one name, existing as a directory entry in multiple directories, making the topology
a directed acyclic graph. We will take the liberty of referring to it as a tree, knowing that this is
inaccurate.
In UNIX, every accessible file is in this single file hierarchy, no matter how many disks are
attached. There is no such thing as the "C" drive" or "E" drive" in UNIX. This is because of
the concept of mounting. In UNIX, a file system may be mounted onto the single file
hierarchy by attaching that file system's root to some directory in the hierarchy. It is like
grafting a branch onto a tree. By mounting a file system onto the file hierarchy, the file system
becomes a subtree of the hierarchy, making it possible to navigate into the file system from the
rest of the file hierarchy. The mount command without arguments displays a list showing all
of the file systems currently mounted on the file hierarchy.
Mounting allows two file systems to be merged into one. For example, when you insert a USB flash
disk, its file system is attached to a directory (such as /mnt) in the root file system:
mount("/dev/fd0", "/mnt", 0)

Figure 3.2 File mounting diagram

Remote file system mounting


• Same idea, but file system is actually on some other machine
• Implementation uses remote procedure call
– Package up the user’s file system operation
– Send it to the remote machine where it gets executed like a local request
– Send back the answer
• Very common in modern systems
– Network File System (NFS)
– Server Message Block (SMB)
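As a hedged example of the NFS case (the server name fileserver and the export path /export/home are assumptions), a remote file system is mounted in the same way as a local one, with the device replaced by a host:path pair:
# mkdir -p /mnt/home
# mount -t nfs fileserver:/export/home /mnt/home
After this command, files under /export/home on the server appear under /mnt/home on the local machine.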
In UNIX, unlike DOS or Windows, all files on all volumes are part of a single directory
hierarchy; this is achieved by mounting one file system onto another. What exactly do we mean
by mounting one file system onto another? To make it clear, suppose that we have a root file
system that looks in part like the one in Figure 3.3.

Figure 3.3: Root file system before mount


It has a subdirectory named data with two subdirectories named a and c. Notice that c is not
empty. Suppose this file system is located on one internal disk of the computer, and suppose that
there is a second internal disk that contains its own file system, as shown in Figure 3.4. The
second disk is represented by a device special file named /dev/hdb. There is a file system on this

second disk, whose root has two subdirectories named staff and students, and students has
subdirectories grad and undergrad. To make the files in this second file system available to the
users of the system, it has to be mounted on a mount point. A mount point is a directory in the
root file system that will be replaced by the root of the mounted file system.
Figure 3.4: File system /dev/hdb
If the directory /data/c is a mount point for the /dev/hdb file system, then we say that /dev/hdb is
mounted on /data/c. The following mount command will mount the /dev/hdb file system on
/data/c. It does not matter that c contains file links already; the mount merely hides them while it
is there. They would disappear from view until the file system was unmounted, when they
would reappear.
$ mount /dev/hdb /data/c
After this command, the root file system will be as follows.

Figure 3.5: Root system after mount of /dev/hdb


The absolute pathname of the grad directory would then be /data/c/students/grad.
In reality, we would have to specify the type of file system written on /dev/hdb, unless it is
the default file system, and only the super user is allowed to run the mount command, with a
few exceptions, such as mounting removable storage devices such as CD-ROMs, DVDs, and
USB devices. When a directory becomes a mount point, the kernel restructures the directory
hierarchy. The directory contents are not lost; they are masked by the root directory of the
mounted file system. The kernel records in a list of mount points that this file system is mounted
on this directory. Different versions of UNIX implement mounting in different ways.
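For example, assuming the file system on /dev/hdb is ext4 (the type on a real system may differ), the type can be given explicitly with the -t option, and the file system is detached again with umount:
# mount -t ext4 /dev/hdb /data/c
# umount /data/c
After the umount, the original contents of /data/c become visible again.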
3.1.3. Swap
Unix systems, and other Unix-like operating systems, use the term "swap" to describe both the
act of moving memory pages between RAM and disk, and the region of a disk the pages are
stored on. In some of those systems, it is common to use a separate whole partition of a hard
disk for swapping. These partitions are called swap partitions.

Swapping is a technique where data in RAM is written to a special location on your hard disk;
either a swap partition or a swap file, to free up RAM.
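As a brief sketch (the partition name /dev/vdb3 is an assumption), a swap partition is initialized with mkswap, enabled with swapon, and inspected with swapon --show or free:
# mkswap /dev/vdb3
# swapon /dev/vdb3
$ swapon --show
$ free -h
To enable the swap space automatically at boot, a line such as /dev/vdb3 swap swap defaults 0 0 can be added to /etc/fstab.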

3.1.4. Determining Disk Usage With df and du


Linux has two strong built-in commands for checking disk space, df and du, which every system
administrator runs frequently.

 du reports the disk usage of files and directories.

 df reports how much disk space each file system is using. The df command displays the
amount of disk space available on the file system containing each file name given as an argument.

The df command stands for disk file system. It is used to get a full summary of available and
used disk space for the file systems on a Linux system.

The du command, short for disk usage, is used to estimate file space usage. The du command
can be used to track the files and directories which are consuming an excessive amount of space
on the hard disk drive.
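For example (the directory names are arbitrary and the reported sizes are only illustrative), du -s summarizes a whole directory and -h prints human-readable sizes:
$ du -sh /var/log
97M    /var/log
$ du -h --max-depth=1 /home | sort -h
The second command lists the size of each user's home directory, sorted from smallest to largest.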

df's syntax

The df command can be run by any user. Like many Linux commands, df uses the following
structure:

 df [OPTION]... [FILE]...

The df command primarily checks disk usage on mounted file systems. If you do not include
a file name, the output shows the space available on all currently mounted file systems. Disk
space is shown in 1K blocks by default:

$ df

Filesystem 1K-blocks Used Available Use% Mounted on

devtmpfs 883500 0 883500 0% /dev

tmpfs 913840 168 913672 1% /dev/shm

tmpfs 913840 9704 904136 2% /run

tmpfs 913840 0 913840 0% /sys/fs/cgroup

/dev/map[...] 17811456 7193312 10618144 41% /

/dev/sda1 1038336 260860 777476 26% /boot

tmpfs 182768 120 182648 1% /run/user/1000

Lists of long numbers (as shown above) can be difficult to parse. If you want to run df in its
human-readable format, use the --human-readable (-h for short) option:

$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 863M 0 863M 0% /dev
tmpfs 893M 168K 893M 1% /dev/shm
tmpfs 893M 9.5M 883M 2% /run
tmpfs 893M 0 893M 0% /sys/fs/cgroup
/dev/map[...] 17G 6.9G 11G 41% /
/dev/sda1 1014M 255M 760M 26% /boot
tmpfs 179M 120K 179M 1% /run/user/1000
 By default, the df command shows six columns:
 Filesystem: the name of the file system, which may be the partition device name (/dev/vda1
or /dev/sda1, for example).
 1K-blocks: the number of 1 KiB blocks on the file system.
 Used: the number of 1K-blocks used on the file system.
 Available: the number of 1K-blocks still available on the file system.
 Use%: the percentage of the file system's space in use.
 Mounted on: the mount point used to mount the file system.

3.1.5. Configuring Disk Quotas


In a shared environment, all users share the same machine resources. If one user is selfish, it
affects all of the other users. Given the opportunity, users will consume all of the disk space and
all of the memory and CPU cycles somehow, whether through greed or simply through
inexperience. Thus it is in the interests of the user community to limit the ability of users to spoil
things for other users. One way of protecting operating systems from users and from faulty
software is to place quotas on the amount of system resources they are allowed to use.
 Disk quotas: Place fixed limits on the amount of disk space which can be used per user.
The advantage of this is that the user cannot use more storage than this limit; the

disadvantage is that many software systems need to generate/cache large temporary files
(e.g. compilers, or web browsers) and a fixed limit means that these systems will fail to
work as a user approaches his/her quota.
 CPU time limit: Some faulty software packages leave processes running which
consume valuable CPU cycles to no purpose. Users of multiuser computer systems
occasionally steal CPU time by running huge programs which make the system unusable
for others. The C-shell limit CPU time function can be globally configured to help
prevent accidents.
 Policy decisions: Users collect garbage. To limit the amount of it, one can specify a
system policy which includes items of the form: ‘Users may not have mp3, wav, mpeg
etc. files on the system for more than one day’. Quotas have an unpleasant effect on
system morale, since they restrict personal freedom. They should probably only be used
as a last resort. There are other ways of controlling the build-up of garbage.
Quotas, limits and restrictions tend to antagonize users. Users place a high value on personal
freedom. Restrictions should be minimized or made inconspicuous to avoid a backlash.
Workaround solutions which avoid rigid limits are preferable, if possible.
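On Linux, disk quotas are typically managed with the quota tools. The following is a minimal sketch, assuming an ext4 file system mounted at /home with the usrquota mount option, the quota package installed, and a hypothetical user alice:
# quotacheck -cum /home
# edquota -u alice
# quotaon /home
# repquota /home
quotacheck builds the quota files, edquota opens an editor in which soft and hard block limits for alice are set, quotaon turns enforcement on, and repquota reports usage and limits for all users. A CPU-time limit of the kind mentioned above can be set in the C shell with limit cputime 1h, or in Bourne-style shells with ulimit -t.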
3.2 Logical Volume Management (LVM) and RAID
Disk partitions are referred to as volumes. Logical volumes provide seamless integration of disks and
partitions into a large virtual disk which can be organized without worrying about partition
boundaries. This is not always desirable, however. Sometimes partitions exist for protection, rather
than merely for necessity.
3.2.1 Implementing LVM, Creating Logical Volumes (LVs), Manipulating VGs & LVs
LVM stands for Logical Volume Manager. It is a mechanism that provides an alternative
method of managing storage to the traditional partition-based one. In LVM, instead
of creating partitions, you create logical volumes, and you can then mount those
volumes in your file system just as easily as you would a disk partition. LVM allows you to create
physical volumes, a volume group, logical volumes, and file systems on a hard disk.
The main advantages of LVM:
A) Can be used for nearly any mount point EXCEPT /boot
B) Flexibility: allows for resizing of volumes
LVM can expand a partition while it is mounted, if the file system used on it also supports that.
When expanding a partition, LVM can use free space anywhere in the volume group, even on
another disk. When resizing LVM partitions and especially when shrinking them, it is

important to take the same precautions you would as if you were dealing with regular
partitions. Namely, always make a backup of your data before actually executing the
commands. Although LVM will try hard to determine whether a partition can be expanded or
shrunk before actually performing the operation, there is always the possibility of data loss.
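For example (the size is arbitrary, and the vg01/lv01 names are reused from the examples later in this chapter), a logical volume and the file system on it can be grown in one step with lvextend -r, provided the volume group still has free extents:
# lvextend -r -L +2G /dev/vg01/lv01
The -r option asks lvextend to resize the file system after extending the volume; shrinking should only ever be attempted after taking a backup.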

C) Snapshots: allows for point-in-time copies of your logical volumes


LVM allows you to freeze an existing logical volume in time, at any moment, even while the
system is running. You can continue to use the original volume normally, but the snapshot
volume appears to be an image of the original, frozen in time at the moment you created it. You
can use this to get a consistent file system image to back up, without shutting down the system.
You can also use it to save the state of the system, so that you can later return to that state if
needed. You can also mount the snapshot volume and make changes to it, without affecting the
original.
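As a hedged sketch (the names and the size are examples only), a snapshot is created with lvcreate -s and removed with lvremove when it is no longer needed:
# lvcreate -s -n lv01_snap -L 500M /dev/vg01/lv01
# lvremove /dev/vg01/lv01_snap
The snapshot can be mounted like any other LV, and it only needs enough space to hold the blocks that change on the original volume while the snapshot exists.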
Components of LVM
There are 3 concepts that LVM manages:
 Physical volumes: correspond to disks. They represent the lowest abstraction level of
LVM, and are used to create a volume group.
 Volume groups: are collections of physical volumes. They are pools of disk space that
logical volumes can be allocated from.
 Logical volumes: correspond to partitions – they usually hold a file system. Unlike
partitions though, they can span multiple disks (because of the way volume groups are
organized) and do not have to be physically contiguous.
LVs are a collection of logical extents (LEs), which map to physical extents, the smallest
storage chunk of a PV. By default, each LE maps to one PE. Setting specific LV options
changes this mapping; for example, mirroring causes each LE to map to two PEs.

Figure 3.6 Logical Volume Manager (LVM) Hierarchy

Implementing LVM Storage


Creating LVM storage requires several steps.
 The first step is to determine which physical devices to use.
 After a set of suitable devices have been assembled, they are initialized as physical
volumes so that they are recognized as belonging to LVM.
 The physical volumes are then combined into a volume group. This creates a pool of disk
space out of which logical volumes can be allocated.
Logical volumes created from the available space in a volume group can be formatted with a file
system, activated as swap space, and mounted or activated persistently. LVM provides a
comprehensive set of command-line tools for implementing and managing LVM storage. These
command-line tools can be used in scripts, making them suitable for automation.
Creating a Logical Volume
To create a logical volume, perform the following steps:
1. Prepare the physical device.
Use parted, gdisk, or fdisk to create a new partition for use with LVM. Always set the
partition type to Linux LVM on LVM partitions; use 0x8e for MBR partitions.
2. Create a physical volume.
Use pvcreate to label the partition (or other physical device) as a physical volume.
The pvcreate command divides the physical volume into physical extents (PEs) of a fixed size,
for example, 4 MiB blocks. You can label multiple devices at the same time by using space-
delimited device names as arguments to pvcreate.
# pvcreate /dev/vdb2 /dev/vdb1
This labels the devices /dev/vdb2 and /dev/vdb1 as PVs, ready for allocation into a volume
group. A PV only needs to be created if there are no PVs free to create or extend a VG.
3. Create a volume group
Use vgcreate to collect one or more physical volumes into a volume group. A volume group
is the functional equivalent of a hard disk; you will create logical volumes from the pool of
free physical extents in the volume group.
The vgcreate command-line consists of a volume group name followed by one or more
physical volumes to allocate to this volume group.
# vgcreate vg01 /dev/vdb2 /dev/vdb1

 This creates a VG called vg01 that is the combined size, in PE units, of the two
PVs /dev/vdb2 and /dev/vdb1.
 A VG only needs to be created if none already exist.
 Additional VGs may be created for administrative reasons to manage the use of PVs and
LVs. Otherwise, existing VGs can be extended to accommodate new LVs when needed.
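For instance (the device name /dev/vdc1 is an assumption), an existing VG is extended with another PV using vgextend:
# vgextend vg01 /dev/vdc1
This adds the physical extents of /dev/vdc1 to the vg01 pool, making them available to new or existing logical volumes.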
4. Create a logical volume.
Use lvcreate to create a new logical volume from the available physical extents in a
volume group. At a minimum, the lvcreate command includes the -n option to set the LV
name, either the -L option to set the LV size in bytes or the -l option to set the LV size in
extents, and the name of the volume group hosting this logical volume.
# lvcreate -n lv01 -L 700M vg01
This creates an LV called lv01, 700 MiB in size, in the VG vg01. This command will fail if the
volume group does not have a sufficient number of free physical extents for the requested size.
Note also that the size will be rounded up to a multiple of the physical extent size if the requested
size is not an exact multiple.
The following list provides some examples of creating LVs:
 lvcreate -L 128M: Size the logical volume to exactly 128 MiB.
 lvcreate -l 128: Size the logical volume to exactly 128 extents. The total number of bytes
depends on the size of the physical extent block on the underlying physical volume.
Note: Different tools display the logical volume name using either:
 the traditional name, /dev/vgname/lvname, or
 the kernel device mapper name, /dev/mapper/vgname-lvname.
5. Add the file system
Use mkfs to create an XFS file system on the new logical volume. Alternatively, create a file
system based on your preferred file system, for example, ext4.
# mkfs -t xfs /dev/vg01/lv01
To make the file system available across reboots, perform the following steps:
1. Use mkdir to create a mount point.
# mkdir /mnt/data
2. Add an entry to the /etc/fstab file:
/dev/vg01/lv01 /mnt/data xfs defaults 1 2
3. Run mount /mnt/data to mount the file system that you just added in /etc/fstab.
# mount /mnt/data

Removing a Logical Volume
To remove all logical volume components, perform the following steps:
1. Prepare the file system.
Move all data that must be kept to another file system. Use the umount command to unmount
the file system and then remove any /etc/fstab entries associated with this file system.
# umount /mnt/data
Note: Removing a logical volume destroys any data stored on it. Back up or move any data
that must be kept before removing the logical volume.
2. Remove the logical volume
Use lvremove DEVICE_NAME to remove a logical volume that is no longer needed.
# lvremove /dev/vg01/lv01
Unmount the LV file system before running this command. The command prompts for
confirmation before removing the LV. The LV’s physical extents are freed and made available
for assignment to existing or new LVs in the volume group.
3. Remove the volume group
Use vgremove VG_NAME to remove a volume group that is no longer needed.
# vgremove vg01
The VG’s physical volumes are freed and made available for assignment to existing or new
VGs on the system.
4. Remove the physical volumes.
Use pvremove to remove physical volumes that are no longer needed. Use a space-delimited
list of PV devices to remove more than one at a time. This command deletes the PV metadata
from the partition (or disk). The partition is now free for reallocation or reformatting.
# pvremove /dev/vdb2 /dev/vdb1
Reviewing LVM Status Information
Physical Volumes
Use pvdisplay to display information about physical volumes. To list information about all
physical volumes, use the command without arguments. To list information about a specific
physical volume, pass that device name to the command.
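Representative output for one of the PVs created earlier might look like the following (the values are illustrative, not literal); the fields described below appear in this listing:
# pvdisplay /dev/vdb2
  --- Physical volume ---
  PV Name               /dev/vdb2
  VG Name               vg01
  PV Size               512.00 MiB / not usable 4.00 MiB
  PE Size               4.00 MiB
  Total PE              127
  Free PE               26
  Allocated PE          101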

 PV Name maps to the device name.
 VG Name shows the volume group where the PV is allocated.
 PV Size shows the physical size of the PV, including any unusable space.
 PE Size is the physical extent size, which is the smallest unit in which space can be
allocated to a logical volume. It is also the multiplying factor when calculating the size of
any value reported in PE units, such as Free PE; for example, 26 PEs x 4 MiB (the PE Size)
equals 104 MiB of free space. A logical volume size is rounded to a multiple of the PE size.
LVM sets the PE size automatically, although it is possible to specify it.
 Free PE shows how many PE units are available for allocation to new logical volumes.
Volume Groups
Use vgdisplay to display information about volume groups. To list information about all
volume groups, use the command without arguments. To list information about a specific
volume group, pass that VG name to the command.
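Representative output for the vg01 volume group might look like this (values illustrative only):
# vgdisplay vg01
  --- Volume group ---
  VG Name               vg01
  VG Size               1016.00 MiB
  PE Size               4.00 MiB
  Total PE              254
  Alloc PE / Size       175 / 700.00 MiB
  Free  PE / Size       79 / 316.00 MiB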

 VG Name is the name of the volume group.
 VG Size is the total size of the storage pool available for logical volume allocation.
 Total PE is the total size expressed in PE units.
 Free PE / Size shows how much space is free in the VG for allocating to new LVs or for
extending existing LVs.
Logical Volumes
Use lvdisplay to display information about logical volumes. If you provide no argument to
the command, it displays information about all LVs; if you provide an LV device name as an
argument, the command displays information about that specific device.
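Representative output for the lv01 volume created earlier might look like this (values illustrative only):
# lvdisplay /dev/vg01/lv01
  --- Logical volume ---
  LV Path                /dev/vg01/lv01
  LV Name                lv01
  VG Name                vg01
  LV Size                700.00 MiB
  Current LE             175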

 LV Path shows the device name of the logical volume. Some tools may report the
device name as /dev/mapper/vgname-lvname; both represent the same LV.
 VG Name shows the volume group that the LV is allocated from.
 LV Size shows the total size of the LV. Use file-system tools to determine the free space
and used space for storage of data.
 Current LE shows the number of logical extents used by this LV. An LE usually maps
to a physical extent in the VG, and therefore to a physical extent on a physical volume.
3.2.2 Advanced LVM Concepts (i.e. system-config-lvm)
system-config-lvm is the first GUI LVM tool; it was originally released as part of Red Hat Linux
and is therefore also called the LVM GUI. Red Hat later created installation packages for it, both
RPM and DEB, so system-config-lvm can also be used on other Linux distributions.
The main panels (sections) of system-config-lvm
system-config-lvm supports only LVM-related operations. Its user interface is
divided into three parts:
 The left part is tree view of disk devices and LVM devices (VGs),
 The middle part is the main view which shows VG usage, divided into LV and PV
columns.
 The right part displays details of the selected object (PV, LV, or VG).
Different versions of system-config-lvm are not completely consistent in how they organize
devices. Some versions show both LVM devices (PVs, VGs, and LVs) and non-LVM disks, while
others show only LVM devices. In the versions that display non-LVM disks, a PV can also be
removed from the disk view.

The version which shows non-lvm disks


Supported operations
 PV Operations: Delete PV, Migrate PV
 VG Operations: Create VG, Append PV to VG/Remove PV from VG, Delete VG (Delete last PV in VG)

 LV Operations: Create LV, Delete LV
Create LV supports three formats: linear, striped, and mirrored. For striped LVs, users can
specify how many PVs the LV should be striped across, but they cannot select particular PVs or
specify a region within a PV.

The Create LV settings dialog

3.2.3 RAID Concepts (Creating and Managing a RAID-5 Array)


Loss from physical disk failure can be mitigated by using RAID solutions, which offer real
redundancy. RAID stands for Redundant Array of Inexpensive Disks. The idea is that, since
disks are relatively cheap compared with human time and labor, we can build a system
which uses extra disks in order to secure increased performance and redundancy. RAID disk
systems are sold by most manufacturers and come in a variety of levels.
There are currently more than seven levels of RAID from 0 to 6 or 7 depending on where you
look; these incorporate a number of themes:
 Disk striping: This is a reorganization of file system structure amongst a group of
disks. Data are spread across disks, using parallelism to increase data throughput and
improve access rates. This can improve performance dramatically, but it reduces
reliability by an equal amount, since if one disk fails, the data spread across the other
disks are lost as well.
 Real-time mirroring: When data are written to one disk, they are simultaneously
written to a second disk, rather than mirroring as a batch job performed once per day.

This increases security. This protects against random disk failure, but not necessarily
against natural disasters etc., since RAID disks are usually located all in one place.
 Hamming code parity protection: Data are split across several disks to utilize
parallelism, and a special parity disk enables data to be reconstructed provided no
more than one disk fails randomly. Again, this does not help us against loss due to
wide-scale influences like natural disasters.
New RAID solutions appear frequently and the correspondence between manufacturers’
solutions and RAID levels is not completely standardized. RAID provides enhancements for
performance and fault tolerance, but it cannot protect us against deliberate vandalism (damage)
or widespread failure.
Advantages of RAID 5
 RAID 5 is ideal for application and file servers with a limited number of drives.
It combines the better elements of efficiency and performance among the different
RAID configurations.
 Fast, reliable read speed: offers inexpensive data redundancy and fault tolerance. Writes
tend to be slower because of the parity data calculation, but users can
access and read data even while a failed drive is being rebuilt. When drives fail, the
RAID 5 system can read the information contained on the other drives and recreate that
data, tolerating a single drive failure.
Disadvantages of RAID 5
 Longer rebuild times are one of the major drawbacks of RAID 5, and this delay could result
in data loss.
 Complexity: RAID 5 rebuilds can take a day or longer, depending on controller speed and
workload. If another disk fails during the rebuild, users lose data forever.
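Although this section discusses RAID mainly at the conceptual level, on Linux a software RAID-5 array is commonly created and managed with mdadm. A minimal sketch follows (the device and mount-point names are assumptions; RAID 5 needs at least three member disks):
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# mkfs -t xfs /dev/md0
# mkdir -p /mnt/raid
# mount /dev/md0 /mnt/raid
# mdadm --detail /dev/md0
mdadm --detail reports the state of the array, including whether a rebuild is in progress. A failed member is replaced with mdadm --manage /dev/md0 --fail /dev/sdc --remove /dev/sdc, followed by --add with the new device.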
