
Before proceeding with the LVM setup, consider the following prerequisites:

My Server Setup – Requirements

 OS – RHEL 9 with LVM Installation


 IP – 192.168.0.200
 Disks – 3 disks with 20GB each.
Check LVM Disk Storage in Linux

1. To gain insight into our LVM setup, we can utilize the following commands to
reveal the distinct components: Physical Volume (PV), Volume Group (VG),
and Logical Volume (LV).

# pvs

# vgs

# lvs

List LVM Setup in Linux

Here is a description of each parameter shown in the above screenshot.

 Physical Disk Size (PV Size)
 The disk used is the virtual disk, sda.
 Volume Group Size (VG Size)
 Volume Group name (vg_tecmint)
 Logical Volume names (LogVol00, LogVol01)
 LogVol00 assigned for swap, with a size of 956MB
 LogVol01 assigned for / (root), with 18.63GB
So, from here we learn that there is not enough free space on the sda disk.

Create a New Volume Group in LVM

2. To create a new Volume Group, we need to add 3 additional hard disks to this server. However, it is not compulsory to use 3 drives; just 1 is enough to create a new VG and an LV (Logical Volume) inside that VG.

I am adding the following 3 disks here for demonstration purposes and to explain more command features.

sdb, sdc, sdd

3. To list all the disks and their partitions, such as the disk name, size, partition type, and start and end sectors, use the fdisk utility as shown.

# fdisk -l
List Disk Partitions in Linux
Here is a description of each disk shown in the above screenshot.

 The default disk holds the operating system, RHEL 9.
 Partitions defined on the default disk are as follows: (sda1 = /boot), (sda2 = /).
 The additionally added disks are shown as Disk1, Disk2, and Disk3.
Each disk is 20 GB in size.

4. Now run the vgdisplay command to view the detailed information about all the
Volume Groups present on the system, including their name, size, free space,
physical volume (PV) information, and more.

# vgdisplay
List Volume Groups in Linux
Here is a description of each parameter shown in the above screenshot.

 VG Name – The volume group name.
 Format – The LVM architecture used, lvm2.
 VG Access – The Volume Group is in read-and-write mode and ready to use.
 VG Status – The Volume Group can be resized; we can expand it if we need to add more space.
 Cur LV – Currently, there are 2 Logical Volumes in this Volume Group.
 Cur PV and Act PV – Currently, 1 physical disk (sda) is in use and it is active, so we can use this Volume Group.
 PE Size – The size of a disk can be defined in Physical Extents (PEs) or in GB; the default PE size in LVM is 4 MB. For example, to create a 5 GB logical volume we need 1280 PEs: 5 GB = 5 x 1024 MB = 5120 MB, and 5120 MB / 4 MB (the default PE size) = 1280 PEs.
 Total PE – The total number of Physical Extents in this Volume Group.
 Alloc PE – The total PEs used; here all PEs are already allocated: 5008 PE x 4 MB = 20032 MB.
 Free PE – Since every PE is already used, there are no free PEs.
5. Now list the file system disk space information. Here only sda is used, with /boot, /, and swap on the sda physical disk using LVM, and there is no space remaining on the disk.

# df -TH

List File System Disk Space


The above image shows the mount points we are using; the 19GB is fully used for root, so there is no free space available.

Create a Disk Partition


6. So, let's create a new physical volume (PV) and a volume group (VG) named tecmint_add_vg, and create logical volumes (LVs) within it. Here, we create 3 logical volumes with the names tecmint_documents, tecmint_manager, and tecmint_public.

We could extend the currently used Volume Group to get more space, but in this case we are going to create a new Volume Group and experiment with it. Later, we will see how to extend the file systems of the Volume Group that is currently in use.

Before using a new disk, we need to partition it using the fdisk command as shown.

# fdisk -c /dev/sdb

Create /dev/sdb Disk Partition


Next, follow the steps below to create a new partition (a scripted alternative is sketched after the list).

 Choose n to create a new partition.
 Choose p to create a primary partition.
 Choose the partition number to create.
 Press Enter twice to use the full space of the disk.
 Change the type of the newly created partition with t.
 Choose the number of the partition to change; pick the one we just created, 1.
 Change the type to LVM using the type code 8e; if you do not know the type code, press L to list all type codes.
 Print the partition we created with p, just to confirm.
 Here we can see the ID as 8e LINUX LVM.
 Write the changes and exit fdisk with w.
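If you prefer not to answer fdisk's prompts by hand, the same partition can be created non-interactively with parted. This is only a sketch, assuming the disk is empty; repeat it for each disk:

# parted -s /dev/sdb mklabel msdos mkpart primary 0% 100% set 1 lvm on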
7. Repeat the above steps on the other 2 disks, sdc and sdd, to create new partitions. Then reboot the machine (or re-read the partition table with partprobe) and verify the partition table using the fdisk command.

# fdisk -l
Confirm Disk Partitions
Create LVM Physical Volume
8. Now it's time to create Physical Volumes using all 3 disks. First, list the existing physical volumes using the ‘pvs‘ command; only one default PV is listed.

# pvs

9. Then create the new physical volumes and confirm the newly created physical volumes.

# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1

# pvs
Create LVM Physical Volumes
Creating LVM Volume Groups
10. Create a Volume Group named tecmint_add_vg using the available free PVs and a PE size of 32MB. First display the current volume groups: we can see one volume group with 1 PV in use.

# vgs

11. This will create a volume group named tecmint_add_vg using a 32MB PE size and the 3 physical volumes we created in the previous steps.

# vgcreate -s 32M tecmint_add_vg /dev/sdb1 /dev/sdc1 /dev/sdd1

12. Next, verify the volume group by running the vgs command again.

# vgs
Confirm LVM Volume Groups
Understanding the vgs command output:

 VG – The Volume Group name.
 #PV – The Physical Volumes used in this Volume Group.
 VFree – The free space available in this volume group.
 VSize – The total size of the Volume Group.
 #LV – The Logical Volumes inside this volume group; we have not created any yet, so it is 0.
 #SN – The number of snapshots the volume group contains (later we will create a snapshot).
 Attr – The status of the Volume Group: writeable, readable, resizeable, exported, partial, and clustered. Here it is wz--n-, meaning w = writeable, z = resizeable, and n = normal allocation policy.
13. To display more information about the volume group, use the command:

# vgs -v

View LVM Volume Groups Info


14. To get more information about newly created volume groups, run the
following command.

# vgdisplay tecmint_add_vg
List LVM Volume Groups
Here is a description of each parameter shown in the above screenshot.

 Volume group name.
 LVM architecture used.
 VG Access – in read and write state, ready to use.
 VG Status – this volume group is resizeable.
 Cur PV and Act PV – 3 physical disks are in use, and they are active.
 Volume Group total size.
 A single PE size, 32MB here.
 Total number of PEs available in this volume group.
 Currently we have not created any LV inside this VG, so it is totally free.
 UUID of this volume group.
Creating LVM Logical Volumes
15. Now, create 3 Logical Volumes named tecmint_documents, tecmint_manager, and tecmint_public. Here, we will demonstrate how to create Logical Volumes using both the PE count and the GB size.

First, list the current Logical Volumes using the following command.

# lvs
List LVM Logical Volumes
16. The existing Logical Volumes are in the vg_tecmint Volume Group. To see how much free space is available to create logical volumes, list the Volume Groups and available Physical Volumes using the ‘vgs‘ command.

# vgs

List Volume Groups


The volume group size is almost 60GB and it is unused, so we can create LVs in it. Let us divide the volume group into equal parts to create 3 Logical Volumes: 60GB / 3 = 20GB, so each Logical Volume will be 20GB in size after creation.

Method 1: Creating Logical Volumes using PE Size

First, let us create Logical Volumes using the Physical Extent (PE) count. We need to know the default PE size assigned to this Volume Group and the total available PEs before creating new Logical Volumes.

Run the following command to get this information.


# vgdisplay tecmint_add_vg

Create a New Logical Volume


 The default PE size assigned to this VG is 32MB; a single PE is 32MB.
 Total available PEs: 1917.
Let's do a quick calculation using the bc command.

# bc

1917 PE / 3 = 639 PE

639 PE x 32MB = 20448MB --> ~20GB


Calculate Disk Space
Press CTRL+D to exit bc.
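The same arithmetic can also be done non-interactively by piping expressions into bc, for example:

# echo "1917 / 3" | bc

# echo "639 * 32" | bc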

Let us now create 3 Logical Volumes of 639 PEs each. Here -l sets the size in extents and -n assigns the logical volume name.

# lvcreate -l 639 -n tecmint_documents tecmint_add_vg

# lvcreate -l 639 -n tecmint_manager tecmint_add_vg

# lvcreate -l 639 -n tecmint_public tecmint_add_vg
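Sizes given to -l can also be expressed as a percentage of the volume group or of its free space, which avoids the manual PE arithmetic. A sketch (33%VG lands close to, but not exactly on, the 639-PE figure above):

# lvcreate -l 33%VG -n tecmint_documents tecmint_add_vg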

List the created Logical Volumes using the lvs command.

# lvs
List Created Logical Volumes
Method 2: Creating Logical Volumes using GB Size

When creating a Logical Volume using a GB size, we may not get the exact size. So the better way is to create it using the extent (PE) count.

# lvcreate -L 20G -n tecmint_documents tecmint_add_vg

# lvcreate -L 20G -n tecmint_manager tecmint_add_vg

# lvcreate -L 20G -n tecmint_public tecmint_add_vg


List the created Logical Volumes using the lvs command.

# lvs

Here we can see that, while creating the 3rd LV, the size could not round up to exactly 20GB because of small differences in available space; this issue does not arise when creating LVs by extent count.
Creating File System
17. To use the logical volumes, we need to format them with a file system. Here I am using the ext4 file system for the volumes and am going to mount them under /mnt/.

# mkfs.ext4 /dev/tecmint_add_vg/tecmint_documents

# mkfs.ext4 /dev/tecmint_add_vg/tecmint_public

# mkfs.ext4 /dev/tecmint_add_vg/tecmint_manager

Create Ext4 File System
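The three mkfs calls above can also be written as a single shell loop, a small sketch:

# for lv in tecmint_documents tecmint_public tecmint_manager; do mkfs.ext4 /dev/tecmint_add_vg/$lv; done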


18. Let us create directories in /mnt and mount the Logical Volumes on which we have created file systems.
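The mount-point directories are not created automatically; a quick sketch of creating them (names matching the mounts below):

# mkdir -p /mnt/tecmint_documents /mnt/tecmint_public /mnt/tecmint_manager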
# mount /dev/tecmint_add_vg/tecmint_documents /mnt/tecmint_documents/

# mount /dev/tecmint_add_vg/tecmint_public /mnt/tecmint_public/

# mount /dev/tecmint_add_vg/tecmint_manager /mnt/tecmint_manager/

19. List and confirm the mount points using the df command.

# df -h

Mount Logical Volumes


Permanent Mounting of Logical Volumes
20. The volumes are currently mounted temporarily; for a permanent mount, we need to add entries to /etc/fstab. To do that, let us get the mount entries from /etc/mtab using:

# cat /etc/mtab

21. We need to make slight changes to the mtab entries while adding them to fstab: change the rw option to defaults.

# vi /etc/fstab

Our fstab entries should look similar to the sample below.

/dev/mapper/tecmint_add_vg-tecmint_documents /mnt/tecmint_documents ext4 defaults 0 0

/dev/mapper/tecmint_add_vg-tecmint_public /mnt/tecmint_public ext4 defaults 0 0

/dev/mapper/tecmint_add_vg-tecmint_manager /mnt/tecmint_manager ext4 defaults 0 0
Permanent Mount Logical Volumes
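fstab can also reference each file system by its UUID instead of the device-mapper path; blkid prints the UUID to copy into the entry. An illustrative command:

# blkid /dev/mapper/tecmint_add_vg-tecmint_documents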
22. Finally, run the mount -a command to verify the fstab entries before restarting.

# mount -av

Confirm Mount Points


Here we have seen how to set up flexible storage with logical volumes: physical disks into physical volumes, physical volumes into a volume group, and the volume group into logical volumes.

Previously, we saw how to create flexible disk storage using LVM. Here, we are going to see how to extend a volume group and how to extend and reduce a logical volume. In Logical Volume Management (LVM), also called a flexible volume file system, we can reduce or extend the partitions.

Extend/Reduce LVMs in Linux
Requirements

1. Create Flexible Disk Storage with LVM – Part I


When do we need to reduce a volume?

Maybe we need to create a separate partition for some other use, or we need to expand a partition that is low on space. If so, we can reduce a large partition and expand the low-space partition very easily with the following simple steps.

My Server Setup – Requirements

1. Operating System – CentOS 6.5 with LVM Installation


2. Server IP – 192.168.0.200
How to Extend Volume Group and Reduce Logical Volume

Logical Volume Extending

Currently, we have one PV, one VG, and 2 LVs. Let's list them one by one using the following commands.

# pvs

# vgs

# lvs

Logical Volume Extending
There is no free space available in the Physical Volume or the Volume Group, so we can't extend the LV size. To extend it, we need to add one physical volume (PV) and then extend the volume group with it. That will give us enough space to extend the logical volume size. So first, we are going to add one physical volume.

To add a new PV, we have to use fdisk to create the LVM partition.
# fdisk -cu /dev/sda

1. To create a new partition, press n.
2. Choose a primary partition with p.
3. Choose the partition number to create.
4. Press 1 (if other partitions exist, choose the next available number).
5. Change the type using t.
6. Type 8e to change the partition type to Linux LVM.
7. Use p to print the created partition (here we have not used the option).
8. Press w to write the changes.
Restart the system once completed.

Create LVM Partition
List and check the partition we have created using fdisk.
# fdisk -l /dev/sda

Verify LVM Partition
Next, create a new PV (Physical Volume) using the following command.

# pvcreate /dev/sda1

Verify the PV using the command below.

# pvs
Create Physical Volume

Extending Volume Group

Add this PV to the vg_tecmint VG to extend the size of the volume group and get more space for expanding the LV.

# vgextend vg_tecmint /dev/sda1

Let us check the size of the Volume Group now using:

# vgs

Extend Volume Group


We can even see which PVs were used to create a particular Volume Group using:
# pvscan

Check Volume Group
Here, we can see which Volume Groups sit on which Physical Volumes. We have just added one PV, and it is totally free. Let us check the size of each logical volume we currently have before expanding.

Check All Logical Volume


1. LogVol00 defined for swap.
2. LogVol01 defined for /.
3. Currently we have 16.50 GB for / (root).
4. Currently there are 4226 Physical Extents (PEs) available.
Now we are going to expand the / partition, LogVol01. After expanding, we can list the size as above for confirmation. We can extend using GB or PE as explained in Part I; here I'm using PE to extend.

To get the available Physical Extent count, run:

# vgdisplay

Check Available Physical Size
There are 4607 free PEs available, which equals about 18GB of free space. So we can expand our logical volume by up to 18GB more. Let us use the PE count to extend.

# lvextend -l +4607 /dev/vg_tecmint/LogVol01


Use + to add more space. After extending, we need to resize the file system using:

# resize2fs /dev/vg_tecmint/LogVol01

Expand Logical Volume


1. The command used to extend the logical volume using Physical Extents.
2. Here we can see it was extended to 34GB from 16.51GB.
3. Resize the file system; this is required if the file system is mounted and currently in use.
4. For extending logical volumes, we don't need to unmount the file system.
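On reasonably recent LVM versions, the extend and resize steps can be combined: lvextend's -r (--resizefs) flag runs the file-system resize for you. A sketch of the equivalent one-liner:

# lvextend -r -l +4607 /dev/vg_tecmint/LogVol01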
Now let's check the size of the resized logical volume using:

# lvdisplay
Resize Logical Volume
1. LogVol01 defined for /, the extended volume.
2. After extending, it is 34.50GB, up from 16.50GB.
3. Current extents: before extending there were 4226; we added 4607 extents to expand, so in total there are 8833.
Now if we check the VG's free PEs, the count will be 0.

# vgdisplay

See the result of extending.

# pvs

# vgs

# lvs
Verify Resize Partition
1. New Physical Volume added.
2. Volume group vg_tecmint extended from 17.51GB to 35.50GB.
3. Logical volume LogVol01 extended from 16.51GB to 34.50GB.
Here we have completed the process of extending the volume group and logical volumes. Let us now move on to an interesting part of Logical Volume Management.

Reducing Logical Volume (LVM)

Here we are going to see how to reduce Logical Volumes. Everyone says it is critical and may end up in disaster if done carelessly, but reducing is really the most delicate part of Logical Volume Management.

1. Before starting, it is always good to back up the data, so that it will not be a headache if something goes wrong.
2. To reduce a logical volume, there are 6 steps that need to be done very carefully.
3. While extending a volume, we can extend it while the volume is mounted (online), but to reduce we must unmount the file system before reducing.
Let's see the steps below.

1. Unmount the file system.
2. Check the file system after unmounting.
3. Reduce the file system.
4. Reduce the Logical Volume size to less than the current size.
5. Recheck the file system for errors.
6. Remount the file system.
For demonstration, I have created a separate volume group and logical volume. Here, I'm going to reduce the logical volume tecmint_reduce_test. It is now 18GB in size, and we need to reduce it to 10GB without data loss; that means we need to reduce it by 8GB. There is already 4GB of data in the volume.

18GB ---> 10GB

While reducing, we shrink by only 8GB, so the volume will end up at 10GB after the reduce.

# lvs

Reduce Logical Volume
Here we can see the file-system information.

# df -h
Check File System Size
1. The size of the volume is 18GB.
2. 3.9GB is already used.
3. The available space is 13GB.
First unmount the mount point.

# umount -v /mnt/tecmint_reduce_test/

Unmount Partition
Then check the file system for errors using the following command.

# e2fsck -ff /dev/vg_tecmint_extra/tecmint_reduce_test

Scan Partition for Errors


Note: The check must pass all 5 steps of the file-system check; if not, there might be some issue with your file system.

Next, reduce the file-system.

# resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test 10G

Reduce File System


Reduce the Logical volume using GB size.

# lvreduce -L -8G /dev/vg_tecmint_extra/tecmint_reduce_test

Reduce Logical Partition
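Newer LVM releases can run the file-system check and resize for you: lvreduce's -r (--resizefs) flag wraps the check, file-system shrink, and LV reduce in one command. A sketch of the equivalent call, targeting the final 10GB size:

# lvreduce -r -L 10G /dev/vg_tecmint_extra/tecmint_reduce_test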
To reduce a logical volume by PE count, we need to know the default PE size and the total PEs of the Volume Group, and do a small calculation for an accurate reduce size.

# lvdisplay vg_tecmint_extra

Here we need to do a little calculation with the bc command to get the PE count for the 8GB we are removing:

8GB = 8 x 1024MB = 8192MB

8192MB / 4MB (PE size) = 2048 PE

Press CTRL+D to exit bc.

Calculate PE Size
Reduce the size using PE.

# lvreduce -l -2048 /dev/vg_tecmint_extra/tecmint_reduce_test

Reduce Size Using PE


Resize the file system back to fill the volume. If there is any error at this step, it means we have messed up our file system.

# resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test
Resize File System
Mount the file system back to the same mount point.

# mount /dev/vg_tecmint_extra/tecmint_reduce_test /mnt/tecmint_reduce_test/

Mount File System


Check the size of the partition and the files.

# lvdisplay vg_tecmint_extra

Here we can see the final result: the logical volume was reduced to 10GB.
Verify
Logical Volume Size
In this article, we have seen how to extend the volume group, extend the logical volume, and reduce the logical volume. In the next part (Part III), we will see how to take a snapshot of a logical volume and restore it to an earlier state.

LVM snapshots are space-efficient, point-in-time copies of LVM volumes. They work only with LVM and consume space only when changes are made to the source logical volume. If 1GB of changes is made to the source volume, the same 1GB of changes is written to the snapshot volume, so for space efficiency it is best when the amount of change stays small. If the snapshot runs out of storage, we can use lvextend to grow it, and if we need to shrink the snapshot we can use lvreduce.
Take a Snapshot in LVM
If we accidentally delete a file after creating a snapshot, we don't have to worry, because the snapshot holds the original file, provided the file existed when the snapshot was created. Don't alter the snapshot volume; keep it as it is so it can be used for a fast recovery.

Snapshots can't be used as a backup option. Backups are independent primary copies of data, whereas snapshots depend on the source volume, so we can't use snapshots as a backup option.

Requirements

1. Create Disk Storage with LVM in Linux – PART 1


2. How to Extend/Reduce LVM’s in Linux – Part II

My Server Setup

1. Operating System – CentOS 6.5 with LVM Installation


2. Server IP – 192.168.0.200
Step 1: Creating LVM Snapshot
First, check for free space in the volume group to create a new snapshot using
the following ‘vgs‘ command.

# vgs

# lvs

Check LVM Disk Space
As you can see, there is 8GB of free space left in the above vgs output. So, let's create a snapshot for one of my volumes, named tecmint_datas. For demonstration purposes, I am going to create a snapshot volume of only 1GB using the following commands.

# lvcreate -L 1GB -s -n tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas

OR

# lvcreate --size 1G --snapshot --name tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas
Both the above commands do the same thing:

1. -s – Creates Snapshot
2. -n – Name for snapshot

Create LVM Snapshot


Here is the explanation of each point highlighted above.

1. The size of the snapshot being created.
2. Creates a snapshot.
3. Sets a name for the snapshot.
4. The new snapshot's name.
5. The volume of which we are creating a snapshot.
If you want to remove a snapshot, you can use ‘lvremove‘ command.

# lvremove /dev/vg_tecmint_extra/tecmint_datas_snap

Remove LVM Snapshot


Now, list the newly created snapshot using the following command.

# lvs

Verify LVM Snapshot


As you can see above, the snapshot was created successfully. I have marked with an arrow where the snapshot originates from: tecmint_datas, because we created a snapshot of the tecmint_datas logical volume.

Check LVM Snapshot Space


Let's add some new files to tecmint_datas. The volume now holds around 650MB of data and our snapshot size is 1GB, so there is enough space to record our changes in the snapshot volume. We can check the status of our snapshot using the command below.

# lvs

Check Snapshot Status


As you can see, 51% of the snapshot volume is used now, so there is still room for more modifications to your files. For more detailed information, use the command:

# lvdisplay vg_tecmint_extra/tecmint_datas_snap
View Snapshot Information
Again, here is a clear explanation of each point highlighted in the above picture.

1. The name of the snapshot logical volume.
2. The volume group currently in use.
3. The snapshot volume is in read and write mode; we can even mount the volume and use it.
4. The time when the snapshot was created. This is very important, because the snapshot records every change after this time.
5. This snapshot belongs to the tecmint_datas logical volume.
6. The logical volume is online and available to use.
7. The size of the source volume we took a snapshot of.
8. COW-table size: COW = copy-on-write, which means whatever changes are made to the tecmint_datas volume will be written to this snapshot.
9. The snapshot space currently used. Our tecmint_datas volume is 10GB, our snapshot size is 1GB, and our data is around 650MB, hence the 51%. If the files in tecmint_datas grow by more than the snapshot's allocated size, say 2GB of changes, we will be in trouble with the snapshot; that means we need to extend the size of the snapshot's logical volume.
10. Gives the chunk size of the snapshot.
Now, let's copy more than 1GB of files into tecmint_datas and see what happens. If you do, you will get an error message saying ‘Input/output error‘, which means the snapshot has run out of space.

Add Files to Snapshot


If the snapshot volume becomes full, it is dropped automatically and we can't use it anymore, not even by extending its size afterwards. The best idea is to create the snapshot with the same size as the source: tecmint_datas is 10GB, so a 10GB snapshot will never overflow like above, because it has enough space to record every change to your volume.

Step 2: Extend Snapshot in LVM


If we need to extend the snapshot size before it overflows, we can do so using:

# lvextend -L +1G /dev/vg_tecmint_extra/tecmint_datas_snap

Now the snapshot has a total size of 2GB.

Extend LVM Snapshot
Next, verify the new size and COW table using the following command.

# lvdisplay /dev/vg_tecmint_extra/tecmint_datas_snap
To see the snapshot volume's size and usage percentage, run:

# lvs

Check the Size of the Snapshot
But if you create the snapshot volume with the same size as the source volume, you don't need to worry about these issues.

Step 3: Restoring Snapshot or Merging


To restore the snapshot, we first need to unmount the file system.

# umount /mnt/tecmint_datas/
Unmount File System
Check whether the mount point has been unmounted.

# df -h

Check File System Mount Points
Here, the mount has been removed, so we can continue restoring the snapshot using the lvconvert command.

# lvconvert --merge /dev/vg_tecmint_extra/tecmint_datas_snap

Restore LVM Snapshot


After the merge is completed, the snapshot volume will be removed
automatically. Now we can see the space of our partition using the df command.
# df -Th

Check the Size of the Snapshot


After the snapshot volume is removed automatically, you can check the size of the logical volume.

# lvs

Check the Size of the Logical Volume
Important: We can make snapshots extend automatically with a few modifications in the LVM configuration file; manually, we extend them using lvextend.

Open the lvm configuration file using your choice of editor.

# vim /etc/lvm/lvm.conf

Search for the word autoextend. By default, the value will be similar to below.
LVM Configuration
Change the 100 to 75 here; then the auto-extend threshold is 75 and the auto-extend percent is 20, so the snapshot volume will grow by 20 percent whenever it reaches 75% usage. This way, snapshots expand automatically. Save and exit the file using wq!.
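Assuming a stock configuration, the two relevant settings in the activation section of /etc/lvm/lvm.conf should end up looking like this:

snapshot_autoextend_threshold = 75
snapshot_autoextend_percent = 20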

This will save snapshots from being dropped on overflow, and it will also save you time. LVM is the only partitioning method with this room to grow, and it has many more features such as thin provisioning, striping, and virtual volumes using a thin pool; let us see them in the next topic.

Logical Volume Management has great features such as snapshots and thin provisioning. Previously, in Part III, we saw how to snapshot a logical volume. In this article, we are going to see how to set up thin provisioning volumes in LVM.
Setup Thin Provisioning in LVM
What is Thin Provisioning?

Thin provisioning is used in LVM to create virtual disks inside a thin pool. Let us assume that I have 15GB of storage capacity on my server, and I already have 2 clients with 5GB of storage each. You are the third client, and you ask for 5GB of storage. In the past, we would provide the whole 5GB (a thick volume), but you might use only 2GB of that 5GB, leaving 3GB unused until you fill it up later.

What we do in thin provisioning is define a thin pool inside one of the large volume groups, and define thin volumes inside that thin pool. Whatever files you write will be stored, and your storage will be shown as 5GB, but the full 5GB is never allocated on disk up front. The same is done for the other clients; as I said, there are 2 existing clients, and you are my 3rd client.

So, how much have I assigned to clients in total? The whole 15GB is already assigned. If someone comes to me and asks for 5GB, can I provide it? The answer is "Yes": with thin provisioning I can give 5GB to a 4th client, even though I have already assigned the full 15GB.
Warning: Provisioning more than the 15GB we actually have is called over-provisioning.

How does it work? And how do we provide storage to new clients?

I have provided you 5GB, but you may use only 2GB, and the other 3GB remains free. In thick provisioning we can't do this, because the whole space is allocated up front.

In thin provisioning, if I define 5GB for you, it won't allocate the whole disk space when the volume is defined; it grows up to 5GB as your data is written. Likewise, the other clients won't use their full volumes, so there is a chance to give 5GB to a new client. This is called over-provisioning.

However, it is compulsory to monitor each and every volume's growth; if not, it will end up in disaster. If over-provisioning is done and all 4 clients write data heavily to disk, you will face a problem, because the pool's 15GB will fill up, overflow, and drop the volumes.
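A simple way to keep an eye on thin volume growth is lvs with explicit report fields; a sketch (using the vg_thin group created below):

# lvs -o lv_name,data_percent,pool_lv vg_thin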

Requirements

1. Create Disk Storage with LVM in Linux – PART 1


2. How to Extend/Reduce LVM’s in Linux – Part II
3. How to Create/Restore Snapshot of Logical Volume in LVM – Part III
My Server Setup

1. Operating System – CentOS 6.5 with LVM Installation


2. Server IP – 192.168.0.200
Step 1: Setup Thin Pool and Volumes
Let's see practically how to set up a thin pool and thin volumes. First, we need a large volume group; here I'm creating a 15GB volume group for demonstration purposes using the command below.

# vgcreate -s 32M vg_thin /dev/sdb1


Listing Volume Group
Next, check how much space is available for logical volumes before creating the thin pool and volumes.

# vgs

# lvs

Check Logical Volume
We can see that only the default logical volumes for the file system and swap are present in the above lvs output.

Creating a Thin Pool

To create a thin pool of 15GB in the volume group (vg_thin), use the following command.

# lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin


1. -L – Size of the thin pool
2. --thinpool – Creates a thin pool
3. tp_tecmint_pool – Thin pool name
4. vg_thin – The volume group in which we need to create the pool

Create Thin Pool


To get more detail we can use the command ‘lvdisplay’.

# lvdisplay vg_thin/tp_tecmint_pool
Logical Volume Information
We haven't created any virtual thin volumes in this thin pool yet; in the image we can see the allocated pool data showing 0.00%.

Creating Thin Volumes

Now we can define thin volumes inside the thin pool with the help of the ‘lvcreate’ command and its -V (virtual size) option.

# lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool

I have created a thin virtual volume named thin_vol_client1 inside the tp_tecmint_pool in my vg_thin volume group. Now, list the logical volumes using the command below.
# lvs

List Logical Volumes


We have just created the thin volume above, which is why the data usage shows 0.00%.

Fine, let me create 2 more thin volumes for the other 2 clients. Here you can see that there are now 3 thin volumes created in the pool (tp_tecmint_pool). So, from this point, we know that I have allocated the entire 15GB pool.

Create Thin Volumes
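As a side note, lvcreate can create the pool and the first thin volume in one shot with its -T flag; a sketch reusing the names above:

# lvcreate -L 15G -T vg_thin/tp_tecmint_pool -V 5G -n thin_vol_client1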


Creating File System

Now, create mount points, mount these three thin volumes, and copy some files into them using the commands below.

# mkdir -p /mnt/client1 /mnt/client2 /mnt/client3


List the created directories.

# ls -l /mnt/

Creating Mount Points
Create the file systems on the created thin volumes using the ‘mkfs’ command.

# mkfs.ext4 /dev/vg_thin/thin_vol_client1 && mkfs.ext4 /dev/vg_thin/thin_vol_client2 && mkfs.ext4 /dev/vg_thin/thin_vol_client3

Create File System


Mount all three client volumes on the created mount points using the ‘mount’ command.

# mount /dev/vg_thin/thin_vol_client1 /mnt/client1/ && mount /dev/vg_thin/thin_vol_client2 /mnt/client2/ && mount /dev/vg_thin/thin_vol_client3 /mnt/client3/
List the mount points using the ‘df’ command.

# df -h

Print Mount Points


Here, we can see that all 3 client volumes are mounted, and only about 3% of the space is used in each client volume. Let's add some more files to all 3 mount points from my desktop to fill up some space.

Add Files To Volumes


Now list the mount points to see the space used in every thin volume, and list the thin pool to see the size used in the pool.

# df -h

# lvdisplay vg_thin/tp_tecmint_pool
Check Mount Point Size

Check Thin Pool Size


The above command shows the three mount points along with their usage percentages:

13% of 5GB used for client1

29% of 5GB used for client2

49% of 5GB used for client3

Looking at the thin pool, we can see that about 30% of its data is written in total; this is the sum of the above three clients' virtual volumes.

Over Provisioning

Now the 4th client comes to me and asks for 5GB of storage space. Can I provide it, given that I have already allocated the 15GB pool to 3 clients? Yes, it is possible. This is when we use over-provisioning, which means giving out more space than I have.

Let me create 5GB for the 4th client and verify the size.

# lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool

# lvs

Create thin Storage


I have only 15GB in the pool, but I have created 4 volumes inside the thin pool totalling 20GB. If all four clients start writing data and fill up their volumes, we will face a critical situation; otherwise there is no issue.

Now I have created a file system in thin_vol_client4, mounted it under /mnt/client4, and copied some files to it.
# lvs

Verify Thin Storage


We can see in the above picture that the newly created client 4 volume is 89.34% used and the thin pool is 59.19% used. If all these users don't write heavily to their volumes, the pool will be safe from overflow and drops. To avoid overflow, we need to extend the thin-pool size.

Important: Thin pools are just logical volumes, so if we need to extend the size of a thin pool we can use the same command as for extending logical volumes; however, we can't reduce the size of a thin pool.

# lvextend

Here is how to extend the logical thin pool (tp_tecmint_pool).

# lvextend -L +15G /dev/vg_thin/tp_tecmint_pool

Extend Thin Storage
Next, list the thin-pool size.
# lvs

Verify Thin Storage


Earlier, our tp_tecmint_pool size was 15GB with 4 thin volumes over-provisioned to 20GB. Now it has been extended to 30GB, so our over-provisioning is normalized and the thin volumes are safe from overflow and drops. This way, you can add even more thin volumes to the pool.

Here, we have seen how to create a thin pool from a large volume group, create thin volumes inside it with over-provisioning, and extend the pool. In the next article, we will see how to set up LVM striping.

In this article, we are going to see how logical volumes write data to disk by striping I/O. Logical Volume Management has a cool feature that can write data over multiple disks by striping the I/O.
Manage LVM Disks Using Striping I/O
What is LVM Striping?

LVM striping is a feature that writes data over multiple disks, instead of constantly writing to a single physical volume.

Features of Striping

1. It increases disk performance.
2. It avoids repeated heavy writes to a single disk.
3. Disk fill-up can be reduced by striping over multiple disks.
In Logical Volume Management, when we create a logical volume, its extents map fully onto the volume group's physical volumes. In that situation, if one of the PVs (Physical Volumes) gets filled, we need to add more extents from another physical volume. Instead, we can point our logical volume at particular physical volumes and stripe the write I/O across them.

Assume we have four disk drives mapped to four physical volumes; if each physical volume is capable of 100 I/O, our volume group gets 400 I/O in total. If we do not use the stripe method, the file system writes across the underlying physical volumes sequentially; for example, 100 I/O worth of data would be written only to the first PV (sdb1). If we create the logical volume with the stripe option, writes are split across all four drives: 100 I/O become 25 I/O on each of the four drives.

This is done in a round-robin process. If one of these logical volumes needs to be extended, we can't just add 1 or 2 PVs; we have to add all 4 PVs to extend the logical volume size. This is one of the drawbacks of the stripe feature; from this we also learn that we need to use the same stripe count when creating all of the logical volumes.

Logical Volume Management gives us this feature of striping data over multiple PVs at the same time. If you are familiar with logical volumes, you can go ahead and set up the logical volume stripe; if not, you first need to know the basics of Logical Volume Management, so read the articles below.

Requirements

1. Setup Flexible LVM Disk Storage in Linux – Part I


2. How to Extend/Reduce LVM’s in Linux – Part II
My Server Setup

Here I'm using CentOS 6.5 for this walkthrough. The same steps can be used on RHEL, Oracle Linux, and most distributions.

Operating System : CentOS 6.5

IP Address : 192.168.0.222

Hostname : tecmint.storage.com

Logical Volume management using Striping I/O


For demonstration purposes, I've used 4 hard drives, each 1 GB in size. Let me show the four drives using the ‘fdisk‘ command, as below.
# fdisk -l | grep sd

List Hard Drives


Now we have to create partitions on these 4 hard drives, sdb, sdc, sdd, and sde, using the ‘fdisk‘ command. To create partitions, follow the instructions in step 4 of Part 1 of this series (link given above), and make sure you change the type to LVM (8e) while creating the partitions.

After you've created the partitions successfully, move forward and create physical volumes on all 4 drives using the following ‘pvcreate‘ command.

# pvcreate /dev/sd[b-e]1 -v
Create Physical Volumes in LVM
Once the PVs are created, you can list them using the ‘pvs‘ command.

# pvs

Verify Physical Volumes
Now we need to define a volume group using those 4 physical volumes. Here I'm defining my volume group, named vg_strip, with a 16MB physical extent (PE) size.

# vgcreate -s 16M vg_strip /dev/sd[b-e]1 -v


A description of the options used in the command:

1. /dev/sd[b-e]1 – A shell range covering the drive partitions sdb1, sdc1, sdd1, and sde1.
2. -s – Defines the physical extent size.
3. -v – Verbose.
Next, verify the newly created volume group using.

# vgs vg_strip

Verify Volume Group
To get more detailed information about the VG, use the ‘-v‘ switch with the vgdisplay command; it lists every physical volume used in the vg_strip volume group.

# vgdisplay vg_strip -v
Volume Group Information
Back to our topic: while creating the logical volume, we need to define the stripe value, i.e. how data should be written to our logical volume using the stripe method. Here I'm creating a logical volume named lv_tecmint_strp1 of 900MB in the vg_strip volume group, and I'm defining 4 stripes, which means the data written to my logical volume is striped over 4 PVs.

# lvcreate -L 900M -n lv_tecmint_strp1 -i4 vg_strip

1. -L – logical volume size
2. -n – logical volume name
3. -i – number of stripes

Create Logical Volumes
In the above image, we can see that the default stripe size is 64 KB; if we need to define our own stripe size, we can use -I (capital i). Just to confirm that the logical volume was created, use the following command.

# lvdisplay vg_strip/lv_tecmint_strp1
Confirm Logical Volumes
The next question is: how do we know that the stripes are writing to 4 drives? Here we can use ‘lvdisplay‘ with -m (display the mapping of logical volumes) to verify.

# lvdisplay vg_strip/lv_tecmint_strp1 -m
Check Logical Volumes
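The stripe count and stripe size can also be read as plain lvs report columns; a sketch (field names as listed by lvs -o help):

# lvs -o+stripes,stripesize vg_strip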
To use a stripe size we define ourselves, let us create one logical volume of 1GB with a custom stripe size of 256KB. This time I'm going to stripe over only 3 PVs; we can specify which PVs should be striped.

# lvcreate -L 1G -i3 -I 256 -n lv_tecmint_strp2 vg_strip /dev/sdb1 /dev/sdc1 /dev/sdd1

Define Stripe Size


Next, check the stripe size and which volumes it stripes over.

# lvdisplay vg_strip/lv_tecmint_strp2 -m
Check Stripe Size
It's time to look at the device mapper; for this we use the ‘dmsetup‘ command. It is a low-level logical volume management tool that manages logical devices using the device-mapper driver. We can inspect the LVM information with dmsetup to learn which stripe depends on which drives.

# dmsetup deps /dev/vg_strip/lv_tecmint_strp[1-2]


Device Mapper
Here we can see that strp1 depends on 4 drives, and strp2 depends on 3 devices.

I hope you have learnt how we can stripe logical volumes to write data. For this setup, one must know the basics of logical volume management. In my next article, I will show you how we can migrate in logical volume management; till then, stay tuned for updates, and don't forget to give valuable comments on the article.

This is the 6th part of our ongoing Logical Volume Management series. In this article, we will show you how to migrate existing logical volumes to another new drive without any downtime. Before moving further, I would like to explain LVM migration and its features.

LVM Storage Migration
What is LVM Migration?

LVM migration is an excellent feature with which we can migrate logical volumes to a new disk without data loss or downtime. The purpose of this feature is to move our data from an old disk to a new disk. Usually, we migrate from one disk to other disk storage only when an error occurs on some disk.

Features of Migration

1. Moves logical volumes from one disk to another.
2. We can use any type of disk: SATA, SSD, SAS, or SAN storage (iSCSI or FC).
3. Migrates disks without data loss or downtime.
In LVM migration, we swap every volume, file system, and its data onto the new storage. For example, consider a single logical volume that is mapped to one physical volume, and that physical volume is a physical hard drive.

Now, if we need to upgrade our server with an SSD drive, what do we usually think of first? Reformatting the disk? No! We don't have to reformat the server. LVM has the option to migrate those old SATA drives to new SSD drives, and the live migration supports any kind of disk, be it a local drive, SAN, or Fibre Channel.

Requirements

1. Creating Flexible Disk Storage with Logical Volume Management – Part 1


2. How to Extend/Reduce LVM’s in Linux – Part 2
There are two ways to migrate LVM partitions (storage): one uses the mirroring method, the other the pvmove command. For demonstration purposes, I'm using CentOS 6.5 here, but the same instructions also work on RHEL, Fedora, Oracle Linux, and Scientific Linux.

My Server Setup

Operating System : CentOS 6.5 Final

IP Address : 192.168.0.224

System Hostname : lvmmig.tecmintlocal.com


Step 1: Check for Present Drives
1. Assume we already have one virtual drive named “vdb“, mapped to the logical volume “tecmint_lv“. We want to migrate this “vdb” logical volume drive to some other, new storage. Before moving further, first verify the virtual drive and logical volume names with the help of the fdisk and lvs commands as shown.

# fdisk -l | grep vd

# lvs

Check Logical Volume Disk
Step 2: Check for Newly added Drive
2. Once we have confirmed our existing drives, it's time to attach the new SSD drive to the system and verify the newly added drive with the help of the fdisk command.

# fdisk -l | grep dev


Check Newly Added Drive
Note: You can see in the above screen that the new drive has been added successfully, with the name “/dev/sda“.

Step 3: Check Present Logical and Physical Volume


3. Now move forward to create the physical volume, volume group, and logical volume for migration. Before creating volumes, make sure to check the present logical volume's data under the /mnt/lvm mount point. Use the following commands to list the mounts and check the data.

# df -h

# cd /mnt/lvm

# cat tecmint.txt
Check Logical Volume Data
Note: For demonstration purposes, we've created two files under the /mnt/lvm mount point, and we will migrate this data to a new drive without any downtime.

4. Before migrating, make sure to confirm the names of the logical volume and the volume group that the physical volume is related to, and also confirm which physical volume holds this volume group and logical volume.

# lvs

# vgs -o+devices | grep tecmint_vg


Confirm Logical Volume Names
Note: You can see in the above screen that “vdb” holds the volume group tecmint_vg.

Step 4: Create New Physical Volume


5. Before creating a physical volume on our newly added SSD drive, we need to define the partition using fdisk. Don't forget to change the type to LVM (8e) while creating the partition.

# pvcreate /dev/sda1 -v

# pvs

Create Physical Volume


6. Next, add the newly created physical volume to the existing volume group tecmint_vg using the ‘vgextend‘ command.

# vgextend tecmint_vg /dev/sda1

# vgs

Add Physical Volume


7. To get the full information about the volume group, use the ‘vgdisplay‘ command.

# vgdisplay tecmint_vg -v

List Volume Group Info
Note: In the above screen, we can see at the end of the output that our PV has been added to the volume group.
8. In case we need to know more about which devices are mapped, use the ‘dmsetup‘ dependency command.

# lvs -o+devices

# dmsetup deps /dev/tecmint_vg/tecmint_lv

In the above results, there is 1 dependency (PV, or drive), listed as (252, 17). If you want to confirm it, look at the devices that carry this major and minor number.

# ls -l /dev | grep vd

List Device Information
Note: In the above output, we can see that major number 252 with minor number 17 relates to vdb1. I hope you understood the above command output.

Step 5: LVM Mirroring Method


9. Now it's time to perform the migration using the mirroring method; use the ‘lvconvert‘ command to migrate data from the old logical volume to the new drive.

# lvconvert -m 1 /dev/tecmint_vg/tecmint_lv /dev/sda1

1. -m = mirror
2. 1 = adding a single mirror

Mirroring Method Migration
Note: The above migration process will take a long time, depending on our volume size.

10. Once the migration process is completed, verify the converted mirror.

# lvs -o+devices

Verify Converted Mirror


11. Once you are sure the converted mirror is good, you can remove the old virtual disk vdb1. The option -m 0 removes the mirror; earlier we used 1 to add it.

# lvconvert -m 0 /dev/tecmint_vg/tecmint_lv /dev/vdb1


Remove Virtual Disk
12. Once the old virtual disk is removed, you can re-check the devices behind the logical volumes using the following commands.

# lvs -o+devices

# dmsetup deps /dev/tecmint_vg/tecmint_lv

# ls -l /dev | grep sd

Check New Mirrored Device


In the above picture, you can see that our logical volume now depends on 8,1, i.e. sda1. This indicates that our migration process is done.

13. Now verify the files we migrated from the old drive to the new one. If the same data is present on the new drive, it means every step was done perfectly.

# cd /mnt/lvm/

# cat tecmint.txt
Check Mirrored Data
14. After everything is created perfectly, it's time to delete vdb1 from the volume group, and then confirm which devices our volume group depends on.

# vgreduce /dev/tecmint_vg /dev/vdb1

# vgs -o+devices

15. After removing vdb1 from the volume group tecmint_vg, our logical volume is still present, because we have migrated it from vdb1 to sda1.

# lvs
Delete Virtual Disk
Step 6: LVM pvmove Mirroring Method
16. Instead of the ‘lvconvert’ mirroring command, we can use the ‘pvmove‘ command with the ‘-n‘ (logical volume name) option to mirror data between the two devices.

# pvmove -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1
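pvmove can also report its progress at a fixed interval; a sketch (-i sets the number of seconds between updates):

# pvmove -i 10 -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1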

This command is one of the simplest ways to mirror data between two devices, but in real environments mirroring is used more often than pvmove.

Conclusion
In this article, we have seen how to migrate logical volumes from one drive to another. I hope you have learnt new tricks in logical volume management. For such a setup, one must know the basics of logical volume management; for the basic setups, please refer to the links provided in the requirements section at the top of the article.
