My Server Setup - Requirements: Check LVM Disk Storage in Linux
1. To gain insight into our LVM setup, we can utilize the following commands to
reveal the distinct components: Physical Volume (PV), Volume Group (VG),
and Logical Volume (LV).
# pvs
# vgs
# lvs
2. For demonstration purposes, I am adding the following 3 disks to this server, which will also be used to explain more of LVM's commands and features.
3. To list all the disks and their partitions, such as the disk name, size, partition
type, start and end sectors, and more, use the fdisk utility as shown.
# fdisk -l
List Disk Partitions in Linux
Here is the description of each disk shown in the above screenshot.
4. Now run the vgdisplay command to view the detailed information about all the
Volume Groups present on the system, including their name, size, free space,
physical volume (PV) information, and more.
# vgdisplay
List Volume Groups in Linux
Here is the description of each parameter shown in the above screenshot.
5. Check the disk space currently used by the mounted file systems using the df command.
# df -TH
We could extend the Volume Group currently in use to get more space. However, in this case, we are going to create a new Volume Group and experiment with it. Later, we will see how to extend the file systems of the Volume Group that is currently in use.
Before using a new disk, we need to partition the disk using the fdisk command
as shown.
# fdisk -c /dev/sdb
# fdisk -l
Confirm Disk Partitions
Create LVM Physical Volume
8. Now, it's time to create Physical Volumes using all 3 disks. Here, I have listed the existing physical volumes using the 'pvs' command; at this point, only the one default PV is listed.
# pvs
9. Then create Physical Volumes on the new disks and confirm the newly created PVs, as sketched below.
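The pvcreate command itself is not shown above; a minimal sketch, assuming the three new disks appear as /dev/sdb, /dev/sdc, and /dev/sdd (adjust to your device names):
# pvcreate /dev/sdb /dev/sdc /dev/sdd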
# pvs
Create LVM Physical Volumes
Creating LVM Volume Groups
10. Create a Volume Group named tecmint_add_vg using the available free PVs with a PE size of 32MB. Displaying the current volume groups first, we can see one volume group with 1 PV in use.
# vgs
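The vgcreate command is not shown here; a minimal sketch, assuming the three PVs created above (-s sets the PE size):
# vgcreate -s 32M tecmint_add_vg /dev/sdb /dev/sdc /dev/sdd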
11. This creates the volume group named tecmint_add_vg with a 32MB PE size, using the 3 physical volumes we created in the previous steps.
12. Next, verify the volume group by running the vgs command again.
# vgs
Confirm LVM Volume Groups
Understanding vgs command output:
# vgs -v
# vgdisplay tecmint_add_vg
List LVM Volume Groups
Here is the description of each parameter shown in the above screenshot.
First, list the current Logical Volumes using the following command.
# lvs
List LVM Logical Volumes
16. These Logical Volumes are in the vg_tecmint Volume Group. To see how
much free space is available to create logical volumes, list the Volume Group
and available Physical Volumes using the ‘vgs‘ command.
# vgs
Method 1: Creating Logical Volumes using PE Size
First, let us create Logical Volumes using the Physical Extent (PE) size. We need to know the default PE size assigned to this Volume Group and the total available PEs before creating new Logical Volumes.
# bc
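The bc session itself is not shown; as an illustration, dividing the free PEs equally among 3 volumes (the 639 figure used below implies 1917 free PEs in total):
1917 / 3
639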
Let us now create 3 Logical Volumes of 639 PEs each, as sketched below. Here, -l is used to set the size in extents and -n to assign a logical volume name.
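The lvcreate commands are not shown above; a minimal sketch, using the volume names that appear in the mkfs steps later:
# lvcreate -l 639 -n tecmint_documents tecmint_add_vg
# lvcreate -l 639 -n tecmint_public tecmint_add_vg
# lvcreate -l 639 -n tecmint_manager tecmint_add_vg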
# lvs
List Created Logical Volumes
Method 2: Creating Logical Volumes using GB Size
While creating a Logical Volume using a GB size, we cannot get the exact size. So, the better way is to create it using extents; for comparison, a sketch of the GB method follows.
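A sketch of the GB-based method, as an alternative to the extent-based commands above (use one method or the other, since the volume names are the same):
# lvcreate -L 20G -n tecmint_documents tecmint_add_vg
# lvcreate -L 20G -n tecmint_public tecmint_add_vg
# lvcreate -L 20G -n tecmint_manager tecmint_add_vg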
# lvs
Here, we can see that while creating the 3rd LV, we cannot round up to exactly 20GB because of small differences in the remaining size; this issue does not arise when creating LVs using extent counts.
Creating File System
17. To use the logical volumes, we need to format them. Here I am using the ext4 file system to format the volumes, and I am going to mount them under /mnt/.
# mkfs.ext4 /dev/tecmint_add_vg/tecmint_documents
# mkfs.ext4 /dev/tecmint_add_vg/tecmint_public
# mkfs.ext4 /dev/tecmint_add_vg/tecmint_manager
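The mount points must exist before mounting, and the tecmint_documents mount is implied by the fstab entries below; a minimal sketch:
# mkdir -p /mnt/tecmint_documents /mnt/tecmint_public /mnt/tecmint_manager
# mount /dev/tecmint_add_vg/tecmint_documents /mnt/tecmint_documents/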
# mount /dev/tecmint_add_vg/tecmint_public /mnt/tecmint_public/
# mount /dev/tecmint_add_vg/tecmint_manager /mnt/tecmint_manager/
# df -h
# cat /etc/mtab
21. We need to make slight changes when copying the mount entries from mtab into fstab: change rw to defaults.
# vi /etc/fstab
/dev/mapper/tecmint_add_vg-tecmint_documents /mnt/tecmint_documents ext4 defaults 0 0
/dev/mapper/tecmint_add_vg-tecmint_public /mnt/tecmint_public ext4 defaults 0 0
/dev/mapper/tecmint_add_vg-tecmint_manager /mnt/tecmint_manager ext4 defaults 0 0
Permanent Mount Logical Volumes
22. Finally, run the mount -a command to verify the fstab entries before restarting.
# mount -av
Extend/Reduce LVMs in Linux
Requirements
We may need to create a separate partition for some other use, or we may need to expand the size of a partition that is low on space. If so, we can shrink a larger partition and expand the low-space partition very easily with the following simple steps.
Currently, we have one PV, one VG, and two LVs. Let's list them one by one using the following commands.
# pvs
# vgs
# lvs
Extending Logical Volume
There is no free space available in the Physical Volume or the Volume Group, so we can't extend the LV size right now. To extend it, we need to add one Physical Volume (PV) and then extend the Volume Group with it. That will give us enough space to extend the Logical Volume size. So first, we are going to add one Physical Volume.
To add a new PV, we have to use fdisk to create an LVM partition.
# fdisk -cu /dev/sda
Create LVM Partition
List and check the partition we have created using fdisk.
# fdisk -l /dev/sda
Verify LVM Partition
Next, create a new PV (Physical Volume) using the following command.
# pvcreate /dev/sda1
# pvs
Create Physical Volume
Add this PV to the vg_tecmint VG to extend the size of the volume group and get more space for expanding the LV; the vgextend step is sketched below.
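The vgextend command is not shown here; a minimal sketch, assuming the PV created above:
# vgextend vg_tecmint /dev/sda1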
# vgs
Check Volume Group
Here, we can see which Physical Volumes belong to which Volume Group. We have just added one PV, and it is totally free. Let us see the size of each logical volume we currently have before expanding.
# vgdisplay
Check Available Physical Size
There are 4607 free PEs available, equal to about 18GB of free space, so we can expand our logical volume by up to 18GB more. Let us use the PE count to extend, as sketched below.
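The lvextend command itself is missing here; a minimal sketch, adding all 4607 free PEs (with the default 4MB PE size, 4607 x 4MB is roughly 18GB) to the volume named in the resize2fs step below:
# lvextend -l +4607 /dev/vg_tecmint/LogVol01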
# resize2fs /dev/vg_tecmint/LogVol01
# lvdisplay
Resize Logical Volume
1. LogVol01 is defined for the / (root) volume we extended.
2. After extending, the size is 34.50GB, up from 16.50GB.
3. Current extents: before extending, there were 4226; we added 4607 extents, so the total is now 8833.
Now, if we check the VG's available free PEs, the count will be 0.
# vgdisplay
# pvs
# vgs
# lvs
Verify Resize Partition
1. New Physical Volume added.
2. Volume group vg_tecmint extended from 17.51GB to 35.50GB.
3. Logical volume LogVol01 extended from 16.51GB to 34.50GB.
Here we have completed the process of extending the volume group and logical volumes. Now let us move on to an interesting part of Logical Volume Management.
Here we are going to see how to reduce Logical Volumes. Everyone says it is critical and may end in disaster if done carelessly, but reducing an LVM is really more interesting than any other part of Logical Volume Management.
1. Before starting, it is always good to back up the data, so that it will not be a headache if something goes wrong.
2. To reduce a logical volume, there are 5 steps that need to be done very carefully.
3. While extending a volume, we can extend it while the volume is mounted (online), but to reduce it, we must unmount the file system first.
Let's see what the 5 steps are below.
Here, we are reducing the size by only 8GB, so the volume will end up at 10GB after the reduction.
# lvs
Reduce Logical Volume
Here we can see the file-system information.
# df -h
Check File System Size
1. The size of the volume is 18GB.
2. 3.9GB is already used.
3. The available space is 13GB.
First unmount the mount point.
# umount -v /mnt/tecmint_reduce_test/
Unmount Partition
Then check the file system for errors, as sketched below.
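The check command is not shown; a minimal sketch, assuming the volume names used in the lvreduce step below (-ff forces the check even if the file system seems clean):
# e2fsck -ff /dev/vg_tecmint_extra/tecmint_reduce_test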
Reduce Logical Partition
To reduce a Logical Volume using the PE size, we need to know the default PE size and the total PE count of the Volume Group, so we can do a small calculation to get an accurate reduce size.
# lvdisplay vg_tecmint_extra
Calculate PE Size
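A safety note before the lvreduce below: the file system must be shrunk to the target size first, or reducing the LV underneath it will corrupt data. A minimal sketch, shrinking the file system to the 10GB target:
# resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test 10G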
Reduce the size using PE.
# lvreduce -l -2048 /dev/vg_tecmint_extra/tecmint_reduce_test
# resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test
Resize File System
Mount the file system back to the same mount point.
# mount /dev/vg_tecmint_extra/tecmint_reduce_test /mnt/tecmint_reduce_test/
# lvdisplay vg_tecmint_extra
Here we can see the final result: the logical volume was reduced to 10GB in size.
Verify Logical Volume Size
In this article, we have seen how to extend the volume group and logical volumes, and how to reduce a logical volume. In the next part (Part III), we will see how to take a snapshot of a logical volume and restore it to an earlier stage.
Requirements
My Server Setup
# vgs
# lvs
Check LVM Disk Space
You see, there is 8GB of free space left in the above vgs output. So, let's create a snapshot of one of my volumes, named tecmint_datas. For demonstration purposes, I am going to create only a 1GB snapshot volume using the following commands.
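The snapshot creation command itself is not shown; a minimal sketch, assuming the volume lives in vg_tecmint_extra, as the later commands indicate:
# lvcreate -s -L 1G -n tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas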
OR
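The same command with long options, for readability:
# lvcreate --size 1G --snapshot --name tecmint_datas_snap /dev/vg_tecmint_extra/tecmint_datas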
1. -s – Creates Snapshot
2. -n – Name for snapshot
# lvremove /dev/vg_tecmint_extra/tecmint_datas_snap
# lvs
# lvdisplay vg_tecmint_extra/tecmint_datas_snap
View Snapshot Information
Again, here is a clear explanation of each point highlighted in the above picture.
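The lvextend command for growing the snapshot is not shown; a plausible sketch, extending the snapshot volume by 1GB:
# lvextend -L +1G /dev/vg_tecmint_extra/tecmint_datas_snap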
Extend LVM Snapshot
Next, verify the new size and COW table using the following command.
# lvdisplay /dev/vg_tecmint_extra/tecmint_datas_snap
To know the size of the snapshot volume and its usage percentage, run:
# lvs
# umount /mnt/tecmint_datas/
Unmount File System
Check the mount point to verify whether it's unmounted or not.
# df -h
Check File System Mount Points
Here, the mount has been unmounted, so we can continue to restore the snapshot using the lvconvert command, as sketched below.
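The restore command is not shown; a minimal sketch using lvconvert's merge option, which merges the snapshot back into its origin volume:
# lvconvert --merge /dev/vg_tecmint_extra/tecmint_datas_snap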
# lvs
Check the Size of the Logical Volume
Important: To extend snapshots automatically, we can make some modifications in the conf file. To extend manually, we can use lvextend.
# vim /etc/lvm/lvm.conf
Search for the word autoextend. By default, the value will be similar to below.
LV
M Configuration
Change the 100 to 75; the auto-extend threshold is then 75, and with the auto-extend percent at 20, LVM will expand the size by 20 percent.
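For reference, the relevant lvm.conf settings look like this after the suggested change (these key names live in the activation section):
snapshot_autoextend_threshold = 75
snapshot_autoextend_percent = 20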
If the snapshot volume reaches 75% usage, it will automatically expand the snapshot volume's size by 20% more. Thus, we can expand it automatically. Save and exit the file using wq!.
This will save snapshots from being dropped due to overflow, and it will also save you time. LVM is a partitioning method that lets us expand storage easily and has many more features, such as thin provisioning, striping, and virtual volumes using a thin pool; let us see them in the next topic.
Logical Volume Management has great features such as snapshots and thin provisioning. Previously, in Part III, we saw how to snapshot a logical volume. In this article, we are going to see how to set up thin provisioning volumes in LVM.
Setup Thin Provisioning in LVM
What is Thin Provisioning?
Thin Provisioning is used in LVM to create virtual disks inside a thin pool. Let us assume that I have 15GB of storage capacity on my server, and that I already have 2 clients with 5GB of storage each. You are the third client, and you ask for 5GB of storage. Previously, we used to provide the whole 5GB (a thick volume), but you may use only 2GB of that 5GB storage, and the remaining 3GB sits free until you fill it up later.
What we do in thin provisioning is define a thin pool inside one of the large volume groups, and define thin volumes inside that thin pool. Whatever files you write will be stored, and your storage will be shown as 5GB; however, the full 5GB will not be allocated on the disk. The same is done for the other clients. As I said, there are 2 clients, and you are my 3rd client. So how much total storage have I assigned to clients? The whole 15GB is already committed. If someone comes to me and asks for 5GB, can I provide it? The answer is "Yes": with thin provisioning, I can give 5GB to a 4th client even though I have already assigned the full 15GB.
Warning: If, out of 15GB, we provision more than 15GB, it is called Over Provisioning.
I have provided you 5GB, but you may use only 2GB, and the other 3GB will remain free. With thick provisioning we can't do this, because the whole space is allocated up front.
With thin provisioning, if I define 5GB for you, it won't allocate the whole disk space when the volume is defined; it will grow up to 5GB as you write data. Hope you got it! Like you, the other clients won't use their full volumes either, so there is a chance to give 5GB to a new client. This is called over provisioning.
But it is compulsory to monitor the growth of each and every volume; if not, it will end in disaster. With over provisioning, if all 4 clients write data heavily to disk, the pool will fill up past its 15GB and overflow, causing the volumes to be dropped.
Requirements
# vgs
# lvs
Check Logical Volume
We can see that only the default logical volumes for the file system and swap are present in the above lvs output.
To create a thin pool of 15GB in the volume group (vg_thin), use the command sketched below.
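The thin pool creation command is not shown; a minimal sketch, using the pool name that the lvdisplay below refers to:
# lvcreate -L 15G --thinpool tp_tecmint_pool vg_thin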
# lvdisplay vg_thin/tp_tecmint_pool
Logical Volume Information
We haven't created any virtual thin volumes in this thin pool yet. In the image, we can see the allocated pool data showing 0.00%.
Now we can define thin volumes inside the thin pool with the help of the 'lvcreate' command with the option -V (Virtual), as sketched below.
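A minimal sketch, creating a 5GB virtual volume for the first client:
# lvcreate -V 5G --thin -n thin_vol_client1 vg_thin/tp_tecmint_pool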
I have created a thin virtual volume named thin_vol_client1 inside the tp_tecmint_pool in my vg_thin volume group. Now, list the logical volumes using the below command.
# lvs
Fine, let me create 2 more thin volumes for the other 2 clients, as sketched below. Here you can see that there are now 3 thin volumes created under the pool (tp_tecmint_pool). So, from this point on, we know that I have committed the entire 15GB pool.
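A sketch of the two additional volumes, assuming the same naming pattern (thin_vol_client2 and thin_vol_client3 are illustrative names):
# lvcreate -V 5G --thin -n thin_vol_client2 vg_thin/tp_tecmint_pool
# lvcreate -V 5G --thin -n thin_vol_client3 vg_thin/tp_tecmint_pool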
Now, create mount points, mount these three thin volumes, and copy some files into them using the below commands.
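The mkdir commands are not shown; a minimal sketch with illustrative mount point names:
# mkdir -p /mnt/client1 /mnt/client2 /mnt/client3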
# ls -l /mnt/
Creating Mount Points
Create the file system on these thin volumes using the 'mkfs' command, then mount them, as sketched below.
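A minimal sketch of the mkfs and mount steps, using the illustrative names from above:
# mkfs.ext4 /dev/vg_thin/thin_vol_client1
# mkfs.ext4 /dev/vg_thin/thin_vol_client2
# mkfs.ext4 /dev/vg_thin/thin_vol_client3
# mount /dev/vg_thin/thin_vol_client1 /mnt/client1/
# mount /dev/vg_thin/thin_vol_client2 /mnt/client2/
# mount /dev/vg_thin/thin_vol_client3 /mnt/client3/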
# df -h
# lvdisplay vg_thin/tp_tecmint_pool
Check Mount Point Size
Looking at the thin pool, we can see that only 30% of the data is written in total; this is the sum of the above three clients' virtual volumes.
Over Provisioning
Now the 4th client comes to me and asks for 5GB of storage space. Can I give it? I have already committed the whole 15GB pool to 3 clients. Is it possible to give 5GB more to another client? Yes, it is possible. This is when we use Over Provisioning, which means committing more space than what I have.
Let me create 5GB for the 4th client and verify the size, as sketched below.
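A sketch, again with an illustrative volume name:
# lvcreate -V 5G --thin -n thin_vol_client4 vg_thin/tp_tecmint_pool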
# lvs
Important: A thin pool is just a logical volume, so if we need to extend the size of the thin pool, we can use the same command we used for extending logical volumes, but we can't reduce the size of a thin pool.
# lvextend
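The full command is truncated above; a plausible form, assuming we grow the pool by 15GB:
# lvextend -L +15G /dev/vg_thin/tp_tecmint_pool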
Extend Thin Storage
Next, list the thin-pool size.
# lvs
Here, we have seen how to create a thin pool from a large volume group, create thin volumes inside it with over-provisioning, and extend the pool. In the next article, we will see how to set up LVM striping.
In this article, we are going to see how logical volumes write data to disk by striping I/O. Logical Volume Management has a cool feature that can write data over multiple disks by striping the I/O.
Manage LVM Disks Using Striping I/O
What is LVM Striping?
LVM Striping is a feature that writes data across multiple disks, instead of constantly writing to a single Physical Volume.
Features of Striping
Assume we have four disk drives mapped to four physical volumes. If each physical volume is capable of 100 I/O, our volume group can get 400 I/O in total. If we are not using the stripe method, the file system writes to one underlying physical volume at a time; for example, 100 I/O worth of data will be written only to the first PV (sdb1). If we create the logical volume with the stripe option, writes are split across all four drives: 100 I/O divided means each of the four drives receives 25 I/O.
This is done in a round-robin process. If any striped logical volume needs to be extended, in this situation we can't add just 1 or 2 PVs; we have to add all 4 PVs to extend the logical volume size. This is one of the drawbacks of the stripe feature. From this, we know that we need to assign the same stripe layout across the logical volumes we create.
Logical Volume Management gives us this feature, with which we can stripe the data over multiple PVs at the same time. If you are familiar with logical volumes, you can go ahead and set up the logical volume stripe. If not, you need to know the basics of logical volume management first; read the articles below to learn more about it.
Requirements
Here I'm using CentOS 6.5 for this exercise. The same steps can be used in RHEL, Oracle Linux, and most distributions.
IP Address : 192.168.0.222
Hostname : tecmint.storage.com
After you've created the partitions successfully, move forward to create Physical Volumes using all 4 drives. To create PVs, use the following 'pvcreate' command as shown.
# pvcreate /dev/sd[b-e]1 -v
Create Physical Volumes in LVM
Once the PVs are created, you can list them using the 'pvs' command.
# pvs
Verify Physical Volumes
Now we need to define a volume group using those 4 physical volumes. Here I'm defining my volume group with a 16MB physical extent (PE) size, with the volume group named vg_strip; the full command is sketched after the option list below.
1. [b-e]1 – Define your hard drive names such as sdb1, sdc1, sdd1, sde1.
2. -s – Define your physical extent size.
3. -v – verbose.
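The vgcreate command itself is not shown; a sketch assembled from the options described above:
# vgcreate -s 16M vg_strip /dev/sd[b-e]1 -v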
Next, verify the newly created volume group using the following command.
# vgs vg_strip
Verify Volume Group
To get more detailed information about the VG, use the '-v' switch with the vgdisplay command; it will list every physical volume used in the vg_strip volume group.
# vgdisplay vg_strip -v
Volume Group Information
Back to our topic: while creating the logical volume, we need to define the stripe value, i.e. how data should be written to our logical volume using the stripe method. Here I'm creating a logical volume named lv_tecmint_strp1, 900MB in size, in the vg_strip volume group, and I'm defining 4 stripes, which means data written to my logical volume will be striped over 4 PVs. A sketch follows.
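The lvcreate command is not shown; a minimal sketch built from the values just described (-i sets the number of stripes):
# lvcreate -L 900M -i4 -n lv_tecmint_strp1 vg_strip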
Create Logical Volumes
In the above image, we can see that the default stripe size is 64 KB; if we need to define our own stripe value, we can use -I (capital I). Just to confirm that the logical volume was created, use the following command.
# lvdisplay vg_strip/lv_tecmint_strp1
Confirm Logical Volumes
The next question will be: how do we know that stripes are being written to 4 drives? Here we can use 'lvdisplay' with -m (display the mapping of logical volumes) to verify.
# lvdisplay vg_strip/lv_tecmint_strp1 -m
Check Logical Volumes
To use our own defined stripe size, let us create a logical volume of 1GB with a custom stripe size of 256KB. This time I'm going to stripe over only 3 PVs; we can define which PVs we want to be striped.
# lvcreate -L 1G -i3 -I 256 -n lv_tecmint_strp2 vg_strip /dev/sdb1 /dev/sdc1 /dev/sdd1
# lvdisplay vg_strip/lv_tecmint_strp2 -m
Check Stripe Size
It's time to use the device mapper; for this we use the 'dmsetup' command. It is a low-level logical volume management tool that manages logical devices using the device-mapper driver. We can inspect LVM information with dmsetup to learn which stripe depends on which drives, as sketched below.
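The dmsetup invocation is not shown; a minimal sketch using its deps subcommand on the striped volume:
# dmsetup deps /dev/vg_strip/lv_tecmint_strp1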
Hope you have learnt how we can stripe data across logical volumes when writing. For this setup, one must know the basics of logical volume management. In my next article, I will show you how we can migrate storage in logical volume management; till then, stay tuned for updates, and don't forget to leave valuable comments about the article.
This is the 6th part of our ongoing Logical Volume Management series. In this article, we will show you how to migrate existing logical volumes to a new drive without any downtime. Before moving further, I would like to explain LVM migration and its features.
LVM Storage Migration
What is LVM Migration?
LVM migration is an excellent feature with which we can migrate logical volumes to a new disk without data loss or downtime. The purpose of this feature is to move our data from an old disk to a new disk. Usually, we migrate from one disk to other disk storage only when an error occurs on some disk.
Features of Migration
Now, if we need to upgrade our server with an SSD hard drive, what would we think of first? Reformatting the disk? No! We don't have to reformat the server. LVM gives us the option to migrate from old SATA drives to new SSD drives. Live migration supports any kind of disk, be it a local drive, SAN, or Fibre Channel.
Requirements
My Server Setup
IP Address : 192.168.0.224
# fdisk -l | grep vd
# lvs
Check Logical Volume Disk
Step 2: Check for Newly added Drive
2. Once we confirm our existing drives, it's time to attach the new SSD drive to the system and verify the newly added drive with the help of the fdisk command.
# df -h
# cd /mnt/lvm
# cat tecmint.txt
Check Logical Volume Data
Note: For demonstration purposes, we've created two files under the /mnt/lvm mount point, and we will migrate this data to a new drive without any downtime.
4. Before migrating, make sure to confirm the names of the logical volume and volume group that the physical volume is related to, and also confirm which physical volume holds this volume group and logical volume.
# lvs
# pvcreate /dev/sda1 -v
# pvs
# vgs
# vgdisplay tecmint_vg -v
List Volume Group Info
Note: In the above screen, we can see at the end of the result that our PV has been added to the volume group.
8. If we need to know more about which devices are mapped, use the 'dmsetup' deps command, as sketched below.
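The dmsetup command is not shown here; a plausible sketch (tecmint_lv is a hypothetical logical volume name, since the LV name never appears in this extract):
# dmsetup deps /dev/tecmint_vg/tecmint_lv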
# lvs -o+devices
In the above results, there is 1 dependency (a PV, i.e. a drive), listed as (252, 17). If you want to confirm this, look at the devices, which carry the major and minor numbers of the attached drives.
# ls -l /dev | grep vd
List Device Information
Note: In the above command output, we can see that major number 252 with minor number 17 relates to vdb1. Hope you understood the above command output.
The options used in the mirroring command sketched below:
1. -m = mirror
2. 1 = adding a single mirror
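The lvconvert command these options belong to is not shown; a plausible sketch, mirroring onto the new drive (tecmint_lv is a hypothetical LV name):
# lvconvert -m 1 /dev/tecmint_vg/tecmint_lv /dev/sda1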
Mirroring Method Migration
Note: The above migration process will take a long time, depending on our volume size.
# lvs -o+devices
# ls -l /dev | grep sd
13. Now verify the files that we've migrated from the old drive to the new one. If the same data is present on the new drive, it means we have completed every step perfectly.
# cd /mnt/lvm/
# cat tecmint.txt
Check Mirrored Data
14. After everything has been verified, it's time to delete vdb1 from the volume group (sketched below) and then confirm which devices our volume group depends on.
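The vgreduce command is not shown; a minimal sketch, removing the old PV from the volume group:
# vgreduce tecmint_vg /dev/vdb1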
# vgs -o+devices
15. After removing vdb1 from volume group tecmint_vg, our logical volume is still present, because we have migrated its data from vdb1 to sda1.
# lvs
Delete Virtual Disk
Step 6: LVM pvmove Mirroring Method
16. Instead of using the 'lvconvert' mirroring command, here we use the 'pvmove' command with the option '-n' (logical volume name) to mirror data between the two devices, as sketched below.
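A plausible sketch of the pvmove form (tecmint_lv is again a hypothetical LV name):
# pvmove -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1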
pvmove is one of the simplest ways to mirror data between two devices, but in real environments, mirroring is used more often than pvmove.
Conclusion
In this article, we have seen how to migrate logical volumes from one drive to another. Hope you have learnt new tricks in logical volume management. For such a setup, one must know the basics of logical volume management. For the basic setups, please refer to the links provided in the requirements section at the top of the article.