
LUSTRE SYSTEM ADMINISTRATION TRAINING - LAB INSTRUCTIONS

Lustre* Installation and Administration

Lab Manual – Intel EE for Lustre with ZFS

Version 2017.3


DAY 1:

0. Lab Environment Setup


1. Lustre Installation - MGS, MDS, OSS with Intel Manager for Lustre
2. Lustre Management using IML
3. Lustre Networking, Introduction

DAY 2:
4. Data Lifecycle Management
5. Striping
6. OST Pools
7. Quotas

DAY 3:
8. Lustre Networking, Advanced
9. Lustre Troubleshooting
10. Lustre Tuning


Lustre Installation and Administration

Lab 0

Lab Environment Setup

Student Notes:


Lab Environment Setup:


This course uses virtual environments (VE) for the lab work as depicted below:

Note: The head node is now called KTLN2 and the systems are hosted at an Intel Site.
Each student will be assigned a virtual environment (VE) to complete the labs. Each VE is made
up of ten (10) virtual machines (VMs), with each VM having multiple storage targets attached.
The OSTs are all equal size, while the MGT is a smaller volume than the MDT.
Each VE has the same naming scheme except for the stXX prefix on node names. Student IDs
will be assigned prior to the first lab; you will use these student IDs to connect to the correct VE.
Example: Student ID 20 will connect to the hosts st20-mds1, st20-oss1, etc.
The lab nodes correspond to the above drawing using these interfaces:
eth0 = blue
eth1 = green


eth2 = red

Each system has a primary system (OS) disk - /dev/vda. The MDS1 server has a 10GB disk
(/dev/sda) and a 1GB disk (/dev/sdb). The OSS1 and OSS2 servers each have two 10GB
disks - /dev/sda, /dev/sdb,/dev/sdc,/dev/sdd. Carefully follow the instructions
provided to ensure that you use the correct device files when creating file systems.
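
To see which data disks are attached and their persistent by-id names (the names used later when creating
the zpools), a quick check such as the following can be run on any server node; this is a convenience, not a
required lab step:

[root@stXX-mds1 ~]# lsblk
[root@stXX-mds1 ~]# ls -l /dev/disk/by-id/ | grep QEMU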

To log in to the "stXX" nodes you must first log in to the head node (ktln2).
From "ktln2", each node in your VE can be reached via its hostname without a password.

Access the head node: [your workstation]$ ssh yoginsXX@<head node address>

For this week: (mark your student information)


yogins01 NZd9lvgGoXltkqCb
yogins02 BWpPZMGyuGHG/h5O
yogins03 TqE1uP8oAkcPgczK
yogins04 r/o3Dzlx7URxbtwS
yogins05 iqMvPg1bJHwaYviS
yogins06 SQZDejfrplQgBLUp
yogins07 ECKC9HkLA3RaLJhV
yogins08 EobxnBq+3+kPsCne
yogins09 cagNe3crHFrJZJFw
yogins10 meJDPYI7KjpTnc8r
yogins11 yiQ2aq4buHlIex8d
yogins12 O46FCzLFWHG6lL6K
yogins16 GfsHAW9zWGCgN7XL

Access your student nodes: [ktln2]$ ssh root@stXX-iml1 (no password)

If Google Chrome is not installed


Google Chrome is the preferred browser for Intel Manager for Lustre.
Recent versions of Mozilla Firefox can be used, but must be configured to accept self-signed
certificate authorities:
1. In the Location bar, type about:config and press Enter.
2. Set security.use_mozillapkix_verification to false


SSH Tunneling for Intel Manager for Lustre


For this lab we need access to Intel Manager for Lustre, so we need to tunnel out of the lab environment
to your desktop.
You can use this procedure:
1. Set up the client

In PuTTY:
Source port = 90XX

Destination = localhost:90XX


2. [yogins16@ktln2 ~]$ ssh root@st16-iml1 -L 9016:localhost:443

3. On your workstation, open your browser and use the location
https://localhost:90XX to access Intel Manager for Lustre
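
For reference, the whole tunnel can also be built with a plain OpenSSH client instead of PuTTY. This is only a
sketch; the head node address is the one provided for the class, and XX is your two-digit student ID:

[your workstation]$ ssh -L 90XX:localhost:90XX yoginsXX@<head node address>
[ktln2]$ ssh -L 90XX:localhost:443 root@stXX-iml1

Afterwards, https://localhost:90XX on your workstation reaches Intel Manager for Lustre on stXX-iml1.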

The examples in this lab manual use student ID 16 as well as “stXX”. As you work through the
exercises, substitute your Student ID in place of “st16” or “stXX”.

Note that student IDs are always 2 characters: student 1 is st01, for example.


Preparation:

On the login node

Each student can access the VMs password-less.

Please access all VMs once and accept the host key fingerprints:

ST=st16    # substitute your own student ID
HL="$(echo mds{1..2} oss{1..4} iml1 cli{1..3})"
for i in $HL; do
    ssh -o StrictHostKeyChecking=no root@$ST-$i uname -n
done

Please continue to Lab #1.


Lustre Installation and Administration

Lab 1

Lustre Installation - MGS, MDS, OSS

with Intel Manager for Lustre

Student Notes:


Lustre Installation & File System Creation

Task A: - IML Software Installation

Step 1: Log into your cluster's maintenance server "stxx-iml1" as root


$ ssh root@st<xx>-iml1
Example: [st16@ktln2 ~]$ ssh root@st16-iml1

Step 2: List the already provided IEEL software package in the /root directory

[root@st16-iml1 ~]# ls -la ee-*


-rw-r--r-- 1 root root 135804974 Mar 2 13:24 ee-3.1.0.1.tar.gz

[NOTE: The ieel-latest.tar download is available from selected Intel Partners]

[root@st16-iml1 ~]# tar xvf ee-3.1.0.1.tar.gz

Step 3: Change into the ieel installation directory and list the package contents
# cd /root/ee-3.X.X.X
# ls

Please verify the settings for the LANG variable in the bash environment.
Currently Intel Manager for Lustre supports only "en_US.utf8".
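
A quick way to check the locale, and to set it for the current shell only if it differs (a convenience check;
en_US.utf8 is the value IML expects):

# echo $LANG
# export LANG=en_US.utf8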

Step 4: Install IML using the install script


# ./install -d
Note: -d is needed to bypass an IML check for 100GB of disk space, which the VM does not
have. It is safe to do this in the training environment only.

Step 5: Accept License Agreement

Hit the space bar to page through the License Agreement


Do you agree to the licence terms? (yes/no) yes
Starting Intel(R) Manager for Lustre* software installation
Testing YUM
Loaded plugins: fastestmirror
Unpacking installation package
Installing Intel® Manager for Lustre*


Step 6: Create IML administrative user account


An administrative user account will now be created using the credentials which you
provide.
[Do not forget username and password]

Username: admin
Email: <your email address>
Password: admin.123
Confirm password: admin.123
User 'admin' successfully created.

Step 7: Hit return without typing to use the default “localhost” as the time synchronization server

NTP Server [localhost]:


...

chroma-manager 3.1.0.1 is currently installed


Intel(R) Manager for Lustre* software installation completed successfully

Step 8: View the installation log file for a detailed description of the installed packages

$ less /var/log/chroma/install.log


IML Web Page Access


Access the IML GUI tool from your laptop or desktop web browser

[Google Chrome has been tested as one of the better performing web browsers for the IML
interface.]

Step 1: Open a web browser on your laptop or desktop


Note: IML is designed to work best with the Chrome web browser

Step 2: Use "https" to access your training cluster through the SSH tunnel set up earlier.
https://localhost:90XX
Example for the st16 training cluster: https://localhost:9016

Step 3: Accept the SSL certificate as a trusted site.

Step 4: Wait until all the JavaScript components are downloaded to your computer

You should now see the IML Dashboard

Step 5: Login as the super-user Administrator using the credentials entered during the installation
phase

Step 6: Accept the License

You should now see the "Configuration" link in the top menu bar


IML Administrator Interface


Configuration -> Servers
Add Lustre Servers

Step 1: Navigate to Configuration -> Servers

Step 2: Click on the “+ Add Server” link

Step 3: Hostname - Type in the hostname of the server to be added: stxx-mds1 and click Next.
On the confirmation screen, click Proceed.

Step 4: The Command View should appear; close the window

Step 5: The Server Profile should appear and the “Managed storage server” profile should be
automatically selected
Note: “Monitored storage server” can be selected for monitoring existing Lustre servers
and file systems

Step 6: Proceed with the installation

Step 7: Click on Add another to repeat steps 3 – 5 and add stxx-mds2 and the OSS servers {stxx-
oss1, stxx-oss2}

Step 8: Wait until the configuration phase is completed (10 minutes)

Step 9: Click on the Volumes tab in the Configuration page and verify that all the volumes are in
GREEN Status.

Setup ZFS Zpools:


Users are required to set up the zpools outside of IML.
Step 1: Setup Metadata Zpools
From stXX-mds1:


zpool create -f -o cachefile=none mdt /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0002


zpool create -f -o cachefile=none mgt /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0001
zpool export mdt mgt

Step 2: Setup Object Storage Zpools


From stXX-oss1: (on real hardware, -o ashift=12 is recommended, but it is not needed for the VM environment)

zpool create -f -o cachefile=none ost1 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0003


zpool create -f -o cachefile=none ost2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0004
zpool create -f -o cachefile=none ost3 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0005
zpool create -f -o cachefile=none ost4 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0006
zfs set recordsize=1M ost1
zfs set recordsize=1M ost2
zfs set recordsize=1M ost3
zfs set recordsize=1M ost4
zfs set compression=on ost1
zfs set compression=on ost2
zfs set compression=on ost3
zfs set compression=on ost4
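
Optionally, before exporting the pools, you can verify that they were created with the intended properties
(a quick check using the same zpool/zfs tools; not a required lab step):

zpool list
zfs get recordsize,compression ost1 ost2 ost3 ost4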

zpool export ost1 ost2 ost3 ost4


Lustre Installation and Configuration


Manually modify the volume association with the primary server and the failover server.
Step 1: Navigate to "Configuration -> Volumes"

Step 2: Find the Volume named “mgt” and make the primary server stXX-mds2 and the failover
server stXX-mds1.

Step 3: Find the Volume named “mdt” and make the primary server stXX-mds1 and the failover
server stXX-mds2.

Step 4: Click the “Apply” button and confirm the volume modification on XX0001 and XX0002.


Lustre Installation and Configuration


Configuration -> File Systems
Creating a New File System
Step 0. Navigate to Configuration -> Volumes
Observe that only zpools are presented as volumes and that they are all configured for HA operation.
Step 1. Navigate to Configuration -> File Systems

Step 2. Click on the "Create File System" button.

Step 3. Set file system options:


Step 3a: Type in a File system name: Lustre01
Step 3b: Click on Set Advanced Settings and observe the possible parameter changes
Note: Use care when changing these parameters as they can significantly impact
functionality or performance. For help with these settings, contact your storage solution
provider.
Note: Advanced settings can be updated later in the Configuration > File Systems >
Lustre01 > Update Advanced Settings menu

Step 4. Choose one management target (MGT):


Step 4a: Click on Select Storage
Step 4b: Click on mgt (1GB) on stXX-mds2.

Step 5. Choose one metadata target (MDT):


Step 5a: Click on Select Storage
Step 5b: Click on mdt (10 GB) on stxx-mds1

Step 6. Choose object storage targets (OSTs):


Step 6a: Click on Select All
Observe: Estimated file system capacity: 40GB

Step 7. Finish


Step 7a: Click on Create File System


Step 7b: Wait until the process is complete
Step 7c: Click on the Dashboard Link and verify that the file system Lustre01 is up and
running


Lustre Installation and Administration

Lab 2

Lustre management with IML

Student Notes:


Lustre management with IML


Task A: - Mount the lustre file system on clients

Step 1: Navigate to Configuration -> File Systems

Step 2: Click on the file system name “Lustre01”

Step 3: Click on the button “View Client Mount Information”


In the pop-up window, you will find the string to use on the client to mount Lustre

Step 4: Connect to the client and verify the lustre configuration


[root@st16-cli1 ~]# cat /etc/modprobe.d/lustre.conf
options lnet networks=tcp1(eth1)

[root@st16-cli1 ~]# rpm -qa |grep lustre


lustre-client-modules-2.7……
lustre-client-2.7.1…

Step 5: Create the mount point and mount lustre

[root@st16-cli1 ~]# mkdir /mnt/Lustre01


[root@st16-cli1 ~]# mount -t lustre 10.10.1XX.1@tcp1:10.10.1XX.2@tcp1:/Lustre01
/mnt/Lustre01
[root@st16-cli1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root 8780808 1200940 7133816 15% /
tmpfs 1429664 0 1429664 0% /dev/shm
/dev/vda1 495844 72354 397890 16% /boot
10.10.116.1@tcp1:10.10.116.2@tcp1:/Lustre01 41284832 8540624 30646784 22%
/mnt/Lustre01

Do the same on stXX-cli2


Task B: - Create a workload and navigate the Dashboard

Step 1: Create a script to generate some workload on the lustre fs

cd $HOME
cat > test.sh <<\_EOF
while true; do
    for i in {1..100}; do
        dd if=/dev/zero of=/mnt/Lustre01/test-$i bs=4k count=10000
    done
    sleep 2
    for i in {1..100}; do
        dd of=/dev/null if=/mnt/Lustre01/test-$i bs=4k count=10000
    done
    sleep 2
    ls -l /mnt/Lustre01/test*
    sleep 2
    for i in {1..100}; do
        mkdir /mnt/Lustre01/dir-$i
    done
    sleep 2
    rm -rf /mnt/Lustre01/dir-*
    sleep 2
done
_EOF
chmod 755 test.sh
./test.sh

Run the script from one client only and navigate the Dashboard.
Let the script keep running for the following tasks


Task C: - Do a failover on one OST

Step 1: Navigate to Configuration -> File Systems


Step 2: Click on the file system name “Lustre01”
Step 3: Use the “Actions” button to do a Failover

Step 4: Go back to the client’s terminal. Failover should be transparent for the application

Step 5: Go to the “Status” menu to see the Alert message

Task D: - Do a failback on the OST

Step 1: Navigate to Configuration -> File Systems


Step 2: Click on the file system name “Lustre01”
Step 3: Use the “Actions” button to do a Failback

Step 4: Go back to the client’s terminal. Failback should be transparent for the application

Step 5: Go to the “Status” menu to see the Alert message

Task E: - Add two additional OSSs live

Step 1: Navigate to Configuration -> Servers


Step 2: Click on the “+ Add Server” link
Step 3: Add two additional servers: stXX-oss3 and stXX-oss4
Step 3b: Add 4 new Zpools
On stxx-oss3
zpool create -f -o cachefile=none ost5 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0007
zpool create -f -o cachefile=none ost6 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0008
zpool create -f -o cachefile=none ost7 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0009
zpool create -f -o cachefile=none ost8 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_*0010
zfs set recordsize=1M ost5
zfs set recordsize=1M ost6
zfs set recordsize=1M ost7
zfs set recordsize=1M ost8
zfs set compression=on ost5


zfs set compression=on ost6


zfs set compression=on ost7
zfs set compression=on ost8

zpool export ost5 ost6 ost7 ost8

Step 4: Add four additional OSTs to the filesystem

Navigate to Configuration -> File Systems

Click on the file system name “Lustre01”

Click on the “Create OST” button and add 4 HA Zpools as new OSTs

Step 5: Check the client

After a few minutes the total capacity of your Lustre filesystem will increase.
The script should keep working without any interruption.
The Dashboard should be updated with the new objects.
Why are some areas of the heatmap on the dashboard not updated?
Please stop the script before the upcoming lab.


Lustre Installation and Administration

Lab 3

Lustre Networking, Introduction

Student Notes:


Lustre Networking (LNET)


Starting and testing LNET

1. Log on to your Lustre MDS1 server as the root user.

$ ssh root@stXX-mds1

#See what has happened so far


[mds1]# lnetctl status show
[mds1]# lnetctl net show

[mds1]# lctl network down


LNET busy

Why can't LNET be stopped? When a storage target is mounted, libcfs.ko
enforces a dependency that requires lnet.ko to keep the interface up. Run the following
command to see which Lustre modules are loaded:

[mds1]# lsmod |grep -E 'lustre|lnet'

2. Failover (using IML) the MDT target to mds2:

Step 1: Open the IML Web UI and navigate to Configuration -> File Systems

Step 2: Click on the Lustre01 file system

Step 3: Scroll down to the Metadata Target section

Step 4: From the Actions menu, select Failover. This will move the Metadata Target
(e.g. Lustre01-MDT0000) from stxx-mds1 to stxx-mds2.

Step 5: Wait until complete. Verify by looking at the "Started on" column. The line
will also display an alert icon.

3. Now, login to stXX-mds1 and use the "lustre_rmmod" command to remove all of the
Lustre kernel modules.

[mds1]# lustre_rmmod
[mds1]# lsmod |grep -E 'lustre|lnet'

4. Load only the LNET kernel modules.

[mds1]# modprobe -v lnet


insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/net/lustre/libcfs.ko


insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/net/lustre/lnet.ko
networks="tcp1(eth1)"

[mds1]# lsmod|grep lnet


lnet 376401 0
libcfs 492396 1 lnet

Enter the 'lctl' subshell and check to see if loading the LNET module started LNET. If not, start
LNET using "network up". List your local NID.
Note that in the following output 10.10.116.* is the subnet for st16; 10.10.101.* would be the st01 subnet,
for example. You can check the eth1 device to confirm your subnet.
[mds1]# lctl
lctl > list_nids
IOC_LIBCFS_GET_NI error 100: Network is down

lctl > network up


LNET configured

lctl > list_nids


10.10.116.1@tcp1

5. Ping your local NID from inside the LNET subshell, then exit and ping the NID on
OSS1.

NOTE: 10.10.1XX.* is the subnet, where XX is your student number. The st16 output is shown
below; adjust the subnet for your student ID.

lctl > ping 10.10.116.1 (ping without full NID)


failed to ping 10.10.116.1@tcp: Input/output error

lctl > ping 10.10.116.1@tcp1


12345-0@lo
12345-10.10.116.1@tcp1

> exit

[mds1]# lctl ping 10.10.116.3@tcp1


12345-0@lo
12345-10.10.116.3@tcp1

6. Bring the Lustre network down and again try to ping OSS1 and MDS1.

[mds1]# lctl network down


LNET ready to unload

[mds1]# lctl list_nids


IOC_LIBCFS_GET_NI error 100: Network is down

[mds1]# lctl ping 10.10.116.3@tcp1


failed to ping 10.10.116.3@tcp1: Network is down

[mds1]# lctl ping 10.10.116.1@tcp1


failed to ping 10.10.116.1@tcp1: Network is down


7. Bring LNET back up and verify that it is working properly.

[mds1]# lctl network up


LNET configured

[mds1]# lctl list_nids


10.10.116.1@tcp1

[mds1]# lctl ping 10.10.116.3@tcp1


12345-0@lo
12345-10.10.116.3@tcp1

8. Bring down LNET and remove all the Lustre modules.

[mds1]# lctl network down


LNET ready to unload

[mds1]# lustre_rmmod
[mds1]# lsmod |grep lnet

9. Move to the IML web interface and verify the status of server stxx-mds1
a. Navigate to Configuration -> Servers
b. Look at the LNet State column in the Server Configuration table
10. Load all the Lustre modules and verify that LNET started automatically.

[mds1]# modprobe -v lustre


insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/net/lustre/libcfs.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/net/lustre/lnet.ko
networks=tcp1(eth1)
insmod /lib/modules/3.10.0-
327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/obdclass.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/ptlrpc.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/fld.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/fid.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/lov.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/mdc.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/lmv.ko
insmod /lib/modules/3.10.0-327.13.1.el7_lustre.x86_64/extra/kernel/fs/lustre/lustre.ko
[mds1]# lctl list_nids
10.10.116.1@tcp1

11. Use the IML web interface to Failback the metadata target to stxx-mds1
Step 1: Open the IML Web UI and navigate to Configuration -> File Systems
Step 2: Click on the Lustre01 file system
Step 3: Scroll down to the Metadata Target section
Step 4: From the Actions menu, select Failback.
Step 5: Wait until complete. Verify by looking at the "Started on" column. The line
will also display an alert icon.


Lustre Installation and Administration

Lab 4

Data Lifecycle Management

Student Notes:


Data lifecycle management


Task A: Remove the previous lustre filesystem using IML.

Step 1: Unmount Lustre on all the clients and run lustre_rmmod

Step 2: Navigate to Configuration -> File Systems

Step 3: Select Remove from the "Actions" button

Step 4: Wait until it finishes. Navigate to Configuration -> MGT

Step 5: Select Remove from the "Actions" button

Task B: Create a new Lustre file system HSM enabled

Step 1: Navigate to Configuration -> File Systems

Step 2: Create a Lustre filesystem Lustre01 as before, but enable the HSM flag.

Task C: Add HSM agent node

Step 1: Navigate to Configuration -> Servers

Step 2: Add stXX-cli3. After a few seconds the window will display an incompatible "Managed"
profile; please select "POSIX HSM Agent Node" as the server profile

Step 3: Log into the HSM Agent node, stXX-cli3, and create a POSIX archive file system:

[st16-cli3]# mkdir /data


Step 4: Add a Copytool to the HSM Agent node

Navigate to Configuration -> HSM


Click on the “Add Copytool” button at the end of the page
Fill the form with these parameters:
a. File system: Lustre01
b. Worker: stXX-cli3
c. Path to the HSM agent binary: /usr/sbin/lhsmtool_posix
d. HSM agent arguments (optional): --hsm-root /data
e. File system mount point: /mnt/Lustre01
f. Archive number: 1

Save and exit.

Step 5: Start the Copytool


The copytool will be in “Unconfigured” state. Please use the Actions button to start it

Task D: First steps in the HSM world

Log into stXX-cli1 to perform some basic actions.

1. Mount the Lustre file system
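
For example (the exact mount string is the one shown by IML under "View Client Mount Information", as in
Lab 2; XX is your student ID):

[root@stXX-cli1 ~]# mkdir -p /mnt/Lustre01
[root@stXX-cli1 ~]# mount -t lustre 10.10.1XX.1@tcp1:10.10.1XX.2@tcp1:/Lustre01 /mnt/Lustre01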

2. Create file:
[root@st31-cli1 Lustre01]# cd /mnt/Lustre01/
[root@st31-cli1 Lustre01]# dd if=/dev/zero of=pippo bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00744627 s, 141 MB/s


3. Check HSM status of file


[root@st31-cli1 Lustre01]# lfs hsm_state pippo
pippo: (0x00000000)

This is normal for newly created files (0x00000000)

4. From a Lustre* client, archive the file (manual command):

[root@st31-cli1 Lustre01]# lfs hsm_archive pippo

[root@st31-cli1 Lustre01]# lfs hsm_state pippo


pippo: (0x00000009) exists archived, archive_id:1

5. Release the space on the lustre file system


[root@st31-cli1 Lustre01]# lfs hsm_release pippo
[root@st31-cli1 Lustre01]# lfs hsm_state pippo
pippo: (0x0000000d) released exists archived, archive_id:1

Task E: More advanced activities

1. From a Lustre* client, create a new much larger file and archive it
[root@st31-cli1 Lustre01]# dd if=/dev/zero of=big bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 34.4295 s, 62.4 MB/s
[root@st31-cli1 Lustre01]# lfs hsm_archive big

2. The archive command returns straight away. It will take some time to archive the file. You
can watch the progress on the HSM agent host or you can use "lfs hsm_state" to monitor
the file:
[root@st31-cli1 Lustre01]# lfs hsm_state big
big: (0x00000001) exists, archive_id:1


You can see the activity of the copytool in the HSM page of IML.
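
On the agent itself you can also watch the archive directory fill up while the copy runs (a simple sketch; the
archive root is the /data directory created earlier):

[st16-cli3]# watch -n 2 du -sh /data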

3. The copy to the archive is not complete until hsm_state lists both "exists" and "archived"

[root@st31-cli1 Lustre01]# lfs hsm_state big


big: (0x00000009) exists archived, archive_id:1

4. Effect on the file system size


[root@st31-cli1 Lustre01]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 8.4G 1.2G 6.8G 15% /
tmpfs 1.4G 0 1.4G 0% /dev/shm
/dev/vda1 485M 71M 389M 16% /boot
10.10.131.1@tcp1:10.10.131.2@tcp1:/Lustre01 79G 19G 57G 25% /mnt/Lustre01

[root@st31-cli1 Lustre01]# ls -lh


total 2.1G
-rw-r--r-- 1 root root 2.0G Oct 16 10:27 big

[root@st31-cli1 Lustre01]# lfs hsm_release big

[root@st31-cli1 Lustre01]# ls -lh


total 512
-rw-r--r-- 1 root root 2.0G Oct 16 10:27 big

[root@st31-cli1 Lustre01]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 8.4G 1.2G 6.8G 15% /
tmpfs 1.4G 0 1.4G 0% /dev/shm
/dev/vda1 485M 71M 389M 16% /boot
10.10.131.1@tcp1:10.10.131.2@tcp1:/Lustre01 79G 17G 59G 22% /mnt/Lustre01

Even though ls still reports the file size as 2GB, the space used in the file system has decreased.
The 2GB file is now stored in the archive file system (/data).

5. Restore the file using an indirect command and time how long it takes:
[root@st01-cli1 Lustre01]# time file big


big: data

real 1m49.416s
user 0m0.005s
sys 0m0.006s

6. Run the command a second time.

Notice the difference? The first timed run had to wait for the file to be restored from the archive,
so it took longer to complete. By comparison, the second run completed almost instantly, since the
file was already restored to Lustre.
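
For reference, a restore can also be requested explicitly with "lfs hsm_restore" instead of waiting for the
implicit restore triggered by access; for example, after releasing the file again:

[root@st31-cli1 Lustre01]# lfs hsm_release big
[root@st31-cli1 Lustre01]# lfs hsm_restore big
[root@st31-cli1 Lustre01]# lfs hsm_state big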

Task F: Taking an incremental backup of the Lustre filesystem efficiently

1. Activate the changelog register on the Metadata Target

# lctl --device Lustre01-MDT0000 changelog_register


Lustre01-MDT0000: Registered changelog userid 'cl1'

2. Prepare a client for the incremental backup

[root@st16-cli3 rsync]# mkdir /rsync


[root@st16-cli3 rsync]# lustre_rsync --source=/mnt/Lustre01 --target=/rsync
--mdt=Lustre01-MDT0000 --user=cl1 --statuslog /root/sync.log --verbose

At this point, files created before the activation of the changelog are not copied to the
backup directory.

3. Create a file

[root@st16-cli3 ~]# dd if=/dev/zero of=/mnt/Lustre01/testfile bs=1M count=10


10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0779182 s, 135 MB/s

4. Verify the changes in the metadata changelog

[root@st16-cli3 ~]# lfs changelog Lustre01-MDT0000


1 00MARK 20:24:38.59000000 2014.05.19 0x0 t=[0x10001:0x0:0x0] p=[0:0x0:0x0] mdd_obd-
st03fs-MDT0000
2 01CREAT 20:25:24.463000002 2014.05.19 0x0 t=[0x200000400:0x1:0x0]
p=[0x480001:0xa23ae941:0x0] testfile


5. Run the rsync command

[root@st16-cli3 ~]# lustre_rsync --source=/mnt/Lustre01 --target=/rsync --mdt=Lustre01-


MDT0000 --user=cl1 --statuslog /root/rsync.log --verbose
Lustre filesystem: Lustre01
MDT device: Lustre01-MDT0000
Source: /mnt/Lustre01
Target: /rsync
Statuslog: rsync.log
Changelog registration: cl1
Starting changelog record: 0
Errors: 0
lustre_rsync took 1 seconds
Changelog records consumed: 2

6. Verify the file in the lustre file system and in the backup directory

[root@st16-cli3 ~]# md5sum /mnt/Lustre01/testfile


f1c9645dbc14efddc7d8a322685f26eb /mnt/Lustre01/testfile
[root@st16-cli3 ~]# md5sum /rsync/testfile
f1c9645dbc14efddc7d8a322685f26eb /rsync/testfile

7. Stop the changelog in the Metadata Target

[root@st16-mds1 ~]# lctl --device Lustre01-MDT0000 changelog_deregister cl1


Lustre01-MDT0000: Deregistered changelog user 'cl1'


Lustre Installation and Administration

Lab 5

Lustre Striping

Student Notes:


Striping of Directories and Files

Task A - Striping by a Directory

1. Verify on the IML Dashboard that no clients are connected


2. Remove the HSM Agent on stXX-cli3 in IML
3. Go to the Configuration -> File Systems page and select the Lustre01 file system
4. Select the “View Client Mount Information” button and copy the provided mount
command.
5. Login to stxx-cli1 and stxx-cli2 and create the mount point for the Lustre client,
/mnt/Lustre01, then paste the client mount command, e.g.:

# mount -t lustre 10.10.116.2@tcp1:10.10.116.1@tcp1:/Lustre01 /mnt/Lustre01

6. Review the free space available from cli1 and cli2. It should look like this:

[root@st16-cli1 ~]# lfs df


UUID 1K-blocks Used Available Use% Mounted on
Lustre01-MDT0000_UUID 7862908 2144884 5193736 29% /mnt/Lustre01[MDT:0]
Lustre01-OST0000_UUID 10321208 2134156 7662700 22% /mnt/Lustre01[OST:0]
Lustre01-OST0001_UUID 10321208 2134156 7662700 22% /mnt/Lustre01[OST:1]
Lustre01-OST0002_UUID 10321208 2134156 7662700 22% /mnt/Lustre01[OST:2]
Lustre01-OST0003_UUID 10321208 2134156 7662700 22% /mnt/Lustre01[OST:3]

filesystem summary: 41284832 8536624 30650800 22% /mnt/Lustre01

7. Make a new subdirectory in your Lustre file system on one of your clients. Use the “lfs
getstripe” command to review the default striping policy for this directory:

# mkdir /mnt/Lustre01/new_dir_1

# lfs getstripe /mnt/Lustre01/new_dir_1


/mnt/Lustre01/new_dir_1
stripe_count: 1 stripe_size: 1048576 stripe_offset: -1

The default policy is as follows:

a. stripe_count == 1, meaning stripe the file across one (1) OST


b. stripe_size == 1048576, meaning write that many bytes (1048576 bytes = 1 MB) to each
OST in the stripe group before moving on to the next
c. stripe_offset == -1, meaning start writing on the “best” OST


8. Create a file in the new subdirectory. Review the striping pattern for the file. In the
following example:
a. obdidx == 3, meaning the starting (index) OST for this file was OST0003
b. objid == 66 (0x42), meaning the object id (count) on that OST was object # 66.

# dd if=/dev/zero of=/mnt/Lustre01/new_dir_1/file1 bs=1M count=100

100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.02791 s, 102 MB/s

# lfs getstripe /mnt/Lustre01/new_dir_1/file1



/mnt/Lustre01/new_dir_1/file1
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 66 0x42 0

Repeat this for a couple more files, checking that the obdidx moves in a round-robin
like manner, and that the objid increments appropriately. Delete the files and repeat
this process. What do you find with regard to obdidx and objid? Is that what you
expected?
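
A small loop makes the repetition easier (just a sketch; the file names are examples):

# for n in 2 3 4; do dd if=/dev/zero of=/mnt/Lustre01/new_dir_1/file$n bs=1M count=10; lfs getstripe /mnt/Lustre01/new_dir_1/file$n; done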

9. Create a new directory and use the “lfs setstripe” command to set the following striping
policy on the new sub-directory:
a. Stripe size of 128k
b. Stripe count of 2
c. Default stripe index

# mkdir -p /mnt/Lustre01/new_dir_2
# lfs getstripe /mnt/Lustre01/new_dir_2
# lfs setstripe -s 128K -c 2 -i -1 /mnt/Lustre01/new_dir_2
# lfs getstripe /mnt/Lustre01/new_dir_2

10. Create some more files in your subdirectory. Review the striping policy used for new
files less than 128KB in size and files larger than 128KB in size.

# dd if=/dev/zero of=/mnt/Lustre01/new_dir_2/file3 bs=1M count=10

10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.134001 s, 78.3 MB/s

# lfs getstripe /mnt/Lustre01/new_dir_2/file3

/mnt/Lustre01/new_dir_2/file3


lmm_stripe_count: 2
lmm_stripe_size: 131072
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 34 0x22 0
2 2 0x2 0

# touch /mnt/Lustre01/new_dir_2/file4

# lfs getstripe /mnt/Lustre01/new_dir_2/file4

/mnt/Lustre01/new_dir_2/file4
lmm_stripe_count: 2
lmm_stripe_size: 131072
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 3 0x3 0
0 35 0x23 0
Notice how even small files, that is, files whose size is less than the stripe size, get
striped across the “stripe_count” of OSTs.

Task B - Striping by a Directory, Again…

1. Create three subdirectories in /mnt/Lustre01 called “nonstriped”, “striped” and


“reallystriped”. Set the stripe count to 1 for the nonstriped directory, 2 for the striped
directory and 4 for the reallystriped directory.

# mkdir /mnt/Lustre01/nonstriped
# mkdir /mnt/Lustre01/striped
# mkdir /mnt/Lustre01/reallystriped

# lfs setstripe -c 1 /mnt/Lustre01/nonstriped


# lfs setstripe -c 2 /mnt/Lustre01/striped
# lfs setstripe -c 4 /mnt/Lustre01/reallystriped

2. Check OST usage and create three (3) 1GB files (for a total of 9 files) in each of the
directories, checking usage between each new file. How well was space allocated by
Lustre? Look at the following example:

# lfs df -h
UUID bytes Used Available Use% Mounted on
Lustre01-MDT0000_UUID 7.5G 2.0G 5.0G 29% /mnt/Lustre01[MDT:0]
Lustre01-OST0000_UUID 9.8G 2.1G 7.2G 23% /mnt/Lustre01[OST:0]
Lustre01-OST0001_UUID 9.8G 2.1G 7.2G 23% /mnt/Lustre01[OST:1]
Lustre01-OST0002_UUID 9.8G 2.0G 7.3G 22% /mnt/Lustre01[OST:2]
Lustre01-OST0003_UUID 9.8G 2.1G 7.2G 23% /mnt/Lustre01[OST:3]


filesystem summary: 39.4G 8.4G 28.9G 23% /mnt/Lustre01

# dd if=/dev/zero of=/mnt/Lustre01/nonstriped/X bs=1M count=1000

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 13.5676 s, 77.3 MB/s

# lfs df -h

UUID bytes Used Available Use% Mounted on


Lustre01-MDT0000_UUID 7.5G 2.0G 5.0G 29% /mnt/Lustre01[MDT:0]
Lustre01-OST0000_UUID 9.8G 2.1G 7.2G 23% /mnt/Lustre01[OST:0]
Lustre01-OST0001_UUID 9.8G 2.1G 7.2G 23% /mnt/Lustre01[OST:1]
Lustre01-OST0002_UUID 9.8G 2.0G 7.3G 22% /mnt/Lustre01[OST:2]
Lustre01-OST0003_UUID 9.8G 3.1G 6.2G 33% /mnt/Lustre01[OST:3]

filesystem summary: 39.4G 9.4G 27.9G 25% /mnt/Lustre01

Looks like the last file was written to OST-3. Confirm with “getstripe”:
# lfs getstripe /mnt/Lustre01/nonstriped/X

/mnt/Lustre01/nonstriped/X
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 78 0x4e 0

# dd if=/dev/zero of=/mnt/Lustre01/striped/Y bs=1M count=1000

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 16.7898 s, 62.5 MB/s

# lfs df -h

UUID bytes Used Available Use% Mounted on


Lustre01-MDT0000_UUID 7.5G 2.0G 5.0G 29% /mnt/Lustre01[MDT:0]
Lustre01-OST0000_UUID 9.8G 2.6G 6.7G 28% /mnt/Lustre01[OST:0]
Lustre01-OST0001_UUID 9.8G 2.6G 6.7G 28% /mnt/Lustre01[OST:1]
Lustre01-OST0002_UUID 9.8G 2.0G 7.3G 22% /mnt/Lustre01[OST:2]
Lustre01-OST0003_UUID 9.8G 3.1G 6.2G 33% /mnt/Lustre01[OST:3]

filesystem summary: 39.4G 10.4G 26.9G 28% /mnt/Lustre01


Looks like the last file was written to OST0 & OST1. Confirm.
# lfs getstripe /mnt/Lustre01/striped/Y
/mnt/Lustre01/striped/Y
lmm_stripe_count: 2
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 74 0x4a 0
1 76 0x4c 0

# dd if=/dev/zero of=/mnt/Lustre01/reallystriped/Z bs=1M count=1000

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 35.2855 s, 29.7 MB/s

# lfs df -h
UUID bytes Used Available Use% Mounted on
Lustre01-MDT0000_UUID 7.5G 2.0G 5.0G 29% /mnt/Lustre01[MDT:0]
Lustre01-OST0000_UUID 9.8G 2.9G 6.5G 31% /mnt/Lustre01[OST:0]
Lustre01-OST0001_UUID 9.8G 2.9G 6.5G 31% /mnt/Lustre01[OST:1]
Lustre01-OST0002_UUID 9.8G 2.3G 7.1G 24% /mnt/Lustre01[OST:2]
Lustre01-OST0003_UUID 9.8G 3.4G 6.0G 36% /mnt/Lustre01[OST:3]

filesystem summary: 39.4G 11.4G 26.0G 30% /mnt/Lustre01

In this last example, all of the OSTs were used. Confirm:


# lfs getstripe /mnt/Lustre01/reallystriped/Z
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 76 0x4c 0
1 73 0x49 0
0 72 0x48 0
2 74 0x4a 0

Task C - Striping by a File

1. The default striping patterns on a directory may be overridden when files are created
using the “lfs setstripe” command. Consider the following example and create some
files that have non-default striping patterns in each of the three directories above.

# lfs setstripe -c 4 /mnt/Lustre01/nonstriped/yesstriped


# lfs getstripe /mnt/Lustre01/nonstriped/yesstriped
/mnt/Lustre01/nonstriped/yesstriped
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 3


obdidx objid objid group


3 6 0x6 0
0 38 0x26 0
2 5 0x5 0
1 3 0x3 0

Use a “dd” command to write some data to each of the files you have created.

Task D - Striping by a File, again…

1. In this case we want to change the striping pattern of an existing file. Attempt to
change the striping pattern of one of your files. For example:

# lfs setstripe -c 2 /mnt/Lustre01/nonstriped/yesstriped

error on ioctl 0x4008669a for '/mnt/Lustre01/nonstriped/yesstriped' (3):


stripe already set
error: setstripe: create stripe file '/mnt/Lustre01/nonstriped/yesstriped'
failed

2. Ok, that didn’t work. As it turns out, Lustre does not support changing the striping
pattern of an existing file. To change the striping pattern on an existing file, it is
necessary to 1) create a new “temp” file (using ‘lfs setstripe’) with the desired striping
pattern, 2) copy the existing file to the newly created file, and finally 3) delete the
original file and rename the temp file. Give this a try and verify that your operations
worked.
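
A minimal sketch of that sequence (the temporary file name is just an example):

# lfs setstripe -c 2 /mnt/Lustre01/nonstriped/yesstriped.tmp
# cp /mnt/Lustre01/nonstriped/yesstriped /mnt/Lustre01/nonstriped/yesstriped.tmp
# rm /mnt/Lustre01/nonstriped/yesstriped
# mv /mnt/Lustre01/nonstriped/yesstriped.tmp /mnt/Lustre01/nonstriped/yesstriped
# lfs getstripe /mnt/Lustre01/nonstriped/yesstriped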


Lustre Installation and Administration

Lab 6

OST Pools

Student Notes:


OST Pools
In this Lab you will create different pools and write files to the OSTs in a pool. Unfortunately,
some of the commands have to run on the MGS while others run on the MDS. To avoid
changing hosts to execute the different commands, we will use IML to perform a failover of
the MDT (stxx-mds1) to the MGS (stxx-mds2).

1. Go to Configuration -> File Systems and select the Lustre01 file system


2. Scroll down to “Metadata Target” and select Failover from the Actions drop-down
menu.
3. Verify that the Metadata Target is running on stxx-mds2
4. Mount the filesystem on the clients if not already mounted.
5. Log into stxx-mds2
6. Create three new OST pools using the following commands:

[root@st16-mds2 ~]# lctl pool_list Lustre01


Pools from Lustre01:

[root@st16-mds2 ~]# lctl pool_new Lustre01.pool-03


Pool Lustre01.pool-03 created

[root@st16-mds2 ~]# lctl pool_list Lustre01


Pools from Lustre01:
Lustre01.pool-03

[root@st16-mds2 ~]# lctl pool_new Lustre01.pool-12


Pool Lustre01.pool-12 created

[root@st16-mds2 ~]# lctl pool_new Lustre01.pool-123


Pool Lustre01.pool-123 created

[root@st16-mds2 ~]# lctl pool_list Lustre01


Pools from Lustre01:
Lustre01.pool-123
Lustre01.pool-12
Lustre01.pool-03

Add one OST from each OSS to each pool. For example:
[root@st16-mds2 ~]# lctl pool_list Lustre01.pool-03
Pool: Lustre01.pool-03

[root@st16-mds2 ~]# lctl pool_add Lustre01.pool-03 Lustre01-OST0000 Lustre01-OST0003


OST Lustre01-OST0000_UUID added to pool Lustre01.pool-03
OST Lustre01-OST0003_UUID added to pool Lustre01.pool-03

[root@st16-mds2 ~]# lctl pool_list Lustre01.pool-03


Pool: Lustre01.pool-03
Lustre01-OST0000_UUID


Lustre01-OST0003_UUID

[root@st16-mds2 ~]# lctl pool_add Lustre01.pool-12 OST[1,2]


OST Lustre01-OST0001_UUID added to pool Lustre01.pool-12
OST Lustre01-OST0002_UUID added to pool Lustre01.pool-12

[root@st16-mds2 ~]# lctl pool_list Lustre01.pool-12


Pool: Lustre01.pool-12
Lustre01-OST0001_UUID
Lustre01-OST0002_UUID

[root@st16-mds2 ~]# lctl pool_add Lustre01.pool-123 OST[1-3]


OST Lustre01-OST0001_UUID added to pool Lustre01.pool-123
OST Lustre01-OST0002_UUID added to pool Lustre01.pool-123
OST Lustre01-OST0003_UUID added to pool Lustre01.pool-123

[root@st16-mds2 ~]# lctl pool_list Lustre01.pool-123


Pool: Lustre01.pool-123
Lustre01-OST0001_UUID
Lustre01-OST0002_UUID
Lustre01-OST0003_UUID

7. On CLI1, make three new subdirectories in your Lustre file system and assign each
directory to an OST pool. The result is that files written to a directory assigned to a pool
are written only to OSTs in that pool.

[cli1]# mkdir /mnt/Lustre01/pool-03


[cli1]# lfs setstripe -p Lustre01.pool-03 /mnt/Lustre01/pool-03

[cli1]# mkdir /mnt/Lustre01/pool-12


[cli1]# lfs setstripe -p Lustre01.pool-12 /mnt/Lustre01/pool-12

[cli1]# mkdir /mnt/Lustre01/pool-123


[cli1]# lfs setstripe -p Lustre01.pool-123 /mnt/Lustre01/pool-123

8. Using ‘dd’, write a new file into the “/mnt/Lustre01/pool-03” subdirectory. Confirm
that the file was written to the OST pool correctly.

[cli1]# dd if=/dev/zero of=/mnt/Lustre01/pool-03/file-03.dd bs=1M count=100


100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 2.02899 seconds, 51.7 MB/s

[cli1]# lfs getstripe /mnt/Lustre01/pool-03/file-03.dd


/mnt/Lustre01/pool-03/file-03.dd
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 3
lmm_pool: pool-03
obdidx objid objid group
3 7 0x7 0


Note that only one (1) OST in the pool was written to. Why didn’t both OSTs get data
written to them? Read on for the answer.

9. In the previous steps a directory was assigned to an OST pool, however the striping
policy on the directory was the Lustre default (stripe_count: 1). To have files written
to that directory AND use all the OSTs in that pool, it is necessary to change the
striping policy for that directory.

Although we could define the policy to "start at the first OST" and “stripe across the
remaining OSTs”, we know that OSTs can be added and removed from pools, so it is
much better to use the "-1" values and let Lustre figure it out dynamically.

[cli1]# lfs setstripe -c -1 -i -1 -p Lustre01.pool-03 /mnt/Lustre01/pool-03


[cli1]# lfs setstripe -c -1 -i -1 -p Lustre01.pool-12 /mnt/Lustre01/pool-12
[cli1]# lfs setstripe -c -1 -i -1 -p Lustre01.pool-123 /mnt/Lustre01/pool-123

[cli1]# lfs getstripe /mnt/Lustre01/pool-03


/mnt/Lustre01/pool-03
stripe_count: -1 stripe_size: 1048576 stripe_offset: -1 pool:
pool-03
/mnt/Lustre01/pool-03/file-03.dd
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
lmm_pool: pool-03
obdidx objid objid group
3 7 0x7 0

[cli1]# lfs getstripe /mnt/Lustre01/pool-12


/mnt/Lustre01/pool-12
stripe_count: -1 stripe_size: 1048576 stripe_offset: -1 pool:
pool-12

[cli1]# lfs getstripe /mnt/Lustre01/pool-123


/mnt/Lustre01/pool-123
stripe_count: -1 stripe_size: 1048576 stripe_offset: -1 pool:
pool-123

Although the "getstripe" and "setstripe" commands both support the "--pool" argument,
it should be realized that a striping policy may NOT be set on an OST pool. Instead, a
striping policy is set on a directory (or file), and a directory is assigned to a pool.

10. Use “dd” or “touch” to write *several* files in each of the "pool" directories, and
confirm that each file is written to the correct OSTs. Notice how “lfs getstripe” on a
directory returns the striping pattern for the directory and for the files inside.

[cli1]# dd if=/dev/zero of=/mnt/Lustre01/pool-03/file2.dd bs=1M count=100


[cli1]# lfs getstripe /mnt/Lustre01/pool-03


/mnt/Lustre01/pool-03
stripe_count: -1 stripe_size: 1048576 stripe_offset: -1 pool:
pool-03
/mnt/Lustre01/pool-03/file2.dd
lmm_stripe_count: 2
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 3
lmm_pool: pool-03
obdidx objid objid group
3 8 0x8 0
0 39 0x27 0

/mnt/Lustre01/pool-03/file-03.dd
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 3
lmm_pool: pool-03
obdidx objid objid group
3 7 0x7 0

Pools and directories are flexible. Test out the example below and prove to yourself that
files can be created in pools of which the directory is NOT a member.
[cli1]# lfs setstripe -p Lustre01.pool-03 /mnt/Lustre01/file-in-pool-03
[cli1]# lfs getstripe -p /mnt/Lustre01/file-in-pool-03
pool-03

11. This example extends the previous, demonstrating that a file can be created in one pool
even though its parent directory is in another pool.

[cli1]# lfs setstripe -c -1 -p Lustre01.pool-12 /mnt/Lustre01/pool-03/pool-12

[cli1]# lfs getstripe -p /mnt/Lustre01/pool-03/pool-12


pool-12

[cli1]# lfs getstripe /mnt/Lustre01/pool-03/pool-12


/mnt/Lustre01/pool-03/pool-12
lmm_stripe_count: 2
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 2
lmm_pool: pool-12
obdidx objid objid group
2 6 0x6 0
1 4 0x4 0

This example demonstrates that files retain their "creation-time" settings regardless of the state of
the pool.
[cli1]# lfs setstripe -c -1 -p Lustre01.pool-123 /mnt/Lustre01/pool-123/file-123

[cli1]# lfs getstripe /mnt/Lustre01/pool-123/file-123


/mnt/Lustre01/pool-123/file-123
lmm_stripe_count: 3
lmm_stripe_size: 1048576


lmm_layout_gen: 0
lmm_stripe_offset: 2
lmm_pool: pool-123
obdidx objid objid group
2 7 0x7 0
3 10 0xa 0
1 5 0x5 0

On the MGS, remove OST2 from pool-123 and recheck the striping of the file.

[root@st16-mds2 ~]# lctl pool_remove Lustre01.pool-123 OST0002


OST Lustre01-OST0002_UUID removed from pool Lustre01.pool-123
[root@st16-mds2 ~]# lctl pool_list Lustre01.pool-123
Pool: Lustre01.pool-123
Lustre01-OST0001_UUID
Lustre01-OST0003_UUID

[cli1]# lfs getstripe /mnt/Lustre01/pool-123/file-123


(verify that the previous output has not changed)

On the MGS, remove the remaining OSTs, destroy pool-123 and again check the file.

[root@st16-mds2 ~]# lctl pool_remove Lustre01.pool-123 OST0001 OST0003


OST Lustre01-OST0001_UUID removed from pool Lustre01.pool-123
OST Lustre01-OST0003_UUID removed from pool Lustre01.pool-123

[root@st16-mds2 ~]# lctl pool_destroy Lustre01.pool-123


Pool Lustre01.pool-123 destroyed

[root@st16-cli1 ~]# lfs pool_list Lustre01


Pools from Lustre01:
Lustre01.pool-12
Lustre01.pool-03

[root@st16-cli1]# lfs getstripe /mnt/Lustre01/pool-123/file-123


/mnt/Lustre01/pool-123/file-123
lmm_stripe_count: 3
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 2
lmm_pool: pool-123
obdidx objid objid group
2 7 0x7 0
3 10 0xa 0
1 5 0x5 0

[root@st16-cli1]# lfs getstripe -p /mnt/Lustre01/pool-123/file-123


pool-123

(Yes, that last step is accurate. Files created in a pool retain that information.)


12. Verify that files written to the "pool-123" directory retain the stripe count (-1) and that
the stripe count now applies at the file system level (all 4 OSTs are written to).

[root@st16-cli1]# touch /mnt/Lustre01/pool-123/file-0123


[root@st16-cli1]# lfs getstripe /mnt/Lustre01/pool-123/file-0123
/mnt/Lustre01/pool-123/file-0123
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_layout_gen: 0
lmm_stripe_offset: 0
lmm_pool: pool-123
obdidx objid objid group
0 40 0x28 0
2 8 0x8 0
1 6 0x6 0
3 11 0xb 0


Lustre Installation and Administration

Lab 7

Quotas

Student Notes:


Quotas
Preparation steps

1. On the MDS, the OSSs and the clients, add a new Linux user and directory.

[all]# useradd -u 1001 -G 100 user1

2. As the newly created user, write 99 new files, some with 100 MB of data.

[cli1]# mkdir /mnt/Lustre01/users


[cli1]# chgrp users /mnt/Lustre01/users && chmod 2775 /mnt/Lustre01/users
[cli1]# su - user1

[user1@cli1 ~]$ for a in {1..99} ; do touch /mnt/Lustre01/users/$a ; done

[user1@cli1 ~]$ for a in {1..8} ; do


dd if=/dev/zero of=/mnt/Lustre01/users/$a bs=1M count=100 ; done

3. Enable quota on the MGS server:

[st16-mds2]# lctl conf_param Lustre01.quota.ost=ug


[st16-mds2]# lctl conf_param Lustre01.quota.mdt=ug

4. Verify the quota on a target:

[st16-oss1]# lctl get_param osd-*.*.quota_slave.info

Verify initial quota settings on the file system.

1. Check usage and quotas using the ‘lfs quota’ command. Note that if no user name is
specified, the current user is used.

[cli1]# lfs quota /mnt/Lustre01


Disk quotas for user root (uid 0):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 3390976 0 0 - 115 0 0 -
Disk quotas for group root (gid 0):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 3390972 0 0 - 114 0 0 -

[cli1]# lfs quota -u user1 /mnt/Lustre01


Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 819212 0 0 - 99 0 0 -


[cli1]# lfs quota -g users /mnt/Lustre01


Disk quotas for group users (gid 100):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 819216 0 0 - 100 0 0 -

Configuring user quotas

1. From a client node, set a quota for user user1 of 1 GB soft / 2 GB hard for blocks and 1500 soft / 2000 hard for files (the block values below are in KB):

[cli1]# lfs setquota -u user1 -b 1024000 -B 2048000 -i 1500 -I 2000 /mnt/Lustre01

2. From a client node, set group quotas to a size larger than the individual user quotas:

[cli1]# lfs setquota -g users -b 10240000 -B 20480000 -i 5000 -I 10000 /mnt/Lustre01

3. Check usage and quotas again for user1 and users:


[cli1]# lfs quota -u user1 /mnt/Lustre01
Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 819212 1024000 2048000 - 99 1500 2000 -

[cli1]# lfs quota -g users /mnt/Lustre01


Disk quotas for group users (gid 100):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 819216 10240000 20480000 - 100 5000 10000 -

4. Determine how many new files it will take to put user1 over the file count quota (soft limit)
but not over the limit (hard limit), and then, as user1, write that many files. The
example writes 1500 files, which should be appropriate.
[root@cli1]# su - user1

[user1@cli1]$ for a in {100..1599}; do touch /mnt/Lustre01/users/$a ;done

5. Verify that the user1 account is now over the quota for files. An asterisk will be seen
alongside the file count, and a grace period countdown will have begun.
[user1@cli1]$ lfs quota -u user1 /mnt/Lustre01
Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 819212 1024000 2048000 - 1599* 1500 2000 6d23h59m43s


6. Similarly, determine and then write enough data to force user1 over the block quota but
not over the block limit. This example writes 300MB.
[user1@cli1 ~]$ for a in {9..11}; do
dd if=/dev/zero of=/mnt/Lustre01/users/$a bs=1M count=100 ;done

Verify that user1 is now over the soft limit for both the block and file quotas. Add the verbose option to display
block utilization per target.
[user1@st16-cli1 ~]$ lfs quota -u user1 -v /mnt/Lustre01/
Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01/ 1126428* 1024000 2048000 6d23h59m40s 1599* 1500 2000
6d23h51m57s
Lustre01-MDT0000_UUID
0 - 0 - 1599* - 1599 -
Lustre01-OST0000_UUID
307208 - 308232 - - - - -
Lustre01-OST0001_UUID
307208 - 307216 - - - - -
Lustre01-OST0002_UUID
307208 - 308240 - - - - -
Lustre01-OST0003_UUID
204804* - 204804 - - - - -

7. Observe the results as you attempt to write more than 1GB of data.

[user1@cli1 ~]$ dd if=/dev/zero of=/mnt/Lustre01/users/99 bs=1M count=10000


dd: writing `/mnt/Lustre01/users/99': Disk quota exceeded
921+0 records in
920+0 records out
964689920 bytes (965 MB) copied, 32.3791 s, 29.8 MB/s

Review the size of the file you have written and explain your results. Finally, run a
quota check and notice that the grace period for block usage is no longer displayed.
Why? Remove the '99' file and check the grace period timer. Does it make sense?
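One way to re-check after removing the file (a minimal sketch; your numbers will differ):

[user1@cli1 ~]$ rm /mnt/Lustre01/users/99
[user1@cli1 ~]$ lfs quota -u user1 /mnt/Lustre01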


8. Realizing that the default grace period (1 week) for being over the soft limit is too
short for your needs, change the grace period to 2 weeks for block data and 3 weeks for
files for users. For groups, change it to 3 weeks for block data and 4 weeks for inodes.
Then, check quotas for "user1". Examples follow:

Note that if week (w), day (d), hour (h), etc. are not specified, Lustre quotas assume
that the value provided is in seconds.

Usage:
lfs setquota -t <-u|-g>
[--block-grace <block-grace>]
[--inode-grace <inode-grace>] <filesystem>
lfs setquota -t <-u|-g>
[-b <block-grace>] [-i <inode-grace>]
<filesystem>
lfs setquota help

Examples (run as root user):


# lfs setquota -t -u -b 2w -i 3w /mnt/Lustre01
# lfs setquota -t -g -b 3w -i 4w /mnt/Lustre01
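To confirm the new default grace periods, you can query them directly; the -t flag reports the grace times rather than usage (a quick sketch, run from any client):

[cli1]# lfs quota -t -u /mnt/Lustre01
[cli1]# lfs quota -t -g /mnt/Lustre01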

9. After changing the grace periods, observe the grace period for the "user1" user.

[cli1]# lfs quota -u user1 /mnt/Lustre01


Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 1426464* 1024000 2048000 6d23h57m28s - 1599* 1500 2000 6d23h55m11s

What happened? The current grace period did not change to the new value.

The change just made does not immediately affect a running countdown!

If an account, as with "user1" here, is already over its quota (user or group), changing
the grace period does not reset the existing countdown timer. The new grace period only
applies the next time the account goes over a soft limit.

Reset quotas on the user1 account to 0, effectively disabling quotas for that user.

[cli1]# lfs setquota -u user1 -b 0 -B 0 -i 0 -I 0 /mnt/Lustre01

[cli1]# lfs quota -u user1 /mnt/Lustre01


Disk quotas for user user1 (uid 1001):
Filesystem kbytes quota limit grace files quota limit grace
/mnt/Lustre01 2068508 0 0 - 1599 0 0 -

Test your knowledge


10. Notice that user user1 is in more than one group:

[root@cli1]# su - user1
[user1@cli1 ~]$ id
uid=1001(user1) gid=1001(user1) groups=1001(user1),100(users)

Create another user (user2) that is also in the users group.


On all Lustre servers and clients:
useradd -u 1002 -G 100 user2
grep user2 /etc/passwd /etc/group

/etc/passwd:user2:x:1002:1002::/home/user2:/bin/bash
/etc/group:users:x:100:user1,user2
/etc/group:user2:x:1002:

Then assign a group quota to the users group for files, block data, or both.

[cli1]# lfs setquota -g users -b 51200 -B 102400 -i 1025 -I 2000 /mnt/Lustre01

Have user2 exceed the quota for the entire group, and then attempt to write using the user1
account.
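A minimal sketch of one way to run this exercise (file names and sizes are arbitrary; because the users group already holds data from the earlier steps, the writes may hit the new, much smaller hard limit immediately):

[root@cli1]# su - user2
[user2@cli1 ~]$ dd if=/dev/zero of=/mnt/Lustre01/users/user2.dat bs=1M count=200
[user2@cli1 ~]$ exit
[root@cli1]# su - user1
[user1@cli1 ~]$ dd if=/dev/zero of=/mnt/Lustre01/users/user1.dat bs=1M count=10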

Did the user1 account successfully write? Why or why not?


Lustre Installation and Administration

Lab 8

Lustre Networking, Advanced

Student Notes:


Advanced LNET

Task A: - Configure a multi-rail Lustre network using networks option

In this task you will configure clients on different LNET subnets. Each client will be able to
reach all the Lustre targets as the Lustre servers will run two LNET subnets.

1. Unmount the Lustre file system from the clients (an example unmount command follows the
module-removal commands below).


2. Remove the Lustre file system using IML from Configuration → File Systems
3. Remove the MGT using IML from Configuration → MGTs
4. On all of your nodes, use the lustre_rmmod command to ensure that all Lustre modules are
unloaded.

On each Server:

lustre_rmmod
lctl network down
lustre_rmmod
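
The unmount in step 1 can be done with, for example (assuming the mount point used earlier):

[cli1]# umount /mnt/Lustre01
[cli2]# umount /mnt/Lustre01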

5. Modify your Lustre MDS and OSS servers to configure LNET to run over two separate TCP
channels using interfaces eth0 and eth1. Use the networks option to configure LNET on each
server.

[mds1]# cat /etc/modprobe.d/lustre.conf


options lnet networks="tcp0(eth0),tcp1(eth1)"

[mds1]# scp /etc/modprobe.d/lustre.conf stXX-oss1:/etc/modprobe.d/


(repeat the SCP to OSS2 and MDS2)

Run these two commands on all MDS / OSS servers (shown below is MDS1 only):

[mds1 ~]# modprobe -v lustre


...
insmod /lib/modules/2.6.32-431.5.1.el6_lustre.x86_64/updates/kernel/net/lustre/lnet.ko
networks="tcp(eth0),tcp1(eth1)"
...
[root@st16-mds1 ~]# lctl list_nids
10.70.116.1@tcp < -- Notice how LNET often references "tcp0" as "tcp"?
10.10.116.1@tcp1

6. Configure LNET on client1 to use the eth0 interface using the networks option.

[root@st16-cli1 ~]# cat /etc/modprobe.d/lustre.conf


options lnet networks="tcp0(eth0)"

[root@st16-cli1 ~]# modprobe -v lustre


...
insmod /lib/modules/2.6.32-431.5.1.el6_lustre.x86_64/updates/kernel/net/lustre/lnet.ko
networks="tcp0(eth0)"

...


[root@st16-cli1 ~]# lctl list_nids


10.70.116.5@tcp < -- Another "tcp" entry. That's why we suggest avoiding tcp0.

7. Configure LNET on client2 to use the eth1 interface using the networks option.

[root@st16-cli2 ~]# cat /etc/modprobe.d/lustre.conf


options lnet networks="tcp1(eth1)"
[root@st16-cli2 ~]# modprobe -v lustre
...
insmod /lib/modules/2.6.32-431.5.1.el6_lustre.x86_64/updates/kernel/net/lustre/lnet.ko
networks="tcp1(eth1)"

...
[root@st20-cli2 ~]# lctl list_nids
10.10.116.6@tcp1

8. Log in to the IML UI and rescan the Lustre servers' NIDs using the button in the Configuration
→ Servers screen.
9. Create a new Lustre Filesystem called Lustre01 using IML. Make sure that only the volumes
on the OSS servers are used for the OSTs (XX0003, XX0004, XX0005 and XX0006).
10. On CLI1, verify that you can ping Lustre servers over the tcp0 channel.

[cli1]# lctl ping 10.70.1XX.1@tcp0


12345-0@lo
12345-10.70.116.1@tcp
12345-10.10.116.1@tcp1

11. On CLI2, verify that you can ping Lustre servers over the tcp1 channel.

[cli2]# lctl ping 10.10.1XX.1@tcp1


12345-0@lo
12345-10.70.116.1@tcp
12345-10.10.116.1@tcp1

12. Mount the Lustre file system on both client systems. Explain the difference between the mount
commands used on the two client systems.

[cli1]# mount -t lustre 10.70.1XX.2@tcp0:/Lustre01 /mnt/Lustre01/

[cli2]# mount -t lustre 10.10.1XX.2@tcp1:/Lustre01 /mnt/Lustre01/

13. Write some data to the Lustre file system from CLI1 and read it back from CLI2.

[cli1]# for a in {1..100}; do echo $a >> /mnt/Lustre01/a; done

[cli2]# wc -l /mnt/Lustre01/a
100 /mnt/Lustre01/a


Task B: - Schedule LNET self test between an OSS node and a client node

1. Load the LNET self-test module on both OSS1 and CLI1 (only OSS1 shown):

[oss-1]# modprobe -v lnet_selftest


insmod /lib/modules/2.6.32-
431.17.1.el6_lustre.x86_64/extra/kernel/net/lustre/lnet_selftest.ko

2. Schedule bulk write tests from client to oss

[cli1]# export LST_SESSION=$$

[cli1]# lst new_session write


SESSION: write TIMEOUT: 300 FORCE: No

[cli1]# lst add_group oss 10.70.1XX.3@tcp0


10.70.1XX.3@tcp0 are added to session

[cli1]# lst add_group client 10.70.1XX.5@tcp0


10.70.1XX.5@tcp0 are added to session

[cli1]# lst add_batch bulk_write

[cli1]# lst add_test --batch bulk_write --from client --to oss brw write check=full
size=1M
Test was added successfully

[cli1]# lst run bulk_write


bulk_write is running now

[cli1]# lst stat client & sleep 30; kill $!


[1] 2569

[LNet Rates of client]


[R] Avg: 395 RPC/s Min: 395 RPC/s Max: 395 RPC/s
[W] Avg: 395 RPC/s Min: 395 RPC/s Max: 395 RPC/s
[LNet Bandwidth of client]
[R] Avg: 0.03 MB/s Min: 0.03 MB/s Max: 0.03 MB/s
[W] Avg: 197.39 MB/s Min: 197.39 MB/s Max: 197.39 MB/s

... wait a bit ...
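Optionally, before ending the session, you can also sample the traffic from the server group's point of view using the same pattern with the other group name defined above (a sketch; run it in the same shell so LST_SESSION is still set):

[cli1]# lst stat oss & sleep 30; kill $!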

[cli1]# lst end_session


session is ended


Task C: - Configure a Lustre (LNET) router.

In this task you will configure CLI1 to be a router, capable of routing LNET traffic between
MDS1 and CLI2.

A (very) rudimentary network diagram follows:

        MDS-1             CLI-1 (router)         CLI-2
tcp1    10.10.1XX.1 <-->  10.10.1XX.5
tcp2                      192.168.1XX.5 <-->     192.168.1XX.6

1. Unmount the lustre file system from clients


2. Remove the Lustre file system using IML from Configuration → File Systems
3. Remove the MGT using IML from Configuration → MGTs
4. Remove the servers from the IML Configuration → Servers screen
5. On MDS1, configure a LNET route specifying the router NID to reach tcp2.

[mds1]# lustre_rmmod
[mds1]# cat /etc/modprobe.d/lustre.conf
options lnet networks="tcp1(eth1)" routes="tcp2 10.10.1XX.5@tcp1"

[mds1]# modprobe -v lnet


insmod /lib/modules/2.6.32-431.17.1.el6_lustre.x86_64/extra/kernel/net/lustre/libcfs.ko
insmod /lib/modules/2.6.32-431.17.1.el6_lustre.x86_64/extra/kernel/net/lustre/lnet.ko
networks="tcp1(eth1)" routes="tcp2 10.10.1XX.5@tcp1"
[mds1]# lctl network configure
LNET configured
[mds1]# lctl list_nids
10.10.1XX.1@tcp1

6. On CLI2, configure the ETH2 network interface with IP Address 192.168.1XX.6, where XX is
the student number. Use the command system-config-network to create the interface
configuration. Use the command "ifup eth2" to bring the interface online.
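If you prefer not to use system-config-network, a hand-written interface file works as well on these EL6-based nodes. A minimal sketch follows (adjust the netmask to your lab network; HWADDR/UUID lines are omitted here):

[cli2]# cat > /etc/sysconfig/network-scripts/ifcfg-eth2 << EOF
DEVICE=eth2
BOOTPROTO=static
IPADDR=192.168.1XX.6
NETMASK=255.255.255.0
ONBOOT=yes
EOF
[cli2]# ifup eth2

The same approach applies to CLI1 later in this task, using IPADDR=192.168.1XX.5.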
7. Configure a LNET route specifying the router NID to reach tcp1.

[cli2]# lustre_rmmod
[cli2]# cat /etc/modprobe.d/lustre.conf
options lnet networks="tcp2(eth2)" routes="tcp1 192.168.1XX.5@tcp2"

[cli2]# modprobe -v lnet


insmod /lib/modules/2.6.32-431.17.1.el6.x86_64/extra/kernel/net/lustre/libcfs.ko
insmod /lib/modules/2.6.32-431.17.1.el6.x86_64/extra/kernel/net/lustre/lnet.ko
networks="tcp2(eth2)" routes="tcp1 192.168.1XX.5@tcp2"

[cli2]# lctl network configure


LNET configured
[cli2]# lctl list_nids
192.168.1XX.6@tcp2


1. On CLI1, configure the ETH2 network interface with IP Address 192.168.1XX.5, where XX is
the student number. Use the command system-config-network to create the interface
configuration. Use the command "ifup eth2" to bring the interface online.
2. Configure LNET routing (forwarding) between tcp1(eth1) and tcp2(eth2).

[cli1]# lustre_rmmod
[cli1]# cat /etc/modprobe.d/lustre.conf
options lnet networks="tcp1(eth1),tcp2(eth2)" forwarding=enabled

[cli1]# modprobe -v lnet


insmod /lib/modules/2.6.32-431.17.1.el6.x86_64/extra/kernel/net/lustre/libcfs.ko
insmod /lib/modules/2.6.32-431.17.1.el6.x86_64/extra/kernel/net/lustre/lnet.ko
networks="tcp1(eth1),tcp2(eth2)" forwarding=enabled

[cli1]# lctl network configure


LNET configured
[cli1]# lctl list_nids
192.168.1XX.5@tcp2
10.10.1XX.5@tcp1

3. On CLI1, take the eth1 interface down.

[cli1]# ifconfig eth1 down

4. On MDS1, attempt an LNET ping to CLI2's tcp2 NID.

[mds1]# lctl ping 192.168.1XX.6@tcp2


failed to ping 192.168.1XX.6@tcp2: Input/output error

What is going on now!??

5. Why isn't it working? Well, the answer isn't great, but it's why you came to this class - to learn
these little tidbits of information. On MDS1 and CLI2, check the status of the router.

[mds1]# cat /proc/sys/lnet/routes


Routing disabled
net hops state router
tcp2 1 down 10.10.1XX.5@tcp1

[cli2]# cat /proc/sys/lnet/routes


Routing disabled
net hops state router
tcp1 1 down 192.168.1XX.5@tcp2

State down!? Why down? Let's check the state of the router to confirm.

[cli1]# cat /proc/sys/lnet/routes


Routing enabled
net hops state router


Hmm, looks like routing is enabled on the router. What else could it be?

Well, if you recall from the steps above, we first had you configure the clients and THEN
configure the router. So when the clients loaded the LNET module, they checked the
router status and found that it was not up - thus, they still report the router as down.
Route state is not updated as dynamically as would be ideal, so now you know to bring up
the routers first.

From here you don't need to unload / reload modules, but rather just reconfigure LNET.

[mds1]# lctl network unconfigure


LNET ready to unload
[mds1]# lctl network configure
LNET configured
[mds1]# cat /proc/sys/lnet/routes
Routing disabled
net hops state router
tcp2 1 up 10.10.1XX.5@tcp1

[cli2]# lctl network unconfigure


LNET ready to unload
[cli2]# lctl network configure
LNET configured
[cli2]# cat /proc/sys/lnet/routes
Routing disabled
net hops state router
tcp1 1 up 192.168.1XX.5@tcp2

Notice that the state of the router is now up, but that MDS1 and CLI2 are NOT routers.

6. Try the LNET ping again, from each end.

[mds1]# lctl ping 192.168.1XX.6@tcp2


12345-0@lo
12345-192.168.1XX.6@tcp2

[cli2]# lctl ping 10.10.1XX.1@tcp1


12345-0@lo
12345-10.10.1XX.1@tcp1

7. On MDS1, CLI1 and CLI2, use the following set of commands to remove the LNET modules
(shown only for MDS1):

[mds1]# lctl network unconfigure


[mds1]# lustre_rmmod 2>/dev/null
[mds1]# lsmod|grep -E 'lustre|lnet'

8. Reset the LNET configuration file on all nodes using the following commands (the note after the loop explains when the change takes effect):


[st16@whamcloud]$ echo 'options lnet networks="tcp1(eth1)"' > lus.cf


[st16@whamcloud]$ for a in {1..2}; do
> scp lus.cf root@stXX-mds${a}:/etc/modprobe.d/lustre.conf
> scp lus.cf root@stXX-oss${a}:/etc/modprobe.d/lustre.conf
> scp lus.cf root@stXX-cli${a}:/etc/modprobe.d/lustre.conf
> done
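
Copying lustre.conf only takes effect the next time the LNET modules are loaded. To apply it immediately on a node, a minimal sketch is (repeat on each node as needed):

[mds1]# lustre_rmmod
[mds1]# modprobe -v lnet
[mds1]# lctl network configure
[mds1]# lctl list_nids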

9. Bring eth1 on CLI1 (taken down earlier in this task) back up, and restart the downed interfaces on MDS1 and CLI2:

[cli1]# ifup eth1

[mds1]# ifup eth2


[cli2]# ifup eth1


Lustre Installation and Administration

Lab 9

Lustre Troubleshooting

Student Notes:


Lustre Troubleshooting

Task A: Remove the previous lustre filesystem using IML.

Step 1: Navigate to Configuration → File Systems


Step 2: Select Remove from the “Actions” button
Step 3: Wait until the removal finishes. Navigate to Configuration → MGTs
Step 4: Select Remove from the “Actions” button

Task B: Creating the Lustre filesystem using IML

Step 1: Create the file system as previously

Step 2: Mount the lustre file system on the clients

Step 3: Run the test.sh script created in a previous Lab
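
If the test.sh script from the earlier lab is no longer available, any small I/O workload will exercise the file system for the following tasks; for example (file names are arbitrary):

[cli1]# for i in {1..20}; do dd if=/dev/zero of=/mnt/Lustre01/load_$i bs=1M count=50; done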

Task C: Use jobstats with IML

1. On the MGS server, activate the jobstats information:


[root@st16-mds1 ~]# lctl conf_param Lustre01.sys.jobid_var=procname_uid

2. In the Dashboard, you can click on each block of the Heatmap chart to discover the
applications and users that are doing I/O on the selected OST.
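
You can also inspect the raw jobstats counters directly on the servers; a hedged sketch (these parameter paths are the ones used by Lustre 2.x and may vary slightly between versions):

[root@st16-oss1 ~]# lctl get_param obdfilter.*.job_stats
[root@st16-mds1 ~]# lctl get_param mdt.*.job_stats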


Task D: Identify source of I/O

1. Enlarge the request history buffer using this command on the MDS:

[root@st16-mds1 lustre]# lctl set_param mds.*.mdt.req_buffer_history_max=1000


mds.MDS.mdt.req_buffer_history_max=1000

2. From Client 2, create some directories on the Lustre file system:

[root@st16-cli2 ~]# cd /mnt/Lustre01


[root@st16-cli2 ~]# for i in {1..100}; do mkdir $i; done

3. Extract the request history from the buffer on the MDS:


[root@st16-mds1 lustre]# lctl get_param mds.*.mdt.req_history |tail -5
5948343752536555520:10.10.101.1@tcp1:12345-
10.10.101.6@tcp1:x1452231813497780:488:Complete:1384956704:0s(-6s) opc 36
5948343752545468416:10.10.101.1@tcp1:12345-
10.10.101.6@tcp1:x1452231813497784:488:Complete:1384956704:0s(-6s) opc 36
5948343752554250240:10.10.101.1@tcp1:12345-
10.10.101.6@tcp1:x1452231813497788:488:Complete:1384956704:0s(-6s) opc 36
5948343752563556352:10.10.101.1@tcp1:12345-
10.10.101.6@tcp1:x1452231813497792:488:Complete:1384956704:0s(-6s) opc 36

4. The log output above shows that the metadata server received several requests with
operation code 36 from NID 10.10.101.6@tcp1. Use this command to identify the meaning of
the operation code:
[root@st16-mds1 lustre]# cat /usr/include/lustre/lustre_idl.h|grep 36
MDS_REINT = 36,

5. Try another activity from Client 1:


[root@st16-cli1 ~]# ls -ltr /mnt/Lustre01/

6. Extract the request history again:


[root@st16-mds1 lustre]# lctl get_param mds.*.mdt.req_history |tail -5
5948344271949201408:10.10.101.1@tcp1:12345-
10.10.101.5@tcp1:x1452231776798188:480:Complete:1384956825:0s(-6s) opc 49
5948344271952936960:10.10.101.1@tcp1:12345-
10.10.101.5@tcp1:x1452231776798192:480:Complete:1384956825:0s(-6s) opc 49


5948344271957065728:10.10.101.1@tcp1:12345-
10.10.101.5@tcp1:x1452231776798196:480:Complete:1384956825:0s(-6s) opc 49
5948344271960604672:10.10.101.1@tcp1:12345-
10.10.101.5@tcp1:x1452231776798200:480:Complete:1384956825:0s(-6s) opc 49
5948344271964405760:10.10.101.1@tcp1:12345-
10.10.101.5@tcp1:x1452231776798204:480:Complete:1384956825:0s(-6s) opc 49

7. The operation code and the NID have changed. Depending on the version of Lustre used and
the commands issued on the Lustre file system, the opcode may differ from the ones
shown in the above example. Again, search for the opcode to find its meaning:
[root@st16-mds1 lustre]# cat /usr/include/lustre/lustre_idl.h|grep 49
MDS_GETXATTR = 49,

Task E: Using the debug daemon

1. View the set of debug options currently captured by the debug daemon

[root@st16-cli2 ~]# lctl get_param debug


debug=ioctl neterror warning error emerg ha config console

2. Tell Lustre to capture all debug information:

[root@st16-cli2 ~]# lctl set_param debug=-1


debug=-1

[root@st16-cli2 ~]# lctl get_param debug


debug=
trace inode super ext2 malloc cache info ioctl neterror net warning buffs other dentry
nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota
sec lfsck hsm

3. Start the debug daemon using an output file in /tmp

[root@st01-cli2 ~]# lctl debug_daemon start /tmp/debug_daemon.trace 40


4. Run some commands on the Lustre file system (see the example below), then stop the daemon:
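
For example, any file-system activity will generate trace data (file names are arbitrary):

[root@st16-cli2 ~]# ls -lR /mnt/Lustre01 > /dev/null
[root@st16-cli2 ~]# dd if=/dev/zero of=/mnt/Lustre01/trace_test bs=1M count=10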

[root@st16-cli2 ~]# lctl debug_daemon stop

5. Convert the binary log to text format:

[root@st16-cli2 ~]# lctl df /tmp/debug_daemon.trace > /tmp/debug_file.log

6. Read some contents

[root@st16-cli2 ~]# tail -5 /tmp/debug_file.log


00000100:00000001:0.0:1384957905.482339:0:26845:0:(ptlrpcd.c:293:ptlrpcd_check())
Process entered
00000100:00000001:0.0:1384957905.482340:0:26845:0:(ptlrpcd.c:395:ptlrpcd_check())
Process leaving (rc=0 : 0 : 0)
00000100:00000001:0.0:1384957905.482342:0:26845:0:(ptlrpcd.c:293:ptlrpcd_check())
Process entered
00000100:00000001:0.0:1384957905.482343:0:26845:0:(ptlrpcd.c:395:ptlrpcd_check())
Process leaving (rc=0 : 0 : 0)
Debug log: 158050 lines, 158050 kept, 0 dropped, 0 bad.

7. Disable debugging

[root@st16-cli2 ~]# lctl set_param debug=0


debug=0
[root@st16-cli2 ~]# lctl get_param debug
debug=0


Lustre Installation and Administration

Lab 10

Lustre Tuning

Student Notes:


Lustre Tuning
Task A: Setting the max cache size on clients

1. By default, Lustre can use up to 75% of the available memory on clients for its cache.


2. Verify the total amount of memory on Client 2
[root@st16-cli2 ~]# free
             total       used       free     shared    buffers     cached
Mem:       2859140     412192    2446948          0      37032     221832
-/+ buffers/cache:      153328    2705812
Swap:      1048568          0    1048568

3. Verify the amount of memory available to the Lustre client cache:


[root@st16-cli2 ~]# lctl get_param llite.*.max_cached_mb
llite.Lustre01-ffff8800b5544c00.max_cached_mb=
users: 4
max_cached_mb: 1396
used_mb: 0
unused_mb: 1396
reclaim_count: 0

4. Create a 5 MB file and verify that the Lustre client holds this file in its cache:

[root@st16-cli2 ~]# dd if=/dev/zero of=/mnt/Lustre01/pippo bs=1M count=5


5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.146831 s, 35.7 MB/s

[root@st16-cli2 ~]# ls -l /mnt/Lustre01/pippo


-rw-r--r-- 1 root root 5242880 Nov 20 09:58 /mnt/Lustre01/pippo

[root@st16-cli2 ~]# lctl get_param llite.*.max_cached_mb


llite.Lustre01-ffff8800b5544c00.max_cached_mb=
users: 4
max_cached_mb: 1396
used_mb: 5
unused_mb: 1391


reclaim_count: 0

5. Lower the cache limit to 1 GB:


[root@st16-cli2 ~]# lctl set_param llite.*.max_cached_mb=1024

llite.Lustre01-ffff8800b5544c00.max_cached_mb=1024

[root@st16-cli2 ~]# lctl get_param llite.*.max_cached_mb


llite.Lustre01-ffff8800b5544c00.max_cached_mb=
users: 4
max_cached_mb: 1024
used_mb: 5
unused_mb: 1019
reclaim_count: 0

6. If you unmount and re-mount the client, Lustre resets the parameter to its default:
[root@st16-cli2 ~]# umount /mnt/Lustre01/

[root@st16-cli2 ~]# mount -t lustre 10.10.1XX.2@tcp1:10.10.1XX.1@tcp1:/Lustre01 /mnt/Lustre01

[root@st16-cli2 ~]# lctl get_param llite.*.max_cached_mb


llite.Lustre01-ffff8800379b2800.max_cached_mb=
users: 4
max_cached_mb: 1396
used_mb: 0
unused_mb: 1396
reclaim_count: 0

7. Use IML to set the value centrally for all the clients.
8. Navigate to Configuration → File Systems and select the Lustre01 file system.
9. Use the Update Advanced Settings button to set the max_cached_mb value to 1024.
10. From both of the clients, verify the value, e.g.:
[root@st16-cli1 ~]# lctl get_param llite.*.max_cached_mb
llite.Lustre01-ffff8800379ca800.max_cached_mb=
users: 4
max_cached_mb: 1024
used_mb: 0
unused_mb: 1024


reclaim_count: 0

[root@st16-cli2 ~]# lctl get_param llite.*.max_cached_mb


llite.Lustre01-ffff8800379b2800.max_cached_mb=
users: 4
max_cached_mb: 1024
used_mb: 0
unused_mb: 1024
reclaim_count: 0


Disclaimer and legal information


INTEL CONFIDENTIAL
Portions of the source code contained or described herein and all documents related to the source code ("Material") are owned by
Intel Corporation or its suppliers or licensors. Title to the Material remains with Intel Corporation or its suppliers and licensors.
The Material contains trade secrets and proprietary and confidential information of Intel or its suppliers and licensors. The Material
is protected by worldwide copyright and trade secret laws and treaty provisions. No part of the Material may be used, copied,
reproduced, modified, published, uploaded, posted, transmitted, distributed, or disclosed in any way without Intel’s prior express
written permission.

No license under any patent, copyright, trade secret or other intellectual property right is granted to or conferred upon you by
disclosure or delivery of the Materials, either expressly, by implication, inducement, estoppel or otherwise. Any license under such
intellectual property rights must be express and approved by Intel in writing.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,


EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED
BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH
PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED
WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES
RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY
PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in
personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL
APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND
AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS
COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR
INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT
OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS
NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the
absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition
and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information
here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to
deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Before using any third party software referenced herein, please refer to the third party software provider’s website for more
information, including without limitation, information regarding the mitigation of potential security vulnerabilities in the third
party software.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by
calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

* Other names and brands may be claimed as the property of others.

Copyright (C) 2016 Intel Corporation. All rights reserved.

