EVPN and Testing Through Benchmarking Tools
These configurations and commands will help you monitor the performance, troubleshoot issues, and
ensure optimal functionality during tests involving Pure Storage FlashArray.
Part 1: Detailed VMware Configuration (Including NFS Setup)
# 1. VM Setup (Operating System and Virtual Hardware)
Each VM will act as a client that accesses storage on the Pure Storage array, with
OS choices tailored for storage testing, such as **Linux** or **Windows Server**.
Ensure each VM has sufficient resources:
- **vCPU**: 2–4 vCPUs per VM.
- **RAM**: 4–8 GB per VM.
- **Disk**: Connect VM disks via FlashArray datastores (explained below).
# 2. Network Configuration for ESXi Hosts (For NVMe, iSCSI, and NFS Traffic)
To ensure Layer 3 network connectivity for storage traffic from ESXi VMs to the Pure Storage
FlashArray, follow these detailed manual steps. This configuration sets up VMkernel adapters for
NVMe, iSCSI, and NFS traffic, including VLAN assignments and Jumbo Frame settings.
# Step 1: VMkernel Adapter Setup for Storage Traffic
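If you prefer to script this step instead of using the vSphere Client, the equivalent configuration can be applied from the ESXi shell. The port group, vSwitch, VLAN ID, and IP values below are placeholders for your environment's settings; this is a minimal sketch rather than the exact configuration used in this document.
```bash
# Create a dedicated port group for storage traffic and tag its VLAN (names/IDs are placeholders)
esxcli network vswitch standard portgroup add --portgroup-name=Storage-PG --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Storage-PG --vlan-id=100

# Add a VMkernel adapter on that port group and assign a static IPv4 address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage-PG
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 \
    --netmask=255.255.255.0 --type=static

# Enable Jumbo Frames (MTU 9000) on both the vSwitch and the VMkernel adapter
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```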
Step-by-Step Storage Adapter Configuration for NVMe over RoCEv2, iSCSI, and NFS
# NVMe over RoCEv2 Configuration
1. **Log in to the vSphere Client**:
- Open the **vSphere Client** on your web browser and log in with administrative credentials.
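The remaining vSphere Client steps bind a VMkernel adapter to the RDMA-capable NIC and add a software NVMe over RDMA adapter. Once that is done, the result can be sanity-checked from the ESXi shell; the commands below are a verification sketch and assume ESXi 7.0 or later.
```bash
# List RDMA-capable devices recognized by the host
esxcli rdma device list

# List NVMe adapters (the software NVMe over RDMA adapter should appear here)
esxcli nvme adapter list

# After connecting to the FlashArray, list discovered controllers and namespaces
esxcli nvme controller list
esxcli nvme namespace list
```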
# iSCSI Configuration
- Set the **Port** to `3260`, which is the standard iSCSI port unless specified otherwise by your
FlashArray.
- Click **OK** to save.
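The same iSCSI target configuration can also be performed from the ESXi shell; the adapter name and target IP below are placeholders, so treat this as a sketch of the equivalent CLI workflow.
```bash
# Enable the software iSCSI adapter (skip if already enabled)
esxcli iscsi software set --enabled=true

# Identify the software iSCSI adapter name (e.g., vmhba64 or vmhba65)
esxcli iscsi adapter list

# Add the FlashArray iSCSI portal as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.100.20:3260

# Rescan the adapter so the FlashArray volumes are detected
esxcli storage core adapter rescan --adapter=vmhba65
```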
# Step 3: Datastore Configuration Based on Storage Type (NVMe, iSCSI, and NFS)
3. **Save VM Configuration**:
- After selecting the appropriate datastore, click **OK** to save the VM configuration.
- Repeat these steps for each VM that will use the FlashArray datastore.
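For the NFS case, the datastore can also be mounted directly from the ESXi shell; the server IP, export path, and datastore name below are placeholders, and this sketch assumes NFS v3 (use the `esxcli storage nfs41` namespace for NFS 4.1).
```bash
# Mount an NFS export from the FlashArray as a datastore (placeholder values)
esxcli storage nfs add --host=192.168.100.30 --share=/flasharray_nfs --volume-name=FlashArray-NFS

# Confirm the datastore is mounted and accessible
esxcli storage nfs list
```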
# Step 5: Additional Verification Commands
To ensure the storage devices are correctly connected and mounted, use the following commands on the ESXi host's command line.
## Verify NFS Mounts
```
esxcli storage nfs list
```
- This command displays all mounted NFS datastores, including information on their mount status, server IP, and path.
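For the block protocols (iSCSI and NVMe over RoCEv2), a few more general-purpose esxcli checks can confirm that FlashArray devices are visible; these are not specific to this document's setup.
```bash
# List storage adapters (the software iSCSI and NVMe over RDMA adapters should be present)
esxcli storage core adapter list

# List attached devices; FlashArray LUNs/namespaces typically show vendor "PURE"
esxcli storage core device list

# Show active iSCSI sessions to the FlashArray targets
esxcli iscsi session list
```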
Part 3: Testing FlashArray Performance and Resiliency (RoCEv2, NFS, and iSCSI)
This section outlines the specific tests to be conducted on VMs connected to the FlashArray via
**RoCEv2, NFS, and iSCSI**. These tests validate the FlashArray’s ability to handle high-throughput
storage traffic, assess database performance, and ensure resilience during live migrations (VMotion) and
simulated backup operations.
# 1. Benchmark Tests: Fio Storage Test
The **Fio** (Flexible I/O Tester) tool will be used to benchmark storage performance by generating high-throughput traffic. Running Fio on multiple VMs helps simulate a heavy load, pushing the storage and network to their limits.
## Steps to Run Fio Storage Test on VMs:
1. **Prepare the VMs**:
- Select at least **2 VMs** that are configured on the high-throughput host and connected to the
FlashArray storage over **RoCEv2**, **NFS**, or **iSCSI**.
- These VMs should have their operating system disks on local storage and additional disks on the
FlashArray.
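The exact Fio invocation is not reproduced on this page; a representative command built from the parameters explained below would look like the following, where the job name, target file path, I/O engine, and direct-I/O flag are assumptions added to make the example runnable.
```bash
fio --name=flasharray-randrw --filename=/mnt/flasharray/fio-testfile \
    --ioengine=libaio --direct=1 \
    --rw=randrw --bs=4k --size=1G --numjobs=4 \
    --time_based --runtime=60 --group_reporting
```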
- **Parameters Explained**:
- `--rw=randrw`: Performs random read/write operations.
- `--bs=4k`: Sets the block size to 4 KB.
- `--size=1G`: Tests a 1 GB file.
- `--numjobs=4`: Runs 4 concurrent jobs to increase load.
- `--time_based --runtime=60`: Runs the test for 60 seconds.
- `--group_reporting`: Provides a summarized output.
Fio’s output typically includes detailed information for each job, broken down by **Read** and **Write** operations (if applicable). Here’s how to interpret each of the main metrics:
# a) **IOPS**
- **Definition**: IOPS (Input/Output Operations Per Second) measures how many individual read or write operations the storage completes each second.
- **Where to Find It**: Look for the **IOPS** value next to either the “READ” or “WRITE” section in the output. It may appear as:
```
read: IOPS=xxxx
write: IOPS=xxxx
```
- **Example**:
```
read: IOPS=2500, BW=10.0MiB/s (10.5MB/s), Lat (ms, 95%): 0.80, 1.30, 2.90
```
- **Analysis**: Higher IOPS values indicate that the storage can handle more operations per second.
This is crucial for applications requiring fast access to data, such as databases or high-transaction
systems.
# b) **Latency**
- **Definition**: Latency measures the time taken for each individual I/O operation to complete. It is
often reported in milliseconds (ms).
- **Where to Find It**: Latency metrics are typically broken down into several
categories within Fio output:
- **Average Latency (avg)**: The mean time taken for I/O operations.
- **Minimum Latency (min)**: The shortest time taken for any single I/O operation.
- **Maximum Latency (max)**: The longest time taken for any single I/O operation.
- **Percentiles (e.g., 95th Percentile)**: Indicates the latency below which 95% of the operations
completed. Percentiles provide insight into typical latency rather than outliers.
- **Example**:
```
lat (usec): min=80, avg=200, max=5000, stdev=10.50
clat percentiles (usec): 95th=250, 99th=300
```
- **Analysis**: Lower latency values indicate faster response times, which is ideal for applications
requiring quick data access. Consistently high latency, especially in percentiles (like 95th or 99th), may
indicate performance issues under load.
# c) **Throughput**
- **Definition**: Throughput measures the amount of data transferred per second, often reported in
MB/s or MiB/s. It is a measure of the storage’s data-handling capacity.
- **Example**:
```
write: IOPS=1500, BW=5.0MiB/s (5.2MB/s), Lat (ms, 99%): 1.00, 1.50
```
- **Analysis**: Higher throughput indicates the storage system’s ability to handle large data volumes
efficiently. This metric is particularly relevant for applications involving data streaming, such as backup
or multimedia.
3. **Advanced Metrics**:
- Fio may also report additional statistics, such as **CPU utilization** and **I/O depth** (queue
depth), which indicate how efficiently the storage system is using CPU resources and how many
operations it can queue up.
- Example:
```
cpu: usr=0.80%, sys=9.30%, ctx=8000, majf=0, minf=20
iodepth: max=16
```
- High CPU utilization or I/O depth values could indicate bottlenecks, especially if latency is also high.
- This gives an at-a-glance view of the workload’s performance, helping you compare against
expected thresholds or requirements.
By thoroughly reviewing these metrics, you can assess the FlashArray’s performance in handling
specific I/O workloads, identify bottlenecks, and ensure data integrity during the test.
2. **Install HammerDB**:
- Download and install **HammerDB**, a benchmarking tool for databases, from [HammerDB’s official site](https://www.hammerdb.com/).
- Follow the HammerDB installation instructions specific to your OS (Linux or Windows).
---
Detailed Steps for Database Performance Test Using HammerDB for MySQL Workload
---
- Install MySQL:
```bash
sudo apt install mysql-server -y
```
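Before building the HammerDB schema, it is worth confirming that MySQL is running and that a login is available for HammerDB to use. The service name, schema, user, and password below are placeholders; HammerDB's schema build can also create the database itself, so the last two commands are optional.
```bash
# Confirm the MySQL service is active and check the installed version
sudo systemctl status mysql --no-pager
mysql --version

# Optionally pre-create a schema and user for the benchmark (placeholder credentials)
sudo mysql -e "CREATE DATABASE tpcc;"
sudo mysql -e "CREATE USER 'hammerdb'@'%' IDENTIFIED BY 'ChangeMe123!';
               GRANT ALL PRIVILEGES ON tpcc.* TO 'hammerdb'@'%';
               FLUSH PRIVILEGES;"
```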
---
1. **Download HammerDB**:
- Navigate to the [HammerDB official website](https://www.hammerdb.com/) and download the appropriate version for your VM's operating system (Windows/Linux).
2. **Install HammerDB**:
- For **Linux**:
- Extract the downloaded file:
```bash
tar -xvzf hammerdb-x.y.tar.gz
```
- For **Windows**:
- Run the installer and follow the on-screen prompts to complete the installation.
3. **Verify Installation**:
- Launch HammerDB to ensure the installation was successful.
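On Linux, a quick way to verify the installation is to start the HammerDB command-line interface from the extracted directory; the directory name below is a placeholder for the version you downloaded.
```bash
cd HammerDB-x.y
./hammerdbcli
# At the hammerdb> prompt, "librarycheck" reports whether the MySQL client library can be loaded
```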
---
1. **Open HammerDB**:
- Start HammerDB from the command line (Linux) or desktop shortcut (Windows).
---
1. **Metrics in HammerDB**:
- Monitor the following key metrics:
- **Transactions Per Second (TPS)**: Indicates the throughput of the database.
- **Latency**: Tracks the time for each transaction.
3. **Error Detection**:
- Verify that no connectivity or storage errors occur during the test. If errors are observed,
troubleshoot network or storage configurations.
---
This detailed setup ensures a comprehensive and repeatable database performance test, validating
both MySQL’s performance and the FlashArray's storage capabilities under realistic transaction loads.
# 3. Additional Tests
Here is a step-by-step explanation and detailed guide for conducting **VMotion Tests** to validate the
resilience and performance of VMs using **iSCSI, NFS, and RoCEv2** storage backends.
---
2. **Initiate VMotion**:
- Right-click on the VM and select:
```
Migrate > Change Compute Resource Only
```
- Choose the destination ESXi host within the same cluster.
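A simple way to confirm the VM stays reachable while the migration runs is a continuous ping from another machine; the IP address below is a placeholder for the VM under test, and a healthy VMotion typically shows at most a brief blip rather than sustained packet loss.
```bash
# Run from a client outside the cluster for the duration of the migration
ping -i 0.2 192.168.100.50
```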
---
2. **Validate VM Health**:
- Check the migrated VMs in the **vSphere Client** for:
- **Guest OS activity** (e.g., application responsiveness).
- **Resource utilization** (CPU, memory, storage).
4. **Confirm VM Compatibility**:
- Verify that all migrated VMs are running seamlessly on the destination host and accessing the
FlashArray storage without issues.
---
- **Rollback Plan**:
- In case of migration failure, have a rollback strategy in place to move the VM back to its original host.
---
By following these steps, you can comprehensively test the resilience and performance of the FlashArray during live VM migrations (VMotion) using iSCSI, NFS, and RoCEv2 backends.
---
Here’s a detailed, step-by-step guide for simulating a backup scenario using **rsync** and **dd**
commands, transferring data between two VMs connected to the FlashArray over **RoCEv2, NFS, or
iSCSI**.
---
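The generation command itself is reconstructed here from the parameters explained below; run it on the source VM, in a directory that resides on the FlashArray-backed disk.
```bash
# Create a 1 GB file of random data on the source VM
dd if=/dev/urandom of=sample.txt bs=64M count=16 iflag=fullblock
```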
- **Explanation**:
- `if=/dev/urandom`: Reads random data from the `/dev/urandom` device.
- `of=sample.txt`: Specifies the output file name (`sample.txt`).
- `bs=64M`: Sets the block size to 64 MB.
- `count=16`: Writes 16 blocks, resulting in a 1 GB file (`64 MB x 16`).
- `iflag=fullblock`: Ensures complete blocks are read and written.
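To confirm the file was created at the expected size, list it on the source VM (the path is assumed to be the working directory used above):
```bash
ls -lh sample.txt
```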
- Example output:
```
-rw-r--r-- 1 user user 1.0G Nov 18 12:00 sample.txt
```
---
1. **Install rsync**:
- On both the source and destination VMs, run:
```bash
sudo apt-get update
sudo apt-get install rsync -y
```
- This ensures the tool is available for transferring files.
- Confirm the installed version with `rsync --version`. Example output:
```
rsync 3.2.3 protocol version 31
```
---
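The transfer command itself is not reproduced on this page; based on the `--checksum` variant shown later in this guide, the basic invocation would resemble the following, with the destination IP and path as placeholders.
```bash
# Push the test file from the source VM to the destination VM over SSH
rsync -avz sample.txt user@<destination_VM_IP>:/path/to/destination
```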
4. **Example Output**:
```
sending incremental file list
sample.txt
1,073,741,824 100% 95.23MB/s 0:00:10 (xfr#1, to-chk=0/1)
```
---
---
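On the source VM, generate an MD5 checksum of the original file (the path is assumed to match the file created earlier):
```bash
md5sum sample.txt
```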
Example output:
```
9b74c9897bac770ffc029102a200c5de sample.txt
```
- On the destination VM, generate an MD5 checksum for the received file:
```bash
md5sum /path/to/destination/sample.txt
```
- Compare the two checksums. They should match exactly.
3. **Investigate Discrepancies**:
- If the file sizes or checksums do not match, re-run the rsync command with
the `--checksum` option for added verification:
```bash
rsync -avz --checksum sample.txt user@<destination_VM_IP>:/path/to/destination
```
# Additional Tips for Backup Testing
This comprehensive guide includes all the steps required for testing FlashArray performance, resilience, and connectivity across Fio benchmarking, database workload testing, VMotion, and backup simulation, with specific configurations for RoCEv2, NFS, and iSCSI. Each step provides detailed configurations and commands for a thorough validation of the storage setup on VMware ESXi.