
Process Document

These configurations and commands will help you monitor the performance, troubleshoot issues, and
ensure optimal functionality during tests involving Pure Storage FlashArray.
Part 1: Detailed VMware Configuration (Including NFS Setup)
# 1. VM Setup (Operating System and Virtual Hardware)
Each VM will act as a client that accesses storage on the Pure Storage array, with
OS choices tailored for storage testing, such as **Linux** or **Windows Server**.
Ensure each VM has sufficient resources:
- **vCPU**: 2–4 vCPUs per VM.
- **RAM**: 4–8 GB per VM.
- **Disk**: Connect VM disks via FlashArray datastores (explained below).
# 2. Network Configuration for ESXi Hosts (For NVMe, iSCSI, and NFS Traffic)
To ensure Layer 3 network connectivity for storage traffic from the ESXi hosts and their VMs to the Pure Storage
FlashArray, follow these detailed manual steps. This configuration sets up VMkernel adapters for
NVMe, iSCSI, and NFS traffic, including VLAN assignments and Jumbo Frame settings.
# Step 1: VMkernel Adapter Setup for Storage Traffic

1. **Access the ESXi Host’s Networking Configuration**:


- Log in to the **vSphere Client**.
- In the **Hosts and Clusters** view, select the specific **ESXi host** where you’ll configure the
storage network.
- Go to the **Configure** tab for the selected host.

2. **Navigate to VMkernel Adapters**:


- Within the Configure tab, expand the **Networking** section and select **VMkernel adapters**.
- You’ll see a list of existing VMkernel adapters. Here, you’ll add a new adapter dedicated to storage
traffic.

3. **Add a New VMkernel Adapter**:


- Click on **Add Networking**.
- In the wizard that appears, select **VMkernel Network Adapter** and click **Next**.

4. **Choose a vSwitch for the VMkernel Adapter**:


- If you have an existing **vSwitch** dedicated to storage traffic, select it here.
- If not, create a new vSwitch by selecting **New standard switch** and assigning it a physical NIC.
Click **Next** to proceed.

5. **Configure VMkernel Adapter Port Group**:


- Choose an existing port group for storage or create a **New port group**:
- For a new port group, enter a name (e.g., `Storage-VMkernel`).
- Ensure that this port group is dedicated to storage traffic.
- Click **Next**.


6. **Assign IP Address and Subnet**:


- Configure an IP address specifically for storage traffic that is within the storage subnet.
- Example: If your storage subnet is `192.168.10.0/24`, set an IP address like `192.168.10.10`.
- Ensure the subnet mask matches the storage network, e.g., `255.255.255.0`.
- This IP address must be in a range accessible to the FlashArray.

7. **Enable Jumbo Frames**:


- Set the **MTU (Maximum Transmission Unit) to 9000** to support Jumbo Frames.
- To do this, navigate to **Network Settings > MTU** within the VMkernel configuration and enter
`9000`.
- Jumbo Frames help optimize data throughput, especially useful for storage protocols like NVMe
over RoCEv2 and iSCSI.

8. **Review and Complete the VMkernel Adapter Setup**:


- Review the settings and ensure they are correct.
- Click **Finish** to create the VMkernel adapter for storage traffic.
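
If you prefer to confirm or apply these settings from the ESXi shell, the equivalent esxcli commands are sketched below; `vmk1` is a placeholder for the new storage VMkernel adapter.

```shell
# List VMkernel interfaces with their current MTU values
esxcli network ip interface list

# Show the IPv4 address assigned to each VMkernel interface
esxcli network ip interface ipv4 get

# Set the storage VMkernel interface to MTU 9000 (placeholder: vmk1)
esxcli network ip interface set -i vmk1 -m 9000
```
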
# Step 2: Configure a Dedicated Port Group for VM Traffic

1. **Access vSwitch Configuration**:


- In the **vSphere Client**, go to the **Networking** section under the ESXi host’s **Configure**
tab.
- Under **Networking**, select **Virtual switches**.

2. **Select or Create a vSwitch for Storage**:


- If you already have a vSwitch dedicated to storage traffic, select it.
- If not, create a new **standard switch (vSwitch)** specifically for storage traffic by clicking **Add
standard switch**.
3. **Configure Port Group for Storage VLAN**:
- If you need to create a port group, select **Add Port Group**.
- Enter a recognizable **name** for the port group (e.g., `Storage-PG`).

4. **Assign VLAN ID for Storage Traffic**:


- Set the **VLAN ID** that matches the VLAN configured on the physical network switch or router for
storage traffic.
- Example: If your storage VLAN is `100`, enter `100` in the VLAN ID field.
- This VLAN ID ensures that traffic on this port group is isolated specifically for storage.

5. **Attach the Port Group to the VMkernel Adapter**:


- Go back to **VMkernel adapters** under **Networking** and ensure the storage VMkernel adapter
you created is using this newly configured port group.


6. **Verify MTU Settings on the Port Group**:


- Double-check that the MTU for the port group is also set to `9000` to match the VMkernel adapter
configuration.
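
The vSwitch and port group settings above can also be checked or applied from the ESXi shell. A sketch follows, reusing the example names from this guide (`Storage-PG`, VLAN `100`) and assuming `vSwitch1` as the storage vSwitch name.

```shell
# Show standard vSwitches with their MTU and uplinks
esxcli network vswitch standard list

# Show port groups and their VLAN assignments
esxcli network vswitch standard portgroup list

# Set the storage vSwitch MTU to 9000 (assumed name: vSwitch1)
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Tag the storage port group with VLAN 100 (example values from this guide)
esxcli network vswitch standard portgroup set -p Storage-PG -v 100
```
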
# Step 3: Adapter Binding and Routing Configuration

## For iSCSI Adapter Binding


1. **Enable Software iSCSI Adapter** (if not already enabled):
- Under **Configure > Storage Adapters**, click **Add Adapter**.
- Select **Software iSCSI** and enable it if required.

2. **Navigate to iSCSI Adapter Configuration**:


- In the **Configure > Storage Adapters** section, select the **iSCSI adapter** from the list (often
named `vmhba#`).
- Click **Network Port Binding**.

3. **Bind VMkernel Adapters to the iSCSI Adapter**:


- Click **Add Port Binding** to bind the VMkernel adapter you created earlier for iSCSI storage traffic.
- This step ensures that iSCSI traffic is isolated through the designated VMkernel adapter for storage.

4. **Configure iSCSI Target Discovery**:


- In the iSCSI adapter settings, go to **Dynamic Discovery**.
- Click **Add** and enter the IP address of the FlashArray iSCSI target.
- This setting allows VMware to automatically discover iSCSI targets on the FlashArray.

5. **Set Authentication if Required**:


- Go to **Authentication** and set up CHAP (Challenge Handshake Authentication Protocol) if the
FlashArray requires it for iSCSI traffic.
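
For reference, the binding and discovery steps in this subsection can also be performed from the ESXi shell. The sketch below uses placeholder values: `vmhba64` for the software iSCSI adapter, `vmk1` for the storage VMkernel adapter, and an example target IP.

```shell
# Enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true

# Identify the software iSCSI adapter name (e.g., vmhba64)
esxcli iscsi adapter list

# Bind the storage VMkernel adapter to the iSCSI adapter (placeholders: vmhba64, vmk1)
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1

# Add the FlashArray target for dynamic (send targets) discovery (example IP)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.100:3260
```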

## For Network Routing

1. **Ensure Proper Routing on Physical Switch/Router**:


- Confirm that routes between the storage subnet (where your VMkernel adapters are located) and
the FlashArray’s subnet are correctly set up on the physical switch/router.
- This may involve adding static routes if necessary to ensure Layer 3 connectivity across subnets.

2. **Test Network Connectivity**:


- From the ESXi host, use the **vmkping command** to test connectivity to the
FlashArray IP addresses over the storage network (a Jumbo Frame variant of this test is sketched after this list):
- Example:
```shell

vmkping -I vmkX <FlashArray IP>


```
- Replace `vmkX` with the VMkernel adapter for storage traffic and `<FlashArray IP>` with the IP
of the storage array.
3. **Verify Storage Network Configuration**:
- Confirm the connection to the FlashArray by listing storage devices:
```shell
esxcli storage core device list
```

- For iSCSI, verify session status:


```shell
esxcli iscsi session list
```

- For NVMe over RDMA, list NVMe devices:


```shell
esxcli nvme device list
```
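
In addition to the connectivity test in step 2 above, end-to-end Jumbo Frame support can be confirmed with a don't-fragment vmkping; `vmk1` is a placeholder for the storage VMkernel adapter.

```shell
# 8972-byte payload plus 28 bytes of ICMP/IP headers equals 9000; -d sets "do not fragment"
vmkping -I vmk1 -d -s 8972 <FlashArray IP>
```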

Step-by-Step Storage Adapter Configuration for NVMe over RoCEv2, iSCSI, and NFS
# NVMe over RoCEv2 Configuration
1. **Log in to the vSphere Client**:
- Open the **vSphere Client** on your web browser and log in with administrative credentials.

2. **Navigate to Storage Adapters**:


- Select the **ESXi host** in the left-side navigation pane.
- Go to the **Configure** tab.
- Under the **Storage** section, click **Storage Adapters**. This displays a list of all available storage
adapters on the ESXi host.

3. **Add a New NVMe Adapter**:


- Click **Add Adapter** in the top right corner.
- In the **Add New Storage Adapter** dialog, choose **NVMe over RDMA** from the list of available
adapter types.
- Click **OK** to add the NVMe adapter to your ESXi host.

4. **Configure the NVMe Adapter**:


- After adding the NVMe adapter, locate it in the **Storage Adapters** list. It will typically be named
something like `vmhbaX` (where X is a number assigned by ESXi).
- Click on the NVMe adapter to open its settings.


5. **Bind the RDMA Adapter to the VMkernel Adapter**:


- In the NVMe adapter settings, locate the **Network Port Binding** section.
- Click **Add Port Binding**.
- Select the **VMkernel adapter** that you configured for NVMe storage traffic in the previous steps.
This VMkernel adapter should be set to the correct IP address on the storage subnet and configured with
Jumbo Frames (MTU 9000) for optimal performance.
- Click **OK** to save the binding.

6. **Verify NVMe Configuration**:


- Return to the **Storage Adapters** list, and check that the NVMe adapter shows the connection as
**Active**.

- Confirm that the NVMe devices connected to the Pure Storage FlashArray are visible by running:
```shell
esxcli nvme device list
```
- This command lists NVMe devices connected to the ESXi host, confirming that the configuration is
complete.
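
The RDMA and NVMe over Fabrics plumbing can also be inspected from the ESXi shell; the following commands are a sketch (output fields vary by ESXi version).

```shell
# RDMA-capable NICs and the NVMe over Fabrics adapters bound to them
esxcli rdma device list
esxcli nvme adapter list

# Controllers and namespaces discovered on the FlashArray
esxcli nvme controller list
esxcli nvme namespace list
```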

# iSCSI Configuration

1. **Enable Software iSCSI Adapter**:


- In the **vSphere Client**, select the **ESXi host**.
- Navigate to **Configure > Storage Adapters**.
- Click **Add Adapter**.
- In the **Add New Storage Adapter** dialog, select **Software iSCSI** from the list and click **OK**.

2. **Configure the iSCSI Adapter Settings**:


- After the Software iSCSI adapter is added, find it in the **Storage Adapters** list (usually named
`vmhbaX`).
- Click on the iSCSI adapter to view its settings.

3. **Set the iSCSI Initiator Name**:


- In the iSCSI adapter settings, locate the **iSCSI Name** field.
- By default, ESXi generates an iSCSI Initiator Name, which is a unique identifier for the iSCSI
adapter. You can use the default name or enter a custom name if required by your storage network policy.
4. **Configure Dynamic Discovery for the iSCSI Target**:
- In the iSCSI adapter settings, go to the **Dynamic Discovery** tab.
- Click **Add** to add a new iSCSI target server.
- Enter the **iSCSI Server IP Address** of the Pure Storage FlashArray. This is the IP of the storage
array configured to serve iSCSI targets.

- Set the **Port** to `3260`, which is the standard iSCSI port unless specified otherwise by your
FlashArray.
- Click **OK** to save.

5. **Configure Static Discovery (Optional)**:


- If required by your storage environment, you may add iSCSI targets manually under the **Static
Discovery** tab by specifying each target IQN (iSCSI Qualified Name) and IP address.

6. **Set CHAP Authentication (If Required)**:


- If the FlashArray requires CHAP authentication for iSCSI, go to the **Authentication** tab in the
iSCSI adapter settings.
- Enable **CHAP Authentication** and enter the **Username** and **Password** provided by your
storage administrator.
- Click **OK** to save the settings.

7. **Bind VMkernel Adapters to the iSCSI Adapter**:


- In the iSCSI adapter settings, select **Network Port Binding**.
- Click **Add Port Binding** and choose the VMkernel adapter dedicated to iSCSI storage traffic
(configured with the correct IP and MTU for storage traffic).
- This binding ensures that iSCSI traffic is routed through the specified VMkernel adapter, isolating it
from other types of network traffic.
8. **Rescan the iSCSI Adapter to Discover Storage Devices**:
- After configuring dynamic discovery, select the iSCSI adapter and click **Rescan Storage**.
- The ESXi host will scan for storage devices connected to the iSCSI target.

9. **Verify iSCSI Configuration**:


- Confirm that the iSCSI target devices are visible under **Storage Devices**.

- Run the following command to verify iSCSI sessions:


```shell
esxcli iscsi session list
```
- This command displays active iSCSI sessions, verifying that the connection to the FlashArray is
established.
# NFS Configuration

1. **Enable NFS on the FlashArray**:


- In the FlashArray management interface, enable **NFS services**.
- Configure an **NFS export path** on the FlashArray, which will act as the shared folder accessible
to ESXi.
- Set up **permissions** on the NFS export to allow access from the ESXi host's IP address or subnet.
Ensure read/write permissions are set if required.


2. **Add an NFS Datastore in the vSphere Client**:


- In the **vSphere Client**, select the **ESXi host** where you want to add the NFS datastore.
- Navigate to **Storage** under the **Configure** tab.
- Click **Datastores**, then select **New Datastore**.

3. **Select NFS as the Datastore Type**:


- In the **New Datastore** wizard, choose **NFS** as the datastore type.
- You’ll be prompted to select the NFS version. Choose **NFS 3** or **NFS 4.1** based on
compatibility with your Pure Storage FlashArray.

4. **Enter NFS Server Details**:


- In the NFS configuration window, enter the following details:
- **NFS Server**: This is the IP address of the Pure Storage FlashArray where the NFS export is
hosted.
- **Folder**: Enter the full export path of the NFS shared folder on the FlashArray (e.g.,
`/mnt/nfs_share`).
- **Datastore Name**: Assign a unique name for the NFS datastore, which will identify it in vSphere
(e.g., `FlashArray_NFS`).

5. **Complete the NFS Datastore Setup**:


- Click **Next** and review the configuration details.
- Click **Finish** to complete the NFS datastore setup.
- The NFS datastore will now appear under the **Datastores** section and is available for assigning
to VMs.

6. **Verify NFS Configuration**:


- Verify the NFS datastore is accessible by listing the mounted datastores:
```shell
esxcli storage nfs list
```
- This command will confirm that the NFS datastore from the FlashArray is mounted and accessible.
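
The same export can also be mounted from the ESXi shell. The sketch below reuses the example share path and datastore name from this section; the server IP is a placeholder.

```shell
# Mount the NFS 3 export as a datastore (placeholder IP, example path and name)
esxcli storage nfs add --host=192.168.10.20 --share=/mnt/nfs_share --volume-name=FlashArray_NFS

# For NFS 4.1, use the nfs41 namespace instead
esxcli storage nfs41 add --hosts=192.168.10.20 --share=/mnt/nfs_share --volume-name=FlashArray_NFS
```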

Detailed Steps for Datastore Creation for VMs

# Step 1: Navigate to Storage in the vSphere Client


1. **Log in to the vSphere Client**:
- Open your web browser and navigate to the vSphere Client’s IP address.
- Log in with administrative credentials to access the ESXi host or vCenter environment.

2. **Select the ESXi Host**:


- In the **Hosts and Clusters** view, select the specific **ESXi host** where you want to add the
datastore.


3. **Access the Storage Configuration**:


- With the ESXi host selected, go to the **Configure** tab.
- Under the **Storage** section, click on **Datastores**.
- This will display all currently available datastores on the ESXi host.
# Step 2: Create a New Datastore
1. **Initiate New Datastore Setup**:
- Click on **New Datastore** in the upper-right corner to start the datastore creation wizard.
- This wizard will guide you through selecting the type of datastore (VMFS, NVMe, or NFS),
configuring settings, and completing the setup.

# Step 3: Datastore Configuration Based on Storage Type (NVMe, iSCSI, and NFS)

## For NVMe and iSCSI (Using VMFS Datastore)


1. **Select the Datastore Type**:
- In the **Select datastore type** window, choose **VMFS** for iSCSI-based storage or **NVMe
Datastore** for NVMe over RoCEv2 storage.
- Click **Next** to proceed.

2. **Select the FlashArray Storage Device**:


- In the **Select storage device** window, you’ll see a list of storage devices available to the ESXi
host.
- Locate the FlashArray device that you configured for either NVMe or iSCSI and select it.

3. **Configure the VMFS Datastore**:


- For **VMFS Datastores**:
- **Datastore Name**: Assign a unique name for the VMFS datastore (e.g., `FlashArray_VMFS`).
- **VMFS Version**: Choose the VMFS version (typically VMFS 6, which is the latest).
- **Capacity**: Allocate the desired capacity for the datastore based on the available space on the
FlashArray device.
- Click **Next**.

4. **Complete VMFS Datastore Creation**:


- Review the settings, then click **Finish** to create the VMFS datastore.
- The new datastore should appear in the list of datastores under the ESXi host.
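
From the ESXi shell, the new VMFS datastore can be confirmed with the commands below (the datastore name reuses the example above).

```shell
# List mounted filesystems (VMFS and NFS) with capacity and free space
esxcli storage filesystem list

# Show VMFS details for the new datastore (example name: FlashArray_VMFS)
vmkfstools -Ph /vmfs/volumes/FlashArray_VMFS
```
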
## For NFS (Using NFS Datastore)
1. **Select NFS as the Datastore Type**:
- In the **New Datastore** wizard, choose **NFS** as the datastore type.
- Select either **NFS 3** or **NFS 4.1** depending on the compatibility and configuration of your Pure
Storage FlashArray.
- Click **Next**.

2. **Enter NFS Server Details**:


- In the **NFS Datastore Details** window, configure the following:
- **NFS Server**: Enter the IP address of the Pure Storage FlashArray NFS server.
- **Folder**: Input the exact path of the NFS export on the FlashArray (e.g., `/mnt/nfs_share`).
- **Datastore Name**: Provide a unique name for the NFS datastore (e.g., `FlashArray_NFS`).

3. **Configure Advanced NFS Options (Optional)**:


- If your environment requires specific NFS settings, such as **access permissions** or **mounting
options**, configure these under **Advanced options**.

4. **Complete NFS Datastore Setup**:


- Click **Next** to review the NFS configuration.
- Once confirmed, click **Finish** to add the NFS datastore.
- The NFS datastore will now appear in the **Datastores** list, available for assignment to VMs.
# Step 4: Assign the Datastore to Each VM’s Virtual Disk
1. **Navigate to the Virtual Machine Settings**:
- Select the **VM** that needs access to the FlashArray storage.
- Click **Edit Settings** to open the VM configuration.

2. **Add a New Virtual Disk or Change Existing Disk Storage**:


- To add a new virtual disk:
- Click **Add New Device** and select **New Hard Disk**.
- Under **Location**, choose the newly created datastore (VMFS or NFS) on the FlashArray.

- To move an existing disk:


- Select the disk and click on the **Browse** option next to the datastore field.
- Choose the FlashArray datastore and click **OK**.

3. **Save VM Configuration**:
- After selecting the appropriate datastore, click **OK** to save the VM configuration.
- Repeat these steps for each VM that will use the FlashArray datastore.
# Step 5: Additional Verification Commands
To ensure the storage devices are correctly connected and mounted, use the following commands on
the ESXi host’s command line.
## Verify NFS Mounts

1. **Check NFS Datastore Mounts**:


- Run the following command to verify that the NFS datastore from the
FlashArray is correctly mounted:
```shell
esxcli storage nfs list

```
- This command displays all mounted NFS datastores, including information on their mount status,
server IP, and path.

2. **Troubleshoot NFS Mount Issues**:


- If the NFS datastore doesn’t appear or is not mounted, verify:
- The network connectivity between the ESXi host and the FlashArray.
- The permissions on the NFS share.
- Correct IP and path settings in the datastore configuration.
## Verify Adapter and Device Connectivity
1. **List All Connected Storage Devices**:
- Run the following command to view all storage devices connected to the ESXi
host:
```shell
esxcli storage core device list
```
- This command provides detailed information on each device, such as the device name, type (e.g.,
NVMe, iSCSI), and size.

2. **Verify iSCSI Sessions**:


- For iSCSI-specific connectivity, list all active iSCSI sessions by running:
```shell
esxcli iscsi session list
```
- This command shows active iSCSI sessions, including details on targets, initiators, and connection
status.

3. **Check NVMe Devices**:


- To verify NVMe over RoCEv2 devices, run:
```shell
esxcli nvme device list
```
- This command lists NVMe devices detected by the ESXi host, including their current status and
capacity.
The remainder of this document covers **testing FlashArray performance and resiliency** with VMware ESXi. This includes setup for storage
benchmarks, database workload testing, VMotion tests, and backup simulation between VMs to validate
performance, connectivity, and resilience.


Part 3: Testing FlashArray Performance and Resiliency (RoCEv2, NFS, and iSCSI)

This section outlines the specific tests to be conducted on VMs connected to the FlashArray via
**RoCEv2, NFS, and iSCSI**. These tests validate the FlashArray’s ability to handle high-throughput
storage traffic, assess database performance, and ensure resilience during live migrations (VMotion) and
simulated backup operations.
# 1. Benchmark Tests: Fio Storage Test
The **Fio** (Flexible I/O Tester) test will be used to benchmark storage performance by generating
high-throughput traffic. Running Fio on multiple VMs helps simulate a heavy load, pushing the storage
and network to their limits.
## Steps to Run Fio Storage Test on VMs:
1. **Prepare the VMs**:
- Select at least **2 VMs** that are configured on the high-throughput host and connected to the
FlashArray storage over **RoCEv2**, **NFS**, or **iSCSI**.
- These VMs should have their operating system disks on local storage and additional disks on the
FlashArray.

2. **Install Fio on Linux VMs**:


- If the VMs are running **Linux** (recommended for Fio), install Fio by
running:
```bash
sudo apt-get update
sudo apt-get install fio
```
- If Fio is not available via the package manager, download and compile it from source at [Fio GitHub](https://github.com/axboe/fio).

3. **Configure Fio Test**:


- Choose a target file on the attached FlashArray disk for Fio to read/write. Ensure this file is large
enough (e.g., 1–10 GB) to accurately stress test the storage.
- Use the **Basic verification** setup described in the [Fio Documentation](https://fio.readthedocs.io/en/latest/fio_doc.html) to run a read/write workload (a job-file sketch appears at the end of this section).

4. **Run Fio with High-Throughput Parameters**:


- Execute the following Fio command on each VM to perform a random
read/write test:
```bash
fio --name=storage_test --ioengine=libaio --rw=randrw --bs=4k --size=1G --numjobs=4 --time_based --runtime=60 --group_reporting
```


- **Parameters Explained**:
- `--rw=randrw`: Performs random read/write operations.
- `--bs=4k`: Sets the block size to 4 KB.
- `--size=1G`: Tests a 1 GB file.
- `--numjobs=4`: Runs 4 concurrent jobs to increase load.
- `--time_based --runtime=60`: Runs the test for 60 seconds.
- `--group_reporting`: Provides a summarized output.

5. **Monitor and Analyze Results**:


The following guide covers monitoring and interpreting Fio test output to analyze storage performance
metrics such as **IOPS**, **latency**, and **throughput**. These metrics provide insight into the
workload's efficiency, response time, and data-handling capacity; the guide also explains how to identify
potential issues in case of errors during data verification.
Step-by-Step Guide to Monitoring and Reviewing Fio Output

1. **Run the Fio Test**:

- Execute the Fio job file with:


```bash
fio <job-file-name>.fio
```
- As the test runs, Fio will generate real-time output, which includes performance metrics for the
specified workload.

2. **Understanding Key Metrics in Fio Output**:

Fio’s output typically includes detailed information for each job, broken
down by **Read** and **Write** operations (if applicable). Here’s how to
interpret each of the main metrics:

# a) **IOPS (Input/Output Operations Per Second)**


- **Definition**: IOPS represents the number of read and/or write operations completed per second.
It’s a critical measure of how fast the storage can handle input and output requests.

- **Where to Find It**: Look for the **IOPS** value next to either “READ” or
“WRITE” sections in the output. It may appear as:
```
read: IOPS=xxxx
write: IOPS=xxxx
```


- **Example**:
```
read: IOPS=2500, BW=10.0MiB/s (10.5MB/s), Lat (ms, 95%): 0.80, 1.30, 2.90
```
- **Analysis**: Higher IOPS values indicate that the storage can handle more operations per second.
This is crucial for applications requiring fast access to data, such as databases or high-transaction
systems.

# b) **Latency**
- **Definition**: Latency measures the time taken for each individual I/O operation to complete. It is
often reported in milliseconds (ms).
- **Where to Find It**: Latency metrics are typically broken down into several
categories within Fio output:
- **Average Latency (avg)**: The mean time taken for I/O operations.
- **Minimum Latency (min)**: The shortest time taken for any single I/O operation.
- **Maximum Latency (max)**: The longest time taken for any single I/O operation.
- **Percentiles (e.g., 95th Percentile)**: Indicates the latency below which 95% of the operations
completed. Percentiles provide insight into typical latency rather than outliers.
- **Example**:
```
lat (usec): min=80, avg=200, max=5000, stdev=10.50
clat percentiles (usec): 95th=250, 99th=300
```
- **Analysis**: Lower latency values indicate faster response times, which is ideal for applications
requiring quick data access. Consistently high latency, especially in percentiles (like 95th or 99th), may
indicate performance issues under load.

# c) **Throughput**
- **Definition**: Throughput measures the amount of data transferred per second, often reported in
MB/s or MiB/s. It is a measure of the storage’s data-handling capacity.

- **Where to Find It**: Throughput is shown as **BW (Bandwidth)**, often labeled as follows:
```
BW=xxMiB/s
```

- **Example**:
```
write: IOPS=1500, BW=5.0MiB/s (5.2MB/s), Lat (ms, 99%): 1.00, 1.50
```

- **Analysis**: Higher throughput indicates the storage system’s ability to handle large data volumes
efficiently. This metric is particularly relevant for applications involving data streaming, such as backup
or multimedia.

3. **Advanced Metrics**:
- Fio may also report additional statistics, such as **CPU utilization** and **I/O depth** (queue
depth), which indicate how efficiently the storage system is using CPU resources and how many
operations it can queue up.
- Example:
```
cpu: usr=0.80%, sys=9.30%, ctx=8000, majf=0, minf=20
iodepth: max=16
```
- High CPU utilization or I/O depth values could indicate bottlenecks, especially if latency is also high.

4. **Handling Verification Errors in Output**:


- **Data Verification**: If `verify` is enabled in the job file (e.g., `verify=crc32c`), Fio checks data
integrity after writing by comparing checksums.
- **Error Reporting**:
- If Fio detects a mismatch during verification, it will report an error in the output, indicating potential
data corruption or disk issues.
- Example:
```
verify: bad data at offset=0x1000, expected crc32c=0x12345678, received crc32c=0x87654321
```
- This means that the data at a specific offset did not match the expected checksum, suggesting a
potential issue with data integrity.

5. **Reviewing Overall Job Summary**:


- At the end of the Fio test, a summary provides aggregated results for all jobs, helping you assess
overall performance across IOPS, latency, and throughput for the entire test.
- Example summary:
```

Run status group 0 (all jobs):


READ: bw=10.0MiB/s, IOPS=2500, runt=60000msec
WRITE: bw=5.0MiB/s, IOPS=1250, runt=60000msec
lat (usec): min=80, avg=200, max=5000
```


- This gives an at-a-glance view of the workload’s performance, helping you compare against
expected thresholds or requirements.
By thoroughly reviewing these metrics, you can assess the FlashArray’s performance in handling
specific I/O workloads, identify bottlenecks, and ensure data integrity during the test.
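
For repeatable runs, the command-line options from step 4 and the `verify` feature described above can be combined in a job file (the `<job-file-name>.fio` referenced earlier). The sketch below is illustrative: the target directory and file name are placeholders for a FlashArray-backed disk, and the second job is modelled on the Basic verification example in the Fio documentation.

```bash
# Write a job file combining the random read/write load with a CRC32C verify job
cat > storage_test.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4k
size=1G
group_reporting

[randrw_load]
rw=randrw
numjobs=4
time_based
runtime=60
directory=/mnt/flasharray_disk

[write_and_verify]
stonewall
rw=randwrite
iodepth=16
verify=crc32c
filename=/mnt/flasharray_disk/verify.bin
EOF

# Run both jobs; stonewall makes the verify job start after the load job finishes
fio storage_test.fio
```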

# 2. Database Performance Test: HammerDB for MySQL Workload


A database workload test will simulate real-world application behavior by running continuous queries
against a MySQL database hosted on the FlashArray.

## Steps to Run the Database Test on VMs Using HammerDB:


1. **Set Up the Database VM**:
- Select a VM connected to the FlashArray using **RoCEv2, NFS, or iSCSI**.
- Install **MySQL** on this VM as the target database for the test. The MySQL data directory should
be placed on the FlashArray virtual disk for direct storage access.

2. **Install HammerDB**:
- Download and install **HammerDB**, a benchmarking tool for databases, from [HammerDB's official site](https://www.hammerdb.com/).
- Follow the HammerDB installation instructions specific to your OS (Linux or Windows).

3. **Configure HammerDB for MySQL**:


- Launch HammerDB and create a new **MySQL** test schema by selecting **MySQL TPC-C**
from the available benchmark options.
- Configure the connection settings for MySQL, including the hostname, port, and credentials for the
MySQL instance.

4. **Run the Database Workload**:


- Set HammerDB to run **TPC-C** (Transaction Processing Performance Council) benchmarks,
simulating a heavy database transaction workload.
- Execute the benchmark and let it run continuously for an extended period (e.g., 30 minutes to 1
hour) to assess the database’s performance on the FlashArray.

5. **Monitor Database Performance**:


- Monitor the **Transactions Per Second (TPS)** and **latency** metrics in HammerDB’s output.
- Check the FlashArray GUI for performance metrics to ensure there are no storage errors or
connectivity issues under load.

---


Detailed Steps for Database Performance Test Using HammerDB for MySQL Workload

Below is a comprehensive step-by-step guide for setting up and running the database performance test using HammerDB with MySQL on VMware ESXi and FlashArray:

---

# **1. Set Up the Database VM**


1. **Select the VM**:
- Choose a virtual machine (VM) connected to the FlashArray using **RoCEv2**, **iSCSI**, or
**NFS**.
2. **Install MySQL**:
- Update the system:
```bash
sudo apt update && sudo apt upgrade -y
```

- Install MySQL:
```bash
sudo apt install mysql-server -y
```

- Secure the installation:


```bash
sudo mysql_secure_installation
```
- Ensure the MySQL data directory is placed on a virtual disk connected to the FlashArray for
optimal storage performance.
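
A minimal sketch for placing the data directory on the FlashArray-backed disk, assuming Ubuntu and an assumed device name of `/dev/sdb` (confirm the actual device with `lsblk`); the mount point is a placeholder.

```bash
# Format and mount the FlashArray-backed virtual disk (assumed device: /dev/sdb)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /var/lib/mysql-flash
echo '/dev/sdb /var/lib/mysql-flash ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount -a

# Stop MySQL, copy the existing data directory, then point MySQL at the new path
sudo systemctl stop mysql
sudo rsync -a /var/lib/mysql/ /var/lib/mysql-flash/
# Edit /etc/mysql/mysql.conf.d/mysqld.cnf and set:  datadir = /var/lib/mysql-flash
# (On Ubuntu, AppArmor may also need an alias for the new data directory.)
sudo systemctl start mysql
```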

---

# **2. Install HammerDB**

1. **Download HammerDB**:
- Navigate to the [HammerDB official website](https://www.hammerdb.com/) and download the
appropriate version for your VM's operating system (Windows/Linux).


2. **Install HammerDB**:
- For **Linux**:
- Extract the downloaded file:
```bash
tar -xvzf hammerdb-x.y.tar.gz
```

- Change directory and start HammerDB:


```bash
cd hammerdb-x.y
./hammerdb
```

- For **Windows**:
- Run the installer and follow the on-screen prompts to complete the installation.

3. **Verify Installation**:
- Launch HammerDB to ensure the installation was successful.

---

# **3. Configure HammerDB for MySQL**

1. **Open HammerDB**:
- Start HammerDB from the command line (Linux) or desktop shortcut (Windows).

2. **Create a Test Schema**:


- Select **MySQL** as the database type.
- Choose the **TPC-C** workload for transaction processing benchmarking.

3. **Set Connection Parameters**:


- Configure the connection to the MySQL database:
- Hostname: `127.0.0.1` (or the IP of the VM running MySQL)
- Port: `3306`
- Username: `<your_mysql_user>`
- Password: `<your_mysql_password>`

4. **Build the Schema**:


- Navigate to **Options > Build** and set parameters such as the number of warehouses (e.g., `800`
for a larger dataset).
- Click **Build Schema** to create the required database structure.


---

# **4. Run the Database Workload**


1. **Select TPC-C Benchmark**:
- Choose **TPC-C** as the workload type.

2. **Configure Virtual Users (VUs)**:


- Navigate to **Options > Load**.
- Set the number of virtual users (e.g., start with `2` VUs and scale up gradually).

- Define the ramp-up and duration times:


- Ramp-up: `2 minutes`
- Test duration: `30–60 minutes`

3. **Start the Workload**:


- Click **Run** to begin the benchmark.

4. **Monitor Workload Execution**:


- Observe the workload progress in HammerDB’s console.

---

# **5. Monitor Database Performance**

1. **Metrics in HammerDB**:
- Monitor the following key metrics:
- **Transactions Per Second (TPS)**: Indicates the throughput of the database.
- **Latency**: Tracks the time for each transaction.

2. **Analyze FlashArray Performance**:


- Log in to the FlashArray GUI.
- Check metrics like IOPS, latency, and throughput to ensure storage performance aligns with
expectations.

3. **Error Detection**:
- Verify that no connectivity or storage errors occur during the test. If errors are observed,
troubleshoot network or storage configurations.

---


# **6. Automate with CLI (Optional)**


1. **Create a Configuration Script**:
- Use HammerDB’s CLI to script the schema build and workload execution:
```tcl
# schemabuild.tcl
puts "SETTING CONFIGURATION"
dbset db mysql
diset connection mysql_host 127.0.0.1
diset connection mysql_port 3306
diset tpcc mysql_count_ware 800
diset tpcc mysql_num_vu 64
diset tpcc mysql_storage_engine innodb
buildschema
```

2. **Run CLI Commands**:


- Launch HammerDB CLI:
```bash
./hammerdbcli
```

- Source the script:


```bash
hammerdb> source schemabuild.tcl
```
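
The workload run can be scripted the same way as the schema build. The sketch below is an assumption-heavy example: `run_tpcc.tcl` is a hypothetical file name, the dictionary keys (`mysql_driver`, `mysql_rampup`, `mysql_duration`) follow HammerDB 4.x for MySQL, and the exact keys and command sequence should be confirmed with `print dict` and the HammerDB documentation for your version.

```bash
# Hypothetical driver script for a timed TPC-C run
cat > run_tpcc.tcl <<'EOF'
dbset db mysql
diset connection mysql_host 127.0.0.1
diset connection mysql_port 3306
diset tpcc mysql_driver timed
diset tpcc mysql_rampup 2
diset tpcc mysql_duration 30
loadscript
vuset vu 8
vucreate
vurun
runtimer 2400
vudestroy
EOF

# Run the schema build and the workload non-interactively
./hammerdbcli auto schemabuild.tcl
./hammerdbcli auto run_tpcc.tcl
```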

---

This detailed setup ensures a comprehensive and repeatable database performance test, validating
both MySQL’s performance and the FlashArray's storage capabilities under realistic transaction loads.

# 3. Additional Tests
Here is a step-by-step explanation and detailed guide for conducting **VMotion Tests** to validate the
resilience and performance of VMs using **iSCSI, NFS, and RoCEv2** storage backends.


---

VMotion Tests: Step-by-Step Guide

---

# **1. Setup for VMotion Tests**

1. **Select VMs for Testing**:


- **iSCSI Backend**: Identify two VMs configured with disks residing on the FlashArray via iSCSI.
- **NFS Backend**: Identify two VMs using the NFS datastore configured on the FlashArray.
- **RoCEv2 Backend**: Identify two VMs connected to FlashArray storage over RoCEv2 (NVMe over
RDMA).

2. **Verify Host and Storage Configuration**:


- Ensure the selected VMs are located on **standard-throughput hosts** for consistent baseline
performance.
- Confirm all hosts in the cluster can access the same storage backends (iSCSI, NFS, RoCEv2) and
that the VMkernel adapters are correctly configured.

3. **Enable VMotion on Hosts**:


- Check that **VMotion** is enabled on all ESXi hosts in the cluster:
- Go to **vSphere Client > Configure > VMkernel Adapters** for each host.
- Verify that the **VMotion service** is enabled for at least one VMkernel adapter.

4. **Validate Network Connectivity**:


- Ensure that the ESXi hosts have network connectivity to the FlashArray and between themselves.

- Use the `vmkping` command to test network communication:


```bash
vmkping -I vmkX <target_host_IP>
```

5. **Ensure Resource Availability**:


- Check that the destination host has sufficient compute (CPU, memory) and storage resources to
support the VM during migration.


---

# **2. Perform VMotion for iSCSI VMs**

1. **Start the Migration**:


- Log in to the **vSphere Client**.
- Locate one of the VMs with an iSCSI-backed disk in the **Hosts and Clusters** view.

2. **Initiate VMotion**:
- Right-click on the VM and select:
```
Migrate > Change Compute Resource Only
```
- Choose the destination ESXi host within the same cluster.

3. **Monitor the Migration**:


- Watch the **Migration Tasks** in the **Recent Tasks** pane.
- Ensure the process completes successfully without interruption or storage disconnection.

4. **Repeat for the Second VM**:


- Perform the same steps for the second VM on the iSCSI backend.

---

# **3. Perform VMotion for NFS VMs**

1. **Start the Migration**:


- Select one of the VMs with an NFS-backed disk from the **Hosts and Clusters** view.

2. **Initiate VMotion**:
- Right-click on the VM and select:
```
Migrate > Change Compute Resource Only
```
- Choose the destination ESXi host within the same cluster.


3. **Monitor the Migration**:


- Observe the **Migration Tasks** in the **Recent Tasks** pane to ensure a smooth transfer.

4. **Repeat for the Second VM**:


- Perform the same steps for the second VM using the NFS backend.

5. **Check Storage Backend**:


- After migration, log in to the FlashArray GUI to monitor any changes in latency, IOPS, or throughput
for the NFS datastore.

---

# **4. Perform VMotion for RoCEv2 VMs**

1. **Start the Migration**:


- Identify one of the VMs using RoCEv2-backed storage (NVMe over RDMA).

2. **Initiate VMotion**:
- Right-click on the VM in the **vSphere Client** and select:
```
Migrate > Change Compute Resource Only
```
- Select the target ESXi host in the same cluster.

3. **Monitor the Migration**:


- Track the migration process in the **Recent Tasks** pane.
- Ensure the migration completes successfully without storage or network interruptions.

4. **Repeat for the Second VM**:


- Perform the same steps for the second VM on the RoCEv2 backend.


5. **Monitor FlashArray Metrics**:


- During and after migration, log in to the FlashArray GUI and review the
performance metrics:
- **IOPS**
- **Latency**
- **Throughput**
- Look for any latency spikes or connectivity issues that could indicate resilience problems.

---

# **5. Verify Post-Migration Connectivity**

1. **Run Post-Migration Workloads**:


- After completing the migration for all VMs, ensure that the storage connection
remains intact:
- Run a **Fio benchmark** test on each VM to verify IOPS, latency, and throughput.
- Execute a **database workload** test using HammerDB on one of the VMs to confirm transaction
performance.

2. **Validate VM Health**:
- Check the migrated VMs in the **vSphere Client** for:
- **Guest OS activity** (e.g., application responsiveness).
- **Resource utilization** (CPU, memory, storage).

3. **Review FlashArray Logs**:


- Access the FlashArray GUI and review system logs for any warnings or errors during the VMotion
tests.

4. **Confirm VM Compatibility**:
- Verify that all migrated VMs are running seamlessly on the destination host and accessing the
FlashArray storage without issues.

---

Additional Tips for VMotion Testing


- **Simulate Workloads During Migration**:


- To mimic real-world scenarios, simulate active workloads (e.g., file transfers or database queries) on
the VMs during migration to test the storage backend under stress.

- **Test Under Heavy Load**:


- Increase the number of VMs migrated simultaneously to evaluate storage and network scalability.

- **Rollback Plan**:
- In case of migration failure, have a rollback strategy in place to move the VM back to its original host.

---

By following these steps, you can comprehensively test the resilience and performance of the
FlashArray during live VM migrations (VMotion) using iSCSI, NFS, and RoCEv2 backends.

---

Here’s a detailed, step-by-step guide for simulating a backup scenario using **rsync** and **dd**
commands, transferring data between two VMs connected to the FlashArray over **RoCEv2, NFS, or
iSCSI**.

---

**Backup Test: Step-by-Step Guide**

---

# **1. Create Source and Destination VMs**

1. **Set Up the Source VM**:


- Configure a VM connected to the FlashArray via **RoCEv2**, **NFS**, or **iSCSI**.
- Ensure the VM has a virtual disk provisioned from the FlashArray and sufficient storage space for
testing.
- Install a Linux OS (e.g., Ubuntu, CentOS).


2. **Set Up the Destination VM**:


- Configure a second VM connected to the FlashArray via the same protocol.
- Ensure the VM has enough disk space to receive the transferred files.
- Install a Linux OS on the destination VM.

3. **Verify Network Connectivity**:


- Check that the source VM can communicate with the destination VM over the
network:
```bash
ping <destination_VM_IP>
```
- Replace `<destination_VM_IP>` with the IP address of the destination VM.
- If there are connectivity issues, ensure routing and firewall configurations are correct.

---

# **2. Generate Random Data Files on the Source VM**

1. **Generate a 1 GB Random File**:


- Use the `dd` command to create a file filled with random data:
```bash
dd if=/dev/urandom of=sample.txt bs=64M count=16 iflag=fullblock
```

- **Explanation**:
- `if=/dev/urandom`: Reads random data from the `/dev/urandom` device.
- `of=sample.txt`: Specifies the output file name (`sample.txt`).
- `bs=64M`: Sets the block size to 64 MB.
- `count=16`: Writes 16 blocks, resulting in a 1 GB file (`64 MB x 16`).
- `iflag=fullblock`: Ensures complete blocks are read and written.

2. **Verify the File**:


- Check the file size to ensure it matches the expected size:
```bash
ls -lh sample.txt
```


- Example output:
```
-rw-r--r-- 1 user user 1.0G Nov 18 12:00 sample.txt
```

---

# **3. Install rsync on Both VMs**

1. **Install rsync**:
- On both the source and destination VMs, run:
```bash
sudo apt-get update
sudo apt-get install rsync -y
```
- This ensures the tool is available for transferring files.

2. **Verify rsync Installation**:


- Check the installed version of rsync:
```bash
rsync --version
```

- Example output:
```
rsync 3.2.3 protocol version 31
```

---

# **4. Transfer Data Using rsync**

1. **Set Up rsync Command**:


- On the source VM, run the following command to transfer the file:
```bash
rsync -avz sample.txt user@<destination_VM_IP>:/path/to/destination

```

- Replace the placeholders:


- `<destination_VM_IP>`: IP address of the destination VM.
- `/path/to/destination`: Path on the destination VM where the file will be copied.
- `user`: The username on the destination VM.

2. **Explanation of rsync Options**:


- `-a`: Archive mode (preserves file permissions and timestamps).
- `-v`: Verbose output (provides detailed information during the transfer).
- `-z`: Compresses data during transfer to optimize network usage.

3. **Monitor the Transfer**:


- Observe the progress of the file transfer. The output will display:
- Transfer speed.
- Data transferred.
- Elapsed time.

4. **Example Output**:
```bash
sending incremental file list
sample.txt
1,073,741,824 100% 95.23MB/s 0:00:10 (xfr#1, to-chk=0/1)

sent 1,073,800,824 bytes received 35 bytes 89,481,570.00 bytes/sec


total size is 1,073,741,824 speedup is 1.00
```

---

# **5. Monitor Transfer Performance**

1. **Observe rsync Statistics**:


- Monitor the transfer speed from the rsync output.
- Check for any delays or interruptions in the transfer process.


2. **Check Network Utilization**:


- On both VMs, use tools like `iftop` or `nload` to monitor network bandwidth
utilization during the transfer:
```bash
sudo apt-get install iftop -y
sudo iftop -i <network_interface>
```
- Replace `<network_interface>` with the appropriate interface (e.g., `eth0`).

3. **Review FlashArray Performance**:


- Log in to the FlashArray GUI and check metrics such as:
- **IOPS**: Input/Output Operations Per Second.
- **Latency**: Response time for read/write operations.
- **Throughput**: Data transferred per second.

---

# **6. Verify Data Integrity on the Destination VM**

1. **Compare File Sizes**:


- On the destination VM, check the size of the received file:
```bash
ls -lh /path/to/destination/sample.txt
```
- Ensure the size matches the source file (e.g., `1.0G`).

2. **Generate and Compare Checksums**:


- On the source VM, generate an MD5 checksum:
```bash
md5sum sample.txt
```

Example output:
```
9b74c9897bac770ffc029102a200c5de sample.txt
```


- On the destination VM, generate an MD5 checksum for the received file:
```bash
md5sum /path/to/destination/sample.txt
```
- Compare the two checksums. They should match exactly.

3. **Investigate Discrepancies**:
- If the file sizes or checksums do not match, re-run the rsync command with
the `--checksum` option for added verification:
```bash
rsync -avz --checksum sample.txt user@<destination_VM_IP>:/path/to/destination
```
Additional Tips for Backup Testing

- **Simulate Larger Backups**:


- Increase the file size or the number of files transferred to simulate a real-world
backup scenario:
```bash
dd if=/dev/urandom of=largefile.txt bs=1G count=10 iflag=fullblock
```

- **Test Parallel Transfers**:


- Transfer multiple files simultaneously using `rsync` with multiple processes to simulate heavy backup traffic (a sketch follows these tips).

- **Monitor FlashArray for Bottlenecks**:


- Use the FlashArray GUI to monitor for any resource saturation or storage errors.
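
A minimal sketch for the parallel-transfer tip above, using `xargs -P` to run four rsync processes at once; the directory, user, and IP are placeholders.

```bash
# Push every file in /data/backup_files with up to 4 concurrent rsync processes
ls /data/backup_files | xargs -P 4 -I{} \
  rsync -avz /data/backup_files/{} user@<destination_VM_IP>:/path/to/destination/
```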

This comprehensive guide includes all the steps required for testing FlashArray performance, resilience,
and connectivity across Fio benchmark, database workload testing, VMotion, and backup simulation
with specific configurations for RoCEv2, NFS, and iSCSI. Each step provides detailed configurations
and commands for a thorough validation of the storage setup on VMware ESXi.
