SPECstorage(TM) Solution 2020_eda_blended Result
NetApp Inc. : NetApp 8-node AFF A90 with FlexGroup
SPECstorage Solution = 8100 Job_Sets (Overall Response Time = 1.17 msec)
2020_eda_blended
===============================================================================
Performance
===========
   Business      Average
    Metric       Latency      Job_Sets     Job_Sets
  (Job_Sets)     (msec)        Ops/Sec       MB/Sec
------------ ------------ ------------ ------------
         810          0.4       364520         5881
        1620          0.4       729040        11765
        2430          0.4      1093560        17646
        3240          0.5      1458079        23527
        4050          0.6      1822600        29410
        4860          0.8      2187119        35294
        5670          1.1      2551642        41171
        6480          1.5      2916162        47056
        7290          2.6      3280653        52936
        8100          4.9      3644870        58809
===============================================================================
Product and Test Information
============================
+---------------------------------------------------------------+
| NetApp 8-node AFF A90 with FlexGroup |
+---------------------------------------------------------------+
Tested by NetApp Inc.
Hardware Available June 2024
Software Available June 2024
Date Tested June 2024
License Number 33
Licensee Locations Sunnyvale, CA USA
Designed and built for customers seeking a storage solution for the high
demands of enterprise applications, the NetApp high-end flagship all-flash AFF
A90 delivers unrivaled performance, superior resilience, and best-in-class data
management across the hybrid cloud. With an end-to-end NVMe architecture
supporting the latest NVMe SSDs, and both NVMe/FC and NVMe/TCP network
protocols, it provides a more than 27% performance increase over its
predecessor with
ultra-low latency. Powered by ONTAP data management software, it supports
non-disruptive scale-out to a cluster of 24 nodes.
ONTAP is designed for massive scaling in a single namespace to over 20PB with
over 400 billion files while evenly spreading the performance across the
cluster. This makes the AFF A90 a great system for engineering and design
applications as well as DevOps. It is particularly well-suited for chip
development and software builds that are typically high file-count environments
with high data and meta-data traffic.
Solution Under Test Bill of Materials
=====================================
Item
No Qty Type Vendor Model/Name Description
---- ---- ---------- ---------- ---------- -----------------------------------
1    4    Storage    NetApp       AFF A90 Flash  A single NetApp AFF A90 system
          System                  System (HA     is a chassis with 2
                                  Pair, Active-  controllers. A set of 2
                                  Active Dual    controllers comprises a
                                  Controller)    High-Availability (HA) Pair.
                                                 The words "controller" and
                                                 "node" are used interchangeably
                                                 in this document. One internal
                                                 FS4483 disk shelf is
                                                 direct-connected to the AFF A90
                                                 controllers, with 48 SSDs per
                                                 disk shelf. Each AFF A90 HA
                                                 Pair includes 2048GB of ECC
                                                 memory, 128GB of NVRAM, 18 PCIe
                                                 expansion slots and a set of
                                                 included I/O ports. Included
                                                 software: Core Bundle, Data
                                                 Protection Bundle and Security
                                                 and Compliance Bundle, which
                                                 together include All Protocols,
                                                 SnapRestore, SnapMirror,
                                                 FlexClone, Autonomous
                                                 Ransomware Protection,
                                                 SnapCenter and SnapLock. Only
                                                 the NFS protocol license, which
                                                 is available in the Core
                                                 Bundle, is active in the test.
2 16 Network NetApp 2-Port 1 card in slot 1 and 1 card in slot
Interface 100GbE 7 of each controller; 4 cards per
Card X50130A HA pair; used for cluster
connections.
3 16 Network NetApp 2-Port 1 card in slot 6 and 1 card in slot
Interface 100GbE 11 of each controller; 4 cards per
Card X50131A HA pair; used for data connections
to clients as part of bonded LACP
4x100 GbE.
4 4 Internal NetApp FS4483 Disk shelf with capacity to hold up
Disk Shelf (48-SSD to 48 x 2.5" drives.
Disk
Shelf)
5 144 Solid- NetApp 1.92TB NVMe Solid-State Drives (NVMe SSDs)
State NVMe SSD installed in FS4483 disk shelf, 48
Drive X4016A per shelf
6 48 Solid- NetApp 3.8TB NVMe NVMe Solid-State Drives (NVMe SSDs)
State SSD X4011A installed in FS4483 disk shelf, 48
Drive per shelf
7    9    Network    Mellanox     ConnectX-5     2-port 100 GbE NIC, one
          Interface  Technologies MCX516A-CCAT   installed per Lenovo SR650 V2
          Card                                   client. lspci output: Mellanox
                                                 Technologies MT27800 Family
                                                 [ConnectX-5]
8    4    Network    Mellanox     ConnectX-6     2-port 100 GbE NIC, one
          Interface  Technologies CX653106A-ECAT installed per Lenovo SR650 V3
          Card                                   client. lspci output: Mellanox
                                                 Technologies MT28908 Family
                                                 [ConnectX-6]
9 2 Switch Cisco Cisco Used for Ethernet data connections
Nexus between clients and storage
9336C-FX2 systems. Only the ports used for
this test are listed in this
report. See the 'Transport
Configuration - Physical' section
for connectivity details.
10 2 Switch Cisco Cisco Used for Ethernet connections of
Nexus AFF A90 storage cluster network.
9336C-FX2 Only the ports used for this test
are listed in this report. See the
'Transport Configuration -
Physical' section for connectivity
details.
11   9    Client     Lenovo       Lenovo         Lenovo ThinkSystem SR650 V2
                                  ThinkSystem    clients. System Board machine
                                  SR650 V2       type is 7Z73CTO1WW, PCIe Riser
                                                 part number R2SH13N01D7. Each
                                                 client also contains 2 Intel
                                                 Xeon Gold 6330 CPU @ 2.00GHz
                                                 with 28 cores, 8 DDR4 3200MHz
                                                 128GB DIMMs, a 240GB M.2 SATA
                                                 SSD part number SSS7A23276, and
                                                 a 240GB M.2 SATA SSD part
                                                 number SSDSCKJB240G7. 8 of the
                                                 9 clients are used to generate
                                                 the workload; 1 is used as the
                                                 Prime Client.
12   4    Client     Lenovo       Lenovo         Lenovo ThinkSystem SR650 V3
                                  ThinkSystem    clients. System Board machine
                                  SR650 V3       type is 7D76CTO1WW, PCIe Riser
                                                 part number SC57A86662. Each
                                                 client also contains 2 Intel
                                                 Xeon Gold 5420 CPU @ 2.00GHz
                                                 with 28 cores, 32 DDR5 3200MHz
                                                 32GB DIMMs, and a 240GB M.2
                                                 SATA SSD part number ER3-GD240.
                                                 All 4 SR650 V3 clients are also
                                                 used to generate the workload,
                                                 in addition to the 8 SR650 V2
                                                 clients.
Configuration Diagrams
======================
1) storage2020-20240708-00080.config1.jpg (see SPECstorage Solution 2020 results webpage)
Component Software
==================
Item Name and
No Component Type Version Description
---- ------------ ------------ ------------ -----------------------------------
1 Linux Operating RHEL 9.2 Operating System (OS) for the 13
System (Kernel clients
6.5.0-rc2+)
2 ONTAP Storage OS 9.15.1RC2 Storage Operating System
3 Data Switch Operating 9.3(3) Cisco switch NX-OS (system
System software)
Hardware Configuration and Tuning - Physical
============================================
+----------------------------------------------------------------------+
| Storage |
+----------------------------------------------------------------------+
Parameter Name Value Description
--------------- --------------- ----------------------------------------
MTU 9000 Jumbo Frames configured for data ports
Hardware Configuration and Tuning Notes
---------------------------------------
Data network was set up with MTU of 9000.
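As an illustration only, jumbo frames can be enabled on a RHEL client interface
with commands along the following lines; the interface and connection names are
examples, and the exact client-side commands used during the test are not part
of this disclosure:

  # Set MTU 9000 on the 100 GbE data interface (interface name is an example)
  ip link set dev ens2f0 mtu 9000

  # Make the setting persistent via NetworkManager (connection name is an
  # example)
  nmcli connection modify ens2f0 802-3-ethernet.mtu 9000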
Software Configuration and Tuning - Physical
============================================
+----------------------------------------------------------------------+
| Clients |
+----------------------------------------------------------------------+
Parameter Name                      Value       Description
----------------------------------- ----------- --------------------------------
rsize,wsize                         262144      NFS mount options for data block
                                                size
protocol                            tcp         NFS mount options for protocol
nfsvers                             3           NFS mount options for NFS
                                                version
nofile                              102400      Maximum number of open files per
                                                user
nproc                               10240       Maximum number of processes per
                                                user
sunrpc.tcp_slot_table_entries       128         sets the number of (TCP) RPC
                                                entries to pre-allocate for
                                                in-flight RPC requests
net.core.wmem_max                   16777216    Maximum socket send buffer size
net.core.wmem_default               1048576     Default setting in bytes of the
                                                socket send buffer
net.core.rmem_max                   16777216    Maximum socket receive buffer
                                                size
net.core.rmem_default               1048576     Default setting in bytes of the
                                                socket receive buffer
net.ipv4.tcp_rmem                   1048576     Minimum, default and maximum
                                    8388608     size of the TCP receive buffer
                                    33554432
net.ipv4.tcp_wmem                   1048576     Minimum, default and maximum
                                    8388608     size of the TCP send buffer
                                    33554432
net.core.optmem_max                 4194304     Maximum ancillary buffer size
                                                allowed per socket
net.core.somaxconn                  65535       Maximum tcp backlog an
                                                application can request
net.ipv4.tcp_mem                    4096 89600  Maximum memory in 4096-byte
                                    8388608     pages across all TCP
                                                applications. Contains minimum,
                                                pressure and maximum.
net.ipv4.tcp_window_scaling         1           Enables TCP window scaling
net.ipv4.tcp_timestamps             0           Turn off timestamps to reduce
                                                performance spikes related to
                                                timestamp generation
net.ipv4.tcp_no_metrics_save        1           Prevent TCP from caching
                                                connection metrics on closing
                                                connections
net.ipv4.route.flush                1           Flush the routing cache
net.ipv4.tcp_low_latency            1           Allows TCP to make decisions to
                                                prefer lower latency instead of
                                                maximizing network throughput
net.ipv4.ip_local_port_range        1024 65000  Defines the local port range
                                                that is used by TCP and UDP
                                                traffic to choose the local
                                                port.
net.ipv4.tcp_slow_start_after_idle  0           Congestion window will not be
                                                timed out after an idle period
net.core.netdev_max_backlog         300000      Sets maximum number of packets,
                                                queued on the input side, when
                                                the interface receives packets
                                                faster than kernel can process
net.ipv4.tcp_sack                   0           Disable TCP selective
                                                acknowledgements
net.ipv4.tcp_dsack                  0           Disable duplicate SACKs
net.ipv4.tcp_fack                   0           Disable forward acknowledgement
vm.dirty_expire_centisecs           30000       Defines when dirty data is old
                                                enough to be eligible for
                                                writeout by the kernel flusher
                                                threads. Unit is 100ths of a
                                                second.
vm.dirty_writeback_centisecs        30000       Defines a time interval between
                                                periodic wake-ups of the kernel
                                                threads responsible for writing
                                                dirty data to hard-disk.
Software Configuration and Tuning Notes
---------------------------------------
The client parameters shown above were tuned to optimize data transfer and
minimize overhead for communication between the clients and the storage
controllers over Ethernet.
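For illustration, client settings such as those listed above are typically
applied through sysctl and the PAM limits configuration. The sketch below shows
one possible way to apply a subset of the values from the table; the file paths
are examples and the exact method used on the tested clients is not part of
this disclosure:

  # /etc/sysctl.d/99-specstorage.conf (excerpt; values from the table above)
  net.core.wmem_max = 16777216
  net.core.rmem_max = 16777216
  net.ipv4.tcp_rmem = 1048576 8388608 33554432
  net.ipv4.tcp_wmem = 1048576 8388608 33554432
  net.core.netdev_max_backlog = 300000
  sunrpc.tcp_slot_table_entries = 128

  # Apply the settings without rebooting
  sysctl --system

  # /etc/security/limits.conf (per-user limits from the table above)
  *  soft  nofile  102400
  *  hard  nofile  102400
  *  soft  nproc   10240
  *  hard  nproc   10240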
The second M.2 SSD in each client was configured as a dedicated swap space of
224GB.
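A minimal sketch of how such a swap area can be configured on a RHEL client
follows; the device name is an example, and the actual device naming and
partitioning on the tested clients are not part of this disclosure:

  # Create and enable a swap area on the second M.2 SSD (device name is an
  # example)
  mkswap /dev/sdb
  swapon /dev/sdb

  # /etc/fstab entry to activate the swap space at boot
  /dev/sdb  none  swap  defaults  0 0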
Service SLA Notes
-----------------
None
Storage and Filesystems
=======================
Item Stable
No Description Data Protection Storage Qty
---- ------------------------------------- ------------------ -------- -----
1 1.92TB NVMe SSDs used for data and RAID-DP Yes 144
storage operating system; used to
build three RAID-DP RAID groups per
storage controller node in the
cluster
2 3.8TB NVMe SSDs used for data and RAID-DP Yes 48
storage operating system; used to
build three RAID-DP RAID groups per
storage controller node in the
cluster
3 1.92TB NVMe M.2 device, 1 per none Yes 6
controller; used as boot media
4 3.8TB NVMe M.2 device, 1 per none Yes 2
controller; used as boot media
Number of Filesystems 1
Total Capacity 384TiB
Filesystem Type NetApp FlexGroup
Filesystem Creation Notes
-------------------------
The single FlexGroup consumed all data volumes from all of the aggregates
across all of the nodes.
Storage and Filesystem Notes
----------------------------
The storage configuration consisted of 4 AFF A90 HA pairs (8 controller nodes
total). The two controllers in an HA pair are connected in an SFO (storage
failover) configuration. Together, the 4 HA pairs, each consisting of 2
controllers (also referred to as nodes), comprise the tested AFF A90 cluster.
Each storage controller was connected to its own and partner's NVMe drives in a
multi-path HA configuration.
All NVMe SSDs were in active use during the test (aside from 1 spare SSD per
shelf). In addition to the factory configured RAID Group housing its root
aggregate, each storage controller was configured with two 21+2 RAID-DP RAID
Groups. There was 1 data aggregate on each node, each of which consumed one of
the node's two 21+2 RAID-DP RAID Groups. This is (21+2 RAID-DP + 1 spare per
shelf) x 4 shelves = 192 SSDs total. 16x volumes, holding benchmark data, were
created within each aggregate. "Root aggregates" hold ONTAP operating system
related files. Note that spare (unused) drive partitions are not included in
the "storage and filesystems" table because they held no data during the
benchmark execution.
A storage virtual machine or "SVM" was created on the cluster, spanning all
storage controller nodes. Within the SVM, a single FlexGroup volume was created
using the one data aggregate on each controller. A FlexGroup volume is a
scale-out NAS single-namespace container that provides high performance along
with automatic load distribution and scalability.
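As an illustrative sketch only (the SVM, volume, and aggregate names, the size,
and the junction path below are placeholders; the exact commands used to build
the tested configuration are not part of this disclosure), a FlexGroup volume
spanning one data aggregate per node with 16 constituent volumes per aggregate
can be created from the ONTAP CLI along these lines, entered as a single
command and shown wrapped here for readability:

  volume create -vserver svm1 -volume fg_data -size 384TB
    -aggr-list node1_aggr1,node2_aggr1,node3_aggr1,node4_aggr1,
               node5_aggr1,node6_aggr1,node7_aggr1,node8_aggr1
    -aggr-list-multiplier 16 -junction-path /fg_data -security-style unix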
Transport Configuration - Physical
==================================
Item                 Number of
No   Transport Type  Ports Used Notes
---- --------------- ---------- -----------------------------------------------
1    100GbE          45         For the client-to-storage network, the AFF A90
                                cluster used a total of 32x 100 GbE connections
                                from storage to the data switches,
                                communicating via NFSv3 over TCP/IP with 12
                                clients; each client used 1x 100GbE connection
                                to the switch. MTU=9000 was used for data
                                switch ports.
2    100GbE          16         The Cluster Interconnect network is connected
                                via 100 GbE to a Cisco 9336C-FX2 switch, with 4
                                connections to each HA pair.
Transport Configuration Notes
-----------------------------
Each AFF A90 node used a 4x 100 GbE bonded LACP port group for data transport
connectivity to clients (through a Cisco 9336C-FX2 switch), Item 1 above. Each
of the clients driving workload used 1x 100GbE port for data transport. All
ports on the Item 1 network utilized MTU=9000. The Cluster Interconnect
network, Item 2 above, also utilized MTU=9000. All interfaces associated with
dataflow are visible to all other interfaces associated with dataflow.
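For illustration (node, port, and broadcast-domain names below are
placeholders; the interface group name a0a matches the naming visible in the
client mount paths under 'Other Notes', but the exact commands used are not
part of this disclosure), a 4x 100 GbE LACP interface group of the kind
described above is typically created on ONTAP with commands along these lines:

  # Create a multimode LACP interface group and add the four 100 GbE data ports
  # (the BOM lists the data cards in slots 6 and 11 of each controller)
  network port ifgrp create -node node1 -ifgrp a0a -distr-func port -mode multimode_lacp
  network port ifgrp add-port -node node1 -ifgrp a0a -port e6a
  network port ifgrp add-port -node node1 -ifgrp a0a -port e6b
  network port ifgrp add-port -node node1 -ifgrp a0a -port e11a
  network port ifgrp add-port -node node1 -ifgrp a0a -port e11b

  # Enable jumbo frames on the data broadcast domain (name is a placeholder)
  network port broadcast-domain modify -broadcast-domain Data -mtu 9000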
Switches - Physical
===================
                                          Total  Used
Item                                      Port   Port
No   Switch Name          Switch Type     Count  Count Notes
---- -------------------- --------------- ------ ----- ------------------------
1    Cisco Nexus          100GbE          36     23    6 client-side 100 GbE
     9336C-FX2                                         data connections (1 port
                                                       per client) plus 1 port
                                                       for the Prime Client; 16
                                                       storage-side ports for
                                                       the bonded LACP 4x 100
                                                       GbE data connections (4
                                                       ports per A90 node, half
                                                       of the A90 nodes on each
                                                       data switch). Only the
                                                       ports on the Cisco Nexus
                                                       9336C-FX2 used for the
                                                       solution under test are
                                                       included in the total
                                                       port count.
2    Cisco Nexus          100GbE          36     22    6 client-side 100 GbE
     9336C-FX2                                         data connections (1 port
                                                       per client); 16
                                                       storage-side ports for
                                                       the bonded LACP 4x 100
                                                       GbE data connections (4
                                                       ports per A90 node, half
                                                       of the A90 nodes on each
                                                       data switch). Only the
                                                       ports on the Cisco Nexus
                                                       9336C-FX2 used for the
                                                       solution under test are
                                                       included in the total
                                                       port count.
3    Cisco Nexus          100GbE          36     8     1 port per A90 node, for
     9336C-FX2                                         Cluster Interconnect.
4    Cisco Nexus          100GbE          36     8     1 port per A90 node, for
     9336C-FX2                                         Cluster Interconnect.
Processing Elements - Physical
==============================
Item
No Qty Type Location Description Processing Function
---- ---- -------- -------------- ------------------------- -------------------
1 16 CPU Storage Intel Xeon Gold 6438N NFS, TCP/IP, RAID
Controller and Storage
Controller
functions
2 18 CPU Lenovo SR650 Intel Xeon Gold 6330 NFS Client, Linux
V2 Client OS
3 8 CPU Lenovo SR650 Intel Xeon Gold 5420 NFS Client, Linux
V3 Client OS
Processing Element Notes
------------------------
Each of the 8 NetApp AFF A90 Storage Controllers contains 2 Intel Xeon Gold
6438N processors with 64 cores each; 2.20 GHz, hyperthreading enabled. Each of
the 9 Lenovo SR650 V2 clients contains 2 Intel Xeon Gold 6330 processors with
28 cores at 2.00 GHz, and each of the 4 Lenovo SR650 V3 clients contains 2
Intel Xeon Gold 5420 processors with 28 cores at 2.00GHz. All 13 clients have
hyperthreading enabled.
Memory - Physical
=================
Size in Number of
Description GiB Instances Nonvolatile Total GiB
------------------------- ---------- ---------- ------------ ------------
Main Memory for NetApp 2048 4 V 8192
AFF A90 HA Pair
NVDIMM (NVRAM) Memory for 128 4 NV 512
NetApp AFF A90 HA pair
Memory for each of 13 1024 13 V 13312
clients
Grand Total Memory Gibibytes 22016
Memory Notes
------------
Each storage controller has main memory that is used for the operating system
and caching filesystem data. Each controller also has NVRAM; See "Stable
Storage" for more information.
Stable Storage
==============
The AFF A90 utilizes non-volatile battery-backed memory (NVRAM) for write
caching. When a file-modifying operation is processed by the filesystem (WAFL)
it is written to system memory and journaled into a non-volatile memory region
backed by the NVRAM. This memory region is often referred to as the WAFL NVLog
(non-volatile log). The NVLog is mirrored between nodes in an HA pair and
protects the filesystem from any SPOF (single-point-of-failure) until the data
is de-staged to disk via a WAFL consistency point (CP). In the event of an
abrupt failure, data which was committed to the NVLog but has not yet reached
its final destination (disk) is read back from the NVLog and subsequently
written to disk via a CP.
Solution Under Test Configuration Notes
=======================================
All clients accessed the FlexGroup from all the available network interfaces.
Unlike a general-purpose operating system, ONTAP does not provide mechanisms
for non-administrative users to run third-party code. Due to this behavior,
ONTAP is not affected by either the Spectre or Meltdown vulnerabilities. The
same is true of all ONTAP variants including both ONTAP running on FAS/AFF
hardware as well as virtualized ONTAP products such as ONTAP Select and ONTAP
Cloud. In addition, FAS/AFF BIOS firmware does not provide a mechanism to run
arbitrary code and thus is not susceptible to either the Spectre or Meltdown
attacks. More information is available from
https://security.netapp.com/advisory/ntap-20180104-0001/.
None of the components used to perform the test were patched with Spectre or
Meltdown patches (CVE-2017-5754,CVE-2017-5753,CVE-2017-5715).
Other Solution Notes
====================
ONTAP Storage Efficiency techniques including inline compression and inline
deduplication were enabled by default, and were active during this test.
Standard data protection features, including background RAID and media error
scrubbing, software validated RAID checksum, and double disk failure protection
via double parity RAID (RAID-DP) were enabled during the test.
Dataflow
========
Please reference the configuration diagram. 12 clients were used to generate
the workload; 1 client acted as Prime Client to control the 12 workload
clients. Each client used one 100 GbE connection, through a Cisco Nexus
9336C-FX2 switch. Each storage HA pair had 8x 100 GbE connections to the data
switch. The filesystem consisted of one ONTAP FlexGroup. The clients mounted
the FlexGroup volume as an NFSv3 filesystem. The ONTAP cluster provided access
to the FlexGroup volume on every 100 GbE port connected to the data switch (32
ports total). Each of the 8 cluster nodes had 1 Logical Interface (LIF) on its
4x 100GbE bonded LACP port group, for a total of 8 LIFs for the AFF A90
cluster. Each client created mount points across those 8 LIFs symmetrically.
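As a sketch only (the LIF addresses and local mount-point names are
illustrative; the mount options actually used are listed under 'Other Notes'
below), distributing one client's 8 mount points across the 8 LIFs could look
like the following:

  # One mount per LIF; addresses and local paths are illustrative
  for i in 1 2 3 4 5 6 7 8; do
      mkdir -p /t/lif${i}/vino_sfs2020_fg_1
      mount -t nfs -o hard,proto=tcp,vers=3,rsize=262144,wsize=262144 \
          192.168.5.${i}:/vino_sfs2020_fg_1 /t/lif${i}/vino_sfs2020_fg_1
  done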
Other Notes
===========
There are 8 mounts per client. Example mount entries from one client are shown
below.

/etc/fstab entry:

  192.168.5.5:/vino_sfs2020_fg_1
    /t/wle-vino-int-hi-05_a0a/vino_sfs2020_fg_1 nfs
    hard,proto=tcp,vers=3,rsize=262144,wsize=262144 0 0

Output of 'mount | grep sfs':

  192.168.5.5:/vino_sfs2020_fg_1 on
    /t/wle-vino-int-hi-05_a0a/vino_sfs2020_fg_1 type nfs
    (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.5.5,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=192.168.5.5)
Other Report Notes
==================
NetApp, Data ONTAP and WAFL are registered trademarks, and FlexGroup is a
trademark, of NetApp, Inc. in the United States and other countries. All other
trademarks belong to their respective owners and should be treated as such.
===============================================================================
Generated on Fri Aug 16 13:01:23 2024 by SpecReport
Copyright (C) 2016-2024 Standard Performance Evaluation Corporation