
SOLUTION BRIEF

High Frequency Trading

As more financial firms engage in high frequency trading, every microsecond can translate to millions of dollars of profits or losses. Data volumes are seeing dramatic growth, bringing existing trading systems to their limits. In a business where profits are directly measured by system speed, a low latency, high volume infrastructure is essential.

KEY ADVANTAGES
–– Ultra-low application latency: as low as 1.0 microseconds for InfiniBand and 1.3 microseconds for 10GigE
–– Ultra-low switching latency: less than 100 nanoseconds for InfiniBand and less than 250 nanoseconds for 10/40GbE (port-to-port)
–– Extremely high packet rate of up to 3 million PPS for sustaining growing market data rates
–– No application code changes required; fully compatible with the Linux socket API
–– Low jitter, minimizing maximum and average latencies
–– Fast access to storage for transaction logging and intra-day analytics
–– Cost effective, optimized products for co-location environments

Mellanox offers the lowest latency networking solutions for high frequency trading with its unique end-to-end solution (adapters, switching platforms and software), leveraging both InfiniBand and 10/40 Gigabit Ethernet technologies. As a pioneer in the field of high frequency trading networking, Mellanox has deployed its solutions at a large array of investment banks, hedge funds and exchanges. These solutions have been certified and deployed with the leading software providers in this industry.
Mellanox's solutions for financial services repeatedly demonstrate lower application latency, higher application throughput, and higher scalability in benchmark testing. While Mellanox offers solutions for both InfiniBand and 10/40GbE, InfiniBand continues to deliver the lowest latencies (as much as 30% lower than Ethernet-VMA based solutions) and the highest scalability.

The Mellanox Solution
Mellanox offers a unique low latency transport solution for high frequency trading environments. Mellanox adapters, switches and gateways provide the lowest latency, lowest jitter and highest message rates of any interconnect in the industry. Support for both 40/56Gb/s InfiniBand and 10/40GbE, and for both TCP/UDP and RDMA, gives users the optimal combination of high performance and seamless integration with existing infrastructure and software.

Many different products comprise the full end-to-end low latency portfolio. Below are the cornerstones:

ConnectX-2/ConnectX-3 Network Adapters
ConnectX® adapter cards with Virtual Protocol Interconnect® (VPI) provide the highest performing and most flexible interconnect solution for High Frequency Trading environments. ConnectX adapters use advanced offload capabilities to provide the highest throughput and lowest latency, while maintaining extremely low CPU utilization. ConnectX adapters are the only adapters in the industry to support RDMA transmission over both InfiniBand and Ethernet, using RDMA over Converged Ethernet (RoCE), allowing them to fully utilize hardware performance capabilities. With auto-sense capability, each ConnectX port can identify and operate on InfiniBand, Ethernet, and Ethernet with RoCE or Data Center Ethernet (DCE) fabrics. ConnectX with VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
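The VPI behavior described above can be observed with the standard libibverbs API on Linux. The following is a minimal sketch, not code from this brief: it lists the RDMA-capable devices in a host and reports, per port, whether the link layer is InfiniBand or Ethernet (RoCE). All names are standard libibverbs calls; error handling is kept minimal.

```c
/* vpi_ports.c - list RDMA devices and the link layer of each port.
 * Illustrates the VPI idea: one ConnectX port may come up as
 * InfiniBand or as Ethernet (RoCE).
 * Build: gcc vpi_ports.c -o vpi_ports -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;
        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr pattr;
                if (ibv_query_port(ctx, port, &pattr))
                    continue;
                printf("%s port %d: %s\n",
                       ibv_get_device_name(devs[i]), port,
                       pattr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE)" : "InfiniBand");
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```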


VMA Messaging Acceleration Software
VMA Messaging Acceleration software is a dynamically-linked user-space Linux library for accelerating messaging traffic, and is proven to boost the performance of high frequency trading applications. Applications that utilize standard BSD sockets use the library to offload network processing from the server's CPU. Traffic is passed directly from the application's user space to the InfiniBand or 10/40GbE network adapter (HCA or NIC), bypassing the kernel and IP stack and thus minimizing context switches, buffer copies and interrupts, resulting in extremely low latency. VMA software runs over both Mellanox InfiniBand and 10/40GbE networks and requires no changes to the application.
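Because VMA intercepts the standard socket calls, an application like the minimal UDP receiver below needs no VMA-specific code at all. This is an illustrative sketch (the port number is arbitrary), not an application from this brief:

```c
/* udp_rx.c - minimal UDP receiver using only the BSD socket API.
 * Unmodified programs like this are what VMA accelerates.
 * Build: gcc udp_rx.c -o udp_rx */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);          /* arbitrary example port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char buf[2048];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0) {
            /* hand the datagram to the trading logic here */
        }
    }
    close(fd); /* not reached */
    return 0;
}
```

Run it normally as ./udp_rx, or with kernel bypass by preloading the VMA library, e.g. LD_PRELOAD=libvma.so ./udp_rx (the exact library name and path depend on the VMA installation).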
Grid Director™ 4036E InfiniBand-to-Ethernet Gateway Switch
The Mellanox Grid Director 4036E switch is a high performance, low latency, fully non-blocking 40Gb/s (QDR) InfiniBand switch, which includes a built-in low latency Ethernet gateway for bridging traffic to and from Ethernet-based networks or storage. This self-contained solution combines an InfiniBand switch, an embedded subnet manager, and a built-in, hardware-based low latency Ethernet gateway in a compact 1U device.

The Grid Director 4036E is ideal for seamless exchange connectivity for both market data and trade orders using its built-in low latency Ethernet-to-InfiniBand gateway. Market data feeds typically run multicast traffic over 1 or 10 Gigabit Ethernet. By mapping this traffic to hardware-based InfiniBand multicast, the Grid Director 4036E can significantly accelerate it. When used with VMA software, even greater latency reduction can be achieved. With 34 40Gb/s InfiniBand ports, the Grid Director 4036E provides low latency and high throughput cluster connectivity.

SX1016/SX1036 L2 10/40GbE Switches
The SX1016/1036 switches provide the highest-performing fabric solution in a 1U form factor, delivering up to 2.88Tb/s of non-blocking throughput to High-Performance Computing, High Frequency Trading and Enterprise Data Centers with ultra-low latency. With port-to-port latency below 250 nanoseconds, the SX1016/1036 are the ideal top-of-rack switches for multi-tiered high frequency trading environments.

VSA Storage Acceleration Software
VSA Storage Acceleration software is a highly scalable, high performance, low-latency software solution for tier-one storage and gateways that provides ultra-fast remote block storage access and accelerates access to SAN, DAS, or Flash based storage. Supporting up to a million IOPS (I/O operations per second), VSA helps modern traders cope with new regulations requiring faster transaction logging and intra-day analytics.

Integrated with Industry Leading Messaging Applications
The Mellanox solution boosts the performance of financial market data and messaging applications from leading vendors such as NYSE Technologies, Informatica (29West), Tibco, IBM WMQ LLM and others, as well as customers' homegrown trading systems. The solution is proven to cut latency by a factor of 2-3X and increase application throughput per server, as compared to applications running on standard Ethernet interconnect networks – all without making any changes to the application.


[Graphic: Industry's Only End-to-End InfiniBand and Ethernet Low Latency Portfolio – adapter cards; acceleration and monitoring software (UFM Unified Fabric Manager, VSA Storage Accelerator, VMA Messaging Accelerator); switches/gateways; cables.]

[Figure 1. VMA Block Diagram.]
Proven Results
Mellanox's solutions for financial services repeatedly demonstrate lower application latency, higher application throughput, and higher scalability in benchmark testing – on both InfiniBand and 10GbE fabrics.

The following benchmarks show the performance of the Mellanox VMA Messaging Accelerator over 10GbE compared to competitive solutions, as well as the value of InfiniBand for latency and determinism at scale.

[Figure 2. UDP Latency (Netperf) vs. message size (1-1024 bytes) on Intel Romley based HP ProLiant Gen8 servers with ConnectX-3: 10GbE, 40GbE, 10GbE + VMA, 40GbE + VMA.]

[Figure 3. TCP Latency (Netperf) vs. message size (1-1024 bytes) on Intel Romley based HP ProLiant Gen8 servers with ConnectX-3: 10GbE, 40GbE, 10GbE + VMA, 40GbE + VMA.]

Benchmark configuration for figures 2 and 3:
–– 2 x HP ProLiant DL380p Gen8
–– Intel(R) Xeon(R) CPU E5-2690 @ 2.90GHz
–– 32GB memory
–– Operating System: RHEL 6.1
–– No switch involved
–– Measured using Netperf_RR
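For readers who want to reproduce a comparable measurement without netperf, the sketch below implements the same request-response pattern as the Netperf_RR tests above: a client sends a small UDP message and times the round trip to an echoing peer. The peer address and port are hypothetical placeholders, and this is not netperf itself:

```c
/* udp_rr.c - crude UDP round-trip latency probe in the spirit of
 * netperf's UDP_RR test. Assumes a UDP echo service at PEER_IP:PORT.
 * Build: gcc udp_rr.c -o udp_rr   (add -lrt on older glibc) */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#define PEER_IP "192.168.1.10"   /* hypothetical echo server */
#define PORT    12346
#define ITERS   100000

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(PORT);
    inet_pton(AF_INET, PEER_IP, &peer.sin_addr);
    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    char msg[64] = {0};           /* 64-byte request and response */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        send(fd, msg, sizeof(msg), 0);
        recv(fd, msg, sizeof(msg), 0);    /* block for the echo */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double total_us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                      (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("average round-trip latency: %.2f usec over %d iterations\n",
           total_us / ITERS, ITERS);
    return 0;
}
```

Like the receiver shown earlier, this probe can run unchanged over the kernel stack or under VMA via LD_PRELOAD, which is what makes a before/after comparison possible without touching application code.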


Solution Components

Part Number         Product Description

Adapters
MCX354A-FCBT        ConnectX®-3 VPI adapter card, dual-port QSFP, FDR IB (56Gb/s) and 40GigE
MCX312A-XCBT        ConnectX®-3 EN network interface card, 10GigE, dual-port SFP+
MCX314A-BCBT        ConnectX®-3 EN network interface card, 40GigE, dual-port QSFP

Acceleration Software
SWL-00400           VMA license per server
SWL-00346           VSA license per server

Switches & Gateways
VLT-30034           Grid Director 4036E, 40Gb/s (QDR) edge switch with integrated Ethernet gateway
MSX1036B-1SFR       SwitchX® based 36-port QSFP 40GigE 1U Ethernet switch
MSX1016X-2BFR       SwitchX® based 64-port SFP+ 10GigE 1U Ethernet switch

Cables and Transceivers
MTM1T02A-SR         10GbE SFP+ SR transceiver
MTM1T02A-LR         10GbE SFP+ LR transceiver
MC2207130-002       Copper cable, 56G FDR, QSFP, 30 AWG, 2 meter
MC2210130-002       Copper cable, up to 40GbE, QSFP, 30 AWG, 2 meter
MAM1Q00A-QSA        QSFP to SFP+ cable adapter

[Figure 4. Latency vs. Message Size at 100K messages per second with Informatica Ultra Messaging: InfiniBand QDR with VMA and 10 Gigabit Ethernet with VMA, average and 99.9th percentile latencies.]

[Figure 5. Latency vs. Message Rate at 32B message size with Informatica Ultra Messaging: InfiniBand QDR with VMA and 10 Gigabit Ethernet with VMA, average and 99.9th percentile latencies.]

Benchmark configuration for figures 4 and 5:
–– 2 x HP ProLiant® DL380 G7 servers, each with two Intel(R) Xeon(R) X5680 processors (6 cores @ 3.33GHz each)
–– 48GB of 1333MHz memory
–– HCA/NIC adapters: Mellanox ConnectX-2
–– VMA version: libvma-4.5.4-0
–– External switches: Mellanox 4036E InfiniBand switch, Mellanox 6024 10GbE switch

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085


Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
© Copyright 2012. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, SwitchX, Virtual Protocol Interconnect, and Voltaire are registered trademarks of Mellanox Technologies, Ltd. 3639SB Rev 1.2
Connect-IB, FabricIT, MLNX-OS, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
