White Paper
FUJITSU Server PRIMERGY & PRIMEQUEST
RAID Controller Performance 2016
This technical documentation is aimed at the persons responsible for the disk I/O
performance of Fujitsu PRIMERGY and PRIMEQUEST servers. The document is intended
to help you become acquainted - from a performance viewpoint - with the options and
application areas of various RAID controllers for internal disk subsystems. Depending on
the requirements for data security and performance as well as planned or existing server
configuration, specific recommendations arise for the selection and parameterization of
controllers. Controllers of the current generation that are available for PRIMERGY and
PRIMEQUEST systems in 2016 are to be considered here.
Version
1.0d
2016-08-29
Document history
Version 1.0 (2016-03-07)
Initial version
Introduction
Hard disks are a security factor as well as critical performance components in the server environment. It is
thus important to bundle the performance of such components via intelligent organization so that they do not
cause a system bottleneck. They should simultaneously compensate for any failure of an individual
component. Methods exist for arranging several hard disks in a logical drive so that the failure of any single
hard disk can be compensated. This is known as a “Redundant Array of Independent Disks”, or RAID for short. Special
RAID controllers are normally used.
The PRIMERGY and PRIMEQUEST servers are available in a wide range of internal configuration versions
with different RAID controller and hard disk configurations. The “Modular RAID” concept that is offered as a
standard for all servers in the PRIMERGY and PRIMEQUEST family consists of a modular controller family
and standardized management via the Fujitsu RAID Manager software known as “ServerView RAID
Manager”. The comprehensive offer of RAID solutions enables the user to select the appropriate controller
for a particular application scenario. The performance of the disk subsystem is defined by the controller, the
selected hard disks and the features of the RAID level.
Several documents have been created in the PRIMERGY & PRIMEQUEST white paper series which
illustrate all aspects of “Modular RAID” regarding performance:
We recommend - as a comprehensive introduction to disk I/O performance - the White Paper
“Basics of Disk I/O Performance”.
This document “RAID Controller Performance 2016” covers all RAID controllers of the current
generation, including their performance, that are on offer for PRIMERGY and PRIMEQUEST.
The predecessor document “RAID Controller Performance 2013” covers the RAID controllers of the
generation of that time and their performance.
When sizing internal disk subsystems for PRIMERGY and PRIMEQUEST servers you can proceed by selecting a
suitable hard disk type and estimating the necessary number of hard disks for the required RAID level using
rules of thumb. The RAID controller then follows from the number and technology of the hard disks to be
connected as well as from the required RAID level. Such an approach may be adequate to size a disk subsystem
accurately for years.
However, the technology of the storage media (for example Solid State Drives, or SSDs for short) or of the
internal interfaces of the server progresses over the years, and a disk subsystem sized in the past no longer
meets the increased requirements. Or, in a productive server configuration the application scenario changes and
the achieved disk I/O performance is - despite an adequate number of hard disks - not as desired. In both
these cases it can be worthwhile to take a closer look at the influence of the RAID controller on performance.
Sometimes the right controller, or even simply the correctly configured controller, is a prerequisite for the best
possible performance.
That outlines the objective of this document. First, there will be an overview of the current internal RAID
controllers that are available for the PRIMERGY and PRIMEQUEST systems. The throughput limits of the
involved controller interfaces will then be presented under the aspects of performance. After a brief
introduction into the measurement context, the different RAID controllers will be compared at various RAID
levels and in different application scenarios, which will be substantiated by the measurement results.
In the past the terms “Hard Disk” and also “Hard Disk Drive” (HDD) were used for a hard magnetic-coated,
rotating, digital, non-volatile storage medium that could be directly addressed. Technical development has
now seen new “hard disk” versions introduced as storage media; they use the same interface to the server
and are accordingly handled as hard disks by the server. An SSD, which as an electronic storage medium
does not contain any moving parts, can be stated as a typical example, but which nevertheless is also
colloquially referred to as a hard disk. Throughout this document the term “hard disk” is used as a generic
term, with the names “SSD” and “HDD” being used as a means of differentiation.
This document specifies hard disk capacities on a basis of 10 (1 TB = 10^12 bytes), while all other capacities,
file sizes, block sizes and throughputs are specified on a basis of 2 (1 MB/s = 2^20 bytes/s).
In the evaluation of the performance of disk subsystems, processor performance and memory configuration
do not for the most part play a significant role in today’s systems - a possible bottleneck usually affects the
hard disks and the RAID controller, and not CPU or memory of the server system. Thus the various RAID
controllers can be compared independently of the PRIMERGY or PRIMEQUEST models in which they are
used - even if not every configuration is possible in every PRIMERGY or PRIMEQUEST due to the differing
expandability with hard disks.
The following table is a compilation of which RAID controllers of the current generation were released for the
connection of hard disks in the individual PRIMERGY and PRIMEQUEST systems at the time this white paper
was written, as well as the maximum number of hard disks each RAID controller supports in these models.
Please see the configurators of the systems for the possible combinations of PRIMERGY and PRIMEQUEST
configuration versions and controllers.
System | Onboard controller (C220, C236, C610) | Controller with PCIe interface (PRAID CM400i, PRAID EM400i, PRAID CP400i, PRAID EP400i, PRAID EP420i, PSAS CP400i) | Expander
PRIMERGY BX2560 M1 2 2 2
PRIMERGY BX2580 M1 2
PRIMERGY CX2550 M1 6 6 6 6 6
PRIMERGY CX2570 M1 6 6 6 6
PRIMEQUEST 2800B2 (DU) 4
PRIMEQUEST 2x00E2 (SB) 4
PRIMEQUEST 2x00E2 (DU) 4
PRIMERGY RX1330 M1 -/ 4 8 10 10 10
PRIMERGY RX1330 M2 -/ 4 8 10 10
PRIMERGY RX2530 M1 -/ 4 (8) 8 10 10 10
PRIMERGY RX2540 M1 -/ 4 (8) 8 24 24 24
PRIMERGY RX2560 M1 -/ 8 32 32 32
PRIMERGY RX4770 M2 8 8 8
PRIMERGY SX960 S1 -/ 10 10 10
PRIMERGY TX1310 M1 4
PRIMERGY TX1320 M1 4
PRIMERGY TX1320 M2 4 6 6 6
PRIMERGY TX1330 M1 4
PRIMERGY TX1330 M2 -/ 4 8 24 24
PRIMERGY TX2560 M1 -/ 8 32 32 32
RAID controllers of previous generations (SAS-6G) can also be ordered for some systems. Since these
controllers have already been dealt with in the previous document RAID Controller Performance 2013, they
will not be analyzed again here.
The abbreviation “DU” stands for “Disk Unit”, and “SB” stands for “System Board” for PRIMEQUEST
systems. The figures in the corresponding table lines specify in each case the maximum number of hard
disks in such a sub-unit.
This white paper only examines the previously mentioned mezzanine cards in connection with internal hard
disks in the same server blade.
In connection with hard disks, the PSAS CP400i is essentially intended for Microsoft Windows Server 2012
Storage Spaces. For this purpose, this controller passes the physical drives on to the operating system
unchanged. The hardware RAID support that is also available in the controller offers RAID 0 and RAID 1 and
is intended for a boot drive.
Type | Frequency | Theoretical throughput | Practical throughput (90%)
SAS 3G / SATA 3G | 3000 MHz | 286 MB/s | 257 MB/s
SAS 6G / SATA 6G | 6000 MHz | 572 MB/s | 515 MB/s
SAS 12G | 12000 MHz | 1144 MB/s | 1030 MB/s
Alternatively, a version number is also used with SAS: 1.0 for 3G, 2.0 for 6G and 3.0 for 12G. With SATA,
version number 2.0 is used for 3G and 3.0 for 6G.
The theoretically achievable throughput is calculated as follows: 1 bit per 1 Hz, minus 20% redundancy of
the serial transfer due to the so-called 8b/10b coding. The throughput that can be achieved in practice can
be estimated by multiplying this by 0.90. This 90% is a mean empirical value taken from the values that
have been observed over the years for various components.
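This rule can be sketched in a few lines of code (an illustration only, using this document's convention of 1 MB = 2^20 bytes; the function name is ours):

    # Reproduces the SAS/SATA throughput estimates from the table above:
    # 1 bit per Hz, minus 20 % 8b/10b coding overhead, times 0.90 for practice.
    def sas_sata_throughput(frequency_mhz, lanes=1):
        payload_bytes_per_s = frequency_mhz * 1_000_000 * lanes * 0.8 / 8   # 8b/10b: 8 of 10 bits carry data
        theoretical = payload_bytes_per_s / 2**20                           # MB/s with 1 MB = 2**20 bytes
        return theoretical, theoretical * 0.90                              # (theoretical, practical)

    print(sas_sata_throughput(3000))       # ~ (286, 257) MB/s   -> SAS/SATA 3G
    print(sas_sata_throughput(12000))      # ~ (1144, 1030) MB/s -> SAS 12G
    print(sas_sata_throughput(12000, 4))   # ~ (4578, 4120) MB/s -> SAS 12G, x4 wide port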
All the components of a connection between end devices must use the same version of the SAS or SATA
protocol. In addition to the hard disks, these also include the controllers and any expanders that may be used.
If components with different versions come together, the highest standard that is jointly supported by all
components is automatically used, which can mean a lower frequency. In this respect, the higher protocol
versions are downward compatible.
Whereas each port with SATA is often individually connected to a hard disk, four SAS connections and
cables are frequently put together and referred to as an “x4 SAS” or “x4 wide port”. This makes it possible to
directly connect a maximum of four SAS hard disks via a backplane. The throughput of x4 SAS is four times
that of the corresponding individual SAS connection; this also applies similarly for SATA.
Interface | Connection | Frequency | Theoretical throughput | Practical throughput (90%)
SAS 3G / SATA 3G | 1 × x4 | 3000 MHz | 1144 MB/s | 1030 MB/s
SAS 3G / SATA 3G | 2 × x4 | 3000 MHz | 2289 MB/s | 2060 MB/s
SAS 6G / SATA 6G | 1 × x4 | 6000 MHz | 2289 MB/s | 2060 MB/s
SAS 6G / SATA 6G | 2 × x4 | 6000 MHz | 4578 MB/s | 4120 MB/s
SAS 12G | 1 × x4 | 12000 MHz | 4578 MB/s | 4120 MB/s
SAS 12G | 2 × x4 | 12000 MHz | 9155 MB/s | 8240 MB/s
Some PRIMERGY models can be expanded with a larger number of hard disks than the controller has hard
disk connections. In this case, the number of connectable hard disks is increased by means of an expander.
As already mentioned, an expander can only distribute the data flow, not increase the throughput.
The SAS protocol is defined in such a way that it can also transport the SATA protocols of the same or a
lower frequency (tunneling). This enables the controllers of both SAS versions to communicate with SATA
hard disks. Conversely, it is not possible to connect SAS hard disks via a SATA interface.
PCIe 1.0 is also often referred to as “PCIe Gen1”, PCIe 2.0 as “PCIe Gen2” and PCIe 3.0 as “PCIe Gen3”.
The theoretically achievable throughput is calculated as follows: 1 bit per 1 Hz multiplied by the number of
lanes (x4 or x8), minus 20% redundancy of the serial transfer due to the so-called 8b/10b coding for
PCIe 1.0 and 2.0, or minus 1.54% redundancy due to the 128b/130b coding for PCIe 3.0. The throughput
that can be achieved in practice can be estimated by multiplying this by 0.90. This 90% value is a mean
empirical value taken from the values for various components that have been observed over the years.
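The same estimate can be sketched for PCIe and DMI (illustrative only; the 5 GT/s and 8 GT/s lane rates of PCIe 2.0 and 3.0 are assumed as given):

    # PCIe/DMI throughput estimate: lane rate * lanes * coding efficiency,
    # converted to MB/s (1 MB = 2**20 bytes), then times 0.90 for practice.
    def pcie_throughput(gt_per_s, lanes, efficiency):
        theoretical = gt_per_s * 1e9 * lanes * efficiency / 8 / 2**20
        return theoretical, theoretical * 0.90

    # PCIe 2.0 x4 with 8b/10b coding; the same figures apply to DMI 2.0 x4
    print(pcie_throughput(5, 4, 0.8))         # ~ (1907, 1716) MB/s
    # PCIe 3.0 x8 with 128b/130b coding
    print(pcie_throughput(8, 8, 128 / 130))   # ~ (7512, 6761) MB/s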
All PRIMERGY servers, beginning with the generation introduced in 2010 (e. g. PRIMERGY RX300 S5),
support PCIe 2.0 and from the generation introduced in 2012 (e. g. PRIMERGY RX300 S7) PCIe 3.0. If
different components come together here, the highest frequency jointly supported by all components is used.
The Direct Media Interface, or in its abbreviated form DMI, is closely related to PCIe. This is an Intel-specific
standard for connecting a CPU to the chipset. The corresponding statements apply for DMI with regard to
the throughputs, as do those for PCIe in the above table. Thus, for example DMI 2.0, x4, permits a maximum
practical throughput of 1716 MB/s. On the input side (CPU side) this throughput value is relevant for the
onboard controllers, as these are accommodated in the chipsets.
1) The second controller instance does not increase the throughput limit of the CPU-side interface.
2) This halved throughput limit applies in the case where only hard disks with a 6G interface are connected to the
controller.
In the majority of cases the throughput limits do not represent a bottleneck. In practice, application
scenarios with random access to conventional hard disks prevail, and these do not achieve high throughputs.
The throughput values in the column “Limit for throughput of disk interface” apply for the connections
between the controller and the hard disks in their entirety. Only in the case of RAID 0 are the throughputs
via this SAS/SATA interface identical with the throughputs from the viewpoint of the application. With other
RAID levels the throughput via the SAS/SATA interface is a specific factor higher than the throughput seen
by the application. This factor is always ≥ 1 and depends on the RAID level and several characteristics of
the access pattern. From the application's viewpoint the real throughput limits are therefore lower than the
values in the column “Limit for throughput of disk interface” by this specific factor.
The FBU (Flash Backup Unit) version is offered for all the RAID controllers with controller cache that are
dealt with in this white paper.
FastPath
FastPath is a high-performance IO accelerator for logical drives that are made up of SSDs. This optimized
version of LSI MegaRAID technology permits a clear-cut increase in the performance of applications with a
high IO load for random access if SSDs are used.
FastPath used to be part of the RAID option “RAID Advanced Software Options int.” which could be ordered
in addition to a RAID controller.
From firmware package version 24.7.0-0061, FastPath has automatically been active in the 12G-enabled
RAID controllers with cache (PRAID EM400i, PRAID EP400i and PRAID EP420i) and effective for newly
created logical drives or ones that were already created with older firmware versions. You should merely
ensure that there are generally optimal prerequisites for SSDs as far as the cache settings are concerned.
This means that when creating a logical drive with the ServerView RAID Manager the cache settings must be
set en bloc to “Fast Path optimum”, and with the existing logical drives you should ensure that the settings
are as follows:
Read Mode “No read-ahead”
Write Mode “Write-through”
Cache Mode “Direct”
Disk Cache “Enabled”
In the remainder of this document it is assumed that on account of the firmware status FastPath is active.
Read mode
The “Read mode” parameter can be used to control whether reading is done in advance. Two options “No
read-ahead” and “Read-ahead” are available. Reading in advance is not done with “No read-ahead”. Blocks
that sequentially follow directly requested blocks are read and transferred to the controller cache in the case
of “Read-ahead”. This is done with the expectation that the blocks are also required in one of the next
requests.
In the case of the “Read-ahead” option the onboard controllers (e. g. C220) generally read blocks in
advance. The PCIe controllers with a cache work in a more differentiated way for this option: The requested
blocks are continuously analyzed to see whether there is sequential read access. If the controller detects
such an access, it starts to also read the sequentially following blocks – in addition to the requested block –
in the cache in order to have them available for the expected, next requests. The current option “Read-
ahead” is in other words adaptive. This is a merger of the two previous options “Read-ahead” and “Adaptive”.
Write mode
The setting options of the controller cache that control the handling of write requests are summarized under
the term “Write mode”. There are three options for setting the write cache: “Write-through”, “Write-back” and
“Always Write-back (independent of BBU state)”. The “Write-through” option ensures that each write request
is only reported back by the controller as completed when it has been acknowledged by the hard disk.
With the “Write-back” and “Always Write-back” options the requests are cached in the controller cache,
immediately acknowledged to the application as completed and only transferred to the hard disk later. This
procedure enables optimal utilization of controller resources, faster succession of the write requests and
therefore higher throughput. Any power failures can be bridged by means of an optional FBU, thus
guaranteeing the integrity of the data in the controller cache. The “Always Write-back” option enables the
write cache on a permanent basis; it is also used if the FBU is not operational. On the other hand, the “Write-
back” option automatically switches to “Write-through” as long as the controller cache is not safeguarded by
the FBU.
Cache mode
The “Cache Mode” parameter is also sometimes referred to as the “I/O Cache”. The “Direct” option specifies
that the data to be read are transferred directly from the hard disk to the RAM of the server. The alternative
“Cached” causes all the data to be read and written on its way between the server memory and the hard
disks to pass the controller cache. “Direct” is the recommended setting. The Read-Ahead functionality is not
influenced by the Cache Mode setting.
The next table shows which of these setting options exist for the individual controllers.
To complete matters, the following table also provides a compilation of the settings that are currently
implemented in the modes “Data Protection”, “Performance” and “Fast Path optimum” in ServerView RAID
Manager. It should be noted that the settings for the controllers with a controller cache also depend on the
existence of an FBU, but are independent of the selected RAID level.
Mode | Setting | Controller cache without FBU | Controller cache with FBU
Data Protection | Read mode | Read-ahead | Read-ahead
Data Protection | Write mode | Write-through | Write-back
Data Protection | Cache mode | Direct | Direct
Data Protection | Disk cache | off | off
Performance | Read mode | Read-ahead | Read-ahead
Performance | Write mode | Always Write-back | Write-back
Performance | Cache mode | Direct | Direct
Performance | Disk cache | on | on
Fast Path optimum | Read mode | No read-ahead | No read-ahead
Fast Path optimum | Write mode | Write-through | Write-through
Fast Path optimum | Cache mode | Direct | Direct
Fast Path optimum | Disk cache | on | on
Other settings
In addition to the setting options for the caches of RAID controllers and hard disks, there are further setting
options in the “ServerView RAID Manager” (version ≥ 6.3.3) for logical drives, and knowledge of these
options is worthwhile from a performance viewpoint.
Stripe size
The first interesting parameter is the Stripe size. It can only be set when you create a logical drive. Various
values are possible for the RAID controllers with cache (e. g. PRAID EP400i); the default value for all other
controllers is 64 kB.
The significance of the stripe size is to be explained in detail below using the example of the simplest case
RAID 0.
The stripe size controls the design of logical drives that are made up of physical hard disks. The controller
implements access to a data block of a logical drive by converting the addresses in the logical drive by
means of a specific rule to addresses on the involved physical hard disks. This conversion is based on a
division of each of the involved hard disks – beginning in each case from the start of the hard disk – into
equally sized blocks of N bytes each. The first N bytes of the logical drive are now assigned to block 0 on
hard disk 0, the next N bytes are then assigned to block 0 on hard disk 1. This continues successively until
assignment to block 0 has taken place on all the involved hard disks. It then continues with block 1 on hard
disk 0, block 1 on hard disk 1, etc. The conversion rule is illustrated by the following diagram:
Logical drive (stripes numbered consecutively):

    Disk 0 | Disk 1
       0   |   1
       2   |   3
       4   |   5
       6   |   7
       8   |   9
      10   |  11

Each row of this layout (e.g. stripes 0 and 1) forms a stripe set.
Each of these blocks on one of these hard disks is called a stripe, and its size in bytes is called the stripe
size. All the stripes that lie horizontally adjacent to each other in the above diagram are known as a stripe
set.
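The conversion rule for RAID 0 can be sketched as follows (a purely illustrative helper, not part of any controller firmware):

    # Maps a byte offset within a RAID 0 logical drive to the physical location
    # (hard disk, stripe on that disk, offset within the stripe).
    def map_raid0_address(logical_offset, stripe_size, num_disks):
        stripe_index = logical_offset // stripe_size      # consecutive stripe number in the logical drive
        offset_in_stripe = logical_offset % stripe_size
        disk = stripe_index % num_disks                   # stripes are distributed round-robin
        stripe_on_disk = stripe_index // num_disks        # equals the stripe set number
        return disk, stripe_on_disk, offset_in_stripe

    # A request starting 256 kB into a two-disk RAID 0 with a 64 kB stripe size
    # lands at the beginning of stripe 4, i.e. on disk 0 in stripe set 2:
    print(map_raid0_address(256 * 1024, 64 * 1024, 2))    # -> (0, 2, 0)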
The stripe size influences performance. On the one hand the stripe size must be small enough to distribute -
with a high degree of probability - accesses to the logical drive evenly over the hard disks. On the other hand
it must also be large enough to prevent the requested blocks of the logical drive from frequently being split
at stripe boundaries. Such splitting would result in an unwanted multiplication of hard disk accesses and thus
an unnecessarily early overload of the hard disks.
Normally, the default stripe size is optimal. Only in the case of random accesses should the previously
described block splitting largely be avoided. In other words, the stripe size should
either be large compared with the blocks requested by the application (example: requested blocks of
8 kB for a 64 kB stripe size)
or be exactly the same size as the blocks requested by the application if the latter aligns them at the
stripe limits.
The possible values of the stripe size for the RAID controllers with cache that are dealt with here are 64 kB,
128 kB, 256 kB, 512 kB and 1 MB, the default value is 256 kB.
Emulation type
The second interesting parameter is the emulation type. Emulation relates to the handling of 512e hard
disks. The internal structure of such hard disks has a sector size of 4096 B; externally, however, they
emulate a sector size of 512 B. In other words, the physical sector size of such hard disks is 4096 B, and
the logical sector size is 512 B. Detailed information on the topic of 512e HDDs is available in the white
paper “512e HDDs: Technology, Performance, Configurations”.
The emulation type can not only be set when creating a logical drive; it can also be changed later, and the
change takes effect after the next reboot. There are three possible values:
Default If only 512n hard disks are contained in a logical drive, it is given the property “logical
sector size = 512 B” for the operating system. As soon as at least one 512e hard disk is
included, a logical drive has the property “physical sector size = 4096 B”. This default
should normally be retained. It provides meaningful parameter information to the
accessing software layers that are located above: If the logical drive contains a hard disk
with a physical sector size of 4096 B, the software layers located above receive the
information and can align their accesses to the logical drive to the physical sectors of
4096 B with an optimal performance level.
None The logical drive always has the property “physical sector size = 512 B”, even if one of the
affected hard disks has the physical sector size 4096 B. This mode does not make sense
in productive use.
Force 512e The logical drive always has the property “physical sector size = 4096 B”, even if the
physical sector size is only 512 B. In the case of an existing logical drive consisting of
512n hard disks this setting can make sense if you want to ensure that replacing a failed
hard disk with a 512e hard disk does not result in losses in performance, either.
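Whether an access fits the physical sectors of a 512e hard disk can be checked with simple modulo arithmetic; the following sketch (our own illustration) shows the alignment check behind this recommendation:

    # A 512e hard disk has a logical sector size of 512 B and a physical sector
    # size of 4096 B. Writes that are not aligned to the 4096 B physical sectors
    # force the drive into read-modify-write cycles and cost performance.
    PHYSICAL_SECTOR = 4096

    def is_physically_aligned(offset_bytes, length_bytes):
        return offset_bytes % PHYSICAL_SECTOR == 0 and length_bytes % PHYSICAL_SECTOR == 0

    print(is_physically_aligned(0, 8192))     # True:  8 kB access on a physical sector boundary
    print(is_physically_aligned(512, 4096))   # False: starts in the middle of a physical sector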
Measurement context
Now that the various controllers have been presented and their technical features explained, it is our
intention in the following section “Controller comparison” to discuss the controllers in various application
scenarios and to back this up on the basis of measurement results. Hence, a brief introduction to begin with
of the measurement method and the measurement environment.
All the details of the measurement method and the basics of disk I/O performance are described in the white
paper “Basics of Disk I/O Performance”.
Measurement method
As standard, performance measurements of disk subsystems in PRIMERGY and PRIMEQUEST servers are
carried out with a defined measurement method, which models the hard disk accesses of real application
scenarios on the basis of specifications.
The essential specifications are:
Share of random accesses / sequential accesses
Share of read / write access types
Block size (kB)
Number of parallel accesses (# of outstanding I/Os)
A given value combination of these specifications is known as a “load profile”. The following five standard
load profiles can be allocated to typical application scenarios:

Load profile | Access pattern | Read / write share | Block size
File copy | random | 50% / 50% | 64 kB
File server | random | 67% / 33% | 64 kB
Database | random | 67% / 33% | 8 kB
Streaming | sequential | 100% / 0% | 64 kB
Restore | sequential | 0% / 100% | 64 kB
In order to model applications that access in parallel with a different load intensity, the “# of Outstanding
I/Os” is increased from 1 to 512 (in powers of two).
The measurements of this document are based on these standard load profiles.
Measurement environment
All the measurement results discussed in this document were determined using the hardware and software
components listed below:
C236:
Intel C236 PCH, Code name Sunrise Point (in PRIMERGY TX1330 M2)
Driver name: megasr1.sys, Driver version: 17.01.2015.0716
BIOS version: A.15.08211538R
C610:
Intel C610 PCH, Code name Wellsburg (in PRIMERGY RX2560 M1)
Driver name: megasr1.sys, Driver version: 16.02.2014.0811
BIOS version: A.14.02121826R
PRAID CM400i, PRAID CP400i, PRAID EM400i, PRAID EP400i, PRAID EP420i:
Driver name: megasas2.sys, Driver version: 6.706.06
Firmware package: 24.7.0-0061
PSAS CP400i:
Driver name: lsi_sas3.sys, Driver version: 2.50.85.00
Firmware: 05.00.00.00
Storage media:
SSDs: SAS-12G: Toshiba PX02SMF040; SATA-6G: Intel SSDSC2BA400G3C
HDDs: SAS-12G: HGST HUC156045CSS204; SATA-6G: Seagate ST91000640NS
Software
Operating system: Microsoft Windows Server 2012 R2 Standard
RAID Manager software: ServerView RAID Manager 6.3.4
Benchmark version: 3.0
RAID type: Logical drive of type RAID 0, 1, 5 or 10
Stripe size: Controller default (i.e. 256 kB for 12G controllers with cache, 64 kB otherwise)
Measuring tool: Iometer 1.1.0
Measurement area: The first 10% of the usable LBA area is used for sequential accesses; the next 25% for random accesses
File system: raw
Total number of Iometer workers: 1
Alignment of Iometer accesses: Aligned to whole multiples of 4096 bytes
The hard disk models used for the controller comparison are summarized again below in detail together with
their fundamental performance data, because this is important for your understanding of the performance
values achieved with the controllers. A high-performance SATA-6G and SAS-12G hard disk were chosen in
each case for the classic hard disks (HDDs), and a SAS-12G-SSD and a SATA-6G-SSD represent the SSD
class.
The table depicts the maximum values measured with a single hard disk for the five standard load profiles
that were shown in the previous subsection “Measurement method”. The hard disk cache is enabled in all
cases, because this almost always ensures optimal performance.
Controller comparison
All the important preliminary information about controllers has been provided in the previous sections. This
information will in many cases already narrow down the choice of controller for a given application. If further
customer information about the planned use of the controller is added, a great deal more can be said about
the performance to be expected with the individual controllers. Thus this section is to compare the controllers
differentiated for various RAID levels, application scenarios, load intensities, numbers of hard disks as well
as hard disk technologies. The statements are illustrated with the help of measurement results. The
comparisons are divided into the following subsections, which can be read independently of each other:
RAID 1 (two SATA hard disks)
RAID 0 and 10 (four SATA hard disks)
RAID 0, 10 and 5 (eight SAS hard disks)
RAID 0, 10 and 5 (more than eight SAS-SSDs)
Random accesses
RAID 1 with two SATA-6G-SSDs
The diagram shows a controller comparison for two SATA-6G-SSDs configured as RAID 1. The three groups
of columns in the diagram represent the transaction rates for the standard load profiles “File copy” (random
access, 50% read, 64 kB block size), “File server” (random access, 67% read, 64 kB block size) and
“Database” (random access, 67% read, 8 kB block size).
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with two SATA-6G-SSDs as RAID 1; controllers: C220, C236, C610, PRAID CM400i, PRAID CP400i, PRAID EP400i, PRAID EP420i]
The PCIe controllers provide the highest overall transaction rates here.
Sequential accesses
RAID 1 with two SATA-6G-SSDs
The next diagram shows a controller comparison for two SATA-6G-SSDs configured as RAID 1. The two
groups of columns in the diagram represent the throughputs for the standard load profiles “Streaming”
(sequential access, 100% read, 64 kB block size) and “Restore” (sequential access, 100% write, 64 kB block
size).
[Diagram: Maximum throughputs [MB/s] for the load profiles Streaming and Restore with two SATA-6G-SSDs as RAID 1; controllers: C220, C236, C610, PRAID CM400i, PRAID CP400i, PRAID EP400i, PRAID EP420i]
When reading with higher load intensities the PCIe controllers use both hard disks to a greater extent than
the onboard controllers and consequently show a higher maximum throughput.
Random accesses
HDDs
RAID 0 with four SATA-6G-HDDs
The next diagram shows the transaction rates of the logical drive of type RAID 0 for random load profiles that
can be achieved with various controllers. The three groups of columns show the transaction rates for the
standard load profiles “File copy” (random access, 50% read, 64 kB block size), “File server” (random
access, 67% read, 64 kB block size) and “Database” (random access, 67% read, 8 kB block size).
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with four SATA-6G-HDDs as RAID 0; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i, PSAS CP400i]
It can be clearly seen that the higher the quality of the controller, the higher the transaction rates.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with four SATA-6G-HDDs as RAID 10; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
It can also be clearly seen here that the higher the quality of the controller, the higher the transaction rates.
SSDs
RAID 0 with four SATA-6G-SSDs
The next diagram shows the transaction rates of the logical drive of type RAID 0 for random load profiles that
can be achieved with various controllers. The three groups of columns show the transaction rates for the
standard load profiles “File copy” (random access, 50% read, 64 kB block size), “File server” (random
access, 67% read, 64 kB block size) and “Database” (random access, 67% read, 8 kB block size).
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with four SATA-6G-SSDs as RAID 0; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
It can be clearly seen that the higher the quality of the controller, the higher the transaction rates.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with four SATA-6G-SSDs as RAID 10; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
It can also be clearly seen here that the higher the quality of the controller, the higher the transaction rates.
Sequential accesses
HDDs
RAID 0 with four SATA-6G-HDDs
The next diagram shows the maximum throughputs of the logical drive of type RAID 0 for sequential load
profiles that can be achieved with various controllers. The two groups of columns in the diagram show the
throughputs for the standard load profiles “Streaming” (sequential access, 100% read, 64 kB block size) and
“Restore” (sequential access, 100% write, 64 kB block size).
All the controllers deliver approximately the same performance in these cases.

[Diagram: Maximum throughputs [MB/s], sequential access, RAID 0, 4 SATA-6G-HDDs; load profiles Streaming and Restore; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i, PSAS CP400i]
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 10, 4 SATA-6G-HDDs; load profiles Streaming and Restore; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
SSDs
RAID 0 with four SATA-6G-SSDs
The next diagram shows the maximum throughputs of the logical drive of type RAID 0 for sequential load
profiles that can be achieved with various controllers. The two groups of columns in the diagram show the
throughputs for the standard load profiles “Streaming” (sequential access, 100% read, 64 kB block size) and
“Restore” (sequential access, 100% write, 64 kB block size).
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 0, 4 SATA-6G-SSDs; load profiles Streaming and Restore; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 10, 4 SATA-6G-SSDs; load profiles Streaming and Restore; controllers: C220, C236, C610, PRAID CP400i, PRAID EP400i, PRAID EP420i]
Random accesses
When considering random accesses for larger numbers of hard disks it makes sense to distinguish between
HDDs and SSDs, because the maximum values for SSDs are of a quite different magnitude.
HDDs
The controllers are compared below with random accesses to HDDs. The maximum transaction rates of the
storage medium for the load profile used are the most important limiting factor here. Nevertheless,
performance in such cases is not fully independent of the controller. Although the following results were
acquired with eight SAS-12G-HDDs, they can also be used to estimate the maximum transaction rates to be
expected for other types and numbers (≤ 8) of hard disks.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-HDDs as RAID 0; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
The two right-hand columns in each of the three groups of columns in this diagram represent the two
controllers with a cache (PRAID EP400i and PRAID EP420i). The superiority of these two controllers is
made possible on the one hand by the controller cache, and on the other hand by the higher default value of
the stripe size compared with the PRAID CP400i.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-HDDs as RAID 10; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-HDDs as RAID 5; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
SSDs
For the number of SSDs under consideration here the possible transaction rates of a logical drive are so high
that the FastPath option, which is enabled as standard in the latest controller firmware, has a distinct
influence. This can be seen below by the superiority of the controllers PRAID EP400i and PRAID EP420i
compared with the PRAID CP400i. The latter does not support the FastPath option.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-SSDs as RAID 0; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
The PRAID EP420i controller is the one with top performance here.
The controllers with cache have a clear advantage for the load profile “Database” (8 kB block size). They
also achieve their maximum transaction rate here.
It is also interesting to understand the throughput values that are associated with these transaction rates.
Despite the lower transaction rates, the two load profiles with a 64 kB block size have the higher
throughputs. For example, the PRAID EP400i controller handles a throughput of about 2848 MB/s with the
load profile “File server”. The controllers do not yet have a limiting effect here for the two load profiles “File
copy” and “File server” (64 kB block size).
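The conversion between transaction rate and throughput is simply transaction rate multiplied by block size; a short sketch (the IO/s value below is derived from the quoted throughput, not a separately published figure):

    # throughput [MB/s] = transaction rate [IO/s] * block size, with 1 MB = 2**20 bytes
    def throughput_mb_per_s(ios_per_second, block_size_kb):
        return ios_per_second * block_size_kb / 1024

    # ~45600 IO/s with 64 kB blocks corresponds to the ~2848 MB/s quoted above
    # for the PRAID EP400i with the load profile "File server".
    print(throughput_mb_per_s(45568, 64))   # -> 2848.0 MB/s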
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-SSDs as RAID 10; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
In the case of the load profile with the small blocks (“Database”) the controllers with cache have an
advantage here, too.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with eight SAS-12G-SSDs as RAID 5; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
In the case of the load profile with the small blocks (“Database”) the controllers with cache have an
advantage here, too.
Sequential accesses
HDDs
RAID 0 with eight SAS-12G-HDDs
The next diagram shows the throughputs of the logical drive of type RAID 0 for sequential load profiles that
can be achieved with various controllers. The two groups of columns in the diagram show the throughputs
for the standard load profiles “Streaming” (sequential access, 100% read, 64 kB block size) and “Restore”
(sequential access, 100% write, 64 kB block size).
The throughputs here are clearly limited by HDD type and number.

[Diagram: Maximum throughputs [MB/s], sequential access, RAID 0, 8 SAS-12G-HDDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 10, 8 SAS-12G-HDDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 5, 8 SAS-12G-HDDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
SSDs
RAID 0 with eight SAS-12G-SSDs
The next diagram shows the throughputs of the logical drive of type RAID 0 for sequential load profiles that
can be achieved with various controllers. The two groups of columns in the diagram show the throughputs
for the standard load profiles “Streaming” (sequential access, 100% read, 64 kB block size) and “Restore”
(sequential access, 100% write, 64 kB block size).
For “Streaming” the throughput limit of the controllers in the reading direction (approx. 5900 MB/s) is
achieved with eight SAS-12G-SSDs as a RAID 0.

[Diagram: Maximum throughputs [MB/s], sequential access, RAID 0, 8 SAS-12G-SSDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Maximum throughputs [MB/s], sequential access, RAID 10, 8 SAS-12G-SSDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
The enabled write cache of the controllers is vital here in order to achieve the maximum data throughput
for “Restore”.

[Diagram: Maximum throughputs [MB/s], sequential access, RAID 5, 8 SAS-12G-SSDs; load profiles Streaming and Restore; controllers: PRAID CP400i, PRAID EP400i, PRAID EP420i]
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database, comparing RAID 0, RAID 10 and RAID 5]
These percentages can also be theoretically estimated if you use a multiplication factor for random write
accesses. This is a matter of a so-called “write penalty”, which is defined as:

write penalty = (# of accesses caused from the viewpoint of all the physical hard disks) / (# of causing write accesses from the viewpoint of the application)

This “write penalty” has a value of 1 for RAID 0, 2 for RAID 10 and a value of 4 for RAID 5. (1) Taken together
with the read share (which does not multiply) contained in the respective load profile, the result is a specific
multiplication factor between the accesses from the viewpoint of the application and the accesses from the
viewpoint of all the hard disks. For example, with RAID 5 this factor causes the hard disks to reach their
maximum load at much lower application-side transaction rates than with RAID 0.
These theoretical percentage differences between the various RAID levels are listed in the following table for
the three random standard load profiles (and thus ultimately write shares).
In a comparison of these theoretical values with the percentages from the above diagram, which result from
the measurement values, you will find that the percentages in the diagram are somewhat higher. This is due
to optimization measures of the controllers through cache usage.
(1) In the case of RAID 10 the value 2 expresses the double writing of each data block on account of disk mirroring. And
in the case of RAID 5 a random write access must take place as follows: 1) Read the old data stripe; 2) Read the
old parity stripe; 3) Calculate the new parity stripe from the read stripes; 4) Write the new data stripe; 5) Write the
new parity stripe. Thus, in total the random writing of a data stripe means reading twice and writing twice, which is
why the “write penalty” has the value 4.
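The resulting multiplication factor between application accesses and physical hard disk accesses can be sketched as follows (read shares as in the standard load profiles; the function is an illustration of the rule above):

    # Reads count once, writes count "write penalty" times
    # (write penalty: 1 for RAID 0, 2 for RAID 10, 4 for RAID 5).
    def access_factor(read_share, write_penalty):
        return read_share + (1 - read_share) * write_penalty

    for profile, read_share in [("File copy", 0.50), ("File server", 0.67), ("Database", 0.67)]:
        for level, penalty in [("RAID 0", 1), ("RAID 10", 2), ("RAID 5", 4)]:
            print(profile, level, round(access_factor(read_share, penalty), 2))
    # e.g. "File copy" on RAID 5: 0.5 + 0.5 * 4 = 2.5 times as many physical accesses as with RAID 0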
Random accesses
RAID 0 with 24 SAS-12G-SSDs
The next diagram shows the transaction rates of the logical drive of type RAID 0 for random load profiles that
can be achieved with various controllers. The three groups of columns show the transaction rates for the
standard load profiles “File copy” (random access, 50% read, 64 kB block size), “File server” (random
access, 67% read, 64 kB block size) and “Database” (random access, 67% read, 8 kB block size).
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with 24 SAS-12G-SSDs as RAID 0; controllers: PRAID EP400i, PRAID EP420i]
The most important information received here is the very high transaction rate of about 240000 IO/s that can
be achieved for the load profile with a small block size (“Database”). The impact of the FastPath option,
which is enabled as standard in the latest controller firmware, is especially apparent for this load profile.
Expressed in the form of SAS-12G-SSD numbers: In order to make full use of the possibilities of the
PRAID EP400i for RAID 0 it is necessary to have between five (8 kB block size) and 17 (64 kB block size)
fully loaded SAS-12G-SSDs - depending on the random load profile. With a smaller SSD load or other SSD
types these numbers must be suitably modified.
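These SSD numbers follow from dividing the controller's maximum transaction rate by the transaction rate of a single fully loaded SSD; a sketch with an assumed per-SSD value (not a measured figure from this document):

    import math

    # Rough estimate of how many fully loaded SSDs are needed to reach a
    # controller's maximum transaction rate for a given load profile.
    def ssds_needed(controller_max_iops, iops_per_ssd):
        return math.ceil(controller_max_iops / iops_per_ssd)

    # ~240000 IO/s controller maximum for "Database" (8 kB blocks) with an
    # assumed ~50000 IO/s per fully loaded SAS-12G-SSD gives about 5 SSDs.
    print(ssds_needed(240000, 50000))   # -> 5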
It is also interesting to look at the throughput values that result from converting these transaction rates.
Despite the lower transaction rates, the two load profiles with a 64 kB block size have the
higher throughputs. For example, the PRAID EP420i handles a throughput of about 7763 MB/s with the load
profile “File server”. This value is remarkable, because it is higher than the two sequential maximum
throughputs of the controller for 100% read and 100% write with this RAID level. This value would not have
been reached without real bidirectional use of the SAS connections.
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with SAS-12G-SSDs as RAID 10; controllers: PRAID EP400i, PRAID EP420i]
Remark:
The configurations consisting of numerous SAS-12G-SSDs as described here can be used to achieve
transaction rates of several hundred thousand IO/s with small block sizes (≤ 8 kB). Handling so many I/Os
can utilize the processing CPU core at almost 100% of its capacity. As a result, the real frequency of the
server CPU can become the limiting factor. A CPU of medium nominal frequency (Xeon E5-2660 v3 @ 2.60
GHz) and BIOS settings for optimal performance were used to obtain measurement results that are also
valid for an average CPU configuration and a wide selection of server models. In Xeon E5-2600 v4 based
servers the transaction rates presented here are also achieved for example with the processor type Xeon
E5-2623 v4 @ 2.60 GHz. If a CPU with optimal frequency is used, it is possible to clearly surpass the
transaction rates presented here. For example, it is possible with a Xeon E5-2637 v3 @ 3.50 GHz or
Xeon E5-2637 v4 @ 3.50 GHz to achieve more than 300000 IO/s instead of about 250000 IO/s for a logical
drive of type RAID 0 consisting of 24 SAS-SSDs for the load profile “Database” (random access, 67% read, 8
kB block size).
[Diagram: Transaction rates [IO/s] for the load profiles File copy, File server and Database with SAS-12G-SSDs as RAID 5; controllers: PRAID EP400i, PRAID EP420i]
To express this in numbers of SAS-12G-SSDs, it is necessary to have between seven (8 kB block size) and
17 (64 kB block size) fully loaded SAS-12G-SSDs in order to make full use of the possibilities of the
PRAID EP400i for RAID 5 - depending on the random load profile. With a smaller SSD load or other SSD
types these numbers must be suitably modified.
Sequential accesses
Generally applicable statements about the controllers are listed below based on measurements with 24 (or
16) SAS-12G-SSDs. It is possible to calculate the anticipated maximum throughputs for other hard disk types
and numbers by appropriately multiplying the basic performance values of the hard disk. If the throughput
calculated in this way exceeds the threshold value of the controller, the controller threshold value becomes
effective.
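This estimate can be sketched as follows (the per-disk value is an assumption for illustration; the controller limit of approx. 5900 MB/s for "Streaming" is quoted in this document):

    # Anticipated maximum sequential throughput of a logical drive: per-disk
    # throughput scaled by the number of disks, capped by the controller's
    # threshold value for the given RAID level and load profile.
    def max_sequential_throughput(per_disk_mb_s, num_disks, controller_limit_mb_s):
        return min(per_disk_mb_s * num_disks, controller_limit_mb_s)

    # Assumed ~400 MB/s sequential read per SAS-12G-SSD, controller limit ~5900 MB/s:
    print(max_sequential_throughput(400, 8, 5900))    # -> 3200 (limited by the disks)
    print(max_sequential_throughput(400, 24, 5900))   # -> 5900 (limited by the controller)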
[Diagram: Maximum throughputs [MB/s], sequential access, SAS-12G-SSDs as RAID 0; load profiles Streaming and Restore; controllers: PRAID EP400i, PRAID EP420i]

[Diagram: Maximum throughputs [MB/s], sequential access, SAS-12G-SSDs as RAID 10; load profiles Streaming and Restore; controllers: PRAID EP400i, PRAID EP420i]

[Diagram: Maximum throughputs [MB/s], sequential access, SAS-12G-SSDs as RAID 5; load profiles Streaming and Restore; controllers: PRAID EP400i, PRAID EP420i]
The data throughputs presented here for “Restore” (approx. 3100 MB/s) and “Streaming” (approx.
5900 MB/s) are the limits of the controllers used here for RAID 5.
In the case of RAID 5 this maximum value for sequential write is a significant indicator for the performance of
a RAID controller, as the speed of the controller is reflected in a relatively undistorted way here in the
calculation of the parity blocks.
In order to achieve the maximum data throughputs here with RAID 5 – despite the SSDs – “Performance”
mode is required in the ServerView RAID Manager. The enabled write cache of the controllers is vital here in
order to achieve the maximum data throughput for “Restore”.
Remarks:
In the cases under consideration here the disk cache was “Disabled”. The percentages for the
setting “Enabled” are in each case very similar to the corresponding percentages for “Disabled”
The ranges of load intensities mentioned here have been modeled within the measurements as
follows: “Low load up to acceptable high load” corresponds to 1-32 outstanding I/Os, “Overload”
corresponds to 64-512 outstanding I/Os
RAID 10 also stands as an example for RAID 0 and RAID 1 (RAID levels without parity calculation);
RAID 5 also stands as an example for RAID 6, RAID 50 and RAID 60 (RAID levels with parity
calculation)
The “Read-ahead” setting is prerequisite to achieving these values for sequential read, as is the setting
“Write-back” for sequential write. These maximum throughputs also depend very much on the block size,
whereby the interrelations of the table values as regards size are similar for other block sizes.
The differences between the controllers in the table become significant at the latest when the logical drive
used is in principle capable of more than 500 MB/s of sequential throughput with 1 outstanding I/O. In such
cases, an inappropriately selected controller can have the effect of a restriction.
The following example illustrates this on the basis of throughput measurements with a logical drive of type
RAID 0 consisting of eight SAS-12G-HDDs for the load profile “Streaming” (sequential access, 100% read,
64 kB block size). The comparison is made between the PRAID CP400i and the PRAID EP400i with differing
numbers of parallel accesses (“# Outstanding IOs”).
[Diagram: Throughput [MB/s] as a function of the number of parallel accesses (1, 2 and 4 outstanding I/Os) for the load profile Streaming with eight SAS-12G-HDDs as RAID 0; PRAID EP400i reaches about 844 and 1568 MB/s at 1 and 2 outstanding I/Os, PRAID CP400i about 490 and 994 MB/s; at 4 outstanding I/Os both controllers reach about 1775 to 1842 MB/s]
You can clearly see that in this case the PRAID CP400i does not achieve the same throughput for 1 and 2
outstanding IOs as the PRAID EP400i. The latter already achieves a throughput of 844 MB/s, whereas the
PRAID CP400i controller only achieves a little more than half.
From the viewpoint of response times, this means that it is possible for low load intensities to approximately
halve the response times with the PRAID EP400i compared with the PRAID CP400i.
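The approximate response times behind this statement can be estimated with Little's law, i.e. response time = outstanding I/Os × block size / throughput; a sketch with the values from the diagram above:

    # Approximate average response time for a given load intensity.
    def response_time_ms(outstanding_ios, block_kb, throughput_mb_s):
        return outstanding_ios * (block_kb / 1024) / throughput_mb_s * 1000

    # 64 kB sequential reads with one outstanding I/O:
    print(response_time_ms(1, 64, 844))   # ~0.074 ms with the PRAID EP400i
    print(response_time_ms(1, 64, 490))   # ~0.128 ms with the PRAID CP400i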
Conclusion
The PRIMERGY and PRIMEQUEST servers use the “Modular RAID” concept to offer a plethora of
opportunities to meet the requirements of various application scenarios.
An onboard controller is a low-priced entry-level alternative for the RAID levels 0, 1 and 10, which saves one
PCIe slot but is restricted to six hard disks. The pro rata consumption of the server's processor performance
is increasingly less important in newer servers.
On the SATA side the current onboard controllers support the standards up to frequency 6G.
In the case of PCIe controllers the current generation supports the standard SAS-12G. As a result, the
maximum real data throughputs were increased from 3800 MB/s to 6280 MB/s compared with the
predecessor generation.
The PRAID CP400i is the PCIe controller suited for average requirements. This controller does not have a
cache, permits up to eight hard disks and supports the RAID solutions RAID 0, RAID 1, RAID 1E, RAID 10
and – contrary to the predecessor controller – also RAID 5.
The PRAID EP400i and PRAID EP420i controllers offer all the current standard RAID solutions RAID 0,
RAID 1, RAID 1E, RAID 5, RAID 6, RAID 10, RAID 50 and RAID 60 in the High-End sector. These
controllers have a controller cache and can as an option be backed up using an FBU. Manifold options to set
the use of the cache make it possible to flexibly adapt the controller performance to suit the RAID level used.
A further optimization option here is the adjustable stripe size. In many application scenarios, for example if
random accesses take place on conventional hard disks with a high load intensity, these controllers enable a
75% higher transaction rate than the PRAID CP400i (example: RAID 0 with four SATA-6G-HDDs, random
access, 50% read, 64 kB block size).
The RAID controllers PRAID EP400i and PRAID EP420i only differ as far as cache size is concerned. The
first controller has a 1 GB cache, and the second one has a 2 GB cache. The larger cache is recommended
for HDDs that are used for random load profiles with a high write share.
The majority of the application scenarios that put a load on the disk subsystem come along with a random
read / write access. If SSDs are used to manage very high IO rates, the controller has considerable influence
on the maximum transaction rate. In the case of a logical drive of type RAID 0 and typical accesses for a
database (67% read, random, block size 8 kB) the PRAID CP400i for example permits up to 164000 IO/s,
and the PRAID EP420i on the other hand permits up to 248000 IO/s, in other words 1.5 times that amount.
The differences are particularly large for a logical drive of type RAID 5: the PRAID CP400i achieves up to
28700 IO/s for typical accesses for a database, while the PRAID EP420i has up to 133000 IO/s, in other
words about 4.6 times that amount. Thus, in the case of RAID 5 in conjunction with SSDs it is imperative to
choose the PRAID EP400i or the PRAID EP420i.
Regardless of the hard disk type, the various controllers each have maximum sequential throughputs that
are specific to the RAID level and the load profile. These maximum values have in part increased
substantially in comparison to the predecessor generation. In the case of sequential write on a logical drive
of type RAID 5 the PRAID EP420i for example achieves approx. 3100 MB/s, whereas the predecessor
controller only achieved approx. 2200 MB/s.
If a higher transaction rate or higher throughput is required for the planned application scenario than a single
controller can provide, two controllers can be used. A number of PRIMERGY servers provide this option
(e. g. PRIMERGY RX2540 M1).
A further aspect of faster controllers with sequential access profiles is the increased throughput that is
already achieved with low access parallelism. If the logical drive is efficient enough, it means that more than
1300 MB/s is possible for read and write with the PRAID EP400i in this special application. Compared with
controllers of the previous generation this also means a significant increase in the maximum throughput for
these special cases.
The RAID-Manager software “ServerView RAID Manager” that is supplied for PRIMERGY servers is
recommended for the configuration of controllers and hard disks. This utility program makes it possible to
conveniently adapt controller and hard disk settings to meet customer requirements regarding performance
and data security in a controller-independent way for the majority of the application scenarios. If FBUs and
UPSs are used as buffers in the case of power failures, maximum performance can be reconciled with data
security.
Literature
PRIMERGY & PRIMEQUEST Servers
http://www.fujitsu.com/fts/products/computing/servers/
Performance of Server Components
http://www.fujitsu.com/fts/products/computing/servers/mission-critical/benchmarks/x86-components.html
This White Paper:
http://docs.ts.fujitsu.com/dl.aspx?id=9845be50-7d4f-4ef7-ac61-bbde399c1014
http://docs.ts.fujitsu.com/dl.aspx?id=7826d783-bc71-4cd7-8486-d74f4dc2509c
http://docs.ts.fujitsu.com/dl.aspx?id=3075886a-3c79-4b5b-8d9f-e9269e083bef
BIOS optimizations for Xeon E5-2600 v4 based systems
http://docs.ts.fujitsu.com/dl.aspx?id=eb90c352-8d98-4f5a-9eed-b5aade5ccae1
BIOS optimizations for Xeon E5-2600 v3 based systems
http://docs.ts.fujitsu.com/dl.aspx?id=f154aca6-d799-487c-8411-e5b4e558c88b
RAID-Controller-Performance 2013 (previous white paper)
http://docs.ts.fujitsu.com/dl.aspx?id=e2489893-cab7-44f6-bff2-7aeea97c5aef
512e HDDs: Technology, Performance, Configurations
http://docs.ts.fujitsu.com/dl.aspx?id=f5550c48-d4db-47f6-ab9d-ce135eaacf81
Basics of Disk I/O Performance
http://docs.ts.fujitsu.com/dl.aspx?id=65781a00-556f-4a98-90a7-7022feacc602
Information about Iometer
http://www.iometer.org
Contact
FUJITSU
Website: http://www.fujitsu.com/
PRIMERGY & PRIMEQUEST Product Marketing
mailto:[email protected]
PRIMERGY Performance and Benchmarks
mailto:[email protected]
© Copyright 2016 Fujitsu Technology Solutions. Fujitsu and the Fujitsu logo are trademarks or registered trademarks of Fujitsu Limited in Japan and other
countries. Other company, product and service names may be trademarks or registered trademarks of their respective owners. Technical data subject to
modification and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may be
trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner.
For further information see http://www.fujitsu.com/fts/resources/navigation/terms-of-use.html