Distributed Storage Performance For OpenStack Clouds Using Small-File IO Workloads: Red Hat Storage Server vs. Ceph Storage
OpenStack cloud environments demand strong storage performance to handle the requests of end users. Software-based distributed storage can provide this performance while also providing much needed flexibility for storage resources.
In our tests, we found that Red Hat Storage Server better handled small-file IO workloads than did Ceph Storage, handling up to two times the number of files per second in some instances. The smallfile tool we used simulated users performing actions on their files to show the kind of end-user performance you could expect using both solutions at various node, VM, and thread counts.
These results show that Red Hat Storage Server can provide equivalent or better performance than Ceph Storage for similar workloads in OpenStack cloud environments, which can help users better access the files they keep in the cloud.
Commissioned by Red Hat, Inc.
OpenStack clouds require fast-acting storage solutions to deliver optimal performance to end users. Software-based distributed storage systems are a popular choice for such environments because they allow for pooled resources with flexible management and scaling capabilities. In cloud environments, IO workloads often use smaller datasets, requiring distributed storage systems to handle small-file workloads and all the associated filesystem actions that occur with this type of IO.

In the Principled Technologies labs, we investigated how two distributed storage solutions, Red Hat Storage Server and Ceph Storage, performed handling small-file IO workloads using the smallfile benchmark tool. We tested how both storage solutions performed a number of common storage operations across various configurations of nodes, virtual machines (VMs), and threads. In our tests, Red Hat Storage Server delivered greater throughput (faster storage performance) in almost every instance, including 124.0 percent more throughput than Ceph when completing the create operation during the workload.
DISTRIBUTED STORAGE TESTING

OpenStack and distributed storage
An OpenStack cloud manages compute, storage, and networking resources. For the backing storage in an OpenStack cloud environment, organizations face the challenge of selecting cost-effective, flexible, and high-performing storage. By using distributed scale-out storage with open-source software, companies can achieve these goals. Software such as Red Hat Storage removes the high cost and specialized skill set that traditional storage arrays require and instead relies on servers as storage nodes, which means that datacenter staff can simply add servers to scale capacity and performance. Conversely, if storage needs decrease, administrators can repurpose those server storage nodes if necessary.

The storage components in an OpenStack environment are Cinder, which handles persistent block storage for guests; Glance, which stores and manages guest images; and Swift, an object storage component. This study focuses on the performance of guest (virtual machine) local file systems constructed on Cinder block devices stored in either Red Hat Storage or Ceph.

The clear winner in our performance and scalability tests was Red Hat Storage. With small-file IO workloads using the smallfile tool, it dramatically outperformed Ceph on nearly every operation at nearly every compute node/VM configuration we tested. In tests where it did not win, Red Hat Storage performed comparably to Ceph. Handling more files per second translates directly into how quickly an end user of an OpenStack cloud application can access or alter files in the cloud. Figure 1 shows the performance the solutions achieved with four server nodes, 16 VMs, and 64 threads across all operations. Here, we normalize performance to the Ceph Storage scores, showing Red Hat Storage performance as a factor of what Ceph achieved. For detailed performance results for all configurations and operations, see the Test results section.
Figure 1: Performance comparison for 4 nodes, 16 VMs, and 64 threads, normalized to Ceph scores.

Operation   Red Hat Storage   Ceph
Create      2.24              1.00
Read        2.21              1.00
Delete      0.99              1.00
Rename      1.05              1.00
Append      1.27              1.00
Software overview
In our tests of two leading open-source distributed storage solutions, we compared the small-file performance of Red Hat Storage Server and Ceph Storage, along with the scalability of both solutions using one to four nodes, one to 16 VMs, and four to 64 threads. We used RDO OpenStack for our OpenStack distribution, and we used the smallfile benchmark running within virtual machine instances on our OpenStack compute nodes to measure filesystem read and write throughput for multiple configurations using each storage solution. For testing, we used the same hardware for both solutions: four compute nodes running RDO OpenStack and four storage nodes running either Red Hat Storage or Ceph Storage. For detailed system configuration information, see Appendix A. Red Hat commissioned these tests and this report.

Testing with smallfile
To test the relative performance of Red Hat Storage Server and Ceph Storage in such a highly virtualized, multi-tenant scenario, we used smallfile, a Python-based, open-source workload tool designed to assess filesystem performance across distributed storage with metadata-intensive file operations. Smallfile is available via GitHub at https://github.com/bengland2/smallfile.git.

Smallfile differs from many synthetic storage benchmarks in that it accounts for metadata operations. Other storage benchmarks often send IO directly to drives or use a small number of very large files during the test iterations, completely bypassing filesystem metadata operations. While these benchmarks can provide useful data, such workloads can be less than ideal for emulating a cloud environment where there is an assumption of high multi-tenancy, smaller VMs, few cores, smaller amounts of vRAM, and fewer available IOPS. In these environments, where small-file IO is common, the application must also use resources working with metadata operations around the IO events, such as opening, closing, deleting, and calculating file distribution or sizes. These conditions lend themselves to use cases involving high numbers of files that are smaller in size, which smallfile can emulate.

We ran smallfile within RHEL VMs residing on four identical compute nodes, with virtual disks attached to each VM that were physically located on four storage nodes. We ran smallfile from within the guests first using Ceph and then using Red Hat Storage Server as the backing storage on the four storage nodes. We used a random exponential distribution of file sizes (as supported by the smallfile tool) to provide a distribution of file sizes similar to what many real-world virtualized, multi-tenant environments would use. Figure 2 shows our test setup for both solutions. The virtio block devices were created in the OpenStack framework as Cinder volumes.
Figure 2: Our test setup for Ceph Storage and Red Hat Storage Server. The virtio block devices were created as Cinder volumes in OpenStack.
In both configurations, we started with a single node, a single VM, and four smallfile workload threads, and then increased threads, VMs, and nodes in a predictable manner up to a maximum of 4 nodes, 16 guests, and 64 total threads (staying at a consistent four threads per VM). Each thread in the smallfile workload operated on a total of 32,768 files, for approximately two million unique files in our maximum configuration (4 nodes/16 VMs/64 threads). The file-size distribution averaged 64 KB; during the append tests, the average file size grew to 128 KB.

Note: Within the smallfile test cycle, each operation is executed on every file (one operation at a time). On the first run, all files are created; on the next run, all files are appended to; and so on until all operations are complete. The operations are: create, read, delete, rename, and append. The metadata functions occur throughout each of the test operations, including the read, write, and append tests. Every time a file is accessed, an OPEN and a CLOSE operation is run.

TEST RESULTS

Create test results
Completing the create operation, Red Hat Storage handled up to 124.0 percent more files per second than Ceph Storage. As we increased node, VM, and thread counts (noted as #N-#V-#T in the charts below), Red Hat Storage continued to deliver increased performance for the workload (see Figure 3). For specific throughput data for this operation, see Figure 4.
Figure 3: Throughput comparison of the storage solutions completing the create operation at various node, VM, and thread counts (files/second).

Create (files/sec)   1N-1V-4T   2N-2V-8T   4N-4V-16T   1N-4V-16T   2N-8V-32T   4N-16V-64T
Red Hat Storage      4,152      7,779      11,893      10,143      17,016      15,977
Ceph                 4,459      6,495      7,081       6,829       7,609       7,131
Red Hat win          -6.9%      19.8%      68.0%       48.5%       123.6%      124.0%

Figure 4: Throughput, in files/second, for the storage solutions completing the create operation at various node, VM, and thread counts.
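The "Red Hat win" row shows Red Hat Storage's relative throughput advantage over Ceph for each configuration. For example, at 4N-16V-64T: (15,977 - 7,131) / 7,131 = 1.240, or 124.0 percent more files per second. The same calculation applies to every results table below.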
Read test results
Completing the read operation, Red Hat Storage handled up to 120.7 percent more files per second than Ceph Storage. As we increased nodes, VMs, and thread counts, Red Hat Storage continued to deliver increased performance for the workload (see Figure 5). For specific throughput data for this operation, see Figure 6.
Figure 5: Throughput comparison of the storage solutions completing the read operation at various node, VM, and thread counts (files/second).

Read (files/sec)     1N-1V-4T   2N-2V-8T   4N-4V-16T   1N-4V-16T   2N-8V-32T   4N-16V-64T
Red Hat Storage      1,879      3,612      6,845       3,636       7,814       11,991
Ceph                 1,307      2,191      3,416       3,348       4,650       5,434
Red Hat win          43.8%      64.9%      100.4%      8.6%        68.0%       120.7%

Figure 6: Throughput, in files/second, for the storage solutions completing the read operation at various node, VM, and thread counts.
Delete test results
Completing the delete operation, Red Hat Storage performed comparably to Ceph Storage at each node, VM, and thread count (see Figure 7). For specific throughput data for this operation, see Figure 8.
Figure 7: Throughput comparison of the storage solutions completing the delete operation at various node, VM, and thread counts (files/second).

Delete (files/sec)   1N-1V-4T   2N-2V-8T   4N-4V-16T   1N-4V-16T   2N-8V-32T   4N-16V-64T
Red Hat Storage      20,882     28,035     55,244      51,840      103,365     201,940
Ceph                 20,716     27,442     54,716      52,121      103,464     203,749
Red Hat win          0.8%       2.2%       1.0%        -0.5%       -0.1%       -0.9%

Figure 8: Throughput, in files/second, for the storage solutions completing the delete operation at various node, VM, and thread counts.
Rename test results
Completing the rename operation, Red Hat Storage handled up to 11.1 percent more files per second than Ceph Storage. As we increased nodes, VMs, and thread counts, Red Hat Storage continued to deliver increased performance for the workload (see Figure 9). For specific throughput data for this operation, see Figure 10.
Figure 9: Throughput comparison of the storage solutions completing the rename operation at various node, VM, and thread counts (files/second).

Rename (files/sec)   1N-1V-4T   2N-2V-8T   4N-4V-16T   1N-4V-16T   2N-8V-32T   4N-16V-64T
Red Hat Storage      18,727     25,433     50,296      43,454      87,455      169,015
Ceph                 16,863     23,782     46,981      42,863      82,828      161,126
Red Hat win          11.1%      6.9%       7.1%        1.4%        5.6%        4.9%

Figure 10: Throughput, in files/second, for the storage solutions completing the rename operation at various node, VM, and thread counts.
Append test results
Completing the append operation, Red Hat Storage handled up to 41.4 percent more files per second than Ceph Storage. As we increased nodes, VMs, and thread counts, Red Hat Storage continued to deliver increased performance for the workload (see Figure 11). For specific throughput data for this operation, see Figure 12.
Figure 11: Throughput comparison of the storage solutions completing the append operation at various node, VM, and thread counts (files/second).

Append (files/sec)   1N-1V-4T   2N-2V-8T   4N-4V-16T   1N-4V-16T   2N-8V-32T   4N-16V-64T
Red Hat Storage      944        1,777      2,548       2,383       2,727       2,779
Ceph                 805        1,257      1,856       1,866       2,075       2,185
Red Hat win          17.3%      41.4%      37.3%       27.7%       31.4%       27.2%

Figure 12: Throughput, in files/second, for the storage solutions completing the append operation at various node, VM, and thread counts.
WHAT WE TESTED

About Red Hat Storage Server
Red Hat Storage Server is a software-based (or, as Red Hat describes it, software-defined) storage platform designed to manage big, semi-structured, and unstructured data growth while maintaining performance, capacity, and availability to meet demanding enterprise storage requirements. Running on open-source software, it aggregates compute and network resources in addition to storage capacity, on both physical infrastructure and in cloud environments, to scale independently beyond the limitations of each type of environment. Along with the ability to deploy on-premises or in a cloud environment, Red Hat Storage Server has flexible deployment options to meet various business needs. For more information about Red Hat Storage, visit http://www.redhat.com/products/storage-server/.

About Ceph Storage
Ceph Storage is an object-based storage system that separates objects from the underlying storage hardware using the Reliable Autonomic Distributed Object Store (RADOS). According to Ceph, the RADOS foundation ensures flexibility in data storage by allowing applications to use object, block, or file system interfaces simultaneously. For more information about Ceph Storage, visit http://ceph.com/ceph-storage/.

IN CONCLUSION
OpenStack cloud environments demand strong storage performance to handle the requests of end users. Software-based distributed storage can provide this performance while also providing much-needed flexibility for storage resources.

In our tests, we found that Red Hat Storage Server better handled small-file IO workloads than did Ceph Storage, handling up to two times the number of files per second in some instances. The smallfile tool we used simulated users performing actions on their files to show the kind of end-user performance you could expect using both solutions at various node, VM, and thread counts.

These results show that Red Hat Storage Server can provide equivalent or better performance than Ceph Storage for similar workloads in OpenStack cloud environments, which can help users better access the files they keep in the cloud.
APPENDIX A
SYSTEM CONFIGURATION INFORMATION

Figure 13 provides detailed configuration information for the systems we used in our tests.

System                                | Dell PowerEdge C8220X (storage node) | Dell PowerEdge C8220 (compute node)
Power supplies
  Total number                        | 2                                    | 2
  Vendor and model number             | Dell B07B                            | Dell B07B
  Wattage of each (W)                 | 2800                                 | 2800
Cooling fans
  Total number                        | 6 (chassis fans)                     | 6 (chassis fans)
  Vendor and model number             | Delta Electronics, Inc. PFC1212DE    | Delta Electronics, Inc. PFC1212DE
  Dimensions (h x w) of each          | 5" x 5" x 1.5"                       | 5" x 5" x 1.5"
  Volts                               | 12                                   | 12
  Amps                                | 4.80                                 | 4.80
General
  Number of processor packages        | 2                                    | 2
  Number of cores per processor       | 8                                    | 8
  Number of hardware threads per core | 2                                    | 2
  System power management policy      | N/A                                  | N/A
CPU
  Vendor                              | Intel                                | Intel
  Name                                | Xeon                                 | Xeon
  Model number                        | E5-2650                              | E5-2650
  Stepping                            | C2                                   | C2
  Socket type                         | LGA2011                              | LGA2011
  Core frequency (GHz)                | 2.00                                 | 2.00
  Bus frequency                       | 4,000                                | 4,000
  L1 cache                            | 32 KB + 32 KB (per core)             | 32 KB + 32 KB (per core)
  L2 cache                            | 256 KB (per core)                    | 256 KB (per core)
  L3 cache                            | 20 MB                                | 20 MB
Memory module(s)
  Total RAM in system (GB)            | 16                                   | 128
  Vendor and model number             | Samsung M393B5273DH0-CK0             | Samsung M393B1K70DH0-CK0
  Type                                | PC3-12800R                           | PC3-12800R
  Speed (MHz)                         | 1,600                                | 1,600
  Size (GB)                           | 4                                    | 8
  Number of RAM module(s)             | 4                                    | 8
  Chip organization                   | Double-sided                         | Double-sided
  Rank                                | Dual                                 | Dual
System                                | Dell PowerEdge C8220X (storage node) | Dell PowerEdge C8220 (compute node)
Operating system
  Name                                | Red Hat Enterprise Linux             | Red Hat Enterprise Linux
  Build number                        | 6.4                                  | 6.5 Beta
  File system                         | xfs                                  | ext4
  Kernel                              | 2.6.32-358.18.1.el6.x86_64           | 2.6.32-415.el6.x86_64
  Language                            | English                              | English
Graphics
  Vendor and model number             | ASPEED AST2300                       | ASPEED AST2300
  Graphics memory (MB)                | 16                                   | 16
RAID controller 1
  Vendor and model number             | Intel C600                           | Intel C600
  Cache size                          | N/A                                  | N/A
RAID controller 2
  Vendor and model number             | LSI 9265-8i                          | N/A
  Cache size                          | 1 GB                                 | N/A
Hard drive
  Vendor and model number             | Dell 9RZ168-136                      | Dell 9RZ168-136
  Number of disks in system           | 2                                    | 2
  Size (GB)                           | 1,000                                | 1,000
  Buffer size (MB)                    | 32                                   | 32
  RPM                                 | 7,200                                | 7,200
  Type                                | SATA 6.0 Gb/s                        | SATA 6.0 Gb/s
Hard drive 2
  Vendor and model number             | Dell 9TG066-150                      | N/A
  Number of disks in system           | 8                                    | N/A
  Size (GB)                           | 600                                  | N/A
  Buffer size (MB)                    | 64                                   | N/A
  RPM                                 | 10,000                               | N/A
  Type                                | SAS 6.0 Gb/s                         | N/A
Ethernet adapters
First network adapter
  Vendor and model number             | Intel I350-BT2                       | Intel I350-BT2
  Type                                | Integrated                           | Integrated
Second network adapter
  Vendor and model number             | Mellanox MAX383A                     | Mellanox MAX383A
  Type                                | 10/40 GbE                            | 10/40 GbE
USB ports
  Number                              | 2                                    | 2
  Type                                | 2.0                                  | 2.0

Figure 13: Configuration information for our test systems.
APPENDIX B
TEST SETUP OVERVIEW

Compute nodes and OpenStack controller
We installed four compute server nodes with Red Hat Enterprise Linux 6.5 Beta to be used for the OpenStack cloud. Each compute node contained two hard disks, which we configured in a RAID 1 mirror, where we installed the operating system. We used a separate server node to serve as our OpenStack controller, on which we ran all OpenStack services (Neutron, Cinder, Horizon, Keystone, MySQL) other than nova-compute, which ran on the compute nodes. Figure 14 shows our test configuration.
Figure 14: Test hardware configuration.
Storage server configuration
Each of the four storage nodes contained two 1TB 7,200RPM SATA disks, which we configured in a RAID 1 mirror. On this RAID 1 set, we created two logical volumes. On the first, we installed Red Hat Storage 2.1 Beta for the Red Hat Storage tests. On the second logical volume, we installed Red Hat Enterprise Linux 6.4 and the necessary Ceph Storage packages for the Ceph Storage tests (ceph version 0.67.4). To switch between storage platforms, we used GRUB to choose the boot volume and booted the storage nodes into the correct environment, either Red Hat Storage or Ceph Storage. These configurations remained constant amongst the four storage nodes.

Each of the four storage nodes also contained eight 600GB 10K RPM SAS disks. We configured these disks to be our data disks for testing, and varied our approach based on each platform's best practices and recommendations. For Red Hat Storage, we configured these eight disks in an eight-disk RAID 6 volume and presented the volume to Red Hat Storage. Figure 15 shows the node configuration for Red Hat Storage tests.
Figure 15: Red Hat Storage node configuration.
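The exact brick and Gluster volume layout we used is not reproduced in this report; the following is a minimal sketch of how each node's RAID 6 data volume could be formatted as an XFS brick and combined into a two-way replicated volume across the four nodes. The device name /dev/sdb, brick path /rhs/brick1, replica count, and volume name cindervol are all assumptions for illustration.

# On each storage node: format the RAID 6 virtual drive as an XFS brick and mount it (sketch)
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /rhs/brick1
echo "/dev/sdb /rhs/brick1 xfs defaults 0 0" >> /etc/fstab
mount /rhs/brick1

# On one storage node: create and start a distributed-replicated volume spanning the four nodes (sketch)
gluster volume create cindervol replica 2 storage1:/rhs/brick1/brick storage2:/rhs/brick1/brick storage3:/rhs/brick1/brick storage4:/rhs/brick1/brick
gluster volume start cindervol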
For Ceph Storage, we configured eight RAID 0 volumes (one for each physical disk) and presented all of them to Ceph Storage, whereby it could then use an independent OSD on each physical disk, per Ceph Storage best practices. These configurations remained constant amongst the four storage nodes. Figure 16 shows the node configuration for our Ceph tests.
Figure 16: Ceph Storage node configuration.
Figure 17 details the software versions we used in our tests.

Servers                               | Operating system                  | Additional software
OpenStack Controller                  | Red Hat Enterprise Linux 6.5 Beta | RDO 2013.2 b3
OpenStack Compute nodes               | Red Hat Enterprise Linux 6.5 Beta | qemu-kvm-rhev-0.12.1.2-2.411, glusterfs-api-3.4.0.34rhs-1, librbd1-0.67.4-0
Storage nodes (Red Hat Storage tests) | Red Hat Storage 2.1 Beta          | glusterfs-server-3.4.0.19rhs-2
Storage nodes (Ceph Storage tests)    | Red Hat Enterprise Linux 6.4      | ceph-0.67.4-0

Figure 17: Software versions we used in our tests.
APPENDIX C
DETAILED CONFIGURATION STEPS

In this section, we review in detail the steps we followed on the various machines to install and configure the various components. Shell commands are shown along with the relevant file contents and command output.
Configuring Red Hat Network Beta repositories
1. On each machine that will use Red Hat Enterprise Linux Beta, configure the RHN Beta repositories.
subscription-manager repos --enable=rhel-6-server-beta-rpms
subscription-manager repos --enable=rhel-6-server-optional-beta-rpms

Configuring networking on all servers
1. Install the necessary rpms by using the following commands:
yum install -y openssh-clients wget acpid cpuspeed tuned sysstat sysfsutils
2. Bring these devices down using the following commands:
ifconfig p2p1 down
ifconfig ib0 down
3. Remove the ifcfg files using the following commands:
cd /etc/sysconfig/network-scripts/
rm -f ifcfg-p[0-9]p[0-9] ifcfg-ib[0-9]
4. Configure the Mellanox components using the following commands:
modprobe -r mlx4_en mlx4_ib mlx4_core
sed -i '/mlx4_core/,+1d' /etc/udev/rules.d/70-persistent-net.rules
echo "install mlx4_core /sbin/modprobe --ignore-install mlx4_core msi_x=1 enable_64b_cqe_eqe=1 port_type_array=2 && /sbin/modprobe mlx4_en" > /etc/modprobe.d/mlx4.conf
modprobe mlx4_core
5. Disable SELinux using the following commands:
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot
6. Edit the /etc/hosts file on every host using the following command. Run vi and edit the hosts file.
vi /etc/hosts
192.168.43.202 storage2.test.lan storage2
192.168.43.203 storage3.test.lan storage3
192.168.43.204 storage4.test.lan storage4
Configuring additional networking: OpenStack controller
1. Edit the network configuration for the first NIC using the following command. Run vi and edit the ifcfg-em1 file.
vi ifcfg-em1
We used the following settings:
DEVICE=em1
TYPE=Ethernet
ONBOOT=yes
IPADDR=192.168.43.10
PREFIX=24
MTU=9000
2. Edit the network configuration for the second NIC using the following command. Run vi and edit the ifcfg-em2 file.
vi ifcfg-em2
We used the following settings:
DEVICE=em2
TYPE=Ethernet
ONBOOT=yes
MTU=9000
3. Set up passwordless ssh access for all relevant nodes from the OpenStack controller using the following commands:
ssh-keygen
ssh-copy-id cephmon
ssh-copy-id compute1
ssh-copy-id compute2
ssh-copy-id compute3
ssh-copy-id compute4
ssh-copy-id storage1
ssh-copy-id storage2
ssh-copy-id storage3
ssh-copy-id storage4
4. Configure DNS using the following command:
echo "nameserver 192.168.43.1" > /etc/resolv.conf
service network restart

Configuring additional networking: OpenStack compute nodes
1. Edit the network configuration for the first NIC using the following command. Run vi and edit the ifcfg-em1 file.
vi ifcfg-em1
We used the following settings:
DEVICE=em1
TYPE=Ethernet
ONBOOT=no
IPADDR=192.168.43.101
PREFIX=24
MTU=9000
2. Edit the network configuration for the second NIC using the following command. Run vi and edit the ifcfg-em2 file.
vi ifcfg-em2
We used the following settings:
DEVICE=em2
TYPE=Ethernet
ONBOOT=yes
MTU=9000
3. Edit the network configuration for the third NIC using the following commands. Run vi and edit the ifcfg-eth0 file.
cp -p ifcfg-em1 ifcfg-eth0
vi ifcfg-eth0
We used the following settings:
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
IPADDR=192.168.43.101
PREFIX=24
MTU=9000
4. Configure DNS using the following command:
echo "nameserver 192.168.43.1" > /etc/resolv.conf
service network restart
Installing OpenStack
1. On the OpenStack controller machine, install the RDO rpms using the following commands:
yum install -y http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
yum install -y http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
2. On the OpenStack controller machine, install PackStack using the following commands:
yum install -y openstack-packstack
packstack --gen-answer-file=packstack-answer-havana.txt
cp packstack-answer-havana.txt packstack-answer-havana.txt.orig
3. Edit the PackStack configuration file. Below we show the revisions we made to our PackStack configuration file from the original default file.
vi packstack-answer-havana.txt
< CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
---
> CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local
253c253
< CONFIG_NEUTRON_OVS_VLAN_RANGES=inter-vlan:1200:1205
---
> CONFIG_NEUTRON_OVS_VLAN_RANGES=
257c257
< CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=inter-vlan:br-inst
---
> CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
261c261
< CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-inst:em2
---
> CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
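We made these edits by hand in vi; equivalently, the same four answer-file keys could be set non-interactively. The following sed one-liner is a minimal sketch of that alternative (not the method we used):

sed -i \
  -e 's/^CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=.*/CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan/' \
  -e 's/^CONFIG_NEUTRON_OVS_VLAN_RANGES=.*/CONFIG_NEUTRON_OVS_VLAN_RANGES=inter-vlan:1200:1205/' \
  -e 's/^CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=.*/CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=inter-vlan:br-inst/' \
  -e 's/^CONFIG_NEUTRON_OVS_BRIDGE_IFACES=.*/CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-inst:em2/' \
  packstack-answer-havana.txt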
4. On the OpenStack controller, run PackStack using the following command:
packstack --answer-file=packstack-answer-havana.txt
The output should be similar to the following:
Welcome to Installer setup utility

Additional information:
 * Did not create a cinder volume group, one already existed
 * To use the command line tools you need to source the file /root/keystonerc_admin created on 192.168.43.10
 * To use the console, browse to http://192.168.43.10/dashboard
 * The installation log file is available at: /var/tmp/packstack/20131001-030053-rzecgC/openstack-setup.log
Configuring OpenStack
Configuring Neutron
#### NOTE: THIS IS WORKAROUND FOR RDO AND RHEL6.5 AT THE TIME OF WRITING
#### DO ON ALL OPENSTACK SERVERS
yum downgrade iproute

#### NOTE: THIS FIXES A BUG WITH UNSUPPORTED HARDWARE VLAN OFFLOAD
ovs-vsctl set interface em2 other-config:enable-vlan-splinters=true
#### DO ALL THESE ON THE CONTROLLER
source ~/keystonerc_admin

neutron net-create priv_net
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 1cf06d0d-7bda-4665-90ea-92faf11071ca |
| name                      | priv_net                             |
| provider:network_type     | vlan                                 |
| provider:physical_network | inter-vlan                           |
| provider:segmentation_id  | 1200                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 1fbb0466632b42008ffc9e7a2f7f0f6f     |
+---------------------------+--------------------------------------+
neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 0cab02ce-800b-43b0-90c9-cc202dd19b72 |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | 1fbb0466632b42008ffc9e7a2f7f0f6f     |
+-----------------------+--------------------------------------+
neutron router-gateway-set router1 ext_net
Set gateway for router router1

neutron router-interface-add router1 priv_subnet
Added interface 51ce34ea-3980-47bf-8ece-f4b2ce17fdcd to router router1.

neutron security-group-rule-create --protocol icmp --direction ingress default
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | ac2e33ca-741e-4b59-aec7-830a6332b79a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  |                                      |
| security_group_id | ff8c2ff5-a9db-4ec0-bf3e-d0ac249d8fe4 |
| tenant_id         | 1fbb0466632b42008ffc9e7a2f7f0f6f     |
+-------------------+--------------------------------------+

rm -f floatingip_list.txt; for i in `seq 1 16`; do neutron floatingip-create ext_net | awk "/floating_ip_address/{print \$4\"\tvm$i\"}" | tee -a floatingip_list.txt ; done
10.35.1.101  vm1
10.35.1.102  vm2
10.35.1.103  vm3
10.35.1.104  vm4
10.35.1.105  vm5
10.35.1.106  vm6
10.35.1.107  vm7
10.35.1.108  vm8
10.35.1.109  vm9
10.35.1.110  vm10
10.35.1.111  vm11
10.35.1.112  vm12
10.35.1.113  vm13
10.35.1.114  vm14
10.35.1.115  vm15
10.35.1.116  vm16
cat floatingip_list.txt >> /etc/hosts
#### PREPARE KEYS (on the controller only)
nova keypair-add GUEST_KEY > GUEST_KEY.pem && chmod 600 GUEST_KEY.pem

Configuring Availability Zones
#### DO ALL THESE ON THE CONTROLLER
nova availability-zone-list

nova aggregate-create compaggr1 compzone1
nova aggregate-create compaggr2 compzone2
nova aggregate-create compaggr3 compzone3
nova aggregate-create compaggr4 compzone4

nova aggregate-add-host compaggr1 compute1.test.lan
nova aggregate-add-host compaggr2 compute2.test.lan
nova aggregate-add-host compaggr3 compute3.test.lan
nova aggregate-add-host compaggr4 compute4.test.lan
qemu-img.x86_64 2:0.12.1.2-2.398.el6
qemu-kvm.x86_64 2:0.12.1.2-2.398.el6
Updating Gluster Client
Update these packages on the compute nodes:
umount -a -t fuse.glusterfs
yum install -y gluster_update/glusterfs*rpm
#### Do these 2 commands on all compute nodes and controller/cinder ####
openstack-config --set /etc/nova/nova.conf DEFAULT qemu_allowed_storage_drivers gluster
openstack-config --set /etc/nova/nova.conf DEFAULT debug False

#### ON CONTROLLER
for i in api scheduler volume; do sudo service openstack-cinder-${i} stop; done
for i in api scheduler volume; do sudo service openstack-cinder-${i} start; done

cinder type-create gluster
cinder type-key gluster set volume_backend_name=GLUSTER
cinder extra-specs-list
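The Cinder GlusterFS backend definition that the GLUSTER backend name above refers to is not reproduced in this report. The following is a minimal sketch of the relevant cinder.conf settings; the shares file path and the cindervol volume name are assumptions for illustration.

# /etc/cinder/cinder.conf (excerpt, sketch)
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/shares.conf
volume_backend_name=GLUSTER

# /etc/cinder/shares.conf (sketch; one Gluster volume per line)
storage1:/cindervol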
Installing Ceph Storage and configuring Cinder for Ceph Storage
#### Install repos on ceph monitor and all storage nodes then install main ceph packages
yum install -y http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
yum install -y ceph

#### on ceph monitor (hostname: cephmon)
yum install -y ceph-deploy

ceph-deploy new cephmon
ceph-deploy mon create cephmon
ceph-deploy gatherkeys cephmon

ceph-deploy disk list storage{1,2,3,4}
ceph-deploy disk zap storage{1,2,3,4}:sd{a,b,c,d,e,f,g,h}
ceph-deploy osd create storage{1,2,3,4}:sd{a,b,c,d,e,f,g,h}
ceph-deploy disk list storage{1,2,3,4}

[storage4][INFO ] /dev/sdf1 ceph data, active, cluster ceph, osd.29, journal /dev/sdf2
[storage4][INFO ] /dev/sdf2 ceph journal, for /dev/sdf1
[storage4][INFO ] /dev/sdg :
[storage4][INFO ] /dev/sdg1 ceph data, active, cluster ceph, osd.30, journal /dev/sdg2
[storage4][INFO ] /dev/sdg2 ceph journal, for /dev/sdg1
[storage4][INFO ] /dev/sdh :
[storage4][INFO ] /dev/sdh1 ceph data, active, cluster ceph, osd.31, journal /dev/sdh2
[storage4][INFO ] /dev/sdh2 ceph journal, for /dev/sdh1
[storage4][INFO ] /dev/sdi other, isw_raid_member
[storage4][INFO ] /dev/sdj other, isw_raid_member
Configuring Ceph pools for OpenStack
ceph osd pool create volumes 1600
ceph osd pool create images 128

Configuring Cinder for Ceph Storage
#### all openstack systems controller, compute
yum install -y http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm

#### on cinder server
yum install -y ceph
mkdir /usr/lib64/qemu
ln -s /usr/lib64/librbd.so.1 /usr/lib64/qemu/librbd.so.1
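The full cinder.conf and cephx key distribution steps are not reproduced here. The following is a minimal sketch of the Cinder RBD backend settings that would point Cinder at the volumes pool created above; the rbd_user name, the libvirt secret placeholder, and the CEPH backend name are assumptions for illustration.

# /etc/cinder/cinder.conf (excerpt, sketch)
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=<libvirt secret UUID for the cephx key>
volume_backend_name=CEPH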
# rules
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule metadata {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule rbd {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
Starting VMs and creating cinder disks
Below, we include a sample script to start VMs and create cinder volumes.
#!/bin/bash
COUNT=0
COMPUTE_NODES=4
if [ "${2}" = "" ]; then
  COUNT=${1:-0}
else
  COMPUTE_NODES=${1:-1}
  COUNT=${2}
fi

DELAY=1
#VOLTYPE=gluster
VOLTYPE=ceph
VOLSIZE=40

PRIV_NET_ID=`neutron net-list -F id -F name -f csv --quote none | awk -F',' '/priv_net/{print $1}'`

for i in `seq 1 $COUNT`; do
  VM=vm$i
  nova boot --image rhel-6.4-smallfile --flavor m1.smallfile --key_name GUEST_KEY --availability-zone compzone`expr \( ${i} - 1 \) % ${COMPUTE_NODES} + 1` --nic net-id=${PRIV_NET_ID} ${VM}
  sleep $DELAY
  DEVICE_ID=`nova list --name $VM | awk "/$VM/{ print \\$2 }"`
  #sleep $DELAY
  PORT_ID=`neutron port-list -- --device_id ${DEVICE_ID} | awk '/ip_address/{print $2}'`
  FLOATIP=`resolveip -s $VM`
  FLOATING_ID=`neutron floatingip-list | awk "/$FLOATIP/{ print \\$2 }"`
  until [ `nova list --name $VM | awk "/$VM/{ print \\$6 }"` = ACTIVE ]; do
    sleep $DELAY
  done
  neutron floatingip-associate $FLOATING_ID $PORT_ID
  sleep $DELAY
  echo -n "Pinging ${VM}..."
  until ping -qc 1 ${VM} 1> /dev/null 2> /dev/null ; do
    sleep 1
    echo -n "."
  done
  echo "success!"
done
sleep 30
for i in `seq 1 $COUNT`; do
  VM=vm$i
  sleep $DELAY
  until [ `jobs | wc -l` -lt 2 ]; do
    sleep 1
  done
  if [ "`cinder list --display-name ${VM}_${VOLTYPE}vol | awk '/available/{print $2}'`" = "" ]; then
    cinder create --volume_type $VOLTYPE --display_name ${VM}_${VOLTYPE}vol $VOLSIZE
    sleep $DELAY
    while [ "`cinder list --display-name ${VM}_${VOLTYPE}vol | awk '/available/{print $2}'`" = "" ]; do
      sleep 1
    done
    nova volume-attach ${VM} `cinder list --display-name ${VM}_${VOLTYPE}vol | awk '/available/{print $2}'` /dev/vdb
    sleep $DELAY
    BLOCKS=`expr $VOLSIZE \* 1024`
    sleep $DELAY
    ssh -i GUEST_KEY.pem -o StrictHostKeyChecking=no ${VM} "dd if=/dev/zero of=/dev/vdb bs=1M count=$BLOCKS ; sync"
    ./run_all_storage.sh "sync"
    sleep $DELAY
  else
    nova volume-attach ${VM} `cinder list --display-name ${VM}_${VOLTYPE}vol | awk '/available/{print $2}'` /dev/vdb
  fi
done
wait
Installing and configuring the base VM
Our VM base image used a 4GB qcow2 virtual disk, 4,096 MB of RAM, and two vCPUs. We installed Red Hat Enterprise Linux 6.4, chose to connect the virtual NIC automatically, and performed a custom disk layout. The disk layout configuration included one standard partition where we used the whole disk with the ext4 filesystem, mountpoint=/, and no swap. We chose the minimal package selection on installation. Below we show the additional steps we followed for the base VM, as well as installed and updated packages.
#### Disable selinux.
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

#### Disable firewall.
iptables -F
/etc/init.d/iptables save
chkconfig iptables off

#### Subscribe system to RHN:
subscription-manager register
subscription-manager refresh

#### Install updates:
yum update -y
Installed:
kernel.x86_64 0:2.6.32-358.18.1.el6

subscription-manager.x86_64 0:1.8.22-1.el6_4
tzdata.noarch 0:2013c-2.el6
upstart.x86_64 0:0.6.5-12.el6_4.1
util-linux-ng.x86_64 0:2.17.2-12.9.el6_4.3
yum-rhn-plugin.noarch 0:0.9.1-49.el6
#### after updates reboot
reboot

#### Edit /boot/grub/grub.conf
vi /boot/grub/grub.conf

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You do not have a /boot partition. This means that
# all kernel and initrd paths are relative to /, eg.
# root (hd0,0)
# kernel /boot/vmlinuz-version ro root=/dev/vda1
# initrd /boot/initrd-[generic-]version.img
#boot=/dev/vda
default=0
timeout=3
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.18.1.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-358.18.1.el6.x86_64 ro root=/dev/vda1 console=tty0 console=ttyS0
        initrd /boot/initramfs-2.6.32-358.18.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/vda1 console=tty0 console=ttyS0
        initrd /boot/initramfs-2.6.32-358.el6.x86_64.img
perl-libs.x86_64 4:5.10.1-131.el6_4
perl-version.x86_64 3:0.77-131.el6_4
vim-common.x86_64 2:7.2.411-1.8.el6
#### Add Common channel:
rhn-channel --add --channel=rhel-x86_64-server-rh-common-6

#### Install and configure cloud-init packages:
yum install -y cloud-init

#### To...
ssh_pwauth: 1
datasource_list: ["ConfigDrive", "Ec2", "NoCloud"]

Cleaning up and preparing for qcow2 compact
yum clean all
rm -rf /var/tmp/*
rm -rf /tmp/*
dd if=/dev/zero of=/zerofile.tmp bs=64k ; sync ; rm -f /zerofile.tmp ; sync
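The steps we used to register the compacted base image with Glance and to define the flavor are not shown above. The following is a minimal sketch of how the image and flavor referenced by the boot script (rhel-6.4-smallfile and m1.smallfile) could be created; the rhel64-base.qcow2 file name, the public visibility flag, and the 5 GB root disk size are assumptions, while the RAM and vCPU counts match the base VM description.

# Register the compacted qcow2 image with Glance (sketch; file name is hypothetical)
glance image-create --name rhel-6.4-smallfile --disk-format qcow2 \
  --container-format bare --is-public True --file rhel64-base.qcow2

# Define the flavor used by the boot script (sketch; 4,096 MB RAM, 5 GB root disk, 2 vCPUs)
nova flavor-create m1.smallfile auto 4096 5 2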
Running the tests
Each VM used its own cinder volume, backed by either Red Hat Storage or Ceph Storage, for each test. We formatted the virtual disk using ext4. The disk was reformatted before running the tests for every VM/node count combination. All filesystem caches were cleared on the storage nodes, compute nodes, and VMs before each smallfile operation type.
mkfs.ext4 /dev/vdb
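The smallfile command below writes to /mnt/test inside each guest; a minimal sketch of mounting the freshly formatted Cinder volume there, assuming the attached volume appears as /dev/vdb as in the scripts above, is:

# Inside each VM: mount the test filesystem before a run (sketch)
mkdir -p /mnt/test
mount /dev/vdb /mnt/test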
Example smallfile command:
python smallfile_cli.py --top /mnt/test --network-sync-dir /mnt/nfsexport/smf --remote-pgm-dir /root/smallfile --response-times Y --hash-into-dirs Y --file-size-distribution exponential --fsync Y --pause 10 --threads 4 --file-size 64 --files 32768 --host-set vm1 --operation create
We used the following three scripts to run smallfile tests on the VMs from a remote system.
SMF_NETMOUNT="192.168.43.11:/mnt/nfsshare"
SMF_OPERATIONS="create append read rename delete-renamed"

if [ "${1}" = "ceph" ]; then
  CEPH_MON=1
fi

# Initialize SMF host list
SMF_HOSTS="vm1"
for i in `seq 2 ${VM_COUNT}`; do
  SMF_HOSTS="${SMF_HOSTS},vm$i"
done

# Cleanup and drop caches
for i in `seq 1 ${VM_COUNT}`; do
  ssh -i GUEST_KEY.pem vm${i} "sync ; echo 3 > /proc/sys/vm/drop_caches ; sync"
done

for i in `seq 1 ${COMPUTE_COUNT}`; do
  ssh compute${i} "sync ; echo 3 > /proc/sys/vm/drop_caches ; sync"
done

for i in `seq 1 ${STORAGE_COUNT}`; do
  ssh storage${i} "sync ; echo 3 > /proc/sys/vm/drop_caches ; sync"
done
sleep $INTERVAL
# Start statistics collection
echo Starting stat collection...
for i in `seq 1 ${VM_COUNT}`; do
  ssh -i GUEST_KEY.pem vm${i} "pkill vmstat ; vmstat -n ${INTERVAL} | sed -e 's/^[ \t]*//' -e '3d'" > ${RESULT_DIR}/vmstat_vm${i}.log &
  ssh -i GUEST_KEY.pem vm${i} "pkill sar ; rm -f /root/sar_*.bin ; sar -o sar_vm${i}.bin ${INTERVAL} > /dev/null" &
done

for i in `seq 1 ${COMPUTE_COUNT}`; do
  ssh compute${i} "pkill vmstat ; vmstat -n ${INTERVAL} | sed -e 's/^[ \t]*//' -e '3d'" > ${RESULT_DIR}/vmstat_compute${i}.log &
  ssh compute${i} "pkill sar ; rm -f /root/sar_*.bin ; sar -o sar_compute${i}.bin ${INTERVAL} > /dev/null" &
done

for i in `seq 1 ${STORAGE_COUNT}`; do
  ssh storage${i} "pkill vmstat ; vmstat -n ${INTERVAL} | sed -e 's/^[ \t]*//' -e '3d'" > ${RESULT_DIR}/vmstat_storage${i}.log &
  ssh storage${i} "pkill sar ; rm -f /root/sar_*.bin ; sar -o sar_storage${i}.bin ${INTERVAL} > /dev/null" &
done

if [ $CEPH_MON -eq 1 ]; then
  ssh cephmon "pkill vmstat ; vmstat -n ${INTERVAL} | sed -e 's/^[ \t]*//' -e '3d'" > ${RESULT_DIR}/vmstat_cephmon.log &
  ssh cephmon "pkill sar ; rm -f /root/sar_*.bin ; sar -o sar_cephmon.bin ${INTERVAL} > /dev/null" &
  ssh cephmon "pkill python ; nohup python -u /usr/bin/ceph -w > ceph_status.log &"
fi
# Run smallfile
sleep `expr $INTERVAL \* 2`
echo Running smallfile test: $SMF_RESULT
$SMF_CMD --operation $OPERATION | tee ${RESULT_DIR}/${SMF_RESULT}.txt
sleep `expr $INTERVAL \* 2`
# Stop statistics collection
for i in `seq 1 ${VM_COUNT}`; do
  ssh -i GUEST_KEY.pem vm${i} "pkill vmstat ; pkill iostat ; pkill sar"
done
for i in `seq 1 ${COMPUTE_COUNT}`; do
  ssh compute${i} "pkill vmstat ; pkill iostat ; pkill sar"
done
for i in `seq 1 ${STORAGE_COUNT}`; do
  ssh storage${i} "pkill vmstat ; pkill iostat ; pkill sar"
done
if [ $CEPH_MON -eq 1 ]; then
  ssh cephmon "pkill vmstat ; pkill iostat ; pkill python ; killall -w sar"
  scp cephmon:/root/sar_*.bin ${RESULT_DIR}/
  scp cephmon:/root/ceph_status.log ${RESULT_DIR}/
fi
wait
# Copy stat files
for i in `seq 1 ${VM_COUNT}`; do
  scp -i GUEST_KEY.pem vm${i}:/root/sar_*.bin ${RESULT_DIR}/
done
for i in `seq 1 ${COMPUTE_COUNT}`; do
  scp compute${i}:/root/sar_*.bin ${RESULT_DIR}/
done
for i in `seq 1 ${STORAGE_COUNT}`; do
  scp storage${i}:/root/sar_*.bin ${RESULT_DIR}/
done
cp -fv ${SMF_NETSHARE}/smf/*.csv ${RESULT_DIR}/

./prepare_vms_smallfile.sh $VM_COUNT
sleep 1
time ./smallfile_test.sh $STORAGE_TYPE $FILESIZE 32768 $VM_COUNT $COMPUTE_COUNT 4 N 10
Example output from a smallfile test:
smallfile version 2.1
hosts in test : ['vm1']
top test directory(s) : ['/mnt/test']
operation : create
files/thread : 32768
threads : 4
record size (KB) : 0
file size (KB) : 64
file size distribution : random exponential
files per dir : 100
dirs per dir : 10
threads share directories? : N
filename prefix :
filename suffix :
hash file number into dir.? : Y
fsync after modify? : N
pause between files (microsec) : 10
finish all requests? : Y
stonewall? : Y
measure response times? : Y
verify read? : Y
verbose? : False
log to stderr? : False
permute host directories? : N
remote program directory : /root/smallfile
network thread sync. dir. : /mnt/nfsexport/smf
host = vm1, thread = 00, elapsed sec. = 28.767661, total files = 32768, total_records = 32769, status = ok
host = vm1, thread = 01, elapsed sec. = 28.708897, total files = 32768, total_records = 32769, status = ok
host = vm1, thread = 02, elapsed sec. = 28.722196, total files = 32700, total_records = 32701, status = ok
host = vm1, thread = 03, elapsed sec. = 28.724667, total files = 32500, total_records = 32501, status = ok
total threads = 4
total files = 130736
total data = 7.980 GB
99.74% of requested files processed, minimum is 70.00
4544.547420 files/sec
4544.686465 IOPS
28.767661 sec elapsed time, 284.042904 MB/sec
ABOUT PRINCIPLED TECHNOLOGIES
Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC 27703
www.principledtechnologies.com

We provide industry-leading technology assessment and fact-based marketing services. We bring to every assignment extensive experience with and expertise in all aspects of technology testing and analysis, from researching new technologies, to developing new methodologies, to testing with existing and new tools.
When the assessment is complete, we know how to present the results to a broad range of target audiences. We provide our clients with the materials they need, from market-focused data to use in their own collateral to custom sales aids, such as test reports, performance assessments, and white papers. Every document reflects the results of our trusted independent analysis.
We provide customized services that focus on our clients' individual requirements. Whether the technology involves hardware, software, Web sites, or services, we offer the experience, expertise, and tools to help our clients assess how it will fare against its competition, its performance, its market readiness, and its quality and reliability.

Our founders, Mark L. Van Name and Bill Catchings, have worked together in technology assessment for over 20 years. As journalists, they published over a thousand articles on a wide array of technology subjects. They created and led the Ziff-Davis Benchmark Operation, which developed such industry-standard benchmarks as Ziff Davis Media's Winstone and WebBench. They founded and led eTesting Labs, and after the acquisition of that company by Lionbridge Technologies were the head and CTO of VeriTest.

Principled Technologies is a registered trademark of Principled Technologies, Inc. All other product names are the trademarks of their respective owners.

Disclaimer of Warranties; Limitation of Liability: PRINCIPLED TECHNOLOGIES, INC. HAS MADE REASONABLE EFFORTS TO ENSURE THE ACCURACY AND VALIDITY OF ITS TESTING, HOWEVER, PRINCIPLED TECHNOLOGIES, INC. SPECIFICALLY DISCLAIMS ANY WARRANTY, EXPRESSED OR IMPLIED, RELATING TO THE TEST RESULTS AND ANALYSIS, THEIR ACCURACY, COMPLETENESS OR QUALITY, INCLUDING ANY IMPLIED WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE. ALL PERSONS OR ENTITIES RELYING ON THE RESULTS OF ANY TESTING DO SO AT THEIR OWN RISK, AND AGREE THAT PRINCIPLED TECHNOLOGIES, INC., ITS EMPLOYEES AND ITS SUBCONTRACTORS SHALL HAVE NO LIABILITY WHATSOEVER FROM ANY CLAIM OF LOSS OR DAMAGE ON ACCOUNT OF ANY ALLEGED ERROR OR DEFECT IN ANY TESTING PROCEDURE OR RESULT.

IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC. BE LIABLE FOR INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH ITS TESTING, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT SHALL PRINCIPLED TECHNOLOGIES, INC.'S LIABILITY, INCLUDING FOR DIRECT DAMAGES, EXCEED THE AMOUNTS PAID IN CONNECTION WITH PRINCIPLED TECHNOLOGIES, INC.'S TESTING. CUSTOMER'S SOLE AND EXCLUSIVE REMEDIES ARE AS SET FORTH HEREIN.