Big Data Network
Many references on big data applications are available, such as Hadoop: The Definitive Guide, Second Edition, which is referenced in this document.
Several trends are fueling the growth of big data:
Mobility trends: Mobile devices, mobile events and sharing, and sensory integration
Data access and consumption: Internet, interconnected systems, social networking, and convergent interfaces and access models (Internet, search and social networking, and messaging)
Ecosystem capabilities: Major changes in the information processing model and the availability of an open source framework; general-purpose computing and unified network integration
Data generation, consumption, and analytics have provided competitive business advantages for Web 2.0 portals and Internet-centric firms that offer services to customers and achieve service differentiation through correlation of adjacent data. (An IDC data study provides a compelling view of the future of data growth; see http://idcdocserv.com/1142.) With the rise of business intelligence data mining and analytics spanning market research, behavioral modeling, and inference-based decision making, data can be used to provide a competitive advantage. Here are just a few of the nearly limitless use cases of big data for companies with a large Internet presence:
Targeted marketing and advertising
Related attached sale promotions
Analysis of behavioral social patterns
Metadata-based optimization of workload and performance management for millions of users
Hadoop: Provides storage capability through a distributed, shared-nothing file system, and analysis capability through MapReduce
NoSQL: Provides the capability to capture, read, and update, in real time, the large influx of unstructured data and data without schemas; examples include click streams, social media, log files, event data, mobility trends, and sensor and machine data
Figure 1.
Figure 2.
Hadoop Overview
The challenge facing companies today is how to analyze this massive amount of data to find those critical pieces of information that provide a competitive edge. Hadoop provides the framework to handle massive amounts of data: to either transform it to a more usable structure and format or analyze and extract valuable analytics from it.
Hadoop History
"Every two days we create as much information as we did from the dawn of civilization up until 2003."
- Eric Schmidt, former CEO of Google
During the upturn in Internet traffic in the early 2000s, the scale of data reached terabyte and petabyte levels on a daily basis for many companies. At those levels, standard databases could no longer scale enough to handle the so-called big data. To address the challenge of sorting and scaling big data, Google published a paper on the Google File System (GFS) in 2003 and another on MapReduce, Google's patented software framework for distributed computing on large data sets over a scaled-out, shared-nothing architecture, in 2004. The concepts in these papers were then implemented in Nutch, open source web-search software that enabled sort-and-merge-based processing.
Hadoop, an open source project also written in Java, was later spun off from Nutch as an Apache project. Enabling cost-effective bulk computing, Hadoop is both a distributed file system modeled on GFS and a distributed processing framework using the MapReduce metaphor, according to its creator, Doug Cutting of Yahoo! (Figure 3).
Figure 3. Lineage of Hadoop
Files in HDFS are write-once files: input data is streamed or loaded into HDFS and processed by the MapReduce framework (described later in this document), and any generated results are stored back in HDFS. The original input data is not modified during its life in HDFS. With this approach, HDFS is not intended to be used as a general-purpose file system. To enhance the reliability and availability of the data in HDFS, the data assigned to one node is replicated among the other nodes. The default replication is threefold. This replication helps ensure that the data can survive the failure or nonavailability of a node. The nodes in a Hadoop cluster serve one of the following HDFS functions (a brief command-line illustration follows the list):
Name node: The name node is a single node in the cluster that is the brain of HDFS. It is responsible for keeping track of the file system metadata: it keeps a list of all the blocks that make up each HDFS file and a list of the data nodes that host those blocks. Because it can be a single point of failure, it is generally provisioned with a resilient, highly available server.
Data node: The data nodes form a shared-nothing cluster of computers capable of executing the workload components. These nodes operate independently of each other and are built with general-purpose hardware that stores the data blocks of workloads in HDFS.
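As an illustration of this model, the following Hadoop shell commands (the file paths are hypothetical) load a file into HDFS, set its replication factor, and ask the name node to report which data nodes hold each block:

hadoop fs -put weblogs.log /data/weblogs.log            # bulk-load a local file into HDFS
hadoop fs -setrep -w 3 /data/weblogs.log                # set the replication factor to 3 and wait for it
hadoop fsck /data/weblogs.log -files -blocks -locations # list each block and the data nodes hosting it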
Hadoop MapReduce
The MapReduce component of Hadoop is a framework for processing huge data sets on the Hadoop cluster (Figure 6). The MapReduce framework provides a clean abstraction of the underlying Hadoop cluster infrastructure, so that programmers can use the power of the infrastructure without dealing with the complexities of a distributed system. MapReduce workloads can be divided into two distinct phases:
Map phase: The submitted workload is divided into smaller subworkloads and assigned to mapper tasks. Each mapper processes one block of the input file. The output of the mapper is a sorted list of key-value pairs. These key-value pairs are distributed, or shuffled, to reducers.
Reduce phase: The input for the reduce phase is the list of key-value pairs received from mappers. The job of a reducer task is to analyze, condense, and merge the input to produce the final output. The final output is written to a file in HDFS.
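The two phases correspond directly to user-supplied classes. Below is a minimal sketch of a mapper and a reducer in Java, in the style of the WordCount example that ships with Apache Hadoop (the class names here are illustrative):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: each mapper receives records from one split of the input file and
// emits intermediate key-value pairs; the framework sorts them by key.
public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE); // emit an intermediate key-value pair
        }
    }
}

// Reduce phase: each reducer merges the values received for a key from all
// mappers and writes the condensed result, which is stored back in HDFS.
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum)); // final output row
    }
}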
Figure 6. MapReduce
Consider two common scenarios for writing data:
An HDFS client has a large amount of data to place into HDFS.
An HDFS client is constantly streaming data into HDFS.
Both these scenarios have the same interaction with HDFS, except that in the streaming case, the client waits for enough data to fill a data block before writing to HDFS. Data is stored in HDFS in large blocks, generally 64 to 128 MB or more in size. This storage approach allows easy parallel processing of data.
During the process of writing to HDFS, the blocks are generally replicated to multiple data nodes for redundancy. The number of copies, or the replication factor, is set to a default of 3 and can be modified by the cluster administrator. When a new data block is stored on a data node, the data node initiates a replication process to replicate the data onto a second data node. The second data node, in turn, replicates the block to a third data node, completing the replication of the block.
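The write path described above can be sketched with the Hadoop FileSystem Java API. In the sketch below, the path and sizes are illustrative assumptions; the overload of create() shown lets the client specify the replication factor and block size explicitly, which otherwise default to the cluster-wide settings:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml and hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        // create(path, overwrite, bufferSize, replication, blockSize):
        // a replication factor of 3 and a 128-MB block size, per the text above
        FSDataOutputStream out = fs.create(new Path("/data/input/part-0000"),
                true, 4096, (short) 3, 128L * 1024 * 1024);
        out.writeBytes("example record\n"); // each completed block enters the replication pipeline
        out.close();
        fs.close();
    }
}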
The following are the broad steps in running a MapReduce job:
1. Input setup: Load the input data into HDFS.
   a. The input data can be bulk loaded or streamed into HDFS.
   b. Input data is split into large blocks (128 MB or larger) and distributed to data nodes in the cluster.
   c. The blocks are replicated to help ensure availability of the input data in the event of failures in the cluster during MapReduce processing.
   d. The name node keeps track of the list of data nodes and the blocks they hold.
2. Job setup: Submit a MapReduce job to the JobTracker. A job definition consists of the following:
   a. Path to the input file in HDFS
   b. Path to the output file in HDFS in which the results should be stored
   c. Class that defines the map function
   d. Class that defines the reduce function
   e. Driver code that sets up the MapReduce job (a minimal driver sketch follows this list)
3. Job initialization: The JobTracker interacts with the TaskTracker on each data node to schedule map and reduce tasks.
   a. The JobTracker interacts with the name node to get a list of data nodes that hold the blocks of the input file. It schedules map tasks on these data nodes.
   b. The JobTracker also schedules reducer tasks on nodes in the cluster.
4. Map phase: Each mapper processes an input split, or chunk of an HDFS block of the input file, and generates intermediate key-value pairs.
5. Sort phase: The mapper performs a sort of the intermediate key-value pairs. Each mapper then partitions the intermediate data into smaller units, one unit per reducer, typically using a hash function for the partitioning.
6. Shuffle phase: In Hadoop: The Definitive Guide (2010), shuffle is described as follows: "MapReduce makes the guarantee that the input to every reducer is sorted by key. The process by which the system performs the sort - and transfers the map outputs to the reducers as inputs - is known as the shuffle."
7. Reduce phase: Each reducer merges all the units received from mappers and processes the merged list of key-value pairs to generate the final result.
8. Result storage and replication: The results generated by reducers are stored as files in HDFS. This HDFS write operation again triggers replication of blocks of the result file, for redundancy.
9. Result extraction: A client reads the HDFS file to export the results from HDFS.
Several parameters influence the performance of a Hadoop cluster:
Cluster size
MapReduce data model
Input data size
Characteristics of data nodes
Cluster Size
Most Hadoop clusters start out with a modest size and grow as the demand on the cluster grows. You can increase the cluster size on an as-needed basis because the Hadoop infrastructure can exploit the parallel nature of MapReduce algorithms. As the number of jobs in the cluster grows or as the size of the input data grows, you can, in most cases, just add more nodes to the cluster and scale almost linearly. When nodes are added to the cluster, you must make sure that the network infrastructure and the name node can scale to match the increased size of the cluster.
Figure 8 shows an example of the effect of cluster size on completion time for a 1-terabyte (TB) Yahoo TeraSort workload representing an extract, transform, and load (ETL) process. (Additional test details are provided in the detailed testing section of this document.) A general characteristic of an optimally configured cluster is the capability to decrease job completion times by scaling out the nodes. For example, doubling the number of servers should decrease the completion time by roughly half for the same job and data set size.
Figure 8. Impact of Number of Nodes on Workload Completion Time (Empirical Observation)
Note: The amount of memory allocated per data node should depend on the importance of job completion time and the price-to-performance trade-offs for those gains. If a Hadoop cluster is used for a single job, it is sufficient if a data node has adequate memory to run that single job. However, most Hadoop clusters run multiple jobs at a time. In such cases, having more memory allows more tasks to be accommodated on the data node, increasing the possibility of faster job completion.

When determining the characteristics of the data node, you need to consider the benefits and costs of the available options. Should you use a one-rack-unit (1RU) server with 8 disks and 2 CPUs, or a 2RU server with 16 disks and 2 CPUs? A 2RU server with 16 disks gives the node more storage space and more disk transfer parallelism, helping it cope better with workloads that are I/O intensive. However, a 2RU server also uses more space, decreasing the number of CPUs that can fit into a single rack (CPU rack density), a factor that can increase the number of racks in a cluster and the total infrastructure needs (space, power, cooling, and networking resources) of the cluster. From the point of view of a top-of-rack (ToR) networking device, the ports on the device could be sparsely used, leading to the use of more ToR devices and, hence, more core ports and devices. A 1RU server, while limited in storage, offers greater CPU density and a lower per-server power footprint. More 1RU servers can fit into a single rack, requiring a higher-density ToR switch or fabric extender.

It is useful to consider some of these trade-offs in the context of whether the applications are CPU bound or I/O bound. A CPU-bound application might benefit from greater CPU density per rack, whereas an I/O-bound application might benefit from larger storage capacity and greater disk transfer parallelism.
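In Apache Hadoop 0.20 and CDH3, the releases used in this document, the number of concurrent tasks a data node accepts is bounded by TaskTracker slot settings in mapred-site.xml. The values below are a sketch only, illustrative assumptions to be derived from each node's memory, disk count, and CPU cores:

<!-- mapred-site.xml: illustrative values, not a recommendation -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>    <!-- concurrent map task slots per data node -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>    <!-- concurrent reduce task slots per data node -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>  <!-- heap per task; slots times heap must fit in node memory -->
</property>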
Figure 10 shows what happens when you add eight servers that are constantly loading 128-MB blocks of data to HDFS while the job is running. The figure shows that the background activity of loading data to HDFS, which triggers replication as well, can have a significant impact on job completion time. (Additional test details are provided in the detailed testing section of this document.)
Figure 10. Impact of HDFS Imports on Workload Completion Time
Network Characteristics
The nodes in a Hadoop cluster are interconnected through the network. Typically, one or more of the following phases of a MapReduce job transfers data over the network:
1. Writing data: This phase occurs when the initial data is either streamed or bulk-delivered to HDFS. Data blocks of the loaded files are replicated, transferring additional data over the network.
2. Workload execution: The MapReduce algorithm is run.
   a. Map phase: In the map phase of the algorithm, almost no traffic is sent over the network. The network is used at the beginning of the map phase only if an HDFS locality miss occurs (the data block is not locally available and has to be requested from another data node).
   b. Shuffle phase: This is the phase of workload execution in which traffic is sent over the network, the degree to which depends on the workload. Data is transferred over the network when the output of the mappers is shuffled to the reducers.
   c. Reduce phase: In this phase, almost no traffic is sent over the network because the reducers have all the data they need from the shuffle phase.
   d. Output replication: MapReduce output is stored as a file in HDFS. The network is used when the blocks of the result file have to be replicated by HDFS for redundancy.
3. Reading data: This phase occurs when the final data is read from HDFS for consumption by the end application, such as a website, an indexing system, or a SQL database.
In addition, the network is crucial for the Hadoop control plane: the signaling and operations of HDFS and the MapReduce infrastructure.
Be sure to consider the benefits and costs of the choices available when designing a network: network architectures, network devices, resiliency, oversubscription ratios, etc. The following section discusses some of these parameters in more detail.
More information about any of the findings discussed in this section can be found in the detailed test section of this document. A functional and resilient network is a crucial part of a good Hadoop cluster. However, an analysis of the relative importance of the factors shows that other factors in a cluster have a greater influence on the performance of the cluster than the network. Nevertheless, you should consider some of the relevant network characteristics and their potential effects. Figure 11 shows the relative importance of the primary parameters as revealed by the observations in the detailed test section of this document.
Figure 11. Relative Importance of Parameters to Job Completion as Revealed by Observations in the Detailed Test Section
After the architectural framework is laid out, you should consider the availability aspects of individual devices. Switches and routers that run operating systems that are proven in the industry to be resilient provide better network availability to servers. Switches and routers that can be upgraded without any disruption to the data nodes provide higher availability. Further, devices that are proven to be easy to manage, troubleshoot, and upgrade help ensure less network downtime and increase the availability of the network and, hence, the cluster.
Loading input files into HDFS or writing result files to HDFS uses the network. In addition, these write operations trigger replication of data blocks of the file, leading to higher network use. These operations, since they occur in a short period of time, show up as bursts of traffic in the network.
The output of mappers, as they complete, is shuffled to reducers over the network. If many mappers finish at the same time, they will try to send their output to reducers at the same time, leading to bursts of traffic in the network.
A network that cannot handle bursts effectively will drop packets, so optimal buffering is needed in network devices to absorb bursts. Any packet dropped because a buffer is not available results in retransmission; excessive retransmission leads to longer job completion times. Be sure to choose switches and routers with architectures that employ buffer and queuing strategies that can handle bursts effectively.
Oversubscription Ratio
A good network design will consider the possibility of unacceptable congestion at critical points in the network under realistic loads. A ToR device that accepts 20 Gbps of traffic from the servers, but that has only two 1-Gbps uplinks (a total of 2 Gbps) provisioned (a 20:2 [or 10:1] oversubscription ratio) can drop packets, leading to poor cluster performance. However, overprovisioning the network can be costly. Generally accepted oversubscription ratios are around 4:1 at the server access layer and 2:1 between the access layer and the aggregation layer or core. Lower oversubscription ratios can be considered if higher performance is required. You should consider how oversubscription increases when certain devices fail and be sure to provision critical points in the network (such as the core) adequately. Network architectures (Layer 2 multipathing technologies such as Cisco FabricPath or Layer 3 Equal Cost Multipath [ECMP]) that deliver a linear increase in oversubscription with each device failure are better than architectures that degrade dramatically during failures.
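As a quick worked example of the arithmetic above (the port counts are illustrative assumptions), the following sketch computes the ratio for the failing design described and for a design at the generally accepted access-layer ratio:

public class OversubscriptionRatio {
    // ratio = total downlink (server-facing) bandwidth / total uplink bandwidth
    static double ratio(double downlinkGbps, double uplinkGbps) {
        return downlinkGbps / uplinkGbps;
    }

    public static void main(String[] args) {
        // 20 servers at 1 Gbps into two 1-Gbps uplinks: 10:1, likely to drop packets
        System.out.printf("ToR A: %.0f:1%n", ratio(20 * 1.0, 2 * 1.0));
        // 20 servers at 1 Gbps into five 1-Gbps uplinks: 4:1, within the accepted access-layer range
        System.out.printf("ToR B: %.0f:1%n", ratio(20 * 1.0, 5 * 1.0));
    }
}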
Network Latency
Variations in switch and router latency have been shown to have only limited impact on cluster performance. From a network point of view, any latency-related optimization should start with a network-wide analysis: architecture first, device next is an effective strategy. Architectures that deliver consistently low latency at scale are better than architectures with higher overall latency but lower individual device latency. The latency contribution to the workload is much higher at the application level, contributed by the application logic (Java Virtual Machine software stack, socket buffers, and so on), than at the network level. In any case, slightly more or less network latency will not noticeably affect job completion times.
The tests described in this section address the following questions:
What are the traffic patterns of the map tasks?
What happens when the reducer tasks start?
What happens when the map tasks are complete?
What is the effect of adding replication to the reduce tasks?
Two distinct workloads were used with associated benchmark tools to demonstrate their behavior in the network:
Business intelligence (BI) workload: The business intelligence workload is a reduction workload in which a large amount of data is presented as input, while the amount of resulting data is much smaller than the amount of input data. This type of workload can be simulated with tools such as WordCount, which comes packaged with Apache Hadoop. This workload takes a large amount of input data (1 TB) and outputs a much smaller amount (1 MB).
Extract, transform, and load (ETL) workload: ETL workloads are most common in enterprises in which a large amount of data needs to be converted to another format suited to various applications. This type of workload can be demonstrated with the Yahoo TeraSort benchmarking application, described at http://sortbenchmark.org/Yahoo2009.pdf.
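Both workloads can be launched from the examples JAR that ships with Apache Hadoop. The invocation below is a sketch for the BI workload; the JAR file name varies by release, and the HDFS paths are illustrative (the TeraSort invocations appear later in this document):

hadoop jar hadoop-*-examples.jar wordcount /bi/input /bi/output   # 1 TB of text in, a small set of word counts out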
Hadoop Topology
The Hadoop cluster was validated with various configurations of server components and network devices. The data nodes are Cisco UCS rack-mount servers, and the network devices consist of Cisco Nexus 5000/2000 and 3000 Series Switches. Two types of topologies commonly used in many enterprise networks were tested, and the workload was run with various Hadoop cluster sizes. The figures below depict the topology and components of the Hadoop cluster. For the Nexus 5000/2248 topology, data nodes are divided into groups across multiple fabric extenders (FEXs). The Nexus 3000 topology used multiple devices to accommodate varying data node density.
Figure 12. Test Topology
Servers
128 nodes of Cisco UCS C200 M2 High-Density Rack-Mount Servers (1RU each)
Four 2-TB disk drives, dual Intel Xeon 5670 processors at 2.93 GHz, and 96 GB of RAM
16 SFF drives, dual Intel Xeon 5670 processors at 2.93 GHz, and 96 GB of RAM
Network
Cisco Nexus 7000 Series Switches, Cisco Nexus 5500 switching platform, and Cisco Nexus 2200 fabric extender platform
Software
Red Hat Enterprise Linux 5.4, 5.5, 6.0, and 6.1
Hadoop versions: Apache Hadoop 0.20.2 and Cloudera's Distribution Including Hadoop (CDH) Release 3u1
The BI (WordCount) workload has the following characteristics:
The initial data set size is large; the final output, however, is small.
The map process is computation intensive, compared to the shuffle and reduce phases.
Network utilization is low in the shuffle phase because the output of the map tasks is a small subset of the total data set.
Figures 13 and 14 depict this behavior with a benchmark workload, WordCount, which starts with 1 TB of input (200,000 copies of the complete works of Shakespeare). Figure 13 shows the traffic received by a typical data node. Multiple data nodes (running mappers) finish their map tasks, and reducers pull the data from each node as it finishes. The graph depicts multiple bursts of received data as mappers finish their assigned tasks and the output of each is pulled by reducer tasks. Each burst of a spike averages around 15 Mb of total traffic. This traffic is minimal because the data node is performing a compute-intensive map task.
This traffic consists of short-lived flows coming from several different servers to the reducers, all at the same time. These flows can cause temporary congestion at the receiving node, in the form of either high port utilization or throughput reduction. The detailed behavior of a group of flows is shown in Figure 14 below.
Figure 14. A Closer Look at Traffic Sent from Multiple Mappers to a Single Reducer Node
TeraGen is a MapReduce program that generates the data. TeraSort samples the input data and uses MapReduce to sort the data into a total order. TeraSum is a MapReduce program that computes the 128-bit sum of the cyclic redundancy checksum (CRC32) of each key-value pair.
TeraValidate is a MapReduce program that verifies that the output is sorted and computes the sum of the checksums, as TeraSum does. The programs are typically run as a pipeline, as sketched below.
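A hedged sketch of the invocations for the 1-TB run follows; the examples JAR file name varies by release, and the HDFS paths are illustrative:

hadoop jar hadoop-*-examples.jar teragen 10000000000 /terasort/input       # 10 billion 100-byte rows = 1 TB
hadoop jar hadoop-*-examples.jar terasort /terasort/input /terasort/output
hadoop jar hadoop-*-examples.jar teravalidate /terasort/output /terasort/report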
The ETL (TeraSort) workload has the following characteristics:
The initial and the final output data set sizes are the same: 1 TB in this workload.
Network activity first spikes briefly during the map phase, depending on the initial data set size and data locality misses.
Both the map and the reduce processes are computation intensive.
Network utilization is high in the shuffle phase because the output of the map tasks is the same size as the initial data set: 1 TB.
Figures 15 through 17 show this characterization of the workload and its behavior on the network. The figures depict the same node (hpc064) but explain different events and aspects of Hadoop activity from the perspective of a single node. Figure 15 shows the whole event, with a data node receiving a large amount of data from all the senders; this is because the output of the ETL workload is the same size as its input. As Figure 15 shows, the peak receive bandwidth for a given node approaches 1 Gbps.
Figure 15. Complete Workload (TeraSort) Timeline for a Given Node: Received by Reducers, Transmitted by Mappers
Figure 16. Shuffle Tasks
Figure 17 shows that there is a significant amount of traffic because the entire data set (1 TB) needs to be shuffled across the network. It also shows that the spikes are made up of many short-lived flows from all the nodes in the job. This workload behavior can create temporary bursts that trigger short-lived buffer and I/O congestion. The degree to which congestion occurs varies greatly with cluster size, network configuration, I/O capability, and workload, as described in the section "Hadoop Cluster Performance Considerations."
Figure 17. Hadoop Reduce - Shuffle Phase with TeraSort: A Closer Look at Traffic Received from Multiple Nodes During the Shuffle Phase Spikes
Output Replication
Data is replicated among the nodes to help ensure that the data can survive the failure or nonavailability of a node. Replication of the final output after the reducers have finished is generally enabled; with TeraSort, however, replication is usually disabled (Figure 18). In the tests here, replication was enabled for TeraSort. Figure 19 shows a closer view of the aggregate traffic during replication. This traffic is again made up of multiple nodes transmitting data all at the same time, but now 2 TB is being transferred across the network (using a replication factor of 3: one local and two remote copies).
Figure 18. Replication of Final Output - TeraSort: Receipt of Data from Multiple Nodes Sending Data at the Same Time Leads to Incast
Figure 19. Replication of Final Output - TeraSort: A Closer Look at the Spikes Caused by Multiple Nodes Sending Data at the Same Time
Figure 21. Benchmark
Figure 22. Benchmark
Reading Data
Network traffic is limited during reads from HDFS. The reading time depends on several factors: first, the reading is performed serially (get block 1, get block 2, get block 3 ... get block N); second, the size of the data to be read; and finally, the disk I/O capacity of the compute node.
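A minimal sketch of such a serial read using the Hadoop FileSystem Java API follows (the result path is illustrative): the client opens the file, and the stream fetches block 1, then block 2, and so on, from whichever data nodes hold them.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();       // reads core-site.xml and hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        FSDataInputStream in = fs.open(new Path("/terasort/output/part-00000"));
        IOUtils.copyBytes(in, System.out, 4096, false); // blocks are fetched one after another as the stream advances
        in.close();
        fs.close();
    }
}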
Figure 23.
The above set of graphs depicts multiple stages of a data read. In the first graph, node inca049 is receiving the data from HDFS. The second set of four graphs shows the HDFS data nodes sending the data to inca049. Notice that the later graphs show data being sent serially in various time slots.
Writing Data
Hadoop by default has replication set to 3: that is, any data imported into Hadoop has one copy stored locally on the receiving data node and two copies sent to two other nodes in the cluster. Therefore, a simple import of data into a Hadoop cluster also results in bursts of traffic (Figure 24).
Figure 24.
Eight clients were added to write 128-MB files constantly to HDFS. Replication is set to 3 for both the initial data and the TeraSort results data. The portions marked with brown labels at the beginning and end of the graph represent the buffer use when data is imported into HDFS. After TeraSort starts, buffer use increases; however, buffer use is not very different from that for TeraSort without data being written to HDFS, because MapReduce jobs in general have a higher priority than background jobs such as copying data to HDFS.
Figure 25. Switch Buffer Use While Running 10-TB TeraSort and Importing Data into HDFS
Buffer utilization in the fabric extender (FEX) is measured by cell usage. The graphs below depict cell usage during various phases of the workload. The dark green line depicts map task completion status. As the graphs show, as map tasks begin to finish at various nodes, the shuffle phase starts in parallel. Relative buffer usage in the shuffle phase is lower than in the final reduce phase because replication of the output is enabled, causing higher buffer usage spikes.
Figure 26 takes a closer look at the buffer use during the HDFS import process. Notice the brown column descriptor indicating the data streamed into HDFS by eight nodes updating continuously; this background activity does not have any significant impact on buffer usage.
Figure 26.
Figure 27. Effect of Background Activity (HDFS Import) on a Hadoop Job (TeraSort): Comparison of Completion Time With and Without HDFS Import
Server Port Speed: 1 Gbps
Network Topology: Cisco Nexus 5500 Platform plus Cisco Nexus 2248TP GE
Figure 28. Benchmark
1 Gigabit Ethernet Compared to 10 Gigabit Ethernet Buffer Use: Less Buffer Use with 10 Gigabit Ethernet
Server Port Speed: 1 Gbps compared to 10 Gbps
Network Topology: Cisco Nexus 3048 compared to Cisco Nexus 3064
Gridmix: A tool for modeling workloads based on Hadoop production workloads; Cloudera's strong presence in providing support for Hadoop helps in tuning Gridmix (http://hadoop.apache.org/mapreduce/docs/current/gridmix.html)
SWIM: Another tool, similar to Gridmix, that traces actual jobs on a production setup so that they can later be used for benchmarking (http://www.eecs.berkeley.edu/~ychen2/SWIM.html)
Figure 29 shows the time improvement for different disks when moving from 1 to 10 Gigabit Ethernet.
Figure 29. Benchmark: Cloudera's Certification Suite
Figure 30.
Printed in USA
C11-690561-00
11/11
© 2011 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.