
Scientific Programming 22 (2014) 173–185
DOI 10.3233/SPR-140382
IOS Press

The Science DMZ: A network design pattern for data-intensive science 1

Eli Dart a,∗, Lauren Rotman a, Brian Tierney a, Mary Hester a and Jason Zurawski b
a Energy Sciences Network, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
E-mails: {eddart, lbrotman, bltierney, mchester}@lbl.gov
b Internet2, Office of the CTO, Washington DC, USA
E-mail: [email protected]

1 This paper received a nomination for the Best Paper Award at the SC2013 conference and is published here with permission of ACM.
∗ Corresponding author: Eli Dart, Energy Sciences Network, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA. E-mail: [email protected].

Abstract. The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, which together create an optimized network environment for science. We describe use cases from universities, supercomputing centers and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow, and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.

Keywords: High performance networking, perfSONAR, data-intensive science, network architecture, measurement

1. Introduction

A design pattern is a solution that can be applied to a general class of problems. This definition, originating in the field of architecture [1,2], has been adopted in computer science, where the idea has been used in software designs [6] and, in our case, network designs. The network design patterns we discuss are focused on high end-to-end network performance for data-intensive science applications. These patterns focus on optimizing the network interactions between wide area networks, campus networks, and computing systems.

The Science DMZ model, as a design pattern, can be adapted to solve performance problems on any existing network. Of these performance problems, packet loss has proven to be the most detrimental, as it causes an observable and dramatic decrease in data throughput for most applications. Packet loss can be caused by many factors including: firewalls that cannot effectively process science traffic flows; routers and switches with inadequate burst capacity; dirty optics; and failing network and system components. In addition, another performance problem can be the misconfiguration of data transfer hosts, which is often a contributing factor in poor network performance.

Many of these problems are found on the local area networks, often categorized as “general-purpose” networks, that are not designed to support large science data flows. Today many scientists are relying on these network infrastructures to share, store, and analyze their data, which is often geographically dispersed.

The Science DMZ provides a design pattern developed to specifically address these local area network issues and offers research institutions a framework to support data-intensive science. The Science DMZ model has been broadly deployed and has already become indispensable to the present and future of science workflows.

The Science DMZ provides:

• A scalable, extensible network infrastructure free from packet loss that causes poor TCP performance;


• Appropriate usage policies so that high-performance applications are not hampered by unnecessary constraints;
• An effective “on-ramp” for local resources to access wide area network services; and
• Mechanisms for testing and measuring, thereby ensuring consistent performance.

This paper will discuss the Science DMZ from its development to its role in future technologies. First, Section 2 will discuss the Science DMZ’s original development in addressing the performance of TCP-based applications. Second, Section 3 enumerates the components of the Science DMZ model and how each component adds to the overall paradigm. Next, Sections 4 and 5 offer some sample illustrations of networks that vary in size and purpose. Following, Section 6 will discuss some examples of Science DMZ implementations from the R&E community. And lastly, Section 7 highlights some future technological advancements that will enhance the applicability of the Science DMZ design.

2. Motivation

When developing the Science DMZ, several key principles provided the foundation to its design. First, these design patterns are optimized for science. This means the components of the system – including all the equipment, software and associated services – are configured specifically to support data-intensive science. Second, the model is designed to be scalable in its ability to serve institutions ranging from large experimental facilities to supercomputing sites to multi-disciplinary research universities to individual research groups or scientists. The model also scales to serve a growing number of users at those facilities with an increasing and varying amount of data over time. Lastly, the Science DMZ model was created with future innovation in mind by providing the flexibility to incorporate emerging network services. For instance, advances in virtual circuit services, 100 Gigabit Ethernet, and the emergence of software-defined networking present new and exciting opportunities to improve scientific productivity. In this section, we will mostly discuss the first principle since it is the driving mission for the Science DMZ model.

The first principle of the model is to optimize the network for science. To do this, there are two entities or areas of the network that should be considered: the wide area network and the local area networks. The wide area networks (or WANs) are often already optimized and can accommodate large data flows up to 100 Gbps. However, the local area networks are still a choke point for these large data flows.

Local area networks are usually general-purpose networks that support multiple missions, the first of which is to support the organization’s business operations including email, procurement systems, web browsing, and so forth. Second, these general networks must also be built with security that protects financial and personnel data. Meanwhile, these networks are also used for research, as scientists depend on this infrastructure to share, store, and analyze data from many different sources. As scientists attempt to run their applications over these general-purpose networks, the result is often poor performance, and with the increase of data set complexity and size, scientists often wait hours, days, or weeks for their data to arrive.

Since many aspects of general-purpose networks are difficult or impossible to change in the ways necessary to improve their performance, the network architecture must be adapted to accommodate the needs of science applications without affecting mission-critical business and security operations. Some of these aspects that are difficult to change might include the size of the memory buffers for individual interfaces; mixed traffic patterns between mail and web traffic that would include science data; and the emphasis on availability vs. performance and what can be counted on over time for network availability.

The Science DMZ model has already been implemented at various institutions to upgrade these general-purpose, institutional networks. The National Science Foundation (NSF) recognized the Science DMZ as a proven operational best practice for university campuses supporting data-intensive science and specifically identified this model as eligible for funding through the Campus Cyberinfrastructure–Network Infrastructure and Engineering Program (CC–NIE).2 This program was created in 2012 and has since been responsible for implementing approximately 20 Science DMZs at different locations – thereby serving the needs of the science community. Another NSF solicitation was released in 2013, and awards to fund a similar number of new Science DMZs are expected.

2 NSF’s CC–NIE Program: http://www.nsf.gov/pubs/2013/nsf13530/nsf13530.html.

Fig. 1. Graph shows the TCP throughput vs. round-trip time (latency) with packet loss between 10 Gbps connected hosts, as predicted by the Mathis Equation. The topmost line (shown in purple) shows the throughput for TCP in a loss-free environment. (The colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

2.1. TCP performance

The Transmission Control Protocol (TCP) [15] of the TCP/IP protocol suite is the primary transport protocol used for the reliable transfer of data between applications. TCP is used for email, web browsing, and similar applications. Most science applications are also built on TCP, so it is important that the networks are able to work with these applications (and TCP) to optimize the network for science.

TCP is robust in many respects – in particular it has sophisticated capabilities for providing reliable data delivery in the face of packet loss, network outages, and network congestion. However, the very mechanisms that make TCP so reliable also make it perform poorly when network conditions are not ideal. In particular, TCP interprets packet loss as network congestion, and reduces its sending rate when loss is detected. In practice, even a tiny amount of packet loss is enough to dramatically reduce TCP performance, and thus increase the overall data transfer time. When applied to large tasks, this can mean the difference between a scientist completing a transfer in days rather than hours or minutes. Therefore, networks that support data-intensive science must provide TCP-based applications with loss-free service if TCP-based applications are to perform well in the general case.

As an example of TCP’s sensitivity, consider the following case. In 2012, the Department of Energy’s (DOE) Energy Sciences Network (ESnet) had a failing 10 Gbps router line card that was dropping 1 out of 22,000 packets, or 0.0046% of all traffic. Assuming the line card was working at peak efficiency, or 812,744 regular sized frames per second,3 37 packets were lost each second due to the loss rate. While this only resulted in an overall drop of throughput of 450 Kbps (on the device itself), it reduced the end-to-end TCP performance far more dramatically, as demonstrated in Fig. 1. This packet loss was not being reported by the router’s internal error monitoring, and was only noticed using the OWAMP active packet loss monitoring tool, which is part of the perfSONAR Toolkit.4

3 Performance Metrics, http://www.cisco.com/web/about/security/intelligence/network_performance_metrics.html.
4 perfSONAR Toolkit: http://psps.perfsonar.net.

Because TCP interprets the loss as network congestion, it reacts by rapidly reducing the overall sending rate. The sending rate then slowly recovers due to the dynamic behavior of the control algorithms. Network performance can be negatively impacted at any point during the data transfer due to changing conditions in the network. This problem is exacerbated as the latency increases between communicating hosts. This is often the case when research collaborations sharing data are geographically distributed. In addition, feedback regarding the degraded performance takes longer to propagate between the communicating hosts.

The relationship between latency, data loss, and network capability was described by Mathis et al. as a mechanism to predict overall throughput [12]. The “Mathis Equation” states that maximum TCP throughput is at most:

    (maximum segment size / round-trip time) × (1 / √(packet loss rate)).    (1)

Figure 1 shows the theoretical rate predicted by the Mathis Equation, along with the measured rate for both TCP-Reno and TCP-Hamilton across ESnet. These tests are between 10 Gbps connected hosts configured to use 9 KByte (“Jumbo Frame”) Maximum Transmission Units (MTUs).
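
To make the effect of Eq. (1) concrete, the short Python sketch below evaluates the Mathis bound for the ESnet line card example above (a loss rate of 1 in 22,000 packets) at several round-trip times. The function name, the approximate MSS value, and the chosen RTTs are illustrative assumptions, not values taken from the original measurements.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis bound: throughput <= (MSS / RTT) * (1 / sqrt(loss))."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)

mss = 8960                # approximate MSS for a 9000-byte MTU (illustrative)
loss = 1 / 22000.0        # roughly the failing line card's loss rate (~0.0046%)

for rtt_ms in (1, 10, 50, 100):          # example round-trip times
    gbps = mathis_throughput_bps(mss, rtt_ms / 1000.0, loss) / 1e9
    print(f"RTT {rtt_ms:>3} ms -> TCP limited to about {gbps:5.2f} Gbps")
```

Even this tiny loss rate caps a single flow at roughly 1 Gbps for a 10 ms path and around 100 Mbps at continental round-trip times, which is the behavior plotted in Fig. 1.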

This example is indicative of the current operational reality in science networks. TCP is used for the vast majority of high-performance science applications. Since TCP is so sensitive to loss, a science network must provide TCP with a loss-free environment, end-to-end. This requirement, in turn, drives a set of design decisions that are key components of the Science DMZ model.

3. The Science DMZ design pattern

The overall design pattern or paradigm of the Science DMZ is comprised of four sub-patterns. Each of these sub-patterns offers repeatable solutions for four different areas of concern: proper location (in network terms) of devices and connections; dedicated systems; performance measurement; and appropriate security policies. These four sub-patterns will be discussed in the following subsections.

3.1. Proper location to reduce complexity

The physical location of the Science DMZ (or “location design pattern”) is important to consider during the deployment process. The Science DMZ is typically deployed at or near the network perimeter of the institution. The reason for this is that it is important to involve as few network devices as reasonably possible in the data path between the experiment at a science facility, the Science DMZ, and the WAN.

Network communication between applications running on two hosts traverses, by definition, the hosts themselves and the entire network infrastructure between the hosts. Given the sensitivity of TCP to packet loss (as discussed in Section 2.1), it is important to ensure that all the components of the network path between the hosts are functioning properly and configured correctly. Wide area science networks are typically engineered to perform well for science applications, and in fact the Science DMZ model assumes that the wide area network is doing its job. However, the local network is often complex, and burdened with the compromises inherent in supporting multiple competing missions. The location design pattern accomplishes two things. The first is separation from the rest of the general network, and the second is reduced complexity.

There are several reasons to separate the high-performance science traffic from the rest of the network. The support of high-performance applications can involve the deployment of highly capable equipment that would be too expensive to use throughout the general-purpose network but that has necessary features such as high-performance filtering capabilities, sufficient buffering for burst capacity, and the ability to accurately account for packets that traverse the device. In some cases, the configuration of the network devices must be changed to support high-speed data flows – an example might be a conflict between quality of service settings for the support of enterprise telephony and the burst capacity necessary to support long-distance high-performance data flows. In addition, the location pattern makes the application of the appropriate security pattern significantly easier (see Section 3.4).

The location design pattern can also significantly reduce the complexity of the portion of the network used for science applications. Troubleshooting is time-consuming, and there is a large difference in operational cost and time-to-resolution between verifying the correct operation of a small number of routers and switches and tracing the science flow through a large number of network devices in the general-purpose network of a college campus. For this reason, the Science DMZ is typically located as close to the network perimeter as possible, i.e. close to or directly connected to the border router that connects the research institution’s network to the wide area science network.

3.2. Dedicated systems: The Data Transfer Node (DTN)

Systems used for wide area science data transfers perform far better if they are purpose-built for and dedicated to this function. These systems, which we call data transfer nodes (DTNs), are typically PC-based Linux servers constructed with high quality components and configured specifically for wide area data transfer. The DTN also has access to storage resources, whether it is a local high-speed disk subsystem, a connection to a local storage infrastructure, such as a storage area network (SAN), or the direct mount of a high-speed parallel file system such as Lustre5 or GPFS.6

The DTN runs the software tools used for high-speed data transfer to remote systems. Some typical software packages include GridFTP7 [3] and its service-oriented front-end Globus Online8 [4], discipline-specific tools such as XRootD,9 and versions of default toolsets such as SSH/SCP with high-performance patches10 applied.

5 Lustre, http://www.lustre.org/.
6 GPFS, http://www.ibm.com/systems/software/gpfs/.
7 GridFTP, http://www.globus.org/datagrid/gridftp.html.
8 Globus Online, https://www.globusonline.org/.
9 XRootD, http://xrootd.slac.stanford.edu/.
10 HPN-SSH, http://www.psc.edu/networking/projects/hpn-ssh/.

DTNs are widely applicable in diverse science environments. For example, DTNs are deployed to support Beamline 8.3.2 at Berkeley Lab’s Advanced Light Source,11 and as a means of transferring data to and from a departmental cluster. On a larger scale, sets of DTNs are deployed at supercomputer centers (for example at the DOE’s Argonne Leadership Computing Facility,12 the National Energy Research Scientific Computing Center,13 and the Oak Ridge Leadership Computing Facility14) to facilitate high-performance transfer of data both within the centers and to remote sites. At even larger scales, large clusters of DTNs provide data service to the Large Hadron Collider (LHC)15 collaborations. The Tier-116 centers deploy large numbers of DTNs to support thousands of scientists. These are systems dedicated to the task of data transfers so that they provide reliable, high-performance service to science applications.17

DTNs typically have high-speed network interfaces, but the key is to match the DTN to the capabilities of the wide area network infrastructure. For example, if the network connection from the site to the WAN is 1 Gigabit Ethernet, a 10 Gigabit Ethernet interface on the DTN may be counterproductive. The reason for this is that a high-performance DTN can overwhelm the slower wide area link, causing packet loss.

The set of applications that run on a DTN is typically limited to parallel data transfer applications like GridFTP or FDT.18 In particular, user-agent applications associated with general-purpose computing and business productivity (e.g., email clients, document editors, media players) are not installed. This is for two reasons. First, the dedication of the DTN to data transfer applications produces more consistent behavior and avoids engineering trade-offs that might be part of supporting a larger application set. Second, data transfer applications are relatively simple from a network security perspective, and this makes the appropriate security policy easier to apply (see Section 3.4).

Because the design and tuning of a DTN can be time-consuming for small research groups, ESnet has a DTN Tuning guide19 and a Reference DTN Implementation guide.20 The typical engineering trade-offs between cost, redundancy, performance and so on apply when deciding on what hardware to use for a DTN. In general, it is recommended that DTNs be procured and deployed such that they can be expanded to meet future storage requirements.

3.3. Performance monitoring

Performance monitoring is critical to the discovery and elimination of so-called “soft failures” in the network. Soft failures are problems that do not cause a complete failure that prevents data from flowing (like a fiber cut), but cause poor performance. Examples of soft failures include packet loss due to failing components; dirty fiber optics; routers forwarding packets using the management CPU rather than the high-performance forwarding hardware; and inadequate hardware configuration. Soft failures often go undetected for many months or longer, since most network management and error reporting systems are optimized for reporting “hard failures”, such as loss of a link or device. Also, many scientists do not know what level of performance to expect, and so they do not know when to alert knowledgeable staff about a potential problem.

A perfSONAR host [16] helps with fault diagnosis on the Science DMZ. It offers end-to-end testing with collaborating sites that have perfSONAR tools installed, which allows for multi-domain troubleshooting. perfSONAR is a network monitoring software suite designed to conduct both active and passive network measurements, convert these to a standard format, and then publish the data so it is publicly accessible. The perfSONAR host can run continuous checks for latency changes and packet loss using OWAMP,21 as well as periodic “throughput” tests (a measure of available network bandwidth) using BWCTL.22 If a problem arises that requires a network engineer to troubleshoot the routing and switching infrastructure, the tools necessary to work the problem are already deployed – they need not be installed before troubleshooting can begin.

By deploying a perfSONAR host as part of the Science DMZ architecture, regular active network testing can be used to alert network administrators when packet loss rates increase or throughput rates decrease. This is demonstrated by “dashboard” applications, as seen in Fig. 2. Timely alerts and effective troubleshooting tools significantly reduce the time and effort required to isolate the problem and resolve it. This makes high performance the norm for science infrastructure, and provides significant productivity advantages for data-intensive science experiments.

Fig. 2. Regular perfSONAR monitoring of the ESnet infrastructure. The color scales denote the “degree” of throughput for the data path. Each square is halved to show the traffic rate in each direction between test hosts. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

11 LBNL ALS, http://www-als.lbl.gov.
12 ALCF, https://www.alcf.anl.gov.
13 NERSC, http://www.nersc.gov.
14 OLCF, http://www.olcf.ornl.gov/.
15 LHC, http://lhc.web.cern.ch/lhc/.
16 US/LHC, http://www.uslhc.us/The_US_and_the_LHC/Computing.
17 LHCOPN, http://lhcopn.web.cern.ch/lhcopn/.
18 FDT, http://monalisa.cern.ch/FDT/.
19 DTN Tuning, http://fasterdata.es.net/science-dmz/DTN/tuning/.
20 Reference DTN, http://fasterdata.es.net/science-dmz/data-transfer-node-reference-implementation/.
21 OWAMP, http://www.internet2.edu/performance/owamp/.
22 BWCTL, http://www.internet2.edu/performance/bwctl/.
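
The alerting workflow described in Section 3.3 amounts to comparing regularly collected measurements against expectations. The sketch below is a hypothetical illustration of that logic in Python; the threshold values, the measurement records, and the check function are ours and are not part of perfSONAR or its tools.

```python
# Hypothetical soft-failure check over regularly collected measurements.
# A real deployment would read results published by the site's measurement
# hosts; here the records are hard-coded example data.

LOSS_ALERT = 0.0001          # flag paths with more than 0.01% packet loss
THROUGHPUT_FLOOR_GBPS = 5.0  # flag paths whose measured throughput drops below this

measurements = [
    # (remote test host, packet loss rate, throughput in Gbps) -- example data
    ("site-a.example.net", 0.0,     9.4),
    ("site-b.example.net", 0.00046, 0.8),   # lossy path, throughput has collapsed
]

def check(records):
    """Return human-readable alerts for paths that look like soft failures."""
    alerts = []
    for host, loss, gbps in records:
        if loss > LOSS_ALERT:
            alerts.append(f"{host}: packet loss {loss:.4%} exceeds threshold")
        if gbps < THROUGHPUT_FLOOR_GBPS:
            alerts.append(f"{host}: throughput {gbps:.1f} Gbps below floor")
    return alerts

for line in check(measurements):
    print("ALERT:", line)
```

The point of such checks is the one made above: a dashboard that flags rising loss or falling throughput lets engineers find soft failures in days rather than months.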

3.4. Appropriate security

Network and computer security are of critical importance for many organizations. Science infrastructures are no different than any other information infrastructure. They must be secured and defended. The National Institute of Standards and Technology (NIST) framework for security uses the CIA concepts – Confidentiality, Integrity, and Availability.23 Data-intensive science adds another dimension – performance. If the science applications cannot achieve adequate performance, the science mission of the infrastructure has failed. Many of the tools in the traditional network security toolbox do not perform well enough for use in high-performance science environments. Rather than compromise security or compromise performance, the Science DMZ model addresses security using a multi-pronged approach.

The appropriate security pattern is heavily dependent on the location and the dedicated systems patterns. By deploying the Science DMZ in a separate location in the network topology, the traffic in the Science DMZ is separated from the traffic on the rest of the network (i.e., email, etc.), and security policy and tools can be applied specifically to the science-only traffic on the Science DMZ. The use of dedicated systems limits the application set deployed on the Science DMZ, and also reduces the attack surface.

A comprehensive network security capability uses many tools and technologies, including network and host intrusion detection systems, firewall appliances, flow analysis tools, host-based firewalls, router access control lists (ACLs), and other tools as needed. Appropriate security policies and enforcement mechanisms are designed based on the risk levels associated with high-performance science environments and built using components that scale to the data rates required without causing performance problems. Security for a data-intensive science environment can be tailored for the data transfer systems on the Science DMZ.

Science DMZ resources are designed to interact with external systems, and are isolated from (or have carefully managed access to) internal systems. This means the security policy for the Science DMZ can be tailored for this purpose. Users at the local site who access resources on their local Science DMZ through the lab or campus perimeter firewall will typically get reasonable performance: since the latency between the local users and the local Science DMZ is low, TCP can recover quickly even if the firewall causes some loss.

23 FIPS-199, http://csrc.nist.gov/publications/PubsFIPS.html.

4. Sample designs

As a network design paradigm, the individual patterns of the Science DMZ can be combined in many different ways. The following examples of the overall Science DMZ model are presented as illustrations of the concepts using notional network diagrams of varying size and functionality.

4.1. Simple Science DMZ

A simple Science DMZ has several essential components. These include dedicated access to high-performance wide area networks, high-performance network equipment, DTNs, and monitoring infrastructure provided by perfSONAR. These components are organized in an abstract diagram with data paths in Fig. 3.

Fig. 3. Example of the simple Science DMZ. Shows the data path through the border router and to the DTN (shown in green). The campus site access to the Science DMZ resources is shown in red. (The colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

The DTN is connected directly to a high-performance Science DMZ switch or router, which is attached to the border router. By attaching the Science DMZ to the border router, it is much easier to guarantee a packet-loss-free path to the DTN, and to create virtual circuits that extend all the way to the end host. The DTN’s job is to efficiently and effectively move science data between the local environment and remote sites and facilities. The security policy enforcement for the DTN is done using access control lists (ACLs) on the Science DMZ switch or router, not on a separate firewall. The ability to create a virtual circuit all the way to the host also provides an additional layer of security. This design is suitable for the deployment of DTNs that serve individual research projects or to support one particular science application. Example use cases of the simple Science DMZ are discussed in Sections 6.1 and 6.2.

4.2. Supercomputer center network

The notional diagram shown in Fig. 4 illustrates a simplified supercomputer center network. While this may not look much like the simple Science DMZ diagram in Fig. 3, the same principles are used in its design.

Fig. 4. Example supercomputer center built as a Science DMZ. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

Many supercomputer centers already use the Science DMZ model. Their networks are built to handle high-rate data flows without packet loss, and designed to allow easy troubleshooting and fault isolation. Test and measurement systems are integrated into the infrastructure from the beginning, so that problems can be located and resolved quickly, regardless of whether the local infrastructure is at fault. Note also that access to the parallel filesystem by wide area data transfers is via data transfer nodes that are dedicated to wide area data transfer tasks. When data sets are transferred to the DTN and written to the parallel filesystem, the data sets are immediately available on the supercomputer resources without the need for double-copying the data. Furthermore, all the advantages of a DTN – i.e., dedicated hosts, proper tools, and correct configuration – are preserved. This is also an advantage in that the login nodes for a supercomputer need not have their configurations modified to support wide area data transfers to the supercomputer itself. Data arrives from outside the center via the DTNs and is written to the central filesystem. The supercomputer login nodes do not need to replicate the DTN functionality in order to facilitate data ingestion. A use case is described in Section 6.4.

4.3. Big data site

For sites that handle very large data volumes (e.g., for large-scale experiments such as the LHC), individual data transfer nodes are not enough. These sites deploy data transfer “clusters”, and these groups of machines serve data from multi-petabyte data storage systems. Still, the principles of the Science DMZ apply. Dedicated systems are still used for data transfer, and the path to the wide area is clean, simple, and easy to troubleshoot.

Fig. 5. Example of an extreme data cluster. The wide area data path covers the entire network front-end, similar to the supercomputer center model. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

Test and measurement systems are integrated in multiple locations to enable fault isolation. This network is similar to the supercomputer center example in that the wide area data path covers the entire network front-end, as shown in Fig. 5.

This network has redundant connections to the research network backbone, each of which is capable of both routed IP and virtual circuit services. The enterprise portion of the network takes advantage of the high-capacity redundant infrastructure deployed for serving science data, and deploys redundant firewalls to ensure uptime. However, the science data flows do not traverse these devices. Appropriate security controls for the data service are implemented in the routing and switching plane. This is done both to keep the firewalls from causing performance problems and because the extremely high data rates are typically beyond the capacity of firewall hardware. More discussion about the LHC high-volume data infrastructure can be found in Johnston et al.’s paper presented at the 2013 TERENA Networking Conference [9].

5. Network components

When choosing the network components for a Science DMZ, it is important to carefully select networking hardware that can efficiently handle the high bandwidth requirements. The most important factor is to deploy routers and switches that have enough queue buffer space to handle “fan-in” issues, and are properly configured to use this buffer space, as the default settings are often not optimized for bulk data transfers. (The fan-in issue is described in detail at the end of this section.) One should also look for devices that have flexible, high-performance ACL (Access Control List) support so that the router or switch can provide adequate filtering to eliminate the need for firewall appliances. Note that some care must be taken in reading the documentation supplied by vendors. For example, Juniper Networks’ high-performance router ACLs are actually called “firewall filters” in the documentation and the device configuration. In general, it is important to ask vendors for specifics about packet filtering, interface queues, and other capabilities.

As discussed, two very common causes of packet loss are firewalls and aggregation devices with inadequate buffering. In order to understand these problems, it is important to remember that a TCP-based flow rarely runs at its overall “average” speed. When observed closely, it is apparent that most high-speed TCP flows are composed of bursts and pauses. These bursts are often very close to the maximum data rate for the sending host’s interface. This is important, because it means that a 200 Mbps TCP flow between hosts with Gigabit Ethernet interfaces is actually composed of short bursts at or close to 1 Gbps with pauses in between.

Firewalls are often built with an internal architecture that aggregates a set of lower-speed processors to achieve an aggregate throughput that is equal to the speed of the network interfaces of the firewall. This architecture works well when the traffic traversing the firewall is composed of a large number of low-speed flows (e.g., a typical business network traffic profile). However, this causes a problem when a host with a network interface that is faster than the firewall’s internal processors emerges. Since the firewall must buffer the traffic bursts sent by the data transfer host until it can process all the packets in the burst, input buffer size is critical. Firewalls often have small input buffers because that is typically adequate for the traffic profile of a business network. If the firewall’s input buffers are too small to hold the bursts from the science data transfer host, the user will suffer severe performance problems caused by packet loss.
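
To get a feel for the buffering implied by this burst-and-pause behavior, the short sketch below estimates how much data a device must absorb while an ingress burst exceeds its egress (or internal processing) rate. The specific rates and burst duration are illustrative assumptions, not measurements from the paper.

```python
def buffer_needed_bytes(ingress_gbps, egress_gbps, burst_ms):
    """Bytes that must be queued while an ingress burst exceeds the
    egress (or internal processing) rate, assuming the excess is buffered."""
    excess_gbps = max(ingress_gbps - egress_gbps, 0.0)
    return excess_gbps * 1e9 / 8 * (burst_ms / 1000.0)

# A host bursting at 10 Gbps into a device whose internal path handles
# only 1 Gbps, for a 5 ms burst (illustrative values).
need = buffer_needed_bytes(ingress_gbps=10, egress_gbps=1, burst_ms=5)
print(f"Buffer required to absorb the burst: {need / 1e6:.1f} MB")   # ~5.6 MB
```

If the device has only a few hundred kilobytes of input buffer, most of such a burst is dropped, and the resulting packet loss triggers exactly the TCP backoff described in Section 2.1.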

Given all the problems with firewalls, one might ask what value they provide. If the application set is limited to data transfer applications running on a DTN, the answer is that firewalls provide very little value. When a firewall administrator is collecting the information necessary to allow a data transfer application such as GridFTP to traverse the firewall, the firewall administrator does not configure the firewall to use a specialized protocol analyzer that provides deep inspection of the application’s traffic. The firewall administrator asks for the IP addresses of the communicating hosts, and the TCP ports that will be used by the hosts to communicate. Armed with that information, the firewall administrator configures the firewall to permit the traffic. Filtering based on IP address and TCP port number can be done on the Science DMZ switch or router with ACLs. When done with ACLs on a modern switch or router, the traffic does not need to traverse a firewall at all. This is a key point: by running a limited set of applications on the Science DMZ DTNs, the application profile is such that the Science DMZ can typically be defended well without incurring the performance penalties of a firewall. This is especially true if the ACLs are used in combination with intrusion detection systems or other advanced security tools. However, an intrusion detection system should be used even if a firewall is present.
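
The stateless filtering described above (permit traffic only between known host addresses and the TCP ports used by the data transfer tools, deny everything else) can be expressed in a few lines. The sketch below is a conceptual illustration in Python rather than any vendor's ACL syntax; the addresses, ports, and rule structure are hypothetical.

```python
import ipaddress

# Hypothetical permit rules for a Science DMZ switch/router ACL:
# (remote collaborator network, local DTN address, allowed destination TCP ports)
PERMIT_RULES = [
    ("198.51.100.0/24", "192.0.2.10", {2811, 50000}),  # e.g., GridFTP control/data ports
]

def permitted(src_ip, dst_ip, dst_port):
    """Return True if a TCP packet matches a permit rule (default deny)."""
    src = ipaddress.ip_address(src_ip)
    for remote_net, dtn_ip, ports in PERMIT_RULES:
        if src in ipaddress.ip_network(remote_net) and dst_ip == dtn_ip \
                and dst_port in ports:
            return True
    return False

print(permitted("198.51.100.7", "192.0.2.10", 2811))   # True  - known collaborator
print(permitted("203.0.113.5", "192.0.2.10", 2811))    # False - no matching permit rule
```

A deny-by-default rule set like this does stateless matching only, which is why the text recommends pairing ACLs with intrusion detection rather than relying on deep packet inspection.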

Aggregation (“fan-in”) problems are related to the firewall problem in that they too result from the combination of the burstiness of TCP traffic and small buffers on network devices. However, the fan-in problem arises when multiple traffic flows entering a switch or router from different ingress interfaces are destined for a common egress interface. If the speed of the sum of the bursts arriving at the switch is greater than the speed of the device’s egress interface, the device must buffer the extra traffic or drop it. If the device does not have sufficient buffer space, it must drop some of the traffic, causing TCP performance problems. This situation is particularly common in inexpensive LAN switches. Since high-speed packet memory is expensive, cheap switches often do not have enough buffer space to handle anything except LAN traffic. Note that the fan-in problem is not unique to coincident bursts. If a burst from a single flow arrives at a rate greater than the rate available on the egress interface due to existing non-bursty traffic flows, the same problem exists.

6. Use cases

In this section we give some examples of how elements of the Science DMZ model have been put into practice. While a full implementation of recommendations is always encouraged, many factors influence what can and cannot be installed at a given location due to existing architectural limitations and policy. These use cases highlight the positive outcomes of the design methodology, and show that the Science DMZ model is able to please both administrative and scientific constituencies.

6.1. University of Colorado, Boulder

The University of Colorado, Boulder campus was an early adopter of Science DMZ technologies. Their core network features an immediate split into a protected campus infrastructure (beyond a firewall), as well as a research network (RCNet) that delivers unprotected functionality directly to campus consumers. Figure 6 shows the basic breakdown of this network, along with the placement of measurement tools provided by perfSONAR.

Fig. 6. University of Colorado campus network, showing RCNet connected at the perimeter as a Science DMZ. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

The physics department, a participant in the Compact Muon Solenoid (CMS)24 experiment affiliated with the LHC project, is a heavy user of campus network resources. It is common to have multiple streams of traffic approaching an aggregate of 5 Gbps affiliated with this research group. As demand for resources increased, the physics group connected additional computation and storage to their local network. Figure 7 shows these additional 1 Gbps connections as they entered into the portion of the RCNet on campus.

24 CMS, http://cms.web.cern.ch.

Fig. 7. University of Colorado network showing physics group connectivity. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

Despite the initial care in the design of the network, overall performance began to suffer during heavy use times on the campus. Passive and active perfSONAR monitoring revealed low throughput to downstream facilities, as well as the presence of dropped packets on several network devices. Further investigation was able to correlate the dropped packets to three main factors:

• Increased number of connected hosts,
• Increased network demand per host,
• Lack of tunable memory on certain network devices in the path.

Replacement hardware was installed to alleviate this bottleneck in the network, but the problem remained upon initial observation. After additional investigation by the vendor and performance engineers, it was revealed that the unique operating environment (e.g., high “fan-out” that featured multiple 1 Gbps connections feeding a single 10 Gbps connection) was contributing to the problem. Under high load, the switch changed from cut-through mode to store-and-forward mode, and the switch was unable to provide loss-free service in store-and-forward mode.

After a fix was implemented by the vendor and additional changes to the architecture were made, performance returned to near line rate for each member of the physics computation cluster.

6.2. The Pennsylvania State University & Virginia Tech Transportation Institute

The Pennsylvania State University’s College of Engineering (CoE) collaborates with many partners on jointly funded activities. The Virginia Tech Transportation Institute (VTTI), housed at Virginia Polytechnic Institute and State University, is one such partner. VTTI chooses to collocate computing and storage resources at Penn State, whose network security and management is implemented by local staff. However, due to policy requirements for collocated equipment, a security mechanism in the form of a firewall was required to protect both the campus and VTTI equipment. Shortly after collocation, VTTI users noticed that performance for hosts connected by 1 Gbps local connections was limited to around 50 Mbps overall; this observation was true in either direction of data flow.

Using perfSONAR, network engineers discovered that the size of the TCP window was not growing beyond the default value of 64 KB, despite the fact that hosts involved in data transfer and measurement testing were configured to use auto-tuning – a mechanism that would allow this value to grow as time, capacity, and demand dictated. To find the correct window size needed to achieve network speeds close to 1 Gbps, the sites were measured at 10 ms away in terms of round-trip latency, which yielded a window size of:

    (1000 Mb/s ÷ 8 bits/byte) × 10 ms × (1 s / 1000 ms) = 1.25 MB.    (2)

The observed 64 KB window was roughly 20 times smaller than this required size. Further investigation into the behavior of the network revealed that there was no packet loss observed along the path, and other perfSONAR test servers on campus showed performance to VTTI that exceeded 900 Mbps. From some continued performance monitoring, the investigation began to center on the performance of the CoE firewall.

A review of the firewall configuration revealed that a setting on the firewall, TCP flow sequence checking, modifies the TCP header field that specifies window size (e.g., a clear violation of tcp_window_scaling, set forth in RFC 1323 [8]). Disabling this firewall setting increased inbound performance by nearly 5 times, and outbound performance by close to 12 times the original observations. Figure 8 is a capture of overall network utilization to CoE, and shows an immediate increase in performance after the change to the firewall setting.

Fig. 8. Penn State College of Engineering network utilization, collected passively from SNMP data. (Colors are visible in the online version of the article; http://dx.doi.org/10.3233/SPR-140382.)

Because CoE and VTTI were able to utilize the Science DMZ resources, like perfSONAR, engineers were able to locate and resolve the major network performance problem. Figure 8 also shows that numerous users, not just VTTI, were impacted by this abnormality. The alteration in behavior allowed TCP to reach higher levels of throughput, and allowed flows to complete in a shorter time than with a limited window.
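
The window-size requirement in Eq. (2) is simply the bandwidth-delay product, and it is easy to compute for other paths. The helper below is an illustrative sketch (the function names and example values are ours, not from the paper) that reproduces the 1.25 MB figure and shows why a 64 KB window caps throughput near 50 Mbps on a 10 ms path.

```python
def required_window_bytes(bandwidth_mbps, rtt_ms):
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000.0)

def window_limited_rate_mbps(window_bytes, rtt_ms):
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

print(required_window_bytes(1000, 10) / 1e6)        # 1.25 (MB), matching Eq. (2)
print(window_limited_rate_mbps(64 * 1024, 10))      # ~52 Mbps with a 64 KB window
```

The second number matches the roughly 50 Mbps the VTTI users observed before the firewall's interference with window scaling was removed.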

6.3. The National Oceanic and Atmospheric Administration

The National Oceanic and Atmospheric Administration (NOAA) in Boulder houses the Earth System Research Lab, which supports a “reforecasting” project. The initiative involves running several decades of historical weather forecasts with the same current version of NOAA’s Global Ensemble Forecast System (GEFS). Among the advantages associated with a long reforecast data set, model forecast errors can be diagnosed from the past forecasts and corrected, thereby dramatically increasing the forecast skill, especially in forecasts of relatively rare events and longer-lead forecasts.

In 2010, the NOAA team received an allocation of 14.5 million processor hours at NERSC to perform this work. In all, the 1984–2012 historical GEFS dataset totaled over 800 TB, stored on the NERSC HPSS archival system. Of the 800 TB at NERSC, the NOAA team sought to bring about 170 TB back to NOAA Boulder for further processing and to make it more readily available to other researchers. When the NOAA team tried to use an FTP server located behind NOAA’s firewall for the transfers, they discovered that data trickled in at about 1–2 MB/s.

Working with ESnet and NERSC, the NOAA team leveraged the Science DMZ design pattern to set up a new dedicated transfer node enabled with Globus Online to create a data path unencumbered by legacy firewalls. Immediately the team saw a throughput increase of nearly 200 times. The team was able to transfer 273 files with a total size of 239.5 GB to the NOAA DTN in just over 10 minutes – approximately 395 MB/s.

6.4. National Energy Research Scientific Computing Center

In 2009, both NERSC and OLCF installed DTNs to enable researchers who use their computing resources to move large data sets between each facility’s mass storage systems. As a result, WAN transfers between NERSC and OLCF increased by at least a factor of 20 for many collaborations. As an example, a computational scientist in the OLCF Scientific Computing Group who was researching the fundamental nuclear properties of carbon-14, in collaboration with scientists from Lawrence Livermore National Laboratory (LLNL) and Iowa State University, had previously waited more than an entire workday for a single 33 GB input file to transfer – just one of the 20 files of similar size that needed to be moved between the sites. With the improved infrastructure, those researchers were immediately able to improve their transfer rate to 200 MB/s, enabling them to move all 40 TB of data between NERSC and OLCF in less than three days.

Since 2009, several science collaborations including those in astrophysics, climate, photon science, genomics and others have benefitted from the Science DMZ architecture at NERSC. Most recently, it has enabled high-speed multi-terabyte transfers between the Linac Coherent Light Source at SLAC National Accelerator Laboratory and NERSC to support protein crystallography experiments, as well as transfers between Beamline 8.3.2 at Berkeley Lab’s Advanced Light Source and NERSC in support of X-ray tomography experiments.

7. Future technologies

In addition to solving today’s network performance problems, the Science DMZ model also makes it easier to experiment and integrate with tomorrow’s technologies. Technologies such as dynamic “virtual circuits”, software-defined networking (SDN), and 40/100 Gbps Ethernet can be deployed in the Science DMZ, eliminating the need to deploy these technologies deep inside campus infrastructure.

7.1. Virtual circuits

Virtual circuit services, such as the ESnet-developed On-Demand Secure Circuits and Advance Reservation System (OSCARS) platform [7,14], can be used to connect wide area layer-2 circuits directly to DTNs, allowing the DTNs to receive the benefits of the bandwidth reservation, quality of service guarantees, and traffic engineering capabilities. The campus or lab “inter-domain” controller (IDC)25 can provision the local switch and initiate multi-domain wide area virtual circuit connectivity to provide guaranteed bandwidth between DTNs at multiple institutions. An example of this configuration is the NSF-funded Development of Dynamic Network System (DYNES) [17] project, which is supporting a deployment at approximately 60 university campuses and regional networks across the US. Virtual circuits also enable the use of new data transfer protocols such as RDMA (remote direct memory access) over Converged Ethernet (RoCE) [5] on the Science DMZ DTNs. RoCE has been demonstrated to work well over a wide area network, but only on a guaranteed-bandwidth virtual circuit with minimal competing traffic [11]. Kissel et al. show that RoCE can achieve the same performance as TCP (39.5 Gbps for a single flow on a 40GE host), but with 50 times less CPU utilization.

25 IDC, http://www.controlplane.net/.

7.2. 100-Gigabit Ethernet

100 Gigabit Ethernet (GE) technology is being deployed by research networks around the world to support data-intensive science. The NSF CC–NIE program is increasing the rate of 100GE deployment at US campuses with solicitations offered in 2012 and 2013. While 100GE promises the ability to support next-generation instruments and facilities, and to conduct scientific analysis of distributed data sets at unprecedented scale, 100GE technology poses significant challenges for the general-purpose networks at research institutions. Once a site is connected to a 100GE backbone, it would be very costly to distribute this new increased bandwidth across internal campus infrastructure. With the Science DMZ model, all hosts needing the increased bandwidth are near the border router, making it much easier to benefit from the 100GE connection.

7.3. Software-defined networking

Testing and deploying software-defined networking, particularly the use of OpenFlow as a platform [13], is a timely example of how the Science DMZ model could be used for exploring and hardening new technologies.

Software-defined networking concepts and production uses of OpenFlow are still in their early stages of adoption by the community. Many innovative approaches are still being investigated to develop best practices for the deployment and integration of these services in production environments. ESnet and its collaborators at Indiana University have demonstrated an OpenFlow-based Science DMZ architecture that interoperates with a virtual circuit service like OSCARS. It is easy to set up an OSCARS virtual circuit across the WAN, but plumbing the circuit all the way to the end host must be done by hand. OpenFlow simplifies this process.

Another promising use of OpenFlow is as a mechanism to dynamically modify the security policy for large flows between trusted sites. Multiple groups have demonstrated the use of OpenFlow to dynamically bypass the firewall (e.g., Kissel et al.’s research on SDN with XSP [10]). Further, one could also use OpenFlow along with an intrusion detection system (IDS) to send the connection setup traffic to the IDS for analysis, and then, once the connection is verified, allow the flow to bypass the firewall and the IDS.
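
The verify-then-bypass idea in the previous paragraph reduces to a small piece of controller logic. The sketch below is purely conceptual Python: the flow record, the IDS verdict function, and the path-programming call are hypothetical stand-ins, not an OpenFlow controller API or any implementation described in the cited work.

```python
# Conceptual verify-then-bypass policy; all helpers are hypothetical stand-ins.

def ids_approves(flow):
    """Stand-in for an IDS verdict on the connection setup traffic."""
    return flow.get("trusted_site", False)

def program_path(flow, bypass):
    """Stand-in for installing forwarding state via an SDN controller."""
    route = "DTN <-> border router (bypass firewall/IDS)" if bypass \
            else "default path through firewall and IDS"
    print(f"{flow['src']} -> {flow['dst']}: {route}")

def handle_new_flow(flow):
    # Send connection setup through the IDS first; bypass only if verified.
    program_path(flow, bypass=ids_approves(flow))

handle_new_flow({"src": "198.51.100.7", "dst": "192.0.2.10", "trusted_site": True})
handle_new_flow({"src": "203.0.113.9", "dst": "192.0.2.10", "trusted_site": False})
```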

8. Conclusion

The Science DMZ model has its roots in operational practices developed over years of experience, and incorporates aspects of network architecture, network security, performance tuning, system design, and application selection. The Science DMZ, as a design pattern, has already been successfully deployed at multiple sites across the US, many of them through NSF funding. The Science DMZ model and its contributing technologies are well-tested and have been effectively used at supercomputer centers, national laboratories, and universities, as well as in large-scale scientific experiments.

The Science DMZ model provides a conceptual framework for the deployment of networks and network-enabled tools and systems for the effective support of data-intensive science. With many science collaborations moving to large-scale or distributed experiments, the purpose of sharing best practices is becoming more important. This paper shares our work in developing the Science DMZ for the larger science community.

Acknowledgements

The authors would like to thank NOAA, NERSC, the Pennsylvania State University, and the University of Colorado, Boulder, for their contributions to this work.

The authors wish to acknowledge the vision of the National Science Foundation for its support of the CC–NIE program.

This manuscript has been authored by an author at Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy. The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges, that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes.

Disclaimer

This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor the Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or the Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or the Regents of the University of California.

References

[1] C. Alexander, The Timeless Way of Building, Oxford Univ. Press, New York, 1979.
[2] C. Alexander, S. Ishikawa and M. Silverstein, A Pattern Language: Towns, Buildings, Construction, Oxford Univ. Press, New York, 1977.
[3] W. Allcock, J. Bresnahan, R. Kettimuthu, M. Link, C. Dumitrescu, I. Raicu and I. Foster, The Globus striped GridFTP framework and server, in: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC’05, IEEE Computer Society, Washington, DC, USA, 2005, p. 54.
[4] B. Allen, J. Bresnahan, L. Childers, I. Foster, G. Kandaswamy, R. Kettimuthu, J. Kordas, M. Link, S. Martin, K. Pickett et al., Software as a service for data scientists, Communications of the ACM 55(2) (2012), 81–88.
[5] InfiniBand Trade Association, InfiniBand Architecture Specification Release 1.2.1 Annex A16: RoCE, 2010.
[6] E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Longman Publishing, Boston, MA, USA, 1995.
[7] C. Guok, D. Robertson, M. Thompson, J. Lee, B. Tierney and W. Johnston, Intra and interdomain circuit provisioning using the OSCARS reservation system, in: Third International Conference on Broadband Communications Networks and Systems, IEEE/ICST, October 2006.
[8] V. Jacobson, R. Braden and D. Borman, TCP Extensions for High Performance, RFC 1323 (Proposed Standard), May 1992.
[9] W.E. Johnston, E. Dart, M. Ernst and B. Tierney, Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC, in: TERENA Networking Conference (TNC) 2013, June 2013.
[10] E. Kissel, G. Fernandes, M. Jaffee, M. Swany and M. Zhang, Driving software defined networks with XSP, in: Workshop on Software Defined Networks (SDN’12), International Conference on Communications (ICC), IEEE, June 2012.
[11] E. Kissel, B. Tierney, M. Swany and E. Pouyoul, Efficient data transfer protocols for big data, in: Proceedings of the 8th International Conference on eScience, IEEE, July 2012.
[12] M. Mathis, J. Semke, J. Mahdavi and T. Ott, The macroscopic behavior of the TCP congestion avoidance algorithm, SIGCOMM Comput. Commun. Rev. 27(3) (1997), 67–82.
[13] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker and J. Turner, OpenFlow: Enabling innovation in campus networks, SIGCOMM Comput. Commun. Rev. 38(2) (2008), 69–74.
[14] I. Monga, C. Guok, W.E. Johnston and B. Tierney, Hybrid networks: Lessons learned and future challenges based on ESnet4 experience, IEEE Communications Magazine, May 2011.
[15] J. Postel, Transmission Control Protocol, Request for Comments (Standard) 793, Internet Engineering Task Force, September 1981.
[16] B. Tierney, J. Boote, E. Boyd, A. Brown, M. Grigoriev, J. Metzger, M. Swany, M. Zekauskas and J. Zurawski, perfSONAR: Instantiating a global network measurement framework, in: SOSP Workshop on Real Overlays and Distributed Systems (ROADS’09), Big Sky, MT, USA, ACM, October 2009.
[17] J. Zurawski, R. Ball, A. Barczyk, M. Binkley, J. Boote, E. Boyd, A. Brown, R. Brown, T. Lehman, S. McKee, B. Meekhof, A. Mughal, H. Newman, S. Rozsa, P. Sheldon, A. Tackett, R. Voicu, S. Wolff and X. Yang, The DYNES instrument: A description and overview, Journal of Physics: Conference Series 396(4) (2012), 042065.