Government Data Center Reference Architecture
Table of Contents
Executive Summary
Target Audience
Introduction
Trends and Challenges
Juniper Networks Approach and Solution
Government Data Center Network Design Considerations
    Virtualization
    A Green and Environmentally Friendly Data Center
    High Availability Disaster Recovery
    Visibility
    Network Connectivity
    Security
    Policy and Control
    QoS
    High Performance
Juniper Networks Data Center Network Architecture
    Open Systems Approach: Juniper Networks Government Framework
        Location-Based Approach
    Design Principles
    High-Level Architecture
    Edge Services Tier
        Edge Services Connectivity
        Edge Services HA
        Edge Services Performance
        Edge Services Security
    Core Network Tier
        Core Network Connectivity
        Core Network HA
        Core Network Virtualization
    Network Services Tier
        Data Center Security Services
        Application Front-Ending Services
    Storage Area Networks (SANs)
        Fibre Channel SANs
        iSCSI SANs
    Data Center Backbone
Data Center Network Management
Conclusion
Appendix A: Juniper Networks Data Center Network Solution Tables
    Partner Products
        Symantec
        SurfControl and Websense
        Avaya IG550
Appendix B: Juniper Networks Core Network Power Efficiency Analysis
About Juniper Networks
Copyright 2010, Juniper Networks, Inc.
Table of Figures
Figure 1: Location-based perspective of the government agency network
Figure 2: Data center network functional design model
Figure 3: The Juniper Networks government framework
Figure 4: Network connectivity to the data centers
Figure 5: Juniper Networks data center network architecture
Figure 6: Data center network edge services
Figure 7: Data center core network and network services
Figure 8: Connectivity systems, application systems and network service systems
Figure 9: Data center application network types/purposes
Figure 10: Application and data services network view
Figure 11: Data center backbone connectivity
Figure 12: Network management framework built on Juniper Networks products
Executive Summary
The next-generation data centers currently being used by enterprises and data hosting facilities also allow governments to reduce operational cost while delivering transparency to citizen services through an open architecture. Leveraging the Federal Enterprise Architecture (FEA), the U.S. government has sought industry input to develop IT solutions that optimize investments in commercial off-the-shelf (COTS) technology to cost-effectively build high-performance data centers. The Juniper Networks Innovation in Government concept helps agencies provide a responsive and trusted environment that drives mission assurance. Government agencies trust Juniper Networks to provide a comprehensive approach to building next-generation data centers that leverage the FEA framework by utilizing best-in-class products with well-defined practices that can be replicated across the government enterprise.
Target Audience
This document is intended for the following audiences:
- IT managers and security managers
- Systems engineers
- Network analysts and engineers
- Network administrators
The remainder of this paper references the Juniper Networks architecture approach and outlines best practices, technologies and products that support data center architects and engineers responsible for meeting the requirements of designing government agency data center networks.
Introduction
The purpose of this document is to provide government IT managers and administrators with a data center network architecture that mitigates risk and supports the modern, consolidated data center. This document addresses the following topics:
- Network infrastructure
- Security
- Connectivity
- Performance aspects of the data center infrastructure
In addition, it provides design guidance for the data center network, inter-data center links and associated connectivity. Discussions focus on the following network devices:
- Routers
- Switches
- Firewalls
- Intrusion prevention systems
- VPN access devices
- Application front ends
- WAN acceleration products
Note: Because application-specific components such as operating systems, processing machines, databases and storage arrays are out of the scope of this solution, they are not addressed in this document.
These technologies also support the trend toward consolidation and virtualization of government data centers, which is reducing the number of facilities and operating locations. However, this also means that architects are faced with the challenge of designing data centers that centralize servers and applications while keeping them accessible from a variety of locations (see Figure 1).
[Figure 1: Location-based perspective of the government agency network. Headquarters office, standalone office and retail store locations connect over the Internet through SSG Series and M Series devices to the data center, which hosts IC Series, WXC Series and M Series platforms in front of the servers.]
[Figure 2: Data center network functional design model. Applications, storage and the network infrastructure are surrounded by the design attributes: virtualization, high performance, HA/DR, QoS, connectivity, security, policy and control, and visibility.]
Observing all of the installed devices in the data center, one can see large racks of servers (x86 servers, blade servers or mainframe systems), different types of storage switches that use Fibre Channel and InfiniBand, and a variety of applications (Oracle, SAP, Microsoft) that utilize these resources to deliver agency requirements. These three silos are connected through a fast, secure and reliable data center network fabric that forms the fourth silo of systems and devices in the data center. The critical attributes for designing today's data center for extreme availability and superior performance include:
- High Availability Disaster Recovery (HADR)
- Visibility: not only into network traffic and security events, but also into application traffic
- Connectivity: ubiquitous connectivity to disparate sets of resources
- Security: data security and regulatory compliance
- Policy and Control: centralized policy and control
- Quality of Service (QoS)
- High Performance: applications, storage, servers and the network
Virtualization
Virtualization is a technique for hiding the physical characteristics of computing resources from other systems, applications or end users interacting with those resources. A single physical resource, such as a server, operating system, application or storage device, can thus appear to function as multiple logical resources; multiple physical resources, such as storage devices or servers, may appear as a single logical resource; or one physical resource may appear, with somewhat different characteristics, as one logical resource. From a network virtualization perspective, there are various technologies that provide data, control and management plane virtualization. An example of data plane virtualization is a single physical interface that provides security to multiple network segments using 802.1Q VLAN tagging. Control plane virtualization could include multiple routing domains and protocol instances. An example of management plane virtualization supports multiple logical firewall/VPN security systems that use Virtual Systems (VSYS) for true multi-agency environments, such as when state and federal agencies must work together to enact a Homeland Security directive.
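To make the data plane example concrete, the sketch below (illustrative only, not taken from any Juniper product) parses the 802.1Q tag that lets one physical interface serve multiple logical network segments:

```python
import struct

def parse_vlan(frame: bytes):
    """Extract the 802.1Q VLAN ID from a raw Ethernet frame, if tagged.

    Returns (vlan_id, inner_ethertype) for tagged frames,
    or (None, ethertype) for untagged frames.
    """
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:          # not an 802.1Q-tagged frame
        return None, ethertype
    tci = struct.unpack("!H", frame[14:16])[0]
    vlan_id = tci & 0x0FFF           # low 12 bits of the Tag Control Information
    inner = struct.unpack("!H", frame[16:18])[0]
    return vlan_id, inner

# A minimal tagged frame: zeroed MACs, TPID 0x8100, TCI (priority 0, VLAN 100), IPv4
frame = bytes(12) + b"\x81\x00" + struct.pack("!H", 100) + b"\x08\x00"
print(parse_vlan(frame))  # -> (100, 2048)
```

A device applying per-VLAN security policy would branch on the returned VLAN ID, treating each tag value as a separate logical interface.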
Visibility
It is important to have visibility into traffic and security events in order to effectively maintain and manage network resources. This includes the ability to collect IP traffic flow statistics to give organizations insight into data flow, resource utilization, fault isolation, capacity planning, tuning, and offline security analysis. WAN utilization and user-level visibility can help IT better support application performance by leveraging network services and other resources. Security visibility is crucial to granular viewing of security events to help determine how these are being handled. Extending this visibility to develop a deeper understanding of application-specific traffic provides a wide range of operational and performance information that can impact application users. For example, specific compression and acceleration technologies can be applied at the network layer to accelerate email applications such as Microsoft Exchange. Or, it may be necessary to bar employee access to services such as YouTube and social networking sites, as they may impact internal application performance or violate agency security procedures. Understanding the application (YouTube, instant messaging) and enforcing appropriate policies ensures that performance meets or exceeds the expectations of end users.
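As a simple illustration of the flow-level insight described above, the following sketch (the flow records are made up) aggregates byte counts per flow key, the basic operation behind IP traffic flow statistics:

```python
from collections import Counter

# Hypothetical exported flow records: (src_ip, dst_ip, proto, dst_port, bytes)
records = [
    ("10.1.1.5", "10.2.0.9", "tcp", 443, 5200),
    ("10.1.1.5", "10.2.0.9", "tcp", 443, 1800),
    ("10.1.2.7", "10.2.0.9", "udp", 53, 120),
]

# Aggregate byte counts per flow key for capacity planning and fault isolation
by_flow = Counter()
for src, dst, proto, port, nbytes in records:
    by_flow[(src, dst, proto, port)] += nbytes

top = by_flow.most_common(1)[0]
print(top)  # the heaviest talker and its byte count
```

In practice a collector would receive these records continuously from routers and switches; the aggregation step is the same.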
Network Connectivity
Agency employees and associated third-party contractors all require immediate access to applications and information. Citizen services applications also demand significant network performance. The challenge of working from multiple locations further increases the complexity of providing consistent data access. As part of the data center network design, the following critical aspects of external network connectivity must therefore be considered:
- WAN connectivity to enable distributed agency users to access applications
- Internet connectivity to enable secure remote access for remote and mobile users
- Superior speed for data center backbone connectivity, using technologies such as VPLS and MPLS
The internal data center comprises one or more server networks, or data center LANs. The data center LAN hosts a large population of servers that requires high-speed, highly available network connectivity. In addition, LAN segments and networks may be deployed that require different security and capacity levels and services. Typically, connections of 1 Gbps and higher (with 10 Gbps becoming the standard) should be available in the data center network, providing at least 1 Gbps to the server and preferably 10 Gbps at network choke points.
Security
The most critical resources in any agency location are typically the applications themselves and their servers and supporting systems, such as storage and databases. Financial, human resources (HR) and citizen-facing applications with supporting data can, if compromised, create a potential operations and public relations disaster. The core network security layers must therefore protect these mission-critical resources from unauthorized user access and attacks, including at the application level. The security design needs to employ layers of protection from the network edge through the core to the various endpoints. Multiple layers of security protect critical network resources: if one layer fails, the next steps up to stop the attack and/or limit the damage. This security approach allows IT departments to apply the appropriate level of resource protection to the various network entry points based upon their different security, performance and management requirements. Layers of security that should be deployed at the data center include the following:
- Denial of service (DoS) protection at the edge
- Firewall(s) to tightly control who and what gets in and out of the network
- VPN to protect internal communications
- Intrusion prevention system (IPS) solutions to prevent a more generic set of application-layer attacks
Further, application-layer firewalls and gateways also play a key role in protecting specific application traffic such as XML.
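The layered model can be sketched as a chain of independent checks, where a later layer can stop what an earlier layer missed (purely illustrative; the rules, rate threshold and signature are invented for the example):

```python
def rate_limit(pkt):
    # Edge DoS protection: reject traffic arriving above an assumed flood threshold
    return pkt.get("syn_rate", 0) < 1000

def firewall(pkt):
    # Stateful policy: only permit explicitly listed inbound services
    return (pkt["dst_port"], pkt["proto"]) in {(443, "tcp"), (53, "udp")}

def ips(pkt):
    # Application-layer inspection: block payloads matching a known attack signature
    return b"' OR 1=1" not in pkt.get("payload", b"")

LAYERS = [rate_limit, firewall, ips]

def admit(pkt):
    """A packet must pass every layer; if one layer misses an attack,
    the next layer still has a chance to block it."""
    return all(layer(pkt) for layer in LAYERS)

good = {"dst_port": 443, "proto": "tcp", "payload": b"GET /"}
bad  = {"dst_port": 443, "proto": "tcp", "payload": b"x' OR 1=1 --"}
print(admit(good), admit(bad))  # True False
```

The injection attempt passes the rate limiter and firewall but is caught at the IPS layer, which is the defense-in-depth behavior the text describes.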
QoS
In order to assure a high-quality application experience over large networks, QoS levels are assigned and managed to ensure satisfactory performance. A minimum of three levels of QoS (each of which determines a priority for applications and resources) is as follows:
- Real-time
- Mission-critical
- Best effort
MPLS networks and network traffic engineering capabilities are typically deployed to configure label-switched paths (LSPs) with RSVP or LDP. This is especially critical with voice and video deployments, as QoS can mitigate latency and jitter issues by sending traffic along preferred paths or by enabling fast reroute to anticipate performance problems or failures. The data center network design should allow the flexibility to assign multiple QoS levels based on end-to-end assessment, as well as rapid and efficient management to ensure end-to-end QoS for the agency.
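A strict-priority interpretation of the three levels above can be sketched as follows (illustrative only; production QoS schedulers typically combine priority queuing with weighted scheduling and policing):

```python
import heapq

# The three QoS levels from the text; a lower number means higher priority
PRIORITY = {"real-time": 0, "mission-critical": 1, "best-effort": 2}

class QosScheduler:
    """Strict-priority dequeue: real-time drains before mission-critical,
    which drains before best effort."""
    def __init__(self):
        self._q = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, pkt, level):
        heapq.heappush(self._q, (PRIORITY[level], self._seq, pkt))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]

s = QosScheduler()
s.enqueue("backup-chunk", "best-effort")
s.enqueue("voip-frame", "real-time")
s.enqueue("db-txn", "mission-critical")
print([s.dequeue() for _ in range(3)])  # ['voip-frame', 'db-txn', 'backup-chunk']
```

Even though the backup traffic arrived first, the voice frame leaves first, which is the latency guarantee real-time classification is meant to provide.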
High Performance
To effectively address performance requirements related to virtualization, server centralization and data center consolidation, the data center network needs to boost performance of all application traffic, whether local or remote. Providing a LAN-like experience for all users irrespective of physical location, the data center network should optimize application, server, storage and network performance. WAN optimization techniques include data compression, TCP and application protocol acceleration, bandwidth allocation, and traffic prioritization to improve the performance of network traffic. These techniques can also be applied to data replication, as well as backup and restoration between data centers and remote sites, including disaster recovery sites. Within the data center, application front ends (AFEs) and load-balancing solutions boost the performance of both client/server and Web-based applications, speeding Web page downloads. In addition, designers must consider offloading CPU-intensive functions, such as TCP connection processing and HTTP compression, from backend applications and Web servers. Beyond application acceleration, critical infrastructure components such as routers, switches, firewalls, remote access platforms and other security devices can be built on a nonblocking modular architecture, so that they have the performance characteristics necessary to handle the higher volumes of mixed traffic types associated with centralization and consolidation. Designers should also account for remote users.
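As a rough illustration of the data compression element of WAN optimization (the payload and resulting ratio here are synthetic, not measured results from any product):

```python
import zlib

# Redundant application payload of the kind WAN optimization exploits:
# repeated HTTP requests compress extremely well
payload = b"GET /report?id=42 HTTP/1.1\r\nHost: apps.agency.example\r\n" * 200

compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1%} of original)")
```

Real WAN optimization appliances add dictionary-based deduplication across flows and protocol-specific acceleration on top of this, but the bandwidth saving shown is the core idea.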
[Figure 3: The Juniper Networks government framework. An Applications layer (alliance partner products utilizing open interfaces) sits above a Services layer (security, acceleration and optimization, access) and an Infrastructure layer (routing, switching, wireless).]
Location-Based Approach
The key function of the data center is to offload always-on requirements from various locations to a central, stable location that contains the most recent application data. By decoupling the information store from the physical location of the user, agencies derive greater efficiencies by creating a centralized pool of resources. This trend of centralizing applications and consolidating multiple facilities increases the importance of the WAN and other external networks, as users need to traverse a larger network in order to gain access to data. As such, a great deal of emphasis has been given to the design of the agency's private WAN and the Internet edge that hosts remote user connections. The data center does not typically host users, and most certainly does not accommodate data center application users. However, this model can support different operational requirements unique to each agency. Options such as administrative user access can be built into any data center design. WAN services should extend to all of the remote location connections. Among these services are stateful firewalls, intrusion prevention and WAN acceleration. Figure 4 depicts a high-level perspective, illustrating the overall connectivity into the data center and connectivity between data centers.
[Figure 4: Network connectivity to the data centers. Branches, regional offices and campuses connect to Data Center A and Data Center B, which are interconnected.]
Design Principles
Key design principles are derived from operational and technical objectives. The operational objectives are fairly clear: reduce operating expenses, maintain security, adhere to green IT principles and so on. The top-level technical requirements include:
- Leverage shared infrastructures
- Employ virtualization technologies to increase utilization and efficiencies
- Ensure scalability, flexibility, security and application performance over the network
Juniper Networks' key design principles are as follows:
Consolidation of Data Centers and Centralization of Services from Multiple Offices: This principle imposes a variety of technical requirements on the data center network. Centralizing services typically does not improve overall processing time or data availability, but it often increases overall utilization and allows for more streamlined IT operations. Additionally, centralizing services requires maintenance of the unique aspects of legacy distributed processing configurations, such that different processing instances may belong to different agency entities, such as contracts management or tactical operations. Uniqueness and operational freedom must remain virtually independent.
Virtualization: The virtualization of processing has introduced a new standard in resource pooling and resource utility optimization. Such technologies are introduced into the data center at various levels, from large storage arrays and servers to network virtualization and services. The network infrastructure manifests virtualization through VPNs, labels and tags on forwarding plane traffic, while the network services manifest virtualization through the definition of service instances and the application of unique processing logic to the different instances. The overall data virtualization capabilities of the data center are key requirements that effectively drive network virtualization.
HA: Consolidating and centralizing resources, as well as virtualizing technologies, make guaranteeing data access all the more critical. Data should be available regardless of the location from which it is being served. The four key vectors that address network HA are:
- Component
- Device
- Link
- Site
Streamlined Operation and Management of Data Center Services: A consolidated and virtualized environment relies on a single management platform that can control servers, applications, storage and network infrastructure as one. Hence, devices and systems need to support open standards-based interfaces and protocols, so that they can all be controlled from existing and evolving management systems.
High-Level Architecture
Figure 5 illustrates the Juniper Networks data center network architecture. The major architectural tiers include:
- Edge Services Tier: hosts all WAN services connecting to non-data center locations
- Core Network Tier: connects all data center networks within and across data centers
- Network Services Tier: supports WAN acceleration, intrusion prevention and other network services
- Applications and Data Services: provides network connectivity to the data center server and application infrastructure
- Data Center Backbone: provides connectivity between data center facilities for HA, replication and disaster recovery
In the paragraphs that follow, the different network tiers are explored in greater detail.
[Figure 5: Juniper Networks data center network architecture. Edge Services (M Series routers and ISG Series firewalls facing the private WAN and Internet), Network Services (IDP Series intrusion prevention and SSL services), the Core Network (MX Series routers), EX4200 access switches and the IP storage network.]
Edge Services HA
The Edge Services tier should provide HA at three levels where appropriate:
- Link
- Device
- Component
Link-level HA should be applied at all Internet connections. In cases where additional data centers are available, it is best to keep a single leased line/private WAN connection in each data center. Device-level HA is relevant only when link-level HA is enabled, as multiple devices cannot themselves share a single link. Hence, Internet-facing routers and the devices located behind them should support device-level HA. Additionally, component-level HA (multiple power supplies, fans, routing engines) is mandatory for edge-deployed devices.
[Figure 6: Data center network edge services. Internet- and private WAN-facing ISG Series firewalls and MX Series routers deployed in HA pairs.]
While stateful firewalls provide much-needed visibility and fine-grained protection against a variety of floods, all stateful firewalls have an upper limit in their capacity to deal with certain types of floods, such as SYN or Internet Control Message Protocol (ICMP) floods. If a firewall is overwhelmed by a flood, it will experience high CPU load and may drop legitimate traffic. The specific rate varies per firewall, depending upon its configuration and software version. To protect the firewall and network against massive floods, rate limits should be implemented on routers protecting all firewall interfaces. The goal is to limit certain kinds of traffic, such as TCP control traffic and ICMP types, to rates that will not impact available bandwidth or overwhelm the firewall. In selecting VPN design and encryption protocols, trade-offs must be made. Organizations should choose the strongest encryption algorithm that does not compromise performance requirements for the network while maintaining security. A longer key length provides more security against brute force attacks, yet may require more computational power; this in turn lowers performance when encrypting large amounts of data. Note that performance considerations should be made for all devices participating in the VPN, not only devices that terminate at the headend. Satellite devices may not be as powerful as the ASIC-accelerated, crypto-powered headend systems. When analyzing the elements, it is important to acknowledge the handshake protocol encryption requirements. These typically use asymmetric encryption algorithms for improved security and may affect devices dramatically, especially those with many VPN peers. One also must consider bulk encryption algorithms. Typically, they are symmetrical and least influenced by design, due to hardware assistance and their lower cost relative to handshakes.
However, if the design presents few VPN peers and extensive data transfer, this element should be considered: the lowest common denominator will be the speed that determines VPN capacity. Finally, one should consider hashing algorithms. This selection is primarily made based on security requirements, but if hardware assistance is involved, design considerations diminish.
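The router-side rate limiting described above can be sketched with a simple token bucket (illustrative Python, not a device configuration; the 100 pps ICMP limit and burst size are assumed policy values):

```python
class TokenBucket:
    """Token bucket rate limiter: admit up to `rate` packets per second,
    with a short-term burst allowance of `burst` packets."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def allow(self, now):
        # Replenish tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical policy: limit ICMP toward the firewall to 100 pps.
# A 2,000-packet flood arriving at 1,000 pps is mostly shed at the router,
# so the stateful firewall behind it never sees the full flood.
icmp = TokenBucket(rate=100, burst=50)
passed = sum(icmp.allow(now=i * 0.001) for i in range(2000))
print(f"{passed} of 2000 flood packets admitted")
```

Legitimate low-rate ICMP (pings, path MTU discovery) still passes, while the flood is clamped to the configured rate plus the initial burst.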
Core Network HA
By connecting all networks to the core network with full redundancy at the core, HA is achieved without adding complexity or dependency on data center network protocols and convergence. Traditionally, adding HA requires a redesign of the network, but by using standards-based redundancy protocols and a core network approach, HA is enabled with lower operational overhead. In addition to adding redundant devices, it is extremely important to ensure that the core data center devices support in-service operations, such as hot-swappable interfaces and software upgrades.
[Figure 7: Data center core network and network services. M Series routers at the private WAN and Internet edge, ISG Series firewalls, WXC Series WAN acceleration, IDP Series intrusion prevention, data center acceleration (AFE), SSL services, server load balancers (SLB), EX4200 access switches and the IP storage network.]
[Figure 8: Connectivity systems, application systems and network service systems. External (EXT) connections and Networks 1 through 4 map to VRF instances on the core network.]
Applications and Data Services Tier
The Core Network tier connects to the Applications and Data Services tier, which hosts all of the servers, databases and storage. Generally, there are four types of networks, with multiple instances of each type. The primary reasons for the multiple instances are separation of duties within the organization, and differentiated objectives and IT requirements for the different networks. Figure 9 illustrates the four networks:
- External Applications Network: There can be multiple external networks serving separate network segments. These typically include applications such as the public Web site, public mail transfer agent (MTA), Domain Name System (DNS) services, remote access and potential file services that are available through unfiltered access.
- Internal Applications Network: Multiple internal networks serve different levels of internal access from within the organization's various locations. These networks typically connect internal applications such as finance or healthcare services systems.
- Infrastructure Services Network: Only servers that are accessible to users are allowed to access infrastructure networks. These are intended to operate only on an automatic basis, and performance usually is quite predictable. Common examples of infrastructure services include Lightweight Directory Access Protocol (LDAP), databases, file shares, content management and middleware servers.
- Storage: The storage network is built on technologies including Fibre Channel, the InfiniBand serial link, and the Internet Small Computer System Interface (iSCSI) protocol. Critical application servers connect directly to storage devices through a separate Host Bus Adapter (HBA) to ensure fast access to data. Other servers connect using Ethernet to access storage facilities.
[Figure 9: The four application and data services networks, including the storage network (iSCSI, FC, CIFS)]
Data center application connectivity is as follows: each server connects at 1 Gbps to two access network switches, one link to each switch for redundancy. The access switching layer connects to the core network over 10 Gbps uplinks, with each access switch using separate 10 Gbps links. The server connection links and access switch uplinks can use VLAN trunking to support both server virtualization and aggregation, carrying multiple Layer 2 networks over fewer physical connections. Each internal and external applications network can be segmented into several subnetworks (see Figure 10). The servers that host these applications connect with at least a 1 Gbps (currently moving towards 10 Gbps) link to the EX Series switch with Virtual Chassis technology. The EX Series switch connects to the network core via a 10 Gbps connection. Depending on the number of servers, multiple EX Series switches may be required, as shown in Figure 10. Juniper Networks recommends dual homing the access layer switches using Layer 3 with OSPF equal-cost multipath (ECMP) instead of the Spanning Tree Protocol, for deterministic behavior and minimal packet loss.
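The recommended Layer 3 access design can be sketched in Junos-style configuration. This is a minimal, hypothetical fragment, not a configuration from this document: the interface names, addresses and policy name are invented for illustration, and a production deployment would add authentication, BFD and considered area design.

```
# Hypothetical Junos sketch: an access switch dual-homed to two core
# routers over routed uplinks, running OSPF instead of Spanning Tree.
set interfaces xe-0/1/0 unit 0 family inet address 10.0.1.2/30   # uplink to core router 1
set interfaces xe-0/1/1 unit 0 family inet address 10.0.2.2/30   # uplink to core router 2
set protocols ospf area 0.0.0.0 interface xe-0/1/0.0
set protocols ospf area 0.0.0.0 interface xe-0/1/1.0

# Export a load-balancing policy so both equal-cost OSPF next hops are
# installed in the forwarding table (ECMP across the two uplinks).
set policy-options policy-statement ecmp then load-balance per-packet
set routing-options forwarding-table export ecmp
```

With both uplinks active and equal-cost, a failed link converges via OSPF rather than a Spanning Tree recalculation, which is the deterministic behavior the text recommends.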
[Figure 10: EX4200 access switches with 10/100/1000BASE-T/TX server connections, dual-homed to MX Series routers in the network core]
iSCSI SANs
An iSCSI SAN can be based on any network supporting the IP protocols; in practice, this means iSCSI SANs are built from Ethernet switches. Because iSCSI is based on TCP/IP, it can in principle run on any switching infrastructure. In practice, however, depending on the features of the Ethernet switches, the performance characteristics of TCP/IP in the face of dropped frames can limit iSCSI deployments to low-performance SANs. In addition, most iSCSI deployments presently use only 1 Gigabit Ethernet with software drivers, and the resulting performance does not compare favorably to FC at 2 or 4 Gbps with an offload HBA. However, iSCSI SANs can be considerably less expensive than FC SANs. The Internet Storage Name Service (iSNS) server provides all fabric services in an iSCSI SAN.

Where iSCSI-based SANs are desirable, Juniper Networks switches and core routers are excellent platforms for creating the underlying network, because they support symmetric flow control using 802.3x pause frames, random early detection (RED), QoS and logical partitioning. Discards due to RED occur only in congested environments, and most SANs are designed to avoid all but transient congestion. QoS allows traffic priority to be set so that storage traffic can have improved throughput and delivery characteristics during congestion. Logical partitioning allows the networking equipment that implements the SANs to be tailored to fit the needs of the specific data center and its applications.

SANs are often linked to remote data centers so that data can be replicated as part of a Business Continuity/Disaster Recovery (BC/DR) design. The inter-data center connections can run across direct optical repeater circuits such as dense wavelength-division multiplexing (DWDM), private IP-based WAN connections or the Internet. FC traffic uses DWDM for metro-to-regional distances and specialized FCIP tunnel gateways for regional to longer distances.
Using DWDM requires FC switches with sufficient FC buffer credits to span the distance at the desired throughput. Fibre Channel over IP (FCIP) gateways provide complete WAN acceleration services such as compression, large buffering, security, encapsulation and tunneling for FC traffic. iSCSI traffic can directly traverse the WAN connection without requiring a gateway, but iSCSI implementations do not generally provide sufficient buffering to fully utilize high-speed connections, nor do they contain compression or other WAN optimization features. Therefore, iSCSI WAN traffic can often benefit from a WAN acceleration device such as Juniper Networks WXC Series Application Acceleration Platforms. iSCSI traffic also can benefit from a data security gateway providing IPsec VPN tunnels.
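The buffering limitation has a simple arithmetic basis: to keep a WAN link full, a TCP sender must hold a window at least equal to the link's bandwidth-delay product. The sketch below illustrates the calculation; the 1 Gbps link and 40 ms round-trip time are example figures, not values from this document.

```python
# Bandwidth-delay product: the in-flight window an iSCSI initiator's
# TCP stack must sustain to fill a WAN link of a given speed and RTT.

def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep a link of `link_bps` busy at RTT `rtt_s`."""
    return link_bps * rtt_s / 8  # bits in flight, converted to bytes

# Example: a 1 Gbps inter-data-center link with 40 ms round-trip time
# needs a 5 MB window, far beyond typical default socket buffers, which
# is why a WAN acceleration device with large buffers helps.
window = bdp_bytes(1e9, 0.040)
print(f"required window: {window / 1e6:.1f} MB")
```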
[Figure: Data center backbone connectivity, with MX Series core routers in Data Center A and Data Center B cross-connected over Ethernet and point-to-point connections across a private WAN, and M Series routers providing DC B backbone connectivity]
Interconnectivity between data centers can be implemented using MPLS or VPLS as routing and forwarding technologies. This allows distinct IP routing information to be shared across data centers, and forwarding can be performed based on unique, per-domain logic exchanged across the data center facilities. MPLS technologies allow for the exchange of the forwarding and routing information base to achieve consistent forwarding across all networks that interconnect using MPLS. In addition, L2 extensions and technologies can be used so that non-IP or broadcast domain-dependent protocols are connected as part of a single network. For such applications, pseudowires, data-link switching (DLSw) and VPLS technologies should be used with the MPLS implementation.

Ensuring that the service is globally available and is enabled by the Network Services tier is a task that extends beyond the network-forwarding layer. The key premise is that applications and users connect and associate themselves with naming conventions other than IP (HTTP, SIP, CIFS, FTP and so on), typically through DNS. To present available services and data regardless of data center location and device availability, a GSLB technology should be applied so that queries regarding an IP-resident service will always have an answer and that service will always remain available.

BGP multihoming is also important, for two reasons. The first is the transitory phase in which end-service clients still hold DNS information obtained from the GSLB service that does not yet reflect changes to the network (potentially 24 hours, depending on the time to live, or TTL). The second is cases where certain services are tied to a specific data center and Internet or WAN connectivity is lost. The latter is the more common and important use case. Obviously, it is paramount that the data center backbone connectivity layer exist in order to support service availability when data center connectivity is lost.
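The GSLB premise described above can be sketched as health-aware DNS answer selection: queries for a service resolve only to data centers that currently pass health checks. This is a hypothetical sketch, not any vendor's implementation; the service name, site labels and addresses are invented (the IPs are from documentation ranges).

```python
# Hypothetical GSLB answer selection: hand out only the addresses of
# data centers whose health checks pass, so an IP-resident service
# always resolves to a reachable location.

def gslb_answer(service, health, catalog):
    """Return the data-center IPs for `service` whose hosting site is healthy."""
    return [ip for site, ip in catalog.get(service, []) if health.get(site, False)]

# Catalog mapping a service name to (site, address) pairs.
catalog = {"www.agency.gov": [("dc-a", "192.0.2.10"), ("dc-b", "198.51.100.10")]}

# Both sites healthy: either answer may be returned (round-robin, proximity, etc.).
print(gslb_answer("www.agency.gov", {"dc-a": True, "dc-b": True}, catalog))

# Data center A fails its health check: new queries resolve only to B.
# Clients holding cached answers for A keep using it until the TTL expires,
# which is the transitory phase where BGP multihoming matters.
print(gslb_answer("www.agency.gov", {"dc-a": False, "dc-b": True}, catalog))
```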
To summarize, the four key elements that construct the data center backbone are as follows:

Optical transport

Network virtualization technology that interconnects the data centers

IP-level availability/resilience scheme

GSLB

All four elements support the services associated with backbone connectivity and utilization.
[Figure: Unified management, with a schema-driven common element management platform, a common embedded Web UI, and a standards-based common programmatic management interface (DMI) across all Juniper Networks platforms, including the T Series, M Series, WXC Series, firewall/VPN, E Series, J Series, SA Series, IDP Series and ISG Series/IDP Series]
Conclusion
Juniper Networks offers an advanced data center network architecture that consolidates and simplifies the management and administration of government data center network infrastructures to deliver services throughout the distributed agency network. It is enabled by an open systems approach that supports devices and elements to achieve a more efficient, secure and cost-effective network infrastructure. This powerful solution greatly simplifies the design and enables operational efficiencies by deploying networks that are agnostic to multiple media types. The Juniper Networks architecture virtualizes critical network infrastructure components and functionalities (for example, security, load balancing and application acceleration), deployed and managed using a combination of organizational and technical heuristics. It also optimizes network performance and increases efficiencies of the network infrastructure. Finally, it automates network infrastructure management by plugging smoothly into existing agency management frameworks and third-party tools such as IBM Tivoli.
[Figure: Juniper Networks solution components, including policy and management (IC6000, NSM, WX CMS, OAC, SBR Enterprise Series), MX960 routing, EX3200 and EX4200 switching, firewalls, SA6500 secure access and the WXC500 stack]
Partner Products
Symantec
Juniper Networks has teamed with Symantec Corporation to leverage its market-leading anti-spam solution for Juniper Networks small to medium office platforms, helping to slow the flood of unwanted email and the potential attacks it carries. Part of a complete set of UTM features available on Juniper Networks firewall/VPN gateways, the anti-spam engine filters incoming email from known spam and phishing senders to act as a first line of defense. When a known malicious email arrives, it is blocked and/or flagged so that the email server can take an appropriate action.
Avaya IG550
The Avaya IG550 Integrated Gateway provides an additional choice in the Avaya line of Media Gateways. Agencies can consolidate the number of devices that they deploy and manage in remote sites. This solution provides high sustained network performance under load, integrated voice and data security, and multilevel business continuity options. This best-in-class solution is available through the Avaya direct channel and certified Avaya and Juniper Networks resellers. The Avaya IG550 Integrated Gateway consists of two primary components: a Telephony Gateway Module (TGM) and Telephony Interface Modules (TIMs). The TGM550 module inserts into any slot in the Juniper Networks J4350 Services Router or Juniper Networks J6350 Services Router and delivers a rich telephony feature set to the branch office. This feature set includes:

Central Avaya Communication Manager and other communications applications

Call center agent support

6-party meet-me conferencing

Local survivability in the event of a WAN failure

Local music-on-hold and voice announcements

Full encryption of voice traffic
The TGM operates as any other Avaya H.248-based gateway and includes a two-analog trunk/two-analog station module, modular Digital Signal Processors (DSPs) and a memory expansion slot. There is a choice of several TIMs with analog, T1/E1/PRI and BRI options. The TIM514 analog module contains four trunks (FXO) and four stations (FXS); the TIM510 DS1 module supports T1/E1 and ISDN PRI; and the TIM521 module supports four ISDN BRI interfaces.
Corporate and Sales Headquarters Juniper Networks, Inc. 1194 North Mathilda Avenue Sunnyvale, CA 94089 USA Phone: 888.JUNIPER (888.586.4737) or 408.745.2000 Fax: 408.745.2100 www.juniper.net
APAC Headquarters Juniper Networks (Hong Kong) 26/F, Cityplaza One 1111 Kings Road Taikoo Shing, Hong Kong Phone: 852.2332.3636 Fax: 852.2574.7803
EMEA Headquarters Juniper Networks Ireland Airside Business Park Swords, County Dublin, Ireland Phone: 35.31.8903.600 EMEA Sales: 00800.4586.4737 Fax: 35.31.8903.601
To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or authorized reseller.
Copyright 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.