Look at the Cloud sessions offered at the upcoming Fall 2012 Data Center World Conference at:
www.datacenterworld.com.
This presentation was given during the Spring 2012 Data Center World Conference and Expo. Contents contained are owned by
AFCOM and Data Center World and can only be reused with the express permission of AFCOM. For questions or permission, contact:
[email protected].
How to Design a Scalable
Private Cloud
Mark Sand
Datacenter Architect
Citrix Systems Inc.
Defining the Private & Public Clouds
• The Public cloud is a virtual environment that is publicly available for any
consumer to purchase computing resources, usually on a pay-per-use basis, via
an easy-to-use web portal. The public cloud allows any consumer to purchase,
manage, and monitor the lifecycle of their VMs through a user-friendly web
portal.
• The Private cloud offers the same self-service model, but the computing
resources are hosted on infrastructure the organization owns and operates in
its own datacenter(s) and are made available only to internal consumers.
Designing the Cloud Infrastructure
• Cluster/pool(s) configuration:
• We support a mix of 2, 4, 8, and 16GB VMs in each of our cluster/pool(s)
• We average approximately 20 VMs per host (a rough capacity sketch follows this list)
• Datacenter Locations:
• Determine if the cloud will be hosted from several global datacenters or if it will
be hosted from one central datacenter
• If the cloud will be hosted from different locations then it is also important to
follow a set of standards for each of the areas we will be talking about (network,
storage, server HW, etc.)
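To make the cluster/pool sizing above concrete, here is a minimal capacity sketch. The 96GB-per-host figure comes from the server specs later in this deck and the ~20 VMs per host average comes from the bullet above; the VM mix counts and the memory overcommit ratio are illustrative assumptions, not figures from the presentation.

```python
import math

# Minimal capacity sketch; assumptions are noted in the comments.
HOST_RAM_GB = 96        # per-host RAM, from the corporate DC server specs later in this deck
MEM_OVERCOMMIT = 1.5    # assumed memory overcommit ratio -- not stated in the presentation
AVG_VMS_PER_HOST = 20   # average density quoted on this slide

# Assumed demand: how many VMs of each size (GB of RAM) the cluster must host.
vm_mix = {2: 120, 4: 60, 8: 30, 16: 10}   # {VM RAM in GB: VM count} -- illustrative only

total_vm_ram = sum(size * count for size, count in vm_mix.items())
total_vms = sum(vm_mix.values())

hosts_by_ram = math.ceil(total_vm_ram / (HOST_RAM_GB * MEM_OVERCOMMIT))
hosts_by_density = math.ceil(total_vms / AVG_VMS_PER_HOST)

# Size the cluster to the tighter of the two constraints (the HA reserve is added separately).
hosts_needed = max(hosts_by_ram, hosts_by_density)
print(f"{total_vms} VMs / {total_vm_ram} GB of VM RAM -> {hosts_needed} hosts before HA reserve")
```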
Datacenter Locations Example
• US Private Cloud
• We currently have a large private cloud environment that is hosted out of our
corporate datacenter as well as a smaller private cloud that is hosted in two
additional datacenters in the US
• Global Standards
• We have standardized on the same server hardware/configuration
and networking devices for the global private cloud; however, we
were required to create two different storage standards
Network Design
• Network Components
• Management Network
• 2 x switches with 2 x 1GB uplinks connected to each switch. Each switch is connected to a different
distribution layer switch to ensure network redundancy
• VM Traffic
• 2 x blade switches with 4 x 1GB uplinks, configured as a 2GB port channel connected to each switch.
We have three dedicated /24 VLANs for new VMs, and we also trunk existing VLANs to the switches to
account for servers that were P2V’ed and are unable to change their IP address (a quick address-capacity sketch follows this list)
• Storage Traffic (regional datacenters only)
• 2 x blade switches with 4 x 1GB uplinks, configured as a 2GB port channel connected to each switch
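As a quick sanity check on the three dedicated /24 VLANs for new VMs mentioned above, the sketch below counts the usable VM addresses they provide. The subnet prefixes are placeholders; the actual VLAN ranges are not given in the presentation.

```python
import ipaddress

# Hypothetical prefixes standing in for the three dedicated /24 VM VLANs.
new_vm_vlans = ["10.10.1.0/24", "10.10.2.0/24", "10.10.3.0/24"]

usable = 0
for cidr in new_vm_vlans:
    net = ipaddress.ip_network(cidr)
    # Exclude the network, broadcast, and one gateway address per VLAN.
    usable += net.num_addresses - 3

print(f"~{usable} usable addresses for new VMs across {len(new_vm_vlans)} /24 VLANs")
```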
Network Diagram Example – Corporate DC
Note: Storage is connected via an HBA to our fibre channel SAN (not depicted here)
Network Diagram Example – Regional DCs
Note: Storage for the regional servers is connected to our NAS via NFS
SAN/NAS Design
• NAS vs. Fibre arrays:
• Each technology has benefits and drawbacks, so each organization should
choose whichever option best fits their needs
Server Hardware
• Scale Out vs. Scale Up Methodologies
• Scale Out - several host servers are configured with standard to moderate
virtualization specs (2 x CPUs & 48 to 128GBs of RAM) that make up a pool/cluster
• Pros: The servers are less expensive, so you can usually grow the pool faster, and you will sustain less downtime
for VMs if a server fails
• Cons: There are more servers to manage in each pool/cluster
• Scale Up - only a few host servers are configured with large virtualization specs (4 CPUs
or greater & 128GBs of RAM or greater) that can handle a large number of VMs
• Pros: You can run a large number of VMs on the host server due to the vast resources each server has available
• Cons: The servers are costly, so you will likely not be able to grow the pool/cluster as fast, and you will potentially
have a larger outage for VMs if a host fails
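The trade-offs above can be put in rough numbers. In the sketch below, the VM target, per-host densities, and per-server costs are illustrative assumptions, not figures from the presentation; only the general shapes (2 x CPU scale-out hosts vs. 4+ CPU scale-up hosts) follow the bullets.

```python
import math

TARGET_VMS = 320   # assumed number of VMs the cluster must eventually host

def cluster_profile(name, vms_per_host, cost_per_host):
    """Rough host count, hardware cost, and failure blast radius for one design."""
    hosts = math.ceil(TARGET_VMS / vms_per_host)
    return {
        "design": name,
        "hosts_to_manage": hosts,                    # scale-out con: more servers to manage
        "hw_cost": hosts * cost_per_host,            # scale-up con: pricier growth steps
        "vms_down_if_one_host_fails": vms_per_host,  # scale-up con: larger outage per failure
    }

# Densities and costs below are placeholders for comparison only.
scale_out = cluster_profile("scale out (2 x CPU, 48-128GB)", vms_per_host=20, cost_per_host=8_000)
scale_up  = cluster_profile("scale up (4+ CPU, 128GB+)",     vms_per_host=60, cost_per_host=30_000)

for profile in (scale_out, scale_up):
    print(profile)
```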
Server Hardware Cont.
• Minimum specs for virtualization (blade or rack mount):
• 2 x Quad Core CPUs
• 48GBs of RAM (96GBs or greater is preferred for large environments)
• Enough 1GB/10GB NICs that will allow you to have two connections to each
uplink so you can bond the NICs for redundancy
• HBA for servers that will connect to the SAN via fibre
• Ensure you plan for an additional host server to account for failover
(HA) for each cluster/pool
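A small sketch of the N+1 failover rule above: reserve one host's worth of capacity per cluster/pool so the surviving hosts can absorb a failed host's VMs. The pool size here is an assumption; the ~20 VMs per host density is the average quoted earlier in the deck.

```python
hosts_in_pool = 8        # assumed pool size, including the extra failover host
avg_vms_per_host = 20    # average density from the "Designing the Cloud Infrastructure" slide

# Plan VM placement as if one host is always missing (the HA reserve).
usable_vm_slots = (hosts_in_pool - 1) * avg_vms_per_host
print(f"{hosts_in_pool}-host pool with an N+1 HA reserve -> plan for at most {usable_vm_slots} VMs")
```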
Server Diagram Example – Corporate DC
• Server specs:
• 2 x Six Core CPUs
• 96GBs of RAM
• 6 x NICs (2 x embedded & 1 quad
port mezzanine card)
• 1 x dual port HBA mezzanine card
• Interconnect specs:
• 4 x network switches (1GB)
• 2 x 4GB SAN switches
• 1 x 1GB Ethernet pass-thru
module (for backups)
Server Diagram Example – Regional DC
• Server specs:
• 2 x Quad Core CPUs
• 96GBs of RAM
• 8 x NICs (2 x embedded, 1 quad
port & 1 dual port NIC mezzanine
card)
• Interconnect specs:
• 6 x network switches (1GB)
Power Design
• The two blade enclosures that house all of the virtual hosts
are located in two different racks
Monitoring Solution