Eucalyptus 4.2.2 Installation Guide
Contents
Installation Overview................................................................................................5
Introduction to Eucalyptus.......................................................................................6
Eucalyptus Overview.............................................................................................................................................6
Eucalyptus Components........................................................................................................................................6
System Requirements............................................................................................................................................8
Eucalyptus Installation.............................................................................................9
Plan Your Installation............................................................................................................................................9
Eucalyptus Architecture Overview............................................................................................................9
Plan Your Hardware.................................................................................................................................10
Plan Services Placement..........................................................................................................................11
Plan Disk Space.......................................................................................................................................12
Plan Eucalyptus Features.........................................................................................................................13
Plan Networking Modes..........................................................................................................................15
Prepare the Network................................................................................................................................30
Configure Dependencies.....................................................................................................................................31
Configure Bridges...................................................................................................................................32
Disable the Firewall.................................................................................................................................32
Configure SELinux..................................................................................................................................33
Configure NTP........................................................................................................................................33
Configure an MTA..................................................................................................................................33
Enable Packet Routing............................................................................................................................34
Install MidoNet........................................................................................................................................34
Install Repositories..............................................................................................................................................39
Software Signing.....................................................................................................................................40
Install Eucalyptus Release Packages.......................................................................................................41
Install Nightly Release Packages............................................................................................................43
Configure Eucalyptus..........................................................................................................................................43
Configure Network Modes......................................................................................................................44
Create Scheduling Policy........................................................................................................................51
Configure Loop Devices.........................................................................................................................51
Configure Multi-Cluster Networking......................................................................................................52
Start Eucalyptus...................................................................................................................................................52
Start the CLC...........................................................................................................................................53
Start the UFS...........................................................................................................................................53
Start Walrus.............................................................................................................................................53
Start the CC.............................................................................................................................................54
Eucalyptus Upgrade................................................................................................84
Prepare for Upgrade............................................................................................................................................84
Shutdown Services..............................................................................................................................................85
Upgrade Euca2ools Package Repositories..........................................................................................................86
Upgrade Eucalyptus Package Repositories.........................................................................................................87
Restart Eucalyptus Services................................................................................................................................87
Verify the Services...............................................................................................................................................89
Update the Service Images..................................................................................................................................90
Downgrade a Failed Upgrade..............................................................................................................................90
Downgrade Eucalyptus............................................................................................................................91
Downgrade Euca2ools.............................................................................................................................93
Verify the Downgrade..............................................................................................................................93
Installation Overview
This topic helps you understand, plan for, and install Eucalyptus. If you follow the recommendations and instructions
in this guide, you will have a working version of Eucalyptus customized for your specific needs and requirements.
This guide walks you through installations for a few different use cases. You can choose from one of the installation
types listed in the following table.
Quickly deploy Eucalyptus on one machine: If you have a CentOS 6.7 minimal install and a few IP addresses to spare, try the FastStart script. Run the following command as root:
bash <(curl -Ls hphelion.com/eucalyptus-install)
We recommend that you read the section you choose in the order presented. There are no shortcuts for installing
Eucalyptus, though Eucalyptus FastStart is fairly easy. However, to customize your installation, you have to understand
what Eucalyptus is, what the installation requirements are, what your network configuration and restrictions are, and
what Eucalyptus components and features are available based on your needs and requirements.
Document version: Build 3221 (2016-07-14 22:01:24)
Introduction to Eucalyptus
Eucalyptus is a Linux-based software architecture that implements scalable private and hybrid clouds within your existing
IT infrastructure. Eucalyptus allows you to use your own collections of resources (hardware, storage, and network) using
a self-service interface on an as-needed basis.
You deploy a Eucalyptus cloud across your enterprise’s on-premise data center. Users access Eucalyptus over your
enterprise's intranet. This allows sensitive data to remain secure from external intrusion behind the enterprise firewall.
You can install Eucalyptus on the following Linux distributions:
• CentOS 6
• Red Hat Enterprise Linux 6
Eucalyptus Overview
Eucalyptus was designed to be easy to install and as non-intrusive as possible. The software framework is modular, with
industry-standard, language-agnostic communication.
Eucalyptus provides a virtual network overlay that both isolates network traffic of different users and allows two or
more clusters to appear to belong to the same Local Area Network (LAN). Also, Eucalyptus offers API compatibility
with Amazon’s EC2, S3, IAM, ELB, Auto Scaling, and CloudWatch services. This offers you the capability of a hybrid
cloud.
Eucalyptus Components
This topic describes the various components that comprise a Eucalyptus cloud.
The following image shows an example of Eucalyptus components.
Cloud Controller
In many deployments, the Cloud Controller (CLC) and the User-Facing Services (UFS) are on the same host machine.
This server is the entry-point into the cloud for administrators, developers, project managers, and end-users. The CLC
handles persistence and is the backend for the UFS. A Eucalyptus cloud must have exactly one CLC.
User-Facing Services
The User-Facing Services (UFS) serve as endpoints for the AWS-compatible services offered by Eucalyptus: EC2
(compute), AS (AutoScaling), CW (CloudWatch), ELB (LoadBalancing), IAM (Euare), and STS (tokens). A Eucalyptus
cloud can have several UFS host machines.
Management Console
The Eucalyptus Management Console is an easy-to-use web-based interface that allows you to manage your Eucalyptus
cloud. The Management Console is often deployed on the same host machine as the UFS. A Eucalyptus cloud can have
multiple Management Console host machines.
Cluster Controller
The Cluster Controller (CC) must run on a host machine that has network connectivity to both the machines running
the Node Controllers (NCs) and to the machine running the CLC. CCs gather information about a set of NCs and
schedules virtual machine (VM) execution on specific NCs. The CC also manages the virtual machine networks in
Managed and Managed (No VLAN) networking modes. All NCs associated with a single CC must be in the same subnet.
Storage Controller
The Storage Controller (SC) provides functionality similar to the Amazon Elastic Block Store (Amazon EBS). The SC
can interface with various storage systems. Elastic block storage exports storage volumes that can be attached by a VM
and mounted or accessed as a raw block device. EBS volumes can persist past VM termination and are commonly used
to store persistent data. An EBS volume cannot be shared between multiple VMs at once and can be accessed only within
the same availability zone in which the VM is running. Users can create snapshots from EBS volumes. Snapshots are
stored by the OSG and made available across availability zones. Eucalyptus with SAN support provides the ability to
use your enterprise-grade SAN devices to host EBS storage within a Eucalyptus cloud.
Node Controller
The Node Controller (NC) executes on any machine that hosts VM instances. The NC controls VM activities, including
the execution, inspection, and termination of VM instances. It also fetches and maintains a local cache of instance images,
and it queries and controls the system software (host OS and the hypervisor) in response to queries and control requests
from the CC. The NC manages the virtual machine networks in Edge networking mode. The NC is also responsible for
the management of the virtual network endpoint.
System Requirements
To install Eucalyptus, your system must meet the baseline requirements described in this topic.
Note: The specific requirements of your Eucalyptus deployment, including the number of physical machines,
structure of the physical network, storage requirements, and access to software are ultimately determined by the
features you choose for your cloud and the availability of infrastructure required to support those features. For
more information, see the Eucalyptus Reference Architecture and look at the physical resources recommended
for your deployment type. See the Compatibility Matrix in the Release Notes for supported versions.
Compute Requirements
• Physical Machines: All Eucalyptus components must be installed on physical servers, not virtual machines.
• Central Processing Units (CPUs): We recommend that each machine in your Eucalyptus cloud contain either an Intel
or AMD processor with a minimum of two 2GHz cores.
• Operating Systems: Eucalyptus supports the following Linux distributions: CentOS 6 and RHEL 6. Eucalyptus
supports only 64-bit architecture.
• Machine Clocks: Each Eucalyptus host machine and any client machine clocks must be synchronized (for example,
using NTP). These clocks must be synchronized all the time, not only during the installation process.
• Machine Access: Verify that all machines in your network allow SSH login, and that root or sudo access is available
on each of them.
Network Requirements
• All NCs must have access to a minimum of 1Gb Ethernet network connectivity.
• All Eucalyptus components must have at least one Network Interface Card (NIC) for a base-line deployment. For
better network isolation and scale, the CC should have two NICs (one facing the CLC/user network and one facing
the NC/VM network).
• Some configurations require that machines hosting a CC have two network interfaces, each with a minimum of 1Gb
Ethernet.
• For virtual machine traffic isolation, the network ports connecting Ethernet interfaces might need to allow VLAN
trunking.
• For Managed and Managed (No VLAN) modes, Eucalyptus needs two sets of IP addresses.
• For Edge mode, Eucalyptus needs at least one existing network.
• For VPC and MidoNet, Eucalyptus needs MidoNet to be installed. For more information, see Install MidoNet.
• The network connecting machines that host Eucalyptus components (except the CC and NC) must support UDP
multicast for IP address 228.7.7.3. Note that UDP multicast is not used over the network that connects the CC to the
NCs. For information about testing connectivity, see Verify Connectivity.
Once you are satisfied that your system requirements are met, you are ready to plan your Eucalyptus installation.
Eucalyptus Installation
This section details steps to install Eucalyptus.
To install Eucalyptus, perform the following tasks in the order presented in this section.
To successfully plan for your Eucalyptus installation, you must determine two things:
• The infrastructure you plan to install Eucalyptus on: Think about the application workload performance and
resource utilization tuning. Think about how many machines you want on your system.
• The amount of control you plan to give Eucalyptus on your network: Use your existing architecture and policies
to determine the Eucalyptus networking features you want to enable: elastic IPs, security groups, DHCP server, and
Layer 2 VM isolation.
This section describes how to evaluate each tradeoff to determine the best choice to make, and how to verify that the
resource environment can support the features that are enabled as a consequence of making a choice.
By the end of this section, you should be able to specify how you will deploy Eucalyptus in your environment, any
tradeoffs between feature set and flexibility, and where your deployment will integrate with existing infrastructure
systems.
Tip: For more help in planning your installation, see the Eucalyptus Reference Architecture, which includes
use cases and reference architectures for various deployments.
The cloud components: Cloud Controller (CLC) and Walrus, as well as user components: User-Facing Services (UFS)
and the Management Console, communicate with cluster components: the Cluster Controllers (CCs) and Storage
Controllers (SCs). The CCs and SCs, in turn, communicate with the Node Controllers (NCs). The networks between
machines hosting these components must allow TCP connections between them.
However, if the CCs are on separate subnets (one for the network on which the cloud components are hosted and another
for the network that NCs use) the CCs will act as software routers between these networks in some networking
configurations. Each cluster can use an internal private network for its NCs, and the CCs can route traffic from that
private network to a network shared by the cloud components.
Virtual machines (VMs) run on the machines that host NCs. You can use the CCs as software routers for traffic between
clients outside Eucalyptus and VMs. Or the VMs can use the routing framework already in place without CC software
routers. However, depending on the layer-2 isolation characteristics of your existing network, you might not be able to
implement all of the security features supported by Eucalyptus.
Riak CS clusters provide an alternative to Walrus as an object storage provider. SAN clusters are available to Eucalyptus
subscribers.
If multiple services share a single machine, the reduced physical resources available to each service might become a
performance bottleneck.
Cloud Services
The main decision for cloud services is whether to install the Cloud Controller (CLC) and Walrus on the same server.
If they are on the same server, they operate as separate web services within a single Java environment, and they use a
fast path for inter-service communication. If they are not on the same server, they use SOAP and REST to work together.
Sometimes the key factor for cloud services is not performance, but server cost and data center configuration. If you
only have one server available for the cloud, then you have to install the services on the same server.
All services should be in the same data center. They use aggressive time-outs to maintain system responsiveness so
separating them over a long-latency, lossy network link will not work.
User Services
The User Facing Services (UFS) handle most of the AWS APIs and provide an entry point for clients and users interacting
with the Eucalyptus cloud. The UFS and the Management Console are often hosted on the same machine since both
must be accessible from the public, client-facing network.
You may optionally choose to have redundant UFS and Management Console host machines behind a load balancer.
Cluster Services
The Eucalyptus services deployed in the cluster level of a Eucalyptus deployment are the Cluster Controller (CC) and
Storage Controller (SC).
You can install all cluster services on a single server, or you can distribute them on different servers. The choice of one
or multiple servers is dictated by the demands of user workload in terms of external network utilization (CC) and EBS
volume access (SC).
Node Services
The Node Controllers are the services that comprise the Eucalyptus backend. All NCs must have network connectivity
to whatever machine hosts their EBS volumes. This host is either a SAN or the SC.
If necessary, create symbolic links or mount points to larger filesystems from the above locations. Make sure that the
'eucalyptus' user owns the directories.
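For example, a sketch of relocating a Eucalyptus state directory onto a larger filesystem might look like the following; the path /var/lib/eucalyptus and the mount point /mnt/bigdisk are examples only, and should be adjusted to the locations planned for your deployment:
# Example only: move Eucalyptus state to a larger filesystem and link it back
mkdir -p /mnt/bigdisk/eucalyptus
rsync -a /var/lib/eucalyptus/ /mnt/bigdisk/eucalyptus/
mv /var/lib/eucalyptus /var/lib/eucalyptus.orig
ln -s /mnt/bigdisk/eucalyptus /var/lib/eucalyptus
chown -R eucalyptus:eucalyptus /mnt/bigdisk/eucalyptus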
SAN Support
Eucalyptus includes optional, subscription-only support for integrating enterprise-grade SAN (Storage Area Network)
hardware devices into a Eucalyptus cloud.
SAN support extends the functionality of the Eucalyptus Storage Controller (SC) to provide a high performance data
conduit between VMs running in Eucalyptus and attached SAN devices. Eucalyptus dynamically manages SAN storage
without the need for the administrator to manually allocate and de-allocate storage, manage snapshots or set up data
connections.
Object Storage
Eucalyptus supports Walrus and Riak CS as its object storage backend. There is no extra planning if you use Walrus.
If you use Riak CS, you can use a single Riak CS cluster for several Eucalyptus clouds. Basho (the vendor of Riak CS)
recommends five nodes for each Riak CS cluster. This also means that you have to set up and configure a load balancer
between the Riak CS nodes and the object storage gateway (OSG).
Eucalyptus features and the networking modes that support them:

Elastic IPs
Supported networking modes: Edge, Managed, Managed (No VLAN), VPC (MidoNet)
Eucalyptus instances typically have two IPs associated with them: a private one and a public one. Private IPs are intended for internal communications between instances and are usually only routable within a Eucalyptus cloud. Public IPs are used for external access and are usually routable outside the Eucalyptus cloud. How these addresses are allocated and assigned to instances is determined by a networking mode. The distinction between public and private addresses becomes important in Edge, Managed, and Managed (No VLAN) modes, which support elastic IPs. With elastic IPs the user gains control over a set of static IP addresses. Once allocated to the user, those same IPs can be dynamically associated to running instances, overriding pre-assigned public IPs. This allows users to run well-known services (for example, web sites) within the Eucalyptus cloud and to assign those services fixed IPs that do not change.

Security groups
Supported networking modes: Edge, Managed, Managed (No VLAN), VPC (MidoNet)
Security groups are sets of networking rules that define the access rules for all VM instances associated with a group. For example, you can specify ingress rules, such as allowing ping (ICMP) or SSH (TCP, port 22) traffic to reach VMs in a specific security group. When you create a VM instance, unless otherwise specified at instance run-time, it is assigned to a default security group that denies incoming network traffic from all sources. Thus, to allow login and usage of a new VM instance you must authorize network access to the default security group with the euca-authorize command (see the example below).

VM isolation
Supported networking modes: Edge, Managed, VPC (MidoNet)
Although network traffic between VM instances belonging to a security group is always open, Eucalyptus can enforce isolation of network traffic between different security groups. This isolation is enforced using ebtables (Edge) or VLAN tags (Managed), thus protecting VMs from possible eavesdropping by VM instances belonging to other security groups.

DHCP server
Supported networking modes: Edge, Managed, Managed (No VLAN), VPC (MidoNet)
Eucalyptus assigns IP addresses to VMs in all modes.
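For example, to allow SSH and ping traffic to instances in the default security group, commands along the following lines can be used (a sketch; the group name and source CIDR are examples):
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 default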
If Eucalyptus can control and condition the networks its components use, your deployment will support the full set of
API features. However, if Eucalyptus is confined to using an existing network, some of the API features might be
disabled. So, understanding and choosing the right networking configuration is an important (and complex) step in
deployment planning.
Each networking mode is detailed in the following sections.
Edge Mode
Edge mode offers the most features of the EC2 Classic-compatible networking modes. It is designed to integrate into
already extant (or straightforward to deploy) underlying network topologies. However, Edge mode can impose constraints
in certain environments, in which case you can choose another mode.
In Edge networking mode, the component responsible for implementing Eucalyptus VM networking artifacts is running
at the edge of a Eucalyptus deployment: the Node Controller (NC). Eucalyptus provides a stand-alone component called
eucanetd in each NC. This component dynamically receives changing Eucalyptus networking views and is responsible
for configuring the Linux machine on which the NC is running to reflect the latest view.
Edge networking mode integrates with your existing network infrastructure: through the Edge mode configuration
parameters, you describe the existing network to Eucalyptus, which then uses that information when implementing its
networking view.
Edge networking mode integrates with two basic types of pre-existing network setups:
• One flat IP network used to service Eucalyptus component systems, Eucalyptus VM public IPs (elastic IPs), and
Eucalyptus VM private IPs.
• Two networks, one for Eucalyptus components and Eucalyptus VM public IPs, and the other for Eucalyptus VM
private IPs.
Important: Edge networking mode does not set up the network from scratch the way Managed and Managed (No
VLAN) modes do. Instead, it integrates with networks that already exist. If the network, netmask, and router don't
already exist, you must create them outside of Eucalyptus before configuring Edge mode.
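As a rough sketch only (the authoritative format and required fields are described in Configure Network Modes), an Edge network JSON describes the pre-existing subnet, gateway, and address ranges that Eucalyptus is allowed to use; all values below are placeholders:
{
  "InstanceDnsServers": ["10.111.1.10"],
  "PublicIps": ["10.111.50.1-10.111.50.50"],
  "Clusters": [
    {
      "Name": "cluster01",
      "Subnet": {
        "Name": "10.111.0.0",
        "Subnet": "10.111.0.0",
        "Netmask": "255.255.0.0",
        "Gateway": "10.111.0.1"
      },
      "PrivateIps": ["10.111.40.1-10.111.40.50"]
    }
  ]
}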
Managed Mode
In Managed mode, Eucalyptus manages the local network of VM instances and provides all networking features Eucalyptus
currently supports, including VM network isolation, security groups, elastic IPs, and metadata service.
In Managed mode, you define a subnet (usually private, unroutable) from which VM instances will draw their private
IP addresses. Eucalyptus maintains a DHCP server with static mappings for each VM instance that is created. When
you create a new VM instance, you can specify the name of the security group to which that VM will belong. Eucalyptus
then selects a subset of the entire range of IPs to hand out to other VMs in the same security group.
You can also define a number of security groups, and use those groups to apply network ingress rules to any VM that
runs within that network. In this way, Eucalyptus provides functionality similar to Amazon's security groups. In addition,
the administrator can specify a pool of public IP addresses that users may allocate, then assign to VMs either at boot
time or dynamically at run-time. This capability is similar to Amazon's 'elastic IPs'.
Managed mode uses a Virtual LAN (VLAN) to enforce network isolation between instances in different security groups.
If your underlying physical network is also using a VLAN, there can be conflicts that prevent instances from being
network accessible. So you have to determine if your network between the CC and NCs is VLAN clean (that is, if your
VLANs are usable by Eucalyptus). To test if the network is VLAN clean, see Prepare VLAN.
Each VM receives two IP addresses: a public IP address and a private IP address. Eucalyptus maps public IP addresses
to private IP addresses. Access control is managed through security groups.
MidoNet Components
A MidoNet deployment consists of four types of nodes (according to their logical functions or services offered), connected
via four IP networks as depicted in Figure 1. MidoNet does not require any specific hardware, and can be deployed in
commodity x86_64 servers. Interactions with MidoNet are accomplished through Application Programming Interface
(API) calls, which are translated into (virtual) network topology changes. Network state information is stored in a
logically centralized data store, called the Network State Database (NSDB), which is implemented on top of two
open-source distributed coordination and data store technologies: Zookeeper and Cassandra. Implementation of (virtual)
network topology is realized via cooperation and coordination among MidoNet agents, which are deployed in nodes
that participate in MidoNet.
Figure 1: Logical view of a MidoNet deployment. Four components are connected via four networks.
Node types:
• MidoNet Network State Database (NSDB): consists of a cluster of Zookeeper and Cassandra. All MidoNet nodes
must have IP connectivity with NSDB.
• MidoNet API: consists of tomcat and MidoNet web app. Exposes MidoNet REST APIs.
• Hypervisor: MidoNet agents (midolman) are required on all hypervisors to enable VMs to be connected via MidoNet
overlay networks/SDN.
• Gateway: Gateway nodes are connected to the public network, and enable the network flow from MidoNet overlays
to the public network.
Physical Networks
• NSDB: IP network that connects all nodes that participate in MidoNet. While NSDB and Tunnel Zone networks can
be the same, it is recommended to have an isolated (physical or VLAN) segment.
• API: in Eucalyptus deployments only eucanetd/CLC needs access to the API network. Only "special hosts/processes"
should have access to this network. The use of "localhost" network on the node running CLC/eucanetd is sufficient
and recommended in Eucalyptus deployments.
• Tunnel Zone: IP network that transports the MidoNet overlay traffic (Eucalyptus VM traffic), which is not "visible"
on the physical network.
• Public network: network with access to the Internet (or corporate/enterprise) network.
Figure 2: Logical view of a Eucalyptus with MidoNet deployment. VM private network is created/virtualized by MidoNet,
and 'software-defined' by eucanetd. Ideally, each component and network should have its own set of independent
resources. In practice, components are grouped and consolidated into a set of servers, as detailed in different reference
architectures.
MidoNet components, Eucalyptus components, and three extra networks are present.
Figure 3: PoC deployment topology. A single IP network carries NSDB, Tunnel Zone, and Public Network traffic. A
single server handles MidoNet NSDB, API (and possibly Gateway) functionality.
MidoNet Gateway Bindings
Three ways to realize MidoNet Gateway bindings are discussed below, starting with the most recommended setup.
Public CIDR block(s) allocated for Eucalyptus (Euca_Public_IPs) need to be routed to the MidoNet Gateway by the
customer network; this is an environment requirement, outside the control of both the MidoNet and Eucalyptus systems.
One way to accomplish this is to have a BGP terminated link available. MidoNet Gateway will establish a BGP session
with the customer router to: (1) advertise Euca_Public_IPs to the customer router; and (2) get the default route from the
customer router.
If a BGP terminated link is not available, but the routing of Euca_Public_IPs is delegated to the MidoNet Gateway
(by configuring the customer routing infrastructure), a similar setup can be used. In that scenario, static routes are configured
on the customer router (to route Euca_Public_IPs to the MidoNet Gateway), and on MidoNet (to use the customer router
as the default route).
Figure 4: How servers are bound to MidoNet in a PoC deployment with BGP. A BGP terminated link is required: the
gateway node eth device is bound to MidoNet virtual router (when BGP is involved, the MidoNet Gateway and Eucalyptus
CLC cannot be co-located). Virtual machine tap devices are bound to MidoNet virtual bridges.
If routed Euca_Public_IPs are not available, static routes on all involved nodes (L2 connectivity is required among
nodes) can be used as illustrated below.
Figure 5: How servers are bound to MidoNet in a PoC deployment without routed Euca_Public_IPs. Clients that need
communication with Euca_Public_IPs configure static routes using MidoNet Gateway as the router. MidoNet Gateway
configures a static default route to customer router.
If nodes outside the public network broadcast domain (L2) need to access Euca_Public_IPs, a setup using
proxy_arp, as illustrated below, can be used.
Figure 6: How servers are bound to MidoNet in a PoC deployment with proxy_arp. When routed Euca_Public_IPs are
not available, the gateway node should proxy arp for public IP addresses allocated for Eucalyptus, and forward to a
veth device that is bound to a MidoNet virtual router. Virtual machine tap devices are bound to MidoNet virtual bridges.
Production: Small
The Production: Small reference architecture is designed for small scale production quality deployments. It supports
MidoNet NSDB fault tolerance (partial failures), and limited MidoNet Gateways fail-over and load balancing/sharing.
Border Gateway Protocol (BGP) terminated uplinks are recommended for production quality deployments.
Requirements
Servers:
• Four (4) or more modern Intel cores or AMD modules, excluding logical cores that share CPU resources from the
count (Hyperthreads and AMD cores within a module); for gateway nodes, 4 or more cores should be dedicated to the
MidoNet agent (midolman)
• 4GB of RAM reserved for MidoNet Agent (when applicable), 8GB for Gateway nodes
• 4GB of free RAM reserved for MidoNet NSDB (when applicable)
• 4GB of free RAM reserved for MidoNet API (when applicable)
• 30GB of free disk space for NSDB (when applicable)
• Two (2) 10Gbps NICs per server
• Three (3) servers dedicated to MidoNet NSDB
• Two (2) servers as MidoNet Gateways
Physical Network:
• One (1) 10Gbps IP Network for public network (if upstream links are 1Gbps, this could be 1Gbps)
• One (1) 10Gbps IP Network for Tunnel Zone and NSDB
• Public Classless Inter-Domain Routing (CIDR) block (Euca_public_IPs)
• Two (2) BGP terminated uplinks
Limits:
• Thirty two (32) MidoNet agents (i.e., 2 Gateway nodes and 30 Hypervisors)
• Two (2) MidoNet Gateways
• Tolerate 1 NSDB server failure
• Tolerate 1 MidoNet Gateway/uplink failure
• Limited uplinks load sharing/balancing
Deployment Topology
• A 3-node cluster for NSDB (co-located Zookeeper and Cassandra)
• eucanetd co-located with MidoNet API Server (Tomcat)
• Two (2) MidoNet Gateway Nodes
• Hypervisors with midolman
• One 10Gbps IP network handling NSDB and Tunnel Zone traffic
• One 10Gbps IP Network handling Public Network traffic
• API communication via loopback/localhost network
Figure 7: Production:Small deployment topology. A 10Gbps IP network carries NSDB and Tunnel Zone traffic. Another
10Gbps IP network carries Public Network traffic. A 3-node cluster for NSDB tolerates 1 server failure, and 2 gateways
enable network fail-over and limited load balancing/sharing.
Figure 8: How servers are bound to MidoNet in a Production:Small deployment. Gateway Nodes have physical devices
bound to a MidoNet virtual router. These devices should have L2 and L3 connectivity to the Customer's Router, and
with BGP terminated links. Virtual machine tap devices are bound to MidoNet virtual bridges.
NSDB Data Replication
• NSDB is deployed in a cluster of 3 nodes
• Zookeeper and Cassandra both have built-in data replication
• One server failure is tolerated
MidoNet Gateway Failover
• Two paths are available to and from MidoNet, and failover is handled by BGP
MidoNet Gateway Load Balancing and Sharing
• Load Balancing from MidoNet is implemented by MidoNet agents (midolman): ports in a stateful port group with
default routes out are used in a round-robin fashion.
• Partial load sharing from the Customer's router to MidoNet can be accomplished by:
• Partition the allocated CIDR in 2 parts. For example, a /24 CIDR can be split into 2 /25 CIDRs.
• One MidoNet BGP port should advertise the top half (/25) and /24; the other advertises the bottom half (/25) and
/24.
• When both ports are operational, routing will favor the most specific route (i.e., /25). If a port fails, the /24 will
be used instead.
Production: Large
The Production:Large reference architecture is designed for large scale (500 to 600 MidoNet agents) production quality
deployments. It supports MidoNet NSDB fault tolerance (partial failures), and MidoNet Gateways fail-over and load
balancing/sharing.
Border Gateway Protocol (BGP) terminated uplinks are required. Each uplink should come from an independent router.
Requirements:
• Eight (8) or more modern Intel cores or AMD modules, excluding logical cores that share CPU resources from the
count (Hyperthreads and AMD cores within a module); for gateway nodes, 8 or more cores should be dedicated to the
MidoNet agent (midolman)
• 4GB of RAM reserved for MidoNet Agent (when applicable), 16GB for Gateway nodes
• 4GB of free RAM reserved for MidoNet NSDB (when applicable)
• 4GB of free RAM reserved for MidoNet API (when applicable)
• 30GB of free disk space for NSDB (when applicable)
• One 1Gbps and 2 10Gbps NICs per server
• Five (5) servers dedicated to MidoNet NSDB
• Three (3) servers as MidoNet Gateways
Physical Network:
• One 1Gbps IP Network for NSDB
• One 10Gbps IP Network for public network (if upstream links are 1Gbps, this could be 1Gbps)
• One 10Gbps IP Network for Tunnel Zone
• Public Classless Inter-Domain Routing (CIDR) block (Euca_public_IPs)
• Three (3) BGP terminated uplinks, each coming from an independent router
Limits:
• 500 to 600 MidoNet agents
• Three (3) MidoNet Gateways
• Tolerate 1 to 2 NSDB server failures
• Tolerate 1 to 2 MidoNet Gateway/uplink failures
Deployment Topology
• A 5-node cluster for NSDB (co-located Zookeeper and Cassandra)
• eucanetd co-located with MidoNet API Server (Tomcat)
• Three (3) MidoNet Gateway Nodes
• Hypervisors with midolman
• One 1Gbps IP network handling NSDB traffic
• One 10Gbps IP network handling Tunnel Zone traffic
• One 10Gbps IP network handling Public Network traffic
• API communication via loopback/localhost network
Figure 9: Production:Large deployment topology. A 1Gbps IP network carries NSDB; a 10Gbps IP network carries
Tunnel Zone traffic; and another 10Gbps IP network carries Public Network traffic. A 5-node cluster for NSDB tolerates
2 server failures, and 3 gateways enable network fail-over and load balancing/sharing. Servers are bound to MidoNet
in a way similar to Production:Small.
NSDB Data Replication
• NSDB is deployed in a cluster of 5 nodes
• Zookeeper and Cassandra both have built-in data replication
• Up to 2 server failures tolerated
MidoNet Gateway Failover
• Three paths are available to and from MidoNet, and failover is handled by BGP
MidoNet Gateway Load Balancing/Sharing
• Load Balancing from MidoNet is implemented by MidoNet agents (midolman): ports in a stateful port group with
default routes out are used in a round-robin fashion.
• The customer AS should handle multi path routing in order to support load sharing/balancing to MidoNet; for
example, Equal Cost Multi Path (ECMP).
Reserve Ports
Eucalyptus components use a variety of ports to communicate. The following table lists all of the important ports
used by Eucalyptus.
Port Description
TCP 5005 DEBUG ONLY: This port is used for debugging Eucalyptus (using the --debug flag).
TCP 8443 Port for getting user credentials on the CLC. Configurable with euctl.
TCP 8772 DEBUG ONLY: JMX port. This is disabled by default, and can be enabled with the --debug
or --jmx options for CLOUD_OPTS.
TCP 8773 Web services port for the CLC, user-facing services (UFS), object storage gateway (OSG),
Walrus, and SC; also used for external and internal communications by the CLC and Walrus.
Configurable with euctl.
TCP 8774 Web services port on the CC. Configured in the eucalyptus.conf configuration file.
TCP 8775 Web services port on the NC. Configured in the eucalyptus.conf configuration file.
TCP 8777 Database port on the CLC
TCP 8779-8849 jGroups failure detection port on the CLC, UFS, OSG, Walrus, and SC. If port 8779 is available, it will
be used; otherwise, the next available port in the range (up to TCP 8849) will be attempted until an unused port is found.
TCP 8888 The default port for the Eucalyptus Management Console. Configured in the
/etc/eucalyptus-console/console.ini file.
TCP 16514 TLS port on Node Controller, required for node migrations
UDP 7500 Port for diagnostic probing on the CLC, UFS, OSG, Walrus, and SC
UDP 8773 Membership port for the SC, Walrus, and any UFS
UDP 8778 The bind port used to establish multicast communication
TCP/UDP 53 DNS port on UFS
Verify Connectivity
Verify connectivity between the machines you’ll be installing Eucalyptus on. Some Linux distributions provide default
TCP/IP firewalling rules that limit network access to machines. Disable these default firewall settings before you install
Eucalyptus components to ensure that the components can communicate with one another.
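On CentOS 6 and RHEL 6, for example, the default iptables firewall can be disabled as follows (assuming your site policy permits it):
service iptables stop
chkconfig iptables off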
Note: Any firewall running on the CC must be compatible with the dynamic changes performed by Eucalyptus
when working with security groups. Eucalyptus will flush the 'filter' and 'nat' tables upon boot.
Verify component connectivity by performing the following checks on the machines that will be running the listed
Eucalyptus components.
1. Verify connection from an end-user to the CLC on TCP ports 8443 and 8773
2. Verify connection from an end-user to Walrus on TCP port 8773
3. Verify connection from the CLC, SC, and NC to SC on TCP port 8773
4. Verify connection from the CLC, SC, and NC to Walrus on TCP port 8773
5. Verify connection from Walrus and SC to CLC on TCP port 8777
6. Verify connection from CLC to CC on TCP port 8774
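One simple way to perform these checks, if a tool such as nc (netcat) is installed, is to probe each port from the source machine; the host names below are placeholders:
nc -zv clc.example.com 8443
nc -zv clc.example.com 8773
nc -zv cc.example.com 8774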
Prepare VLAN
Managed networking mode requires that switches and routers be “VLAN clean.” This means that switches and routers
must allow and forward VLAN tagged packets. If you plan to use the Managed networking mode, you can verify that
the network is VLAN clean between machines running Eucalyptus components by performing the following test.
Tip: You only need to read this section if you are using Managed mode. If you aren’t using Managed mode,
skip this section.
1. Choose two IP addresses from the subnet you plan to use with Eucalyptus, one VLAN tag from the range of VLANs
that you plan to use with Eucalyptus, and the network interface that will connect your planned CC and NC servers.
The examples in this section use the IP addresses 192.168.1.1 and 192.168.1.2, VLAN tag 10, and network interface
eth3, respectively.
2. On the planned CC server, choose the interface on the local Ethernet and run:
vconfig add eth3 10
ifconfig eth3.10 192.168.1.1 up
3. On a planned NC server, choose the interface on the local network and run:
vconfig add eth3 10
ifconfig eth3.10 192.168.1.2 up
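To complete the check (a sketch of the usual final step), verify that the two hosts can reach each other over the tagged interfaces; if the pings fail, the network between the CC and NC is not VLAN clean:
# On the CC server:
ping 192.168.1.2
# On the NC server:
ping 192.168.1.1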
Configure Dependencies
Before you install Eucalyptus, ensure you have the appropriate dependencies installed and configured.
Configure Bridges
For Managed (No VLAN) and Edge modes, you must configure a Linux Ethernet bridge on all NCs. This bridge
connects your local Ethernet adapter to the cluster network. Under normal operation, NCs attach virtual machine
instances to this bridge when the instances are booted.
To configure a bridge in CentOS 6 or RHEL6, you need to create a file with bridge configuration (for example, ifcfg-brX)
and modify the file for the physical interface (for example, ifcfg-ethX). The following steps describe how to set up a
bridge on both CentOS 6 and RHEL 6. We show examples for configuring bridge devices that either obtain IP addresses
using DHCP or statically.
1. Install the bridge-utils package.
yum install bridge-utils
3. Open the network script for the device you are adding to the bridge and add your bridge device to it. The edited file
should look similar to the following:
DEVICE=eth0
# change the hardware address to match the hardware address your NIC uses
HWADDR=00:16:76:D6:C9:45
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no
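The bridge configuration file itself (for example, /etc/sysconfig/network-scripts/ifcfg-br0) is not shown in the steps above; a sketch of what it might contain is given below for both DHCP and static addressing. All addresses are examples:
# DHCP example
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no

# Static example
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no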
Configure SELinux
Security-enabled Linux (SELinux) is a security feature for Linux that allows you to set access control through policies.
Eucalyptus is not currently compatible with SELinux.
To configure SELinux to allow Eucalyptus access:
1. Open /etc/selinux/config and edit the line SELINUX=enforcing to SELINUX=permissive.
2. Save the file.
3. Run the following command:
setenforce 0
Configure NTP
Eucalyptus requires that each machine have the Network Time Protocol (NTP) daemon started and configured to run
automatically on reboot.
To use NTP:
1. Install NTP on the machines that will host Eucalyptus components.
yum install ntp
2. Open the /etc/ntp.conf file and add NTP servers, if necessary, as in the following example.
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
6. Start NTP.
service ntpd start
7. Synchronize your system clock, so that when your system is rebooted, it does not get out of sync.
hwclock --systohc
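Two steps that are commonly also needed (not shown in the numbered steps above) are enabling ntpd at boot and forcing an initial synchronization; adjust the server name to one you configured:
chkconfig ntpd on
ntpdate -u 0.pool.ntp.org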
Configure an MTA
All machines running the Cloud Controller must run a mail transport agent server (MTA) on port 25. Eucalyptus uses
the MTA to deliver or relay email messages to cloud users' email addresses.
You can use Sendmail, Exim, Postfix, or something simpler. The MTA server does not have to be able to receive incoming
mail.
Many Linux distributions satisfy this requirement with their default MTA. For details about configuring your MTA, go
to the documentation for your specific product.
To test your mail relay for localhost, send email to yourself from the terminal using mail.
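For example, assuming the mailx package provides the mail command:
echo "MTA relay test" | mail -s "MTA test" root@localhost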
Install MidoNet
Eucalyptus requires MidoNet to enable VPC functionality. This section describes how to install MidoNet for use with
Eucalyptus.
Before you begin:
• See the Planning your Network section of the guide to create a map of how MidoNet / Eucalyptus will be deployed
into your environment.
• See the MidoNet Installation Guide to become familiar with the general MidoNet installation procedure and concepts.
Note: If you are not using VPC with Eucalyptus, you do not need to install MidoNet.
Prerequisites
This topic discusses the prerequisites for installing MidoNet.
Repository Access
In order to use MidoNet with Eucalyptus you will need access to the Midokura repositories. You can sign up here:
https://support.midokura.com/access/unauthenticated.
Create /etc/yum.repos.d/midokura.repo on all machines that will run MidoNet components including
Zookeeper. For example:
[midokura]
name=Midokura Repository
baseurl=http://USERNAME:[email protected]/repo/v1.9/stable/RHEL/6/
gpgkey=http://USERNAME:[email protected]/repo/RPM-GPG-KEY-midokura
gpgcheck=1
enabled=1
[midokura-misc]
name=midokura Misc Package Repo
baseurl=http://repo.midonet.org/misc/RHEL/6/misc/
gpgkey=http://repo.midonet.org/RPM-GPG-KEY-midokura
enabled=1
gpgcheck=1
metadata_expire=1
Zookeeper
Zookeeper is where MidoNet stores most of its running state. This service needs to be up and running before any other
installation takes place.
Note: For advanced zookeeper installation (clustered for reliability), please see the MidoNet NSDB Installation
Guide.
For a simple single-server installation, install the zookeeper package on any server that is IP accessible from all midolman
agents (for example: on the Cloud Controller server itself), start the service and ensure that the service is enabled. For
example:
yum install zookeeper
service zookeeper start
chkconfig zookeeper on
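As an optional sanity check (assuming nc is installed), Zookeeper should answer the ruok probe with imok:
echo ruok | nc 127.0.0.1 2181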
Cassandra
Cassandra is used to track flows in MidoNet. This service needs to be up and running before any other installation takes
place. For a simple single-server installation, install Cassandra on any server that is IP accessible from all midolman
agents (for example: on the Cloud Controller server itself), start the service and ensure that the service is enabled.
To install Cassandra, please refer to the documentation for Cassandra installation and configuration.
Note: For advanced MidoNet-specific installation of Cassandra, please refer to the MidoNet NSDB Installation
Guide.
• The midonet-api must run co-located with the Eucalyptus Cloud Controller
• Each Node Controller must run a Midolman agent
• The Cloud Controller must run a Midolman agent
• It is recommended that your User Facing Services host be used as the Midonet Gateway (i.e. running a Midolman
agent) when configuring Eucalyptus
• The Midonet Gateway will take over whichever interface the Eucalyptus GatewayInterface is configured for and block
traffic that is not to/from Midonet.
• If you have only one interface on your host, you need to follow the instructions from Midokura on setting up
a veth pair so that Midonet can take over a virtual interface rather than a physical one, as in this example (skip
step 6 for Eucalyptus installs): http://docs.midonet.org/docs/latest/operations-guide/content/static_setup.html
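The network JSON below is an example Eucalyptus network configuration for VPCMIDO mode. UFS_HOST, the EucanetdHost and GatewayHost names, the gateway addresses, and PUBLIC_IPS are placeholders to be replaced with values from your deployment; the file is applied later, when you configure the VPCMIDO network mode (see Configure Network Modes).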
{
"InstanceDnsServers": [
"UFS_HOST"
],
"Mido": {
"EucanetdHost": "clcfrontend",
"GatewayHost": "ufsfrontend",
"GatewayIP": "172.19.0.2",
"GatewayInterface": "veth1",
"PublicGatewayIP": "172.19.0.1",
"PublicNetworkCidr": "172.19.0.0/30"
},
"Mode": "VPCMIDO",
"PublicIps": [
"PUBLIC_IPS"
]
}
3. Install midonet-api.
yum install midonet-api
4. Install python-midonetclient.
yum install python-midonetclient
</context-param>
<context-param>
<param-name>auth-admin_role </param-name>
<param-value>admin </param-value>
</context-param>
<!-- Mock auth configuration -->
<context-param>
<param-name>mock_auth-admin_token </param-name>
<param-value>999888777666 </param-value>
</context-param>
<context-param>
<param-name>mock_auth-tenant_admin_token </param-name>
<param-value>999888777666 </param-value>
</context-param>
<context-param>
<param-name>mock_auth-tenant_user_token </param-name>
<param-value>999888777666 </param-value>
</context-param>
<!-- Keystone configuration -->
<context-param>
<param-name>keystone-service_protocol </param-name>
<param-value>http </param-value>
</context-param>
<context-param>
<param-name>keystone-service_host </param-name>
<param-value>127.0.0.1 </param-value>
</context-param>
<context-param>
<param-name>keystone-service_port </param-name>
<param-value>999888777666 </param-value>
</context-param>
<context-param>
<param-name>keystone-admin_token </param-name>
<param-value>999888777666 </param-value>
</context-param>
<!-- This tenant name is used to get the scoped token from Keystone,
and should be the tenant name of the user that owns the token sent in the
request -->
<context-param>
<param-name>keystone-tenant_name </param-name>
<param-value>admin </param-value>
</context-param>
<!-- CloudStack auth configuration -->
<context-param>
<param-name>cloudstack-api_base_uri </param-name>
<param-value>http://127.0.0.1:8080 </param-value>
</context-param>
<context-param>
<param-name>cloudstack-api_path </param-name>
<param-value>/client/api? </param-value>
</context-param>
<context-param>
<param-name>cloudstack-api_key </param-name>
<param-value/>
</context-param>
<context-param>
<param-name>cloudstack-secret_key </param-name>
<param-value/>
</context-param>
<!-- Zookeeper configuration -->
<!-- The following parameters should match the ones in midolman.conf
except 'use_mock' -->
<context-param>
<param-name>zookeeper-use_mock </param-name>
<param-value>false </param-value>
</context-param>
<context-param>
<param-name>zookeeper-zookeeper_hosts </param-name>
<!-- comma separated list of Zookeeper nodes(host:port) -->
<param-value>ZOOKEEPER_IP:2181, </param-value>
</context-param>
<context-param>
<param-name>zookeeper-session_timeout </param-name>
<param-value>30000 </param-value>
</context-param>
<context-param>
<param-name>zookeeper-midolman_root_key </param-name>
<param-value>/midonet/v1 </param-value>
</context-param>
<!-- VXLAN gateway configuration -->
<context-param>
<param-name>midobrain-vxgw_enabled </param-name>
<param-value>false </param-value>
</context-param>
<!-- Servlet Listener -->
<listener>
<listener-class><!-- Use Jersey's Guice compatible context listener
-->
org.midonet.api.servlet.JerseyGuiceServletContextListener
</listener-class>
</listener>
<!-- Servlet filter -->
<filter>
<!-- Filter to enable Guice -->
<filter-name>Guice Filter </filter-name>
<filter-class>com.google.inject.servlet.GuiceFilter </filter-class>
</filter>
<filter-mapping>
<filter-name>Guice Filter </filter-name>
<url-pattern>/* </url-pattern>
</filter-mapping>
</web-app>
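The steps between this web.xml configuration and the verification below are not reproduced here. In a typical midonet-api deployment the Tomcat service hosting the API is started at this point (an assumption based on a standard Tomcat setup; the service name may be tomcat or tomcat6 depending on the package):
service tomcat start
chkconfig tomcat on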
9. After approximately one minute, you should be able to access the Midonet API using the Midonet shell:
midonet-cli -A --midonet-url=http://127.0.0.1:8080/midonet-api
Note: If this command does not work, check /var/log/tomcat/catalina.out for possible errors.
4. Start midolman:
service midolman start
chkconfig midolman on
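If the tunnel zone has not been created yet, it can be created and the registered hosts listed from the MidoNet shell first (a sketch; the zone name tzone0 and the gre tunnel type are examples):
midonet> tunnel-zone create name tzone0 type gre
midonet> host list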
3. After verifying all your hosts are listed, add each host to your tunnel zone as follows. Replace HOST_N_IP with
the IP of your Node Controller or User Facing Services host that you used to register the component with Eucalyptus:
midonet> tunnel-zone tzone0 add member host host0 address HOST_0_IP
midonet> tunnel-zone tzone0 add member host host1 address HOST_1_IP
midonet> tunnel-zone tzone0 add member host host2 address HOST_2_IP
You are now ready to install and configure Eucalyptus to use this Midonet installation.
Install Repositories
This section guides you through installing Eucalyptus from RPM package downloads.
The first step to installing Eucalyptus is to download the RPM packages. The following terminology might help you as
you proceed through this section.
When you're ready, continue to Software Signing.
Eucalyptus open source software
Eucalyptus release packages include the freely available components, which enable you to deploy a Eucalyptus cloud.
Eucalyptus enterprise software
Paid subscribers have access to additional software features (for example, SAN support). If you are a subscriber, you
receive an entitlement certificate and a private key that allow you to download Eucalyptus subscription software. You
will also receive a GPG public key to be used to verify the software integrity.
Euca2ools CLI
Euca2ools is the Eucalyptus command line interface for interacting with web services. It is compatible with many
Amazon AWS services, so it can be used with Eucalyptus as well as AWS.
RPM and YUM and software signing
Eucalyptus CentOS and RHEL download packages are in RPM (Red Hat Package Manager) format and use the YUM
package management tool. We use GPG keys to sign our software packages and package repositories.
EPEL software
EPEL (Extra Packages for Enterprise Linux) is a repository of free, open source software that is maintained separately
from the licensed RHEL distribution. It requires its own release package.
Nightly releases
Eucalyptus nightly packages are the latest Eucalyptus builds, which are available for early testing or development work.
Nightlies should not be used in production.
Software Signing
This topic describes Eucalyptus software signing keys.
We use a number of GPG keys to sign our software packages and package repositories. The necessary public keys are
provided with the relevant products and can be used to automatically verify software updates. You can also verify the
packages or package repositories manually using the keys on this page.
Use the rpm --checksig command on a downloaded file to verify an RPM package for an HP Helion Eucalyptus
product. For example:
rpm --checksig -v myfilename.rpm
Follow the procedure detailed on Debian's SecureApt web page to verify a deb package for an HP Helion Eucalyptus
product.
Please do not use package signing keys to encrypt email messages.
The following keys are used for signing Eucalyptus software:
2. (Optional) If you are a Eucalyptus subscriber, you will receive two RPM package files containing your license for
subscription-only services. Install these packages on each host machine that will run a Eucalyptus service. Install
the license files to access the enterprise repository.
yum install eucalyptus-enterprise-license*.noarch.rpm \
http://downloads.eucalyptus.com/software/subscription/eucalyptus-enterprise-release-4.2-1.el6.noarch.rpm
3. Configure the Euca2ools package repository on each host machine that will run a Eucalyptus service or Euca2ools:
yum install
http://downloads.eucalyptus.com/software/euca2ools/3.3/rhel/6/x86_64/euca2ools-release-3.3-1.el6.noarch.rpm
Enter y when prompted to install this package.
4. Configure the EPEL package repository on each host machine that will run a Eucalyptus service or Euca2ools:
yum install
http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
Enter y when prompted to install this package.
5. If you are installing on RHEL 6, you must enable the Optional repository in Red Hat Network for each NC, as
follows:
a) Go to http://rhn.redhat.com and navigate to the system that will run the NC.
b) Click Alter Channel Subscriptions.
c) Make sure the RHEL Server Optional checkbox is checked.
d) Click Change Subscriptions.
6. The following steps should be performed on each NC host machine.
a) Install the Eucalyptus Node Controller software on each NC host:
yum install eucalyptus-nc
b) Remove the default libvirt network. This step allows the eucanetd dhcpd server to start.
virsh net-destroy default
virsh net-autostart default --disable
c) Check that the KVM device node has proper permissions.
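For example, the device node can be inspected with the following command; the exact owner and group vary by distribution, but it must be readable and writable by the account that runs virtual machines (often via the kvm group):
ls -l /dev/kvm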
7. On each CLC host machine, install the Eucalyptus Cloud Controller software.
yum install eucalyptus-cloud
8. Note: The VPCMIDO network mode currently requires nginx to be installed on the CLC.
(Optional) If you are using VPCMIDO network mode, install the nginx package with the following command on
the CLC:
yum install nginx
This installs nginx for metadata support.
9. Install the backend service image package on the machine hosting the CLC:
yum install eucalyptus-service-image
This installs worker images for both the load balancer and imaging services.
10. On the UFS host machine, install the Eucalyptus Cloud Controller software.
yum install eucalyptus-cloud
11. (Optional) On the UFS host machine, also install the Management Console.
yum install eucaconsole
The Management Console can run on any host machine, even one that does not have other Eucalyptus services. For
more information, see the Console Guide.
12. Install the software for the remaining Eucalyptus services. The following example shows services being installed on
the same host machine. We recommend that you use a different host machine for each service, when possible:
yum install eucalyptus-cc eucalyptus-sc eucalyptus-walrus
This installs the cloud controller (CC), storage controller (SC), and Walrus Backend services.
13. (Optional) If you are a subscriber and use a SAN, run the appropriate command for your device on each CLC host
machine:
For HP 3PAR SAN:
yum install eucalyptus-enterprise-storage-san-threepar-libs
For NetApp SAN:
yum install eucalyptus-enterprise-storage-san-netapp-libs
For Dell EqualLogic SAN:
yum install eucalyptus-enterprise-storage-san-equallogic-libs
14. (Optional) If you are a subscriber and use a SAN, run the appropriate command for your device on each SC host
machine:
For HP 3PAR SAN:
yum install eucalyptus-enterprise-storage-san-threepar
For NetApp SAN:
2. On all host machines that will run either Eucalyptus or Euca2ools, run the following commands:
yum install
http://downloads.eucalyptus.com/software/euca2ools/nightly/3.3/rhel/6/x86_64/euca2ools-release-nightly-3.3-1.el6.noarch.rpm
Enter y when prompted to install this package.
4. Install Eucalyptus packages. The following example shows most services being installed all on the same host machine.
You can use a different host for each service.
yum install eucalyptus-cloud eucalyptus-cc eucalyptus-sc eucalyptus-walrus
5. Install the backend service image package on the machine hosting the CLC:
yum install eucalyptus-service-image
This installs worker images for both the load balancer and imaging services.
6. On each planned NC host, install the NC package:
yum install eucalyptus-nc
Configure Eucalyptus
This section describes the parameters you need to set in order to launch Eucalyptus for the first time.
The first launch of Eucalyptus is different than a restart of a previously running Eucalyptus deployment in that it sets
up the security mechanisms that will be used by the installation to ensure system integrity.
Eucalyptus configuration is stored in a text file, /etc/eucalyptus/eucalyptus.conf, that contains key-value
pairs specifying various configuration parameters. Eucalyptus reads this file when it launches and when various forms
of reset commands are sent to the Eucalyptus components.
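The file uses simple KEY="value" lines, with # marking comments; a short illustrative fragment follows (values shown are placeholders, not recommendations):
# /etc/eucalyptus/eucalyptus.conf (fragment)
LOGLEVEL="INFO"
CLOUD_OPTS=""
VNET_MODE="EDGE"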
Important: Perform the following tasks after you install Eucalyptus software, but before you start the Eucalyptus
services.
VNET_BRIDGE
Networking modes: Edge (on NC), Managed (No VLAN)
On an NC, this is the name of the bridge interface to which instances' network interfaces should attach. A physical interface that can reach the CC must be attached to this bridge. A common setting for KVM is br0.
VNET_DHCPDAEMON
Networking modes: Edge (on NC), Managed, Managed (No VLAN)
The ISC DHCP executable to use. This is set to a distro-dependent value by packaging. The internal default is /usr/sbin/dhcpd3.
VNET_DHCPUSER
Networking modes: Managed, Managed (No VLAN)
The user the DHCP daemon runs as on your distribution. For CentOS 6 and RHEL 6, this is typically root. Default: dhcpd
VNET_PRIVINTERFACE
Networking modes: Edge (on NC), Managed, Managed (No VLAN)
The name of the network interface that is on the same network as the NCs. In Managed and Managed (No VLAN) modes this must be a bridge for instances in different clusters but in the same security group to be able to reach one another with their private addresses. Default: eth0
VNET_PUBINTERFACE
Networking modes: Edge (on NC), Managed, Managed (No VLAN)
On a CC, this is the name of the network interface that is connected to the "public" network. On an NC, this is the name of the network interface that is connected to the same network as the CC. Depending on the hypervisor's configuration this may be a bridge or a physical interface that is attached to the bridge. Default: eth0
VNET_SUBNET, VNET_NETMASK
Networking modes: Managed, Managed (No VLAN)
These options control the internal private network used by instances within Eucalyptus. Eucalyptus assigns a distinct subnet of private IP addresses to each security group.
VNET_ADDRSPERNET
Networking modes: Managed, Managed (No VLAN)
This setting dictates how many addresses each of these security group subnets should contain. Specify a power of 2 between 16 and 2048. This is directly related, though not equal, to the number of instances that can reside in each security group. Eucalyptus reserves eleven addresses per security group.
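As an illustration of how these options fit together, an Edge-mode NC's /etc/eucalyptus/eucalyptus.conf might combine settings such as the following (a sketch with placeholder values drawn from the examples elsewhere in this guide; adjust interface and bridge names to your own hosts):
VNET_MODE="EDGE"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="br0"
VNET_BRIDGE="br0"
VNET_DHCPDAEMON="/usr/sbin/dhcpd"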
Configure the CC
1. Log in to the CC and open the /etc/eucalyptus/eucalyptus.conf file.
2. Go to the Network Configuration section, uncomment and set the following:
VNET_MODE="EDGE"
"Subnet": {
"_comment": "Subnet definition that this cluster will use for
private addressing"
"Name": "",
"_comment": "Arbitrary name for the subnet"
"Subnet": "",
"_comment": "The subnet that will be used for private
addressing"
"Netmask": "",
"_comment": "Netmask for the subnet defined above"
"Gateway": "",
_comment": "Gateway that will route packets for the
private subnet"
},
"PrivateIps": []
"_comment": "Private IPs that will be handed out to instances
as they launch"
},
]
}
The following example is for a setup with one cluster (AZ), called PARTI00, with a flat network
topology.
{
"InstanceDnsDomain": "eucalyptus.internal",
"InstanceDnsServers": ["10.1.1.254"],
"MacPrefix": "d0:0d",
"PublicIps": [
"10.111.101.84",
"10.111.101.91",
"10.111.101.92",
"10.111.101.93"
],
"Subnets": [
],
"Clusters": [
{
"Name": "PARTI00",
"Subnet": {
"Name": "10.111.0.0",
"Subnet": "10.111.0.0",
"Netmask": "255.255.0.0",
"Gateway": "10.111.0.1"
},
"PrivateIps": [
"10.111.101.94",
"10.111.101.95"
]
},
]
}
For a multi-cluster deployment, add an additional cluster to your configuration for each cluster you
have. The following example has two clusters, PARTI00 and PARTI01.
{
"InstanceDnsDomain": "eucalyptus.internal",
"InstanceDnsServers": ["10.1.1.254"],
"PublicIps": [
"10.111.101.84",
"10.111.101.91",
"10.111.101.92",
"10.111.101.93"
],
"Subnets": [
],
"Clusters": [
{
"Name": "PARTI00",
"MacPrefix": "d0:0d",
"Subnet": {
"Name": "10.111.0.0",
"Subnet": "10.111.0.0",
"Netmask": "255.255.0.0",
"Gateway": "10.111.0.1"
},
"PrivateIps": [
"10.111.101.94",
"10.111.101.95"
]
},
{
"Name": "PARTI01",
"MacPrefix": "d0:0d",
"Subnet": {
"Name": "10.111.0.0",
"Subnet": "10.111.0.0",
"Netmask": "255.255.0.0",
"Gateway": "10.111.0.1"
},
"PrivateIps": [
"10.111.101.96",
"10.111.101.97"
]
}
]
}
For more information about multi-cluster configuration, see Configure Multi-Cluster Networking.
VNET_DHCPDAEMON
VNET_SUBNET
VNET_NETMASK
VNET_ADDRSPERNET
VNET_DNS
For example:
VNET_MODE="MANAGED"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="br0"
VNET_BRIDGE="br0"
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
VNET_SUBNET="172.16.0.0"
VNET_NETMASK="255.255.0.0"
VNET_ADDRSPERNET="32"
VNET_DNS="8.8.8.8"
"ManagedSubnet": {
"Netmask": "255.255.0.0",
"Subnet": "172.16.0.0"
},
"Mode": "MANAGED-NOVLAN",
"PublicIps": [
"10.111.31.177",
"10.111.31.178",
"10.111.31.179",
"10.111.31.180",
"10.111.31.181",
"10.111.31.182",
"10.111.31.183",
"10.111.31.184"
]
}
Start Eucalyptus
Start the Eucalyptus services in the order presented in this section.
Make sure that each host machine you installed a Eucalyptus service on resolves to an IP address. Edit the /etc/hosts
file if necessary.
Note: Eucalyptus 4.2 requires version 7 of the Java Virtual Machine. Make sure that your CLOUD_OPTS
setting in the /etc/eucalyptus/eucalyptus.conf file either does not set --java-home, or that --java-home
points to a version 7 JVM. This needs to happen before services are started.
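For example, you might confirm the current setting and the default JVM version with commands such as the following (the second command only reports the system default java, so also check any path given explicitly with --java-home):
grep CLOUD_OPTS /etc/eucalyptus/eucalyptus.conf
java -version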
clcadmin-initialize-cloud
This command might take a minute or more to finish. If it fails, check
/var/log/eucalyptus/cloud-output.log.
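For example, to review the most recent entries in that log:
tail -n 100 /var/log/eucalyptus/cloud-output.log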
3. If you want the CLC service to start at each boot-time, run this command:
chkconfig eucalyptus-cloud on
Start Walrus
Prerequisites
You should have installed and configured Eucalyptus before starting the Walrus Backend.
Note: If you are not using Walrus as your object storage backend, or if you installed Walrus on the same host as
the CLC, you can skip this.
2. Log in to the Walrus Backend host machine and enter the following command:
service eucalyptus-cloud start
Start the CC
Prerequisites
You should have installed and configured Eucalyptus before starting the CC.
To start the CC
1. Log in to the Cluster Controller (CC) host machine.
2. If you want the CC service to start at each boot-time, run this command:
chkconfig eucalyptus-cc on
4. If you have a multi-zone setup, repeat this step on the CC in each zone.
Start the SC
Prerequisites
You should have installed and configured Eucalyptus before starting the SC.
Note: If you installed SC on the same host as the CLC, you can skip this.
To start the SC
1. Log in to the Storage Controller (SC) host machine.
2. If you want the SC service to start at each boot-time, run this command:
chkconfig eucalyptus-cloud on
4. If you have a multi-zone setup, repeat this step on the SC in each zone.
Start the NC
Prerequisites
You should have installed and configured Eucalyptus before starting the NC.
To start the NC
1. Log in to the Node Controller (NC) host machine.
2. If you want the NC service to start at each boot-time, run this command:
chkconfig eucalyptus-nc on
• The Type -t of service you are registering. Required. For example: cluster.
• The Host -h of the service being registered. Required. The host must be specified by IP address to function correctly.
Important: IP address is recommended.
• You must specify public IP addresses.
• We recommend that you use IP addresses rather than DNS host names when registering Eucalyptus
services.
• The Zone -z the service belongs to. This is roughly equivalent to the availability zone in AWS.
• The Name SVCINSTANCE you assign to each instance of a service, up to 256 characters. Required. This is the name
used to identify the service in a human-friendly way. This name is also used when reporting system state changes
that require administrator attention.
Note: The SVCINSTANCE name must be globally-unique with respect to other service registrations. To
ensure this uniqueness, we recommend using a combination of the service type (CLC, SC, CC, etc.) and
system IP address (or DNS host name) when you choose your service instance names. For example:
clc-192.168.0.15 or clc-eucahost15.
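The registration command for a UFS instance is not reproduced above; by analogy with the cluster and storage registrations later in this section, it would look something like the following sketch (assuming user-api is the UFS service type and 10.111.5.183 is the UFS host, as in this guide's other examples):
euserv-register-service -t user-api -h 10.111.5.183 ufs-10.111.5.183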
3. Repeat for each UFS host, replacing the UFS IP address and UFS name.
4. Copy the security credentials from the CLC to each machine running User-Facing Services. Run this command on
the CLC host machine:
clcadmin-copy-keys HOST [HOST ...]
For example:
clcadmin-copy-keys 10.111.5.183
5. Verify that the User-Facing service is registered with the following command for each instance of the UFS:
euserv-describe-services SVCINSTANCE
The registered UFS instances are now ready for your cloud.
2. Copy the security credentials from the CLC to each machine running a Walrus Backend service. Run this command
on the CLC host machine:
clcadmin-copy-keys HOST [HOST ...]
For example:
clcadmin-copy-keys 10.111.5.182
3. Verify that the Walrus Backend service is registered with the following command:
euserv-describe-services SVCINSTANCE
The registered Walrus Backend service is now ready for your cloud.
where:
• IP is the IP address of the CC you are registering with this CLC.
• ZONE name should be a descriptive name for the zone controlled by the CC. For example: zone-1.
• SVCINSTANCE must be a unique name for the CC service. We recommend that you use the IP address of the
machine, for example: cc-IP_ADDRESS.
For example:
euserv-register-service -t cluster -h 10.111.5.182 -z zone-1 cc-10.111.5.182
2. Copy the security credentials from the CLC to each machine running Cluster Controller services. Run this command
on the CLC host machine:
clcadmin-copy-keys -z ZONE HOST
For example:
clcadmin-copy-keys -z zone-1 10.111.5.182
3. Repeat the above steps for each Cluster Controller in each zone.
4. Verify that the Cluster Controller service is registered with the following command:
euserv-describe-services SVCINSTANCE
The registered Cluster Controller service is now ready for your cloud.
For example:
euserv-register-service -t storage -h 10.111.5.182 -z zone-1 sc-10.111.5.182
Important: The SC automatically goes to the BROKEN state after being registered with the CLC; it will
remain in that state until you explicitly configure the SC by configuring the backend storage provider (later).
For more information, see About the BROKEN state.
3. Repeat the above steps for each Storage Controller in each zone.
4. Verify that the Storage Controller service is registered with the following command:
euserv-describe-services SVCINSTANCE
The registered Storage Controller service is now ready for your cloud.
Configure DNS
Eucalyptus provides a DNS service that maps service names, bucket names, and more to IP addresses. This section
details how to configure the Eucalyptus DNS service.
Important: Eucalyptus administration tools are designed to work with DNS-enabled clouds, so configuring
this service is highly recommended. The remainder of this guide is written with the assumption that your cloud
is DNS-enabled.
The DNS service will automatically try to bind to port 53. If port 53 cannot be used, DNS will be disabled. Typically,
other system services like dnsmasq are configured to run on port 53. To use the Eucalyptus DNS service, you must
disable these services.
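For example, to see what is currently bound to port 53 and, if dnsmasq turns out to be the conflicting service, to stop and disable it (a sketch for CentOS 6/RHEL 6 hosts):
netstat -lnp | grep ':53 '
service dnsmasq stop
chkconfig dnsmasq off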
2. You can configure the load balancer DNS subdomain. To do so, log in to the CLC and enter the following:
euctl services.loadbalancing.dns_subdomain=lb
Turn on IP Mapping
To enable mapping of instance IPs to DNS host names:
1. Enter the following command on the CLC:
euctl bootstrap.webservices.use_instance_dns=true
When this option is enabled, public and private DNS entries are created for each launched instance in Eucalyptus.
This also enables virtual hosting for Walrus. Buckets created in Walrus can be accessed as hosts. For example, the
bucket mybucket is accessible as mybucket.objectstorage.mycloud.example.com.
Instance IP addresses will be mapped as euca-A-B-C-D.eucalyptus.mycloud.example.com, where
A-B-C-D is the IP address (or addresses) assigned to your instance.
2. If you want to modify the subdomain that is reported as part of the instance DNS name, enter the following command:
euctl cloud.vmstate.instance_subdomain=.custom-dns-subdomain
When this value is modified, the public and private DNS names reported for each instance will contain the specified
custom DNS subdomain name, instead of the default value, which is eucalyptus. For example, if this value is
set to foobar, the instance DNS names will appear as euca-A-B-C-D.foobar.mycloud.example.com.
Note: The code example above correctly begins with "." before custom-dns-subdomain.
zone "example.com" IN {
type master;
file "/etc/bind/db.example.com";
};
2. Create /etc/bind/db.example.com if it does not exist. If your master DNS is already set up for
example.com, you will need to add a name server entry for UFS host machines. For example:
$ORIGIN example.com.
$TTL 604800
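For instance, delegation records for the cloud's DNS subdomain might look like the following (illustrative values; mycloud.example.com matches the subdomain used in this guide's DNS examples and 10.111.5.183 stands in for a UFS host):
mycloud.example.com.      IN  NS  ns1.mycloud.example.com.
ns1.mycloud.example.com.  IN  A   10.111.5.183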
2. Choose a name for the new user and create it along with an access key:
euare-usercreate -wld DOMAIN USER >~/.euca/FILE.ini
where:
• DOMAIN must match the DNS domain chosen in Configure DNS.
• USER is the name of the new admin user.
• FILE can be anything; we recommend a descriptive name that includes the user's name.
This creates a file with a region name that matches that of your cloud's DNS domain; you can edit the file to change
the region name if needed.
eval `euare-releaserole`
export AWS_DEFAULT_REGION=REGION
where:
• REGION must match the region name from the previous step. By default, this is the same as the cloud's DNS
domain chosen in Configure DNS.
As long as this file exists in ~/.euca, you can use it by repeating the export command above. The remainder of
this guide assumes you have completed the above steps. These euca2ools.ini configuration files are a flexible
means of managing cloud regions and users. See the Euca2ools Reference Guide for more information.
Run the following command to upload the configuration file to the CLC (with valid Eucalyptus admin credentials):
euctl cloud.network.network_configuration=@/path/to/your/json_config_file
• Riak Cloud Storage (CS) - an open source scalable general purpose data platform created by Basho Technologies.
It is intended for deployments which have heavy S3 usage requirements where a single-host system like Walrus
would not be able to serve the volume of operations and amount of data required.
• Ceph Rados Gateway (RGW) - an object storage interface built on top of librados to provide applications with a
RESTful gateway to Ceph Storage Clusters. Ceph-RGW uses the Ceph Object Gateway daemon (radosgw), which
is a FastCGI module for interacting with a Ceph Storage Cluster. Since it provides interfaces compatible with
OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. Ceph Object Gateway
can store data in the same Ceph Storage Cluster used to store data from Ceph Filesystem clients or Ceph Block
Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve
it with the other.
You must configure the OSG to use one of the backend provider options.
Note: If OSG has been registered but not yet properly configured, it will be listed in the broken state when
listed with the euserv-describe-services command. For example:
The Walrus backend and OSG are now ready for production.
Use Riak CS
This topic describes how to configure Riak CS as the object storage backend provider for the Object Storage Gateway
(OSG).
Prerequisites
• Successful completion of all the install sections prior to this section.
• The UFS must be registered and enabled.
• You must have a functioning Riak CS cluster.
• You must execute the steps below as a Eucalyptus administrator.
For more information on Riak CS, see the Riak CS documentation.
To configure Riak CS object storage for the OSG
1. Enter riakcs as the storage provider using the euctl command.
euctl objectstorage.providerclient=riakcs
2. Specify the RiakCS/S3 endpoint that you want to use with Eucalyptus. For example:
euctl objectstorage.s3provider.s3endpoint=riakcs-01.riakcs-cluster.myorg.com
4. After successful configuration, check to ensure that the state of the OSG is enabled by running the
euserv-describe-services command. For example:
[root@g-26-03 ~]# euserv-describe-services --show-headers --filter
service-type=objectstorage
SERVICE TYPE ZONE NAME STATE
SERVICE objectstorage user-api-1 user-api-1.objectstorage enabled
If the state appears as disabled or broken, check the cloud-*.log files in the /var/log/eucalyptus
directory. A disabled state generally indicates that there is a problem with your network or credentials. See
Eucalyptus Log Files for more information.
The Riak CS backend and OSG are now ready for production.
Use Ceph-RGW
This topic describes how to configure Ceph Rados Gateway (RGW) as the backend for the Object Storage Gateway
(OSG).
Prerequisites
• Successful completion of all the install sections prior to this section.
• The UFS must be registered and enabled.
• A Ceph storage cluster is available.
• The ceph-radosgw service has been installed (on the UFS or any other host) and configured to use the Ceph storage
cluster. Eucalyptus recommends using civetweb with the ceph-radosgw service. Civetweb is a lightweight web server
included in the ceph-radosgw installation, and it is easier to install and configure than the alternative,
a combination of Apache and FastCGI modules.
• You must execute the steps below as a Eucalyptus administrator.
For more information on Ceph-RGW, see the Ceph-RGW documentation.
To configure Ceph-RGW object storage for the OSG
1. Configure objectstorage.providerclient to ceph-rgw:
euctl objectstorage.providerclient=ceph-rgw
euctl objectstorage.s3provider.s3endpoint=<radosgw-host-ip>:<radosgw-webserver-port>
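For example, assuming radosgw's civetweb front end listens on host 192.168.1.50 and port 7480 (a common civetweb default; substitute your own values):
euctl objectstorage.s3provider.s3endpoint=192.168.1.50:7480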
The Ceph-RGW backend and OSG are now ready for production.
Eucalyptus provides the following open source (free) backend providers for the SC:
• Overlay - using the local file system
• Direct Attached Storage - DAS-JBOD (just a bunch of disks)
• Ceph-RBD - leverages RADOS block device
Eucalyptus also offers the following subscription-based (paid) storage area network (SAN) backend providers for the
SC:
• HP 3PAR - StorageServ storage systems
• NetApp - Clustered Data ONTAP and 7-mode storage systems
• Dell EqualLogic - stacked or unstacked storage arrays
You must configure the SC to use one of the backend provider options.
About the BROKEN State
This topic describes the initial state of the Storage Controller (SC) after you have registered it with the Cloud Controller
(CLC).
The SC automatically goes to the broken state after being registered with the CLC; it will remain in that state until
you explicitly configure the SC by telling it which backend storage provider to use.
You can check the state of a storage controller by running euserv-describe-services --expert and noting
the state and status message of the SC(s). The output for an unconfigured SC looks something like this:
SERVICE storage ZONE1 SC71 BROKEN 37
http://192.168.51.71:8773/services/Storage arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT 6c1f7a0a-21c9-496c-bb79-23ddd5749222
arn:euca:eucalyptus:ZONE1:storage:SC71/
SERVICEEVENT 6c1f7a0a-21c9-496c-bb79-23ddd5749222 ERROR
SERVICEEVENT 6c1f7a0a-21c9-496c-bb79-23ddd5749222 Sun Nov 18 22:11:13 PST 2012
SERVICEEVENT 6c1f7a0a-21c9-496c-bb79-23ddd5749222 SC blockstoragemanager not
configured. Found empty or unset manager(unset). Legal values are:
das,overlay,ceph
Note the error above: SC blockstoragemanager not configured. Found empty or unset
manager(unset). Legal values are: das,overlay,ceph.
This indicates that the SC is not yet configured. It can be configured by setting the
ZONE.storage.blockstoragemanager property to 'das', 'overlay', or 'ceph'.
If you have installed the (paid) Eucalyptus Enterprise packages for your EBS adapter, you will also see additional options
in the output line above, and can set the block storage manager to 'netapp', 'equallogic', or 'threepar' as appropriate.
You can verify that the SC block storage manager is unset using:
euctl ZONE.storage.blockstoragemanager
Use the Overlay Local Filesystem
This topic describes how to configure the local filesystem as the block storage backend provider for the Storage Controller
(SC).
Prerequisites
• Successful completion of all the install sections prior to this section.
• The SC must be installed, registered, and running.
• The local filesystem /var/lib/eucalyptus/volumes must have enough space to hold volumes and snapshots
created in the cloud.
• You must execute the steps below as a Eucalyptus administrator.
In this configuration the SC itself hosts the volume and snapshots for EBS and stores them as files on the local filesystem.
It uses standard Linux iSCSI tools to serve the volumes to instances running on NCs.
To configure overlay block storage for the zone, run the following commands on the CLC
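The first configuration step is analogous to the DAS and Ceph-RBD steps shown later in this section; a sketch, using overlay as the block storage manager (one of the legal values noted above):
euctl ZONE.storage.blockstoragemanager=overlay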
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
Your local filesystem (overlay) backend is now ready to use with Eucalyptus.
Use Direct Attached Storage (JBOD)
This topic describes how to configure the DAS-JBOD as the block storage backend provider for the Storage Controller
(SC).
Prerequisites
• Successful completion of all the install sections prior to this section.
• The SC must be installed, registered, and running.
• Direct Attached Storage requires that /var/lib/eucalyptus/volumes have enough space for locally cached
snapshots.
• You must execute the steps below as a Eucalyptus administrator.
To configure DAS-JBOD block storage for the zone, run the following commands on the CLC
1. Configure the SC to use the Direct Attached Storage for EBS.
euctl ZONE.storage.blockstoragemanager=das
The output of the command should be similar to:
one.storage.blockstoragemanager=das
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
4. Set the DAS device name property. The device name can be either a raw device (/dev/sdX, for example), or the name
of an existing Linux LVM volume group.
euctl ZONE.storage.dasdevice=DEVICE_NAME
For example:
euctl one.storage.dasdevice=/dev/sdb
• Hypervisor support for Ceph-RBD on NCs. Node Controllers (NCs) are designed to communicate with the Ceph
cluster via libvirt. This interaction requires a hypervisor that supports Ceph-RBD. See Configure Hypervisor Support
for Ceph-RBD to satisfy this prerequisite.
To configure Ceph-RBD block storage for the zone, run the following commands on the CLC
1. Configure the SC to use Ceph-RBD for EBS.
euctl ZONE.storage.blockstoragemanager=ceph-rbd
The output of the command should be similar to:
one.storage.blockstoragemanager=ceph-rbd
3. Check the SC to be sure that it has transitioned out of the BROKEN state and is in the NOTREADY, DISABLED or
ENABLED state before configuring the rest of the properties for the SC.
4. The ceph-rbd provider will assume defaults for the following properties for the SC:
euctl ZONE.storage.ceph
PROPERTY one.storage.cephkeyringfile
/etc/ceph/ceph.client.eucalyptus.keyring
DESCRIPTION one.storage.cephkeyringfile Absolute path to Ceph keyring
(ceph.client.eucalyptus.keyring) file. Default value is
'/etc/ceph/ceph.client.eucalyptus.keyring'
5. The following steps are optional if the default values do not work for your cloud:
a) To set the Ceph username (the default value for Eucalyptus is 'eucalyptus'):
euctl ZONE.storage.cephuser=myuser
b) To set the absolute path to keyring file containing the key for the 'eucalyptus' user (the default value is
'/etc/ceph/ceph.client.eucalyptus.keyring'):
euctl ZONE.storage.cephkeyringfile='/etc/ceph/ceph.client.myuser.keyring'
Note: If cephuser was modified, ensure that cephkeyringfile is also updated with the location to the
keyring for the specific cephuser:
Note: If 'rbd' is listed as one of the supported formats, no further action is required; otherwise proceed to
the next step.
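One way to see which disk formats the installed QEMU tools support (a sketch; the exact check in the original guide may differ):
qemu-img --help | grep -i 'supported formats'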
3. If the eucalyptus-nc service is running, terminate/stop all instances. After all instances are terminated, stop the
eucalyptus-nc service.
service eucalyptus-nc stop
5. Install Eucalyptus-built RHEV packages: qemu-kvm-rhev and qemu-img-rhev, which can be found in the
same yum repository as other Eucalyptus packages.
yum install qemu-kvm-rhev qemu-img-rhev
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
4. On the CLC, enable SAN support in Eucalyptus by entering your SAN's hostname or IP address, the username,
password, and the paths:
euctl ZONE.storage.sanhost=3PAR_IP_address
euctl ZONE.storage.sanuser=3PAR_admin_user_name
euctl ZONE.storage.sanpassword=3PAR_admin_password
euctl ZONE.storage.scpaths=3PAR_iSCSI_IP
euctl ZONE.storage.ncpaths=3PAR_iSCSI_IP
If you have multiple management IP addresses for the SAN adapter, provide a comma-delimited list of IP addresses
to the ZONE.storage.sanhost property.
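For example, with two hypothetical management addresses:
euctl one.storage.sanhost=192.168.25.10,192.168.25.11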
6. Assign the 3PAR CPG that should be used for creating virtual volumes to the threeparusercpg property.
euctl ZONE.storage.threeparusercpg=3PAR_user_cpg
7. Assign the 3PAR CPG that should be used for creating virtual volume snapshot space to the threeparcopycpg
property.
euctl ZONE.storage.threeparcopycpg=3PAR_copy_cpg
access to virtual volume. Value must be true to enable multi host access.
Default value is false
PROPERTY one.storage.threeparpersona 2
DESCRIPTION one.storage.threeparpersona Persona (integer value) to be
used when exporting a VLUN to host. Default value is 2 and represents a Linux
initiator
ZONE.storage.blockstoragemanager=netapp
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
Note: CHAP support for NetApp was added in Eucalyptus 3.3. An SC will not transition to ENABLED
state until the CHAP username is configured.
euctl ZONE.storage.sanhost=Filer_IP_address
euctl ZONE.storage.sanuser=Filer_admin_username
euctl ZONE.storage.sanpassword=Filer_admin_password
euctl ZONE.storage.chapuser=Chap_username
7. If no aggregate is set, Eucalyptus will query the NetApp Filer for all available aggregates and use the one that has
the highest capacity (free space) by default. To make Eucalyptus use specific aggregate(s) configure the following
property:
euctl ZONE.storage.aggregate=aggregate_1_name,aggregate_2_name,...
If you want Eucalyptus to use the smallest aggregate first configure the following property:
euctl ZONE.storage.uselargestaggregate=false
8. Set the iSCSI data IP on the ENABLED CLC. This IP is used by NCs to perform disk operations on the Filer.
Note: Filer IP address can be used as the data port IP. If this is not set, Eucalyptus will automatically use
the Filer IP address/hostname.
Note: Eucalyptus does not support Multipath I/O for NetApp 7-mode Filers.
euctl ZONE.storage.ncpaths=IP
9. Set the iSCSI data IP on the ENABLED CLC. This IP is used by the SC to perform disk operations on the Filer. The
SC connects to the Filer in order to transfer snapshots to objectstorage during snapshot operations.
Note: The Filer IP address can be used as the data port IP. If this is not set, Eucalyptus will automatically
use the Filer IP address/hostname.
Note: Eucalyptus does not support Multipath I/O for NetApp 7-mode Filers.
euctl ZONE.storage.scpaths=IP
Your NetApp 7-mode SAN backend is now ready to use with Eucalyptus.
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
Note: CHAP support for NetApp was added in Eucalyptus 3.3. The SC will not transition to ENABLED
state until the CHAP username is configured.
euctl ZONE.storage.sanhost=Vserver_IP_address
euctl ZONE.storage.sanuser=Vserver_admin_username
euctl ZONE.storage.sanpassword=Vserver_admin_password
euctl ZONE.storage.chapuser=Chap_username
Note: The following command may fail if tried immediately after configuring the block storage manager.
Retry the command a few times, pausing for a few seconds after each retry:
euctl ZONE.storage.vservername=Vserver_name
7. If no aggregate is set, Eucalyptus will query the NetApp Vserver for all available aggregates and use the one that
has the highest capacity (free space) by default. To make Eucalyptus use specific aggregate(s) configure the following
property:
euctl ZONE.storage.aggregate=aggregate_1_name,aggregate_2_name,...
If you want Eucalyptus to use the smallest aggregate first configure the following property:
euctl ZONE.storage.uselargestaggregate=false
8. Set an IP address for the iSCSI data LIF on the ENABLED CLC. This is used for NCs performing disk operations
on the Vserver.
euctl ZONE.storage.ncpaths=IP
9. Set an IP address for the iSCSI data LIF on the ENABLED CLC. This is used by the SC for performing disk operations
on the Vserver. The SC connects to the data LIFs on the Vserver in order to transfer snapshots to objectstorage during
snapshot operations.
euctl ZONE.storage.scpaths=IP
Your NetApp Clustered Data ONTAP SAN backend is now ready to use with Eucalyptus.
Use a Dell EqualLogic SAN
This topic describes how to configure the Dell EqualLogic SAN as the block storage backend provider on the Storage
Controller (SC).
This task assumes the following:
• Successful completion of all the install sections prior to this section.
• The SC must be installed, registered, and running.
• You must have a paid subscription to Eucalyptus in order to use a SAN backend.
• You must have a functioning EqualLogic device available to Eucalyptus cloud.
• You must execute the steps below as a Eucalyptus administrator.
To configure Dell EqualLogic block storage for the zone, run the following commands on the CLC
1. Configure the SC to use EqualLogic for EBS.
euctl ZONE.storage.blockstoragemanager=equallogic
The output of the command should be similar to:
one.storage.blockstoragemanager=equallogic
3. Verify that the SC is listed; note that it may be in the broken state:
euserv-describe-services --filter service-type=storage
4. Enable SAN support in Eucalyptus by entering your SAN's hostname or IP address, the username, password, and
the name of the chap user:
euctl ZONE.storage.sanhost=SAN_IP_address
euctl ZONE.storage.sanuser=SAN_admin_user_name
euctl ZONE.storage.sanpassword=SAN_admin_password
euctl ZONE.storage.chapuser=chap_username
5. (Optional) If your EqualLogic setup has dedicated paths for data access that are different from the management IP
address supplied for the ZONE.storage.sanhost property, the following properties must also be configured
in Eucalyptus:
euctl ZONE.storage.scpaths=data-IP-address ZONE.storage.ncpaths=data-IP-address
The SC and NC data IP address might be the same; or they can be different, if EqualLogic has multiple data interfaces.
Your Dell EqualLogic SAN backend is now ready to use with Eucalyptus.
2. If the status of the conversion operation is 'Image conversion failed', but the image is marked as 'available' (in the
output of euca-describe-images), the conversion can be retried by running the EMI again:
euca-run-instances ...
Run the following commands on the machine where you installed the eucalyptus-service-image RPM
package (it will set the imaging.imaging_worker_emi property to the newly created EMI of the imaging
worker):
esi-install-image --install-default
2. You can also check the enabled Load Balancer EMI with:
euctl services.loadbalancing.worker.image
3. If you need to manually set the enabled Load Balancer EMI use:
euctl services.loadbalancing.worker.image=emi-12345678
In Managed mode, each security group network is assigned an additional parameter that is used as the VLAN tag. This
parameter is added to all virtual machine traffic running within the security group. By default, Eucalyptus uses VLAN
tags starting at 2, going to a maximum of 4094. The maximum is dependent on how many security group networks of
the size specified in VNET_ADDRSPERNET fit in the network defined by VNET_SUBNET and VNET_NETMASK.
If your networking environment is already using VLANs for other reasons, Eucalyptus supports the definition of a
smaller range of VLANs that are available to Eucalyptus. To configure Eucalyptus to use VLANs within a specified
range:
1. Choose your range (a contiguous range of VLANs between 2 and 4095).
2. Configure your cluster controllers with a VNET_SUBNET/VNET_NETMASK/VNET_ADDRSPERNET that is
large enough to encapsulate your desired range. For example, for a VLAN range of 1024-2048, you could set
VNET_NETMASK to 255.254.0.0 to get a large enough network (131072 addresses), and VNET_ADDRSPERNET
to 64, to give 2048 possible security groups.
Tip: The number of instances per security group can be calculated as follows:
subnets (SGs) = no. hosts / addrspernet
instances per subnet (SG) = addrspernet - 10
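Applying these formulas to the example above: a 255.254.0.0 netmask yields 131072 addresses, 131072 / 64 = 2048 subnets (security groups), and 64 - 10 = 54 usable instance addresses per security group.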
3. Configure your cloud controller to work within that range. Use the following commands to verify that the range is
now set to be 2-2048, a superset of the desired range.
euctl cluster.maxnetworktag
euctl cluster.minnetworktag
4. Constrict the range to be within the range that the CC can support as follows:
euctl cloud.network.global_max_network_tag=max_vlan_tag
euctl cloud.network.global_min_network_tag=min_vlan_tag
This ensures that Eucalyptus will only use tags between 1024 and 2048, giving you a total of 1024 security groups,
one VLAN per security group.
Tip: If VMs are already running in the system using a VLAN tag that is outside the range specified by
global_min_network_tag-global_max_network_tag, that network will continue to run until all VMs within the
network are terminated and the system removes reference to that network. Best practice is to configure these
values in advance of running virtual machines.
2. Shut down all Eucalyptus services. For more information, see Shutdown Services.
service eucalyptus-cloud stop
3. Edit all the config files on NC and CC for Edge networking mode. For more information, see Configure for Edge
Mode.
4. Install eucanetd on all NCs.
yum install eucanetd
6. Start all Eucalyptus services: CLC, CC, WS, SC, NCs. For more information, see Start Eucalyptus.
7. Set the Edge JSON property. For more information, see Create the JSON File.
Your Edge networking mode is now properly configured.
2. Retrieve cluster properties from your current installation using the euctl command. For example:
euctl ZONE.cluster.maxnetworktag=639
euctl ZONE.cluster.minnetworktag=512
3. Create the JSON configuration. For this example, save the file as network.json. Examples for both MANAGED
and MANAGED-NOVLAN are shown below.
a) The following shows an example JSON configuration file for MANAGED mode:
{
"InstanceDnsServers": [
"10.1.1.254"
],
"Clusters": [
{
"MacPrefix": "d0:0d",
"Name": “<clustername>"
}
],
"PublicIps": [
"10.111.101.31",
"10.111.101.40",
"10.111.101.42",
"10.111.101.43",
"10.111.101.132",
"10.111.101.133",
"10.111.101.134",
"10.111.101.135"
],
"Mode": "MANAGED",
"ManagedSubnet": {
"Subnet": "172.16.0.0",
"Netmask": "255.255.0.0",
"MinVlan": "512",
"MaxVlan": "639"
}
}
b) The following shows an example JSON configuration file for MANAGED-NOVLAN mode:
{
"Clusters": [
{
"MacPrefix": "d0:0d",
"Name": "one"
}
],
"InstanceDnsServers": [
"10.111.1.56"
],
"ManagedSubnet": {
"Netmask": "255.255.0.0",
"Subnet": "172.16.0.0"
},
"Mode": "MANAGED-NOVLAN",
"PublicIps": [
"10.111.31.177",
"10.111.31.178",
"10.111.31.179",
"10.111.31.180",
"10.111.31.181",
"10.111.31.182",
"10.111.31.183",
"10.111.31.184"
]
}
4. Stop all cloud components using the service component_name stop command. For example:
service eucalyptus-cc stop
service eucalyptus-cloud stop
service eucalyptus-nc stop
5. On the machine for each Eucalyptus service, upgrade Eucalyptus. For example:
yum upgrade 'euca*'
6. Start the Eucalyptus services on each of the Eucalyptus host machines. For example:
service eucalyptus-cloud start
7. When the CLC completes database upgrade and becomes enabled, set the 'cloud.network.network_configuration'
property to point to the JSON file that was created. For example:
euctl cloud.network.network_configuration=@network.json
8. Upgrade the CC and SC machines. For example:
yum upgrade 'euca*'
9. On the SC machine, start the SC services:
service eucalyptus-cloud start
10. On the CC machine, start the CC services:
service eucalyptus-cloud start
11. On the CCs, start EUCANETD.
service eucanetd start
12. Upgrade each NC.
yum upgrade 'euca*'
13. Start the NC services on each NC:
service eucalyptus-nc start
14. Start the EUCANETD service on each NC:
service eucanetd start
You have now upgraded your managed network mode for Eucalyptus 4.2.
Eucalyptus Upgrade
This section details the tasks to upgrade your current version of Eucalyptus.
You can upgrade to Eucalyptus 4.2.2 from 4.1.2 or 4.2.1. If your current version is earlier than 4.2.1, see the prescribed
paths below. Follow the directions in that version's Installation Guide in the documentation archive, and then upgrade
to 4.2.2 using the directions in this section.
Warm upgrade
Eucalyptus supports warm upgrade as of the 3.4.2 release. This means you do not need to shut down EBS-backed or
instance-store-backed instances in order to upgrade. Auto Scaling instances will likely shut down and be replaced, based
on each group's scaling policy and health check criteria.
Note: Upgrading the underlying OS (RHEL or CentOS) requires a reboot, so a warm upgrade is not available in any
release when you also upgrade your OS.
• Federated Eucalyptus clouds began with 4.2.0; you can upgrade a 4.2.x cloud to a federated setup. If you
have a 4.1.x or earlier cloud, it cannot have any non-Eucalyptus services accounts created, nor can it be an
LDAP integrated cloud. For more information, see Manage Regions in the Administration Guide.
Tip: You can preview the install and its dependencies by running the following commands. Be sure to respond
with 'N' so you do not start the install before you are ready.
2. (Optional) Test the new Euca2ools release package on each host machine that runs Euca2ools or a Eucalyptus service:
yum install
http://downloads.eucalyptus.com/software/euca2ools/3.3/rhel/6/x86_64/euca2ools-release-3.3-1.el6.noarch.rpm
Review the dependencies and install package information.
Enter N when prompted so you do NOT install the package.
3. (Optional) If you have a Eucalyptus subscription, test the new subscription release package on each host machine
that runs a Eucalyptus service:
yum install
http://downloads.eucalyptus.com/software/subscription/eucalyptus-enterprise-release-4.2-1.el6.noarch.rpm
Review the dependencies and install package information.
Enter N when prompted so you do NOT install the package.
Shutdown Services
This topic describes how to stop all Eucalyptus services.
Prerequisites
See Prepare for Upgrade for the complete list of upgrade prerequisites.
The steps you take depend upon where Eucalyptus services are hosted.
To shut down Eucalyptus services
1. Log in to the CLC host machine and shut down the CLC service:
service eucalyptus-cloud stop
2. (Optional) If you have a separate SC host machine, log in and shut down the SC service:
service eucalyptus-cloud stop
3. (Optional) If you have a separate Walrus host machine, log in and shut down the Walrus service:
service eucalyptus-cloud stop
4. (Optional) If you have a separate UFS host machine, log in and shut down the UFS services:
service eucalyptus-cloud stop
5. (Optional) If there are any other Eucalyptus services (for example Walrus, SC, UFS) co-located on the CC host
machine, use this command to shut down the other services on the CC host, and in the correct order:
service eucalyptus-cloud stop
9. Log in to each Management Console host machine and shut down the console service:
service eucaconsole stop
For more information, see Upgrade the Management Console.
2. Enter the following command on each host machine that runs a Eucalyptus service or uses Euca2ools:
yum clean all
3. Enter the following command on each host machine that runs a Eucalyptus service or uses Euca2ools:
yum update euca2ools
Enter Y when prompted to upgrade Euca2ools.
This retrieves the package verification keys; for more information, see Software Signing.
4. Repeat these steps for each host machine that runs a Eucalyptus service.
2. If you are not a Eucalyptus subscriber, skip this step. Install the Eucalyptus subscription package on each host that
will run a Eucalyptus service:
yum install
http://downloads.eucalyptus.com/software/subscription/eucalyptus-enterprise-release-4.2-1.el6.noarch.rpm
Review the dependencies and install package information.
Enter y when prompted to install these packages.
3. Enter the following command on each host machine that runs a Eucalyptus service:
yum clean all
4. Enter the following command on each host machine that runs a Eucalyptus service:
yum update 'eucalyptus*'
Enter Y when prompted to upgrade Eucalyptus.
This retrieves the package verification keys; for more information, see Software Signing.
If you have previously customized your configuration files, yum returns a warning, and installs the new configuration
files with a different name. This preserves your customizations. Before you continue, customize and rename the new
configuration files.
Tip: For larger deployments, use a script to upgrade the host machines. For example:
for host in 28 29 32 33 35 39 40; do echo 192.168.51.$host;
ssh 192.168.51.$host 'yum -y update $( rpm -qa | grep euca )' ; done
5. Perform the steps in Upgrade the Management Console then return to this section.
6. Enter the following command on each NC:
yum install qemu-kvm-rhev
Prerequisites
You should have successfully completed Upgrade Eucalyptus Package Repositories before you begin this process.
You need to restart all Eucalyptus services after upgrade. The steps you take depend upon where Eucalyptus services
are hosted.
To restart Eucalyptus services
1. Log in to the CLC host machine and restart the services:
service eucalyptus-cloud start
If you are upgrading from 4.1.2 you will see that the process starts the database upgrade. Eucalyptus returns output
similar to the following example.
Starting Eucalyptus services: Attempting database upgrade from 4.1.2
at /var/lib/eucalyptus/upgrade/eucalyptus.backup.1446434585...
# UPGRADE INFORMATION
#================================================================================
# Old Version: 4.1.2
# New Version: 4.2.0
# Upgrade keys: false using:
# Start upgrading: db
2. (Optional) If you have a separate SC host machine, log in and restart the services:
service eucalyptus-cloud start
3. (Optional) If you have a separate Walrus host machine, log in and restart the services:
service eucalyptus-cloud start
4. (Optional) If you have a separate UFS host machine, log in and restart the services:
service eucalyptus-cloud start
5. (Optional) If there are any other Eucalyptus services (for example Walrus, SC, UFS) co-located on the CC host
machine, use this command to restart the other services on the CC host, and in the correct order:
service eucalyptus-cloud start
7. If you have a multi-cluster setup, repeat the previous step for each cluster.
8. Log in to each NC server and restart the service:
service eucalyptus-nc start
9. Log in to each Management Console host machine and restart the service:
service eucaconsole start
For more information, see Upgrade the Management Console.
4. Make sure that NCs are presenting available resources to the CC.
euca-describe-availability-zones verbose
The returned output should show a non-zero number in the free and max columns, as in the following example.
AVAILABILITYZONE test00 192.168.51.29
arn:euca:eucalyptus:test00:cluster:test00_cc/
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0004 / 0004 1 128 2
AVAILABILITYZONE |- c1.medium 0004 / 0004 1 256 5
AVAILABILITYZONE |- m1.large 0002 / 0002 2 512 10
AVAILABILITYZONE |- m1.xlarge 0002 / 0002 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0001 / 0001 4 2048 20
AVAILABILITYZONE test01 192.168.51.35
arn:euca:eucalyptus:test01:cluster:test01_cc/
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0004 / 0004 1 128 2
AVAILABILITYZONE |- c1.medium 0004 / 0004 1 256 5
AVAILABILITYZONE |- m1.large 0002 / 0002 2 512 10
Run the following commands to clean up the old imaging worker instance:
# euscale-describe-auto-scaling-groups
AUTO-SCALING-GROUP asg-euca-internal-imaging-worker-01
lc-euca-internal-imaging-worker-01 one 1 1 1 Default
INSTANCE i-ce92fd76 one InService Healthy lc-euca-internal-imaging-worker-01
TAG auto-scaling-group asg-euca-internal-imaging-worker-01 Name
euca-internal-imaging-workers true
# euscale-update-auto-scaling-group asg-euca-internal-imaging-worker-01
--launch-configuration lc-euca-internal-imaging-worker-01 --max-size 1
--min-size 0 --desired-capacity 0
3. Once the imaging worker instance is terminated, delete the related autoscaling group and launch config:
# euscale-delete-auto-scaling-group asg-euca-internal-imaging-worker-01
# euscale-delete-launch-config lc-euca-internal-imaging-worker-01
Downgrade Eucalyptus
You must Shutdown Services before downgrading Eucalyptus.
1. Downgrade to the Eucalyptus 4.2.1 release package on each host machine:
yum downgrade
http://downloads.eucalyptus.com/software/eucalyptus/4.2/centos/6/x86_64/eucalyptus-release-4.2-1.el6.noarch.rpm
Enter y when prompted, to downgrade the release package.
2. If you have a Eucalyptus subscription, downgrade your subscription release package on each host machine to the
release package you used for Eucalyptus 4.2.1:
yum downgrade
http://downloads.eucalyptus.com/software/subscription/eucalyptus-enterprise-release-4.2-1.el6.noarch.rpm
Enter y when prompted, to downgrade the subscription release package.
3. Expire the cache for the yum repositories on each host machine:
yum clean expire-cache
Important:
Use the yum shell command for the following instructions. This will allow you to perform more complex
transactions that are required for the downgrade.
5. Log in to each machine running a Eucalyptus service and run the following command:
yum shell
6. Add the transaction commands listed below for each service installed on the machine host. If more than one service
requires the same transactional command, you only need to specify that command once per machine host.
Transaction commands for a combined machine host with CLC, Walrus, CC, and SC:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-axis2c-common
downgrade eucalyptus-blockdev-utils
downgrade eucalyptus-cc
downgrade eucalyptus-cloud
downgrade eucalyptus-common-java
downgrade eucalyptus-common-java-libs
downgrade eucalyptus-sc
downgrade eucalyptus-service-image
downgrade eucalyptus-walrus
downgrade eucanetd
CLC transaction commands:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-axis2c-common
downgrade eucalyptus-blockdev-utils
downgrade eucalyptus-cloud
downgrade eucalyptus-common-java
downgrade eucalyptus-common-java-libs
downgrade eucalyptus-service-image
downgrade eucanetd
UFS transaction commands:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-cloud
downgrade eucalyptus-common-java
downgrade eucalyptus-common-java-libs
downgrade eucanetd
CC transaction commands:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-cc
SC transaction commands:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-common-java
downgrade eucalyptus-common-java-libs
downgrade eucalyptus-sc
Walrus Backend transaction commands:
downgrade eucalyptus
downgrade eucalyptus-admin-tools
downgrade eucalyptus-common-java
downgrade eucalyptus-common-java-libs
downgrade eucalyptus-walrus
SAN EqualLogic transaction commands:
downgrade eucalyptus-enterprise-storage-san-equallogic
downgrade eucalyptus-enterprise-storage-san-equallogic-libs
SAN NetApp transaction commands:
downgrade eucalyptus-enterprise-storage-san-netapp
downgrade eucalyptus-enterprise-storage-san-netapp-libs
7. When you have entered all the appropriate yum transaction commands, run the following command to verify that
the transaction will be successful:
ts solve
8. Perform the downgrade by running the following command in the yum transaction shell:
run
10. Remove the /etc/eucalyptus/.upgrade file from each Eucalyptus host machine:
rm /etc/eucalyptus/.upgrade
11. Clear out the /var/run/eucalyptus/classcache/ directory on all Eucalyptus host machines:
rm -rf /var/run/eucalyptus/classcache/
This deletes 4.2 class file artifacts; they will be regenerated as needed for your downgraded cloud.
Downgrade Euca2ools
If Euca2ools is not the source of upgrade failure, you are not required to downgrade Euca2ools.
1. Downgrade to the Euca2ools 3.3.0 release package on each host machine:
yum downgrade
http://downloads.eucalyptus.com/software/euca2ools/3.3/centos/6/x86_64/euca2ools-release-3.3-1.el6.noarch.rpm
Enter y when prompted, to downgrade the release package.
2. Expire the cache for the yum repositories on each host machine:
yum clean expire-cache
Read More
Eucalyptus has the following guides to help you with more information:
• The Administration Guide details ways to manage your Eucalyptus deployment. Refer to this guide to learn more
about managing your Eucalyptus services, like the Cloud Controller; and resources, like instances and images.
• The Identity and Access Management (IAM) Guide provides information to help you securely control access to
services and resources for your Eucalyptus cloud users. Refer to this guide to learn more about managing identities,
authentication and access control best practices, and specifically managing your users and groups.
• The User Guide details ways to use Eucalyptus for your computing and storage needs. Refer to this guide to learn
more about getting and using euca2ools, creating images, running instances, and using dynamic block storage devices.
• The Image Management Guide describes how to create and manage images for your cloud.
• The Management Console Guide describes how to create and manage cloud resources using the Eucalyptus
Management Console.
• The Euca2ools Reference Guide describes the Euca2ools commands. Refer to this guide for more information about
required and optional parameters for each command. Also includes euca2ools.ini information.
Get Involved
The following resources can help you to learn more, connect with other Eucalyptus users, or get actively involved with
Eucalyptus development.
• The Eucalyptus IRC channel is #eucalyptus on Freenode. This channel is used for real-time communication among
users and developers. Information on how to use the network is available from Freenode.
• Subscribe to one or more of the Eucalyptus mailing lists, which provide ways to ask questions and get assistance
from the community.
• Search for technical articles in the Knowledge Base to find answers to your questions and learn about best practices.
• Check out the Eucalyptus Support pages for more ideas.
2. Download Euca2ools:
wget -r --no-parent \
http://downloads.eucalyptus.com/software/euca2ools/3.3/centos/6/x86_64/ \
-P /tmp/euca2ools
3. In step 1 of the existing installation instructions, modify the baseurl to point to your Eucalyptus local repository:
baseurl=file:///tmp/eucalyptus/downloads.eucalyptus.com/software/eucalyptus/4.2/centos/6/x86_64
4. In step 2 of the existing installation instructions, modify the baseurl to point to your local Euca2ools repository:
baseurl=file:///tmp/euca2ools/downloads.eucalyptus.com/software/euca2ools/3.3/centos/6/x86_64
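For reference, a local-mirror repository definition in /etc/yum.repos.d/ might then look like the following sketch (the section name and gpgcheck setting are illustrative; the baseurl matches the local Eucalyptus path above):
[eucalyptus-local]
name=Eucalyptus 4.2 (local mirror)
baseurl=file:///tmp/eucalyptus/downloads.eucalyptus.com/software/eucalyptus/4.2/centos/6/x86_64
enabled=1
gpgcheck=1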
3. Install Euca2ools:
yum install euca2ools
Document History
Section | Change | Date
Storage and Install | Updates and corrections for the release of Eucalyptus 4.2.2. | April 26, 2016
Dependencies, Imaging, Network, Storage | Updates and corrections. | March 4, 2016
Create the Eucalyptus Cloud Administrator User | New section. | February 29, 2016
Credentials, DNS, Starting, Install Repos, NTP | Updates and corrections. | February 29, 2016
Registering, Planning, Config Dependencies, VPC | Updates and corrections. | January 31, 2016
VPC, Overview, Introduction, Planning, Architecture | Updates and corrections. | December 31, 2015
Upgrade | Updates and corrections. | December 7, 2015
Downgrade | Updates and corrections. | November 6, 2015
Networking | Added Midokura Midonet for VPC support. | October 22, 2015
Storage Controller (SC) | Added HP 3PAR SAN backend. | October 22, 2015
Storage Controller (SC) | Changed Ceph-RBD backend from tech preview to full support. | October 22, 2015
Storage Controller (SC) | Deprecated EMC and multipathing. | October 22, 2015
Storage Controller (SC) | Reorganized the section. | October 22, 2015
High Availability (HA) | Deprecated high availability. | October 22, 2015
Global | Replaced deprecated commands. | October 22, 2015