
https://computing.seas.harvard.edu/display/CLOUD/System+Configuration+for+Eucalyptus

System Configuration for Eucalyptus


Configuring Eucalyptus
The general documentation for configuring Eucalyptus 1.6 can be found on the Eucalyptus web site under the Administration Guide. This page is based on the information there, with notes added and irrelevant parts removed.

Initial Setup: Headnode

Much of the below can be captured on Rocks for node installs by an appropriate extend-vm-container.xml site profile, which is listed below.

The Eucalyptus documentation suggests that you can configure Xen to run as non-root. The comment in the config file which says this doesn't actually work turns out to be completely correct.

At this point, you probably want to add /opt/eucalyptus/usr/sbin/ to your path:

PATH=/opt/eucalyptus/usr/sbin/:$PATH

or add a profile.d script to that effect: /etc/profile.d/eucalyptus.sh

Also, I like to locate all the config files in /etc so that we can capture them with git or another such tool (unnecessary now for v1.6.x):

root# ln -s /opt/eucalyptus/etc/eucalyptus /etc/

Next, test that things are installed; as root, execute:

su eucalyptus -c "virsh list"

You should see something like:

libvir: Xen Daemon error : internal error failed to connect to xend
libvir: Xen Daemon error : internal error failed to connect to xend
 Id Name                 State
----------------------------------
  0 Domain-0             running

which indicates that it can talk to Xen, even though xend is off. Now, start the cloud controller:

sudo service eucalyptus-cloud start
sudo service eucalyptus-cc start
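The cloud controller can take a minute or two to come up. A quick check that it is answering (a sketch; 8443 is the default Eucalyptus web UI port and may differ on your install):

root# curl -k https://localhost:8443/ > /dev/null 2>&1 && echo "cloud controller is up"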

It seems that the eucalyptus-cloud service script can't handle Java versions with names like "1.6.0_14" and complains about the Java version not being high enough. In this case, comment out the "check_java" line in "do_start".
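A one-liner to do that (a sketch; it assumes the service script lives at /etc/init.d/eucalyptus-cloud and that check_java is called on a line of its own):

root# sed -i 's/^\([[:space:]]*check_java\)/#\1/' /etc/init.d/eucalyptus-cloud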

Register Cluster: v1.5.x


Next, register the cluster controller with the cloud controller. Since these are both running on the frontend, this should do it:

sudo euca_conf -addcluster $(hostname -s) localhost /etc/eucalyptus/eucalyptus.conf

Register Nodes: v1.5.x


Then, still from the front end, register the nodes. This both adds them to the config file and pushes out cryptographic keys. The following code snippet should register every installed VM Container:

for nodehost in $( rocks list host | awk '/VM Container/ { sub(/:+$/,"",$1); print $1 }' ); do
    euca_conf -addnode $nodehost /etc/eucalyptus/eucalyptus.conf
done

Register Cluster: v1.6.x


Refer to this page for the first-time setup on v1.6:

$EUCALYPTUS/usr/sbin/euca_conf --register-walrus <front end IP address>
$EUCALYPTUS/usr/sbin/euca_conf --register-cluster <clustername> <front end IP address>
$EUCALYPTUS/usr/sbin/euca_conf --register-sc <clustername> <front end IP address>
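For example, with the frontend at 10.1.1.1 and a cluster named cluster01 (both values here are placeholders for your own site):

$EUCALYPTUS/usr/sbin/euca_conf --register-walrus 10.1.1.1
$EUCALYPTUS/usr/sbin/euca_conf --register-cluster cluster01 10.1.1.1
$EUCALYPTUS/usr/sbin/euca_conf --register-sc cluster01 10.1.1.1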

Register Nodes: v1.6.x


Then, still from the front end, register the nodes. This both adds them to the config file and pushes out cryptographic keys. The following code snippet should register every installed VM Container:

root# for nodehost in $( rocks list host | awk '/VM Container/ { sub(/:+$/,"",$1); print $1 }' ); do
>   euca_conf --register-nodes $nodehost
> done

This will need to be run again (or at least the rsync of keys part) every time a node is rebuilt.
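Since this recurs on every rebuild, it may be worth keeping the loop in a small helper script on the frontend; a minimal sketch (the script path is arbitrary):

root# cat > /root/register-vm-containers.sh <<'EOF'
#!/bin/bash
# Re-register every Rocks VM container with the cloud controller,
# pushing out fresh cryptographic keys (needed after node rebuilds).
for nodehost in $( rocks list host | awk '/VM Container/ { sub(/:+$/,"",$1); print $1 }' ); do
    euca_conf --register-nodes $nodehost
done
EOF
root# chmod +x /root/register-vm-containers.sh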

Put the Space-Eating Things Where There's Space


Walrus Storage
There does not appear to be a configuration option for where running instances are stored, nor for the location of the "bukkits" used by Walrus, the S3 clone. Therefore, /opt/eucalyptus/var/lib/eucalyptus (for v1.5.x) or /var/lib/eucalyptus (for v1.6.x) should be moved to somewhere with gigabytes of actual space (perhaps provided by GPFS) and a symlink put into place. (This is tested and works.) To move it to the eucalyptus user's home directory on the headnode's local storage:

root# mv /var/lib/eucalyptus /home/eucalyptus/data
root# ln -s /home/eucalyptus/data /var/lib/eucalyptus
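To confirm the symlink is in place and that the target volume actually has room (paths as above):

root# ls -ld /var/lib/eucalyptus
root# df -h /home/eucalyptus/data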

Instance Storage
On the compute nodes, make sure to set the INSTANCE_PATH to local storage on a volume that has adequate space. On Rocks this should be /state/partition1/eucalyptus or similar. To do so, stop the eucalyptus-nc service and execute:

root# mkdir -p /state/partition1/eucalyptus/instances
root# chown -R eucalyptus /state/partition1/eucalyptus
root# chgrp -R eucalyptus /state/partition1/eucalyptus
root# /usr/sbin/euca_conf --instances /state/partition1/eucalyptus/instances
root# /usr/sbin/euca_conf -setup
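To verify the setting took, check the node's config file (the path shown is the v1.6.x default; v1.5.x keeps it under /opt/eucalyptus, or in /etc if you added the symlink above):

root# grep INSTANCE_PATH /etc/eucalyptus/eucalyptus.conf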

Configure System Networking on Nodes

Need to figure out how this will interact with Rocks. Also, currently, enabling networking brings down the network when the nc service starts! Curses!

It seems that there is a misconfiguration or bug in the standard install such that when the eucalyptus-nc service is started, network connectivity is broken (or, more likely, completely filtered). This occurs in the stanza in /etc/init.d/eucalyptus-nc that enables bridge netfiltering on the VM container node:

if [ -w /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    VAL=`cat /proc/sys/net/bridge/bridge-nf-call-iptables`
    if [ "$VAL" = "0" ]; then
        echo
        echo "Enabling bridge netfiltering for eucalyptus."
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
    fi
fi

An initial workaround is to modify this script to disable that step before starting the service; a perl one-liner to do so is as follows:

root# perl -i -p -e \
    's#\[ -w /proc/sys/net/bridge/bridge-nf-call-iptables \]#[ 0 = 1 ]#g' \
    /etc/init.d/eucalyptus-nc
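After restarting the eucalyptus-nc service, it is worth verifying that bridge netfiltering really stayed off:

root# cat /proc/sys/net/bridge/bridge-nf-call-iptables

which should print 0.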

Fix disk driver on nodes


The file /opt/eucalyptus/usr/share/eucalyptus/gen_libvirt_xml on nodes needs to be modified with the proper disk driver for CentOS 5.x Xen. Specifically, it needs:

<driver name='tap' type='aio'/>

added to each disk, like so:

--- gen_libvirt_xml.orig	2009-06-12 13:04:17.970459000 -0400
+++ gen_libvirt_xml	2009-06-12 13:11:14.826516000 -0400
@@ -41,6 +41,7 @@
     <vcpu>VCPUS</vcpu>
     <devices>
         <disk type='file'>
+            <driver name='tap' type='aio'/>
             <source file='BASEPATH/root'/>
             <target dev='sda1'/>
         </disk>
@@ -49,6 +50,7 @@
 if ( $use_ephemeral ) {
     print <<EOF;
         <disk type='file'>
+            <driver name='tap' type='aio'/>
             <source file='BASEPATH/ephemeral'/>
             <target dev='sda2'/>
         </disk>
@@ -57,6 +59,7 @@
     print <<EOF;
         <disk type='file'>
+            <driver name='tap' type='aio'/>
             <source file='SWAPPATH/swap'/>
             <target dev='sda3'/>
         </disk>
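If you prefer to script the edit rather than apply the diff by hand, a perl one-liner equivalent to the change above (the same approach is used in the site profile below; note it inserts a driver line before every <source file=...> element, and uses double quotes in the attributes, which libvirt accepts equally):

root# perl -i -p -e \
    's#<source file#<driver name="tap" type="aio"/>\n<source file#g' \
    /opt/eucalyptus/usr/share/eucalyptus/gen_libvirt_xml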

Networking
In general we want to use the most capable networking settings available, but within the limitations of the switch. Some bladecenter switches, for example, don't support full tagging from the nodes (i.e. they filter), so here we choose the MANAGED-NOVLAN option. This doesn't allow network separation, but works more generally. The networking settings in eucalyptus.conf are:

VNET_INTERFACE="eth0"
VNET_PUBINTERFACE="eth1"
VNET_PRIVINTERFACE="eth0"
VNET_BRIDGE="xenbr.eth0"

# DHCP
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
VNET_DHCPUSER="root"

VNET_MODE="MANAGED-NOVLAN"
VNET_SUBNET="10.12.0.0"
VNET_NETMASK="255.255.0.0"
VNET_DNS="10.11.12.1"
VNET_ADDRSPERNET="256"
VNET_PUBLICIPS=140.247.54.171
#VNET_LOCALIP=140.247.54.170
#VNET_CLOUDIP=140.247.54.170
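A quick sanity check that the bridge named above actually exists on the machine:

root# brctl show

and verify that xenbr.eth0 appears in the list of bridges.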

(Re)Start Remaining Services


Once the configuration and credentials are all in place, start (or restart) the services on the head node and nodes:

sudo service eucalyptus-cloud start
sudo service eucalyptus-cc start
sudo pdsh -a service eucalyptus-nc start

After starting the services, the check that everything is registered can be done by running the command

bash$ euca-describe-availability-zones verbose

which should report a number of available instances of each of the node types, e.g.:

parrott@cloud32 ~> euca-describe-availability-zones verbose
AVAILABILITYZONE   cloud32        140.247.54.170
AVAILABILITYZONE   |- vm types    free / max   cpu   ram  disk
AVAILABILITYZONE   |- m1.small    0010 / 0010   1    128     2
AVAILABILITYZONE   |- c1.medium   0010 / 0010   1    256     5
AVAILABILITYZONE   |- m1.large    0005 / 0005   2    512    10
AVAILABILITYZONE   |- m1.xlarge   0005 / 0005   2   1024    20
AVAILABILITYZONE   |- c1.xlarge   0000 / 0000   4   2048    20

To get this to work, you'll need to configure your EC2 credentials (which is here: System Access to Eucalyptus).
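A typical credential setup looks something like the following (a sketch; euca2-admin-x509.zip is the usual name of the archive downloaded from the cloud web interface, and the paths are illustrative):

bash$ mkdir -p ~/.euca
bash$ cd ~/.euca && unzip euca2-admin-x509.zip
bash$ source ~/.euca/eucarc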

Node Builds
Node rebuilds will break the nodes as useful VM containers, since the credentials and settings are erased on a rebuild. To keep the currently running nodes usable, disable the Rocks reinstall-on-reboot behavior:

root# pdsh -a /sbin/service rocks-grub stop
root# pdsh -a /sbin/chkconfig rocks-grub off

To capture the above configurations in the node builds, add the following post-install steps, as per the "Rocks" way of doing things, to the site-profile script for VM containers in the file /export/rocks/install/site-profiles/5.1/nodes/extend-vm-container-client.xml as follows:

<post>

<file name="/etc/yum.repos.d/epel.repo" perms="0444">
[epel]
name=Extra Packages for Enterprise Linux 5 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&amp;arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
</file>

<file name="/etc/yum.repos.d/ircs.repo" perms="0444">
[ircs]
name=IRCS for CentOS $releasever
baseurl=file:///group/grid/linux/ircs/$releasever/i386/
gpgcheck=0
</file>

<!-- Add loopback devices -->
<file name="/etc/udev/makedev.d/50-udev.nodes" mode="append" perms="0444">
loop8
loop9
loop10
loop11
loop12
loop13
loop14
loop15
loop16
loop17
loop18
loop19
loop20
loop21
loop22
loop23
loop24
loop25
loop26
loop27
loop28
loop29
loop30
loop31
</file>

<file name="/etc/profile.d/eucalyptus.sh">
PATH=/opt/eucalyptus/usr/sbin/:$PATH
export PATH
</file>

<!-- Fix name of bridge device and set the hypervisor -->
/usr/sbin/euca_conf --bridge xenbr.eth0
/usr/sbin/euca_conf --hypervisor xen

<!-- Set up local instance storage -->
mkdir -p /state/partition1/eucalyptus/instances
chown -R eucalyptus /state/partition1/eucalyptus
chgrp -R eucalyptus /state/partition1/eucalyptus
/usr/sbin/euca_conf --instances /state/partition1/eucalyptus/instances
/usr/sbin/euca_conf -setup

<!-- Don't turn on net filtering on bridge .... this breaks something -->
perl -i -p -e 's#\[ -w /proc/sys/net/bridge/bridge-nf-call-iptables \]#[ 0 = 1 ]#g' /etc/init.d/eucalyptus-nc

<!-- Fix disk device drivers -->
perl -i -p -e 's#&lt;source file#&lt;driver name="tap" type="aio"/&gt;\n&lt;source file#g' /usr/share/eucalyptus/gen_libvirt_xml

<!-- Leave the eucalyptus-nc service off at boot; it is started by hand after registration -->
chkconfig eucalyptus-nc off

<!-- Turn off the rocks reinstall defaults on the nodes -->
/sbin/chkconfig rocks-grub off

<!-- Add configs to xend to enable management -->
echo "(xend-http-server yes)" &gt;&gt; /etc/xen/xend-config.sxp
echo "(xend-address localhost)" &gt;&gt; /etc/xen/xend-config.sxp

</post>


Checking that the nodes are registered


Execute the command-line client:

euca-describe-availability-zones verbose

Then check whether any of the machine types report values greater than "0000".
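A quick way to pull out just the per-type capacity lines, which all contain the "|-" prefix shown in the example output above:

bash$ euca-describe-availability-zones verbose | grep '|-'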
