System Configuration for Eucalyptus
Much of the configuration below can be captured on Rocks for node installs with an appropriate extend-vm-container.xml site profile, which is listed below.
The Eucalyptus documentation suggests that you can configure Xen to run as non-root. The comment in the config file which says that this doesn't actually work turns out to be completely correct.

At this point, you probably want to add /opt/eucalyptus/usr/sbin/ to your path:

PATH=/opt/eucalyptus/usr/sbin/:$PATH

or add a profile.d script to that effect: /etc/profile.d/eucalyptus.sh

Also, I like to link all the config files into /etc so that we can capture them with git or another such tool (unnecessary now for v1.6.x):

root# ln -s /opt/eucalyptus/etc/eucalyptus /etc/

Next, test that things are installed; as root, execute:

su eucalyptus -c "virsh list"

You should see something like:

libvir: Xen Daemon error : internal error failed to connect to xend
libvir: Xen Daemon error : internal error failed to connect to xend
 Id Name                 State
----------------------------------
  0 Domain-0             running

which indicates that it can talk to Xen, even though xend is off. Now, start the cloud controller:

sudo service eucalyptus-cloud start
sudo service eucalyptus-cc start
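To confirm that the controllers actually came up, one quick check is to look for their listening ports; this is a minimal sketch, assuming the default Eucalyptus ports (8443 for the admin web interface, 8773 for the cloud controller API; these can differ by version and configuration), and a log path under the /opt/eucalyptus prefix used above:

root# netstat -tlnp | egrep ':8443|:8773'
root# tail /opt/eucalyptus/var/log/eucalyptus/cloud-output.log

Adjust the log path if Eucalyptus was installed with a different prefix.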
It seems that the eucalyptus-cloud service script can't handle Java versions with names like "1.6.0_14" and complains about the Java version not being high enough. In this case, comment out the "check_java" line in "do_start".
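One way to do that non-interactively, assuming the init script lives at /etc/init.d/eucalyptus-cloud and that do_start calls check_java on a line of its own (both assumptions about this particular install), is a sed one-liner:

root# sed -i 's/^\([[:space:]]*\)check_java/\1# check_java/' /etc/init.d/eucalyptus-cloud

Re-run "service eucalyptus-cloud start" afterwards to confirm the version complaint is gone.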
Next, register the VM container nodes as node controllers:

for nodehost in $( rocks list host | awk '/VM Container/ { sub(/:+$/,"",$1); print $1 }' ); do
    euca_conf -addnode $nodehost /etc/eucalyptus/eucalyptus.conf
done
This will need to be run again (or at least the rsync of keys part) every time a node is rebuilt.
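Since the registration (or at least the key rsync) has to be repeated after every rebuild, it can be handy to keep the loop above in a small script on the front end; the script name below is just this page's choice, not anything Eucalyptus requires:

root# cat > /root/euca-register-nodes.sh <<'EOF'
#!/bin/bash
# Re-register all Rocks VM containers as Eucalyptus node controllers;
# safe to re-run after a node rebuild.
for nodehost in $( rocks list host | awk '/VM Container/ { sub(/:+$/,"",$1); print $1 }' ); do
    /opt/eucalyptus/usr/sbin/euca_conf -addnode $nodehost /etc/eucalyptus/eucalyptus.conf
done
EOF
root# chmod +x /root/euca-register-nodes.sh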
Instance Storage
On the compute nodes, make sure to set the INSTANCE_PATH to local storage on a volume that has adequate space. On Rocks this should be /state/partition1/eucalyptus or similar. To do so, stop the eucalyptus-nc service, and execute:

root# mkdir -p /state/partition1/eucalyptus/instances
root# chown -R eucalyptus /state/partition1/eucalyptus
root# chgrp -R eucalyptus /state/partition1/eucalyptus
root# /usr/sbin/euca_conf --instances /state/partition1/eucalyptus/instances
root# /usr/sbin/euca_conf -setup
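To verify that the change took and that the volume really has room, something like the following on a node controller is a reasonable check (INSTANCE_PATH is the variable that euca_conf writes into eucalyptus.conf):

root# grep INSTANCE_PATH /etc/eucalyptus/eucalyptus.conf
root# df -h /state/partition1/eucalyptus/instances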
Need to figure out how this will interact with Rocks. Also, currently, enabling bridge netfiltering brings down the network when the nc service starts! Curses!

It seems that there is a misconfiguration or bug in the standard install so that when the eucalyptus-nc service is started, network connectivity is broken (or, more likely, completely filtered). This occurs in the stanza in /etc/init.d/eucalyptus-nc that starts bridge netfiltering on the VM container node:

if [ -w /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    VAL=`cat /proc/sys/net/bridge/bridge-nf-call-iptables`
    if [ "$VAL" = "0" ]; then
        echo
        echo "Enabling bridge netfiltering for eucalyptus."
        echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
    fi
fi

An initial workaround is to modify this script to disable that step before starting the service; a perl one-liner to do so is as follows:

root# perl -i -p -e \
    's#\[ -w /proc/sys/net/bridge/bridge-nf-call-iptables \]#[ 0 = 1 ]#g' \
    /etc/init.d/eucalyptus-nc
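After patching the init script, it is worth confirming on a node that starting the service no longer flips the switch; the value should remain 0 and the node should stay reachable:

root# service eucalyptus-nc start
root# cat /proc/sys/net/bridge/bridge-nf-call-iptables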
Networking
In general we want to use the most capable networking settings available, but within the limitations of the switch. Some bladecenter switches, for example, don't support full tagging from the nodes (i.e. they filter), so here we choose the MANAGED-NOVLAN option. This doesn't allow network separation, but works more generally. The networking section of eucalyptus.conf then looks like:

VNET_INTERFACE="eth0"
VNET_PUBINTERFACE="eth1"
VNET_PRIVINTERFACE="eth0"
VNET_BRIDGE="xenbr.eth0"
# DHCP
VNET_DHCPDAEMON="/usr/sbin/dhcpd"
VNET_DHCPUSER="root"
#
VNET_MODE="MANAGED-NOVLAN"
VNET_SUBNET="10.12.0.0"
VNET_NETMASK="255.255.0.0"
VNET_DNS="10.11.12.1"
VNET_ADDRSPERNET="256"
VNET_PUBLICIPS=140.247.54.171
#VNET_LOCALIP=140.247.54.170
#VNET_CLOUDIP=140.247.54.170
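Before restarting the services with these settings, a quick sanity check that the bridge and DHCP daemon named above actually exist on the relevant hosts can save some debugging (brctl comes from bridge-utils):

root# brctl show
root# ip addr show xenbr.eth0
root# ls -l /usr/sbin/dhcpd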
Node Builds
Node rebuilds will break the nodes as useful VM containers, since the credentials and settings are erased on a rebuild. So, in order to preserve the usability of currently running nodes (i.e. keep them from being reinstalled on reboot), execute:

root# pdsh -a /sbin/service rocks-grub stop
root# pdsh -a /sbin/chkconfig rocks-grub off

To capture the above configurations in the node builds, add the following post-install steps, as per the "Rocks" way of doing things, to the site-profile script for VM containers in the file /export/rocks/install/site-profiles/5.1/nodes/extend-vm-container-client.xml, as follows:
<post>
<file name="/etc/yum.repos.d/epel.repo" perms="0444">
[epel]
name=Extra Packages for Enterprise Linux 5 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch
mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
</file>

<file name="/etc/yum.repos.d/ircs.repo" perms="0444">
[ircs]
name=IRCS for CentOS $releasever
baseurl=file:///group/grid/linux/ircs/$releasever/i386/
gpgcheck=0
</file>

<!-- Add loopback devices -->
<file name="/etc/udev/makedev.d/50-udev.nodes" mode="append" perms="0444">
loop8
loop9
loop10
loop11
loop12
loop13
loop14
loop15
loop16
loop17
loop18
loop19
loop20
loop21
loop22
loop23
loop24
loop25
loop26
loop27
loop28
loop29
loop30
loop31
</file>

<file name="/etc/profile.d/eucalyptus.sh">
PATH=/opt/eucalyptus/usr/sbin/:$PATH
export PATH
</file>

<!-- Fix name of bridge device -->
/usr/sbin/euca_conf --bridge xenbr.eth0
/usr/sbin/euca_conf --hypervisor xen

mkdir -p /state/partition1/eucalyptus/instances
chown -R eucalyptus /state/partition1/eucalyptus
chgrp -R eucalyptus /state/partition1/eucalyptus
/usr/sbin/euca_conf --instances /state/partition1/eucalyptus/instances
/usr/sbin/euca_conf -setup
<!-- Don't turn on net filtering on the bridge .... this breaks something -->
perl -i -p -e 's#\[ -w /proc/sys/net/bridge/bridge-nf-call-iptables \]#[ 0 = 1 ]#g' /etc/init.d/eucalyptus-nc

<!-- Fix disk device drivers -->
perl -i -p -e 's#<source file#<driver name="tap" type="aio"/>\n<source file#g' /usr/share/eucalyptus/gen_libvirt_xml

<!-- Turn on eucalyptus-nc service -->
chkconfig eucalyptus-nc off

<!-- Turn off the rocks reinstall defaults on the nodes -->
/sbin/chkconfig rocks-grub off

<!-- Add configs to xend to enable management -->
echo "(xend-http-server yes)" >> /etc/xen/xend-config.sxp
echo "(xend-address localhost)" >> /etc/xen/xend-config.sxp
</post>
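Once the profile is in place, the Rocks distribution needs to be rebuilt and a node reinstalled before the changes take effect; the usual sequence is roughly the following (vm-container-0-0 is just an example hostname):

root# cd /export/rocks/install
root# rocks create distro
root# ssh vm-container-0-0 /boot/kickstart/cluster-kickstart

Note that /boot/kickstart/cluster-kickstart forces a reinstall of that node, so make sure nothing important is running on it.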
And check whether any of the machine types have values greater than "0000".