OpenStack Deployment Manual
Trademarks
Linux is a registered trademark of Linus Torvalds. PathScale is a registered trademark of Cray, Inc. Red
Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. SUSE
is a registered trademark of Novell, Inc. PGI is a registered trademark of NVIDIA Corporation. FLEXlm
is a registered trademark of Flexera Software, Inc. ScaleMP is a registered trademark of ScaleMP, Inc.
All other trademarks are the property of their respective owners.
1 Introduction
2 OpenStack Installation
2.1 Installation Of OpenStack From cmgui
2.1.1 OpenStack Setup Wizard Overview
2.1.2 MySQL Credentials & OpenStack admin User
2.1.3 OpenStack Category Configuration
2.1.4 OpenStack Compute Hosts
2.1.5 OpenStack Network Node
2.1.6 Ceph Configuration
2.1.7 OpenStack Internal Network Selection
2.1.8 OpenStack Software Image Selection
2.1.9 User Instances
2.1.10 User Instance Isolation from Internal Cluster Network
2.1.11 Network Isolation
2.1.12 VXLAN Configuration
2.1.13 Dedicated Physical Networks
2.1.14 Bright-Managed Instances
2.1.15 Virtual Node Configuration
2.1.16 Inbound External Traffic
2.1.17 Allow Outbound Traffic
2.1.18 External Network Interface for Network Node
2.1.19 VNC Proxy Hostname
2.1.20 Summary
2.2 Installation Of OpenStack From The Shell
2.2.1 Start Screen
2.2.2 Informative Text Prior To Deployment
2.2.3 Pre-Setup Suggestions
2.2.4 MySQL root And OpenStack admin Passwords
2.2.5 Reboot After Configuration
2.2.6 Ceph Options
2.2.7 Internal Network To Be Used For OpenStack
2.2.8 User Instances
2.2.9 Virtual Instance Access To Internal Network
2.2.10 Network Isolation Type
2.2.11 Choosing The Network That Hosts The User Networks
2.2.12 Setting The Name Of The Hosting Network For User Networks
2.2.13 Setting The Base Address Of The Hosting Network For User Networks
2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks
2.2.15 Enabling Support For Bright-managed Instances
2.2.16 Starting IP Address For Bright-managed Instances
2.2.17 Ending IP Address For Bright-managed Instances
2.2.18 Number Of Virtual Nodes For Bright-managed Instances
2.2.19 DHCP And Static IP Addresses
2.2.20 Floating IPs
2.2.21 External Network Starting Floating IP
2.2.22 External Network Ending Floating IP
2.2.23 VNC Proxy Hostname
2.2.24 Nova Compute Hosts
2.2.25 Neutron Network Node
2.2.26 Pre-deployment Summary
2.2.27 The State After Running cm-openstack-setup
3 Ceph Installation
3.1 Ceph Introduction
3.1.1 Ceph Object And Block Storage
3.1.2 Ceph Software Considerations Before Use
3.1.3 Hardware For Ceph Use
3.2 Ceph Installation With cm-ceph-setup
3.2.1 cm-ceph-setup
3.2.2 Starting With Ceph Installation, Removing Previous Ceph Installation
3.2.3 Ceph Monitors Configuration
3.2.4 Ceph OSDs Configuration
3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup
3.3.1 Checking On Ceph And Ceph-related Files From The Shell
3.3.2 Ceph Management With cmgui And cmsh
3.4 RADOS GW Installation, Initialization, And Properties
3.4.1 RADOS GW Installation And Initialization With cm-radosgw-setup
3.4.2 Setting RADOS GW Properties
3.4.3 Turning Keystone Authentication On And Off For RADOS GW
Preface
Welcome to the OpenStack Deployment Manual for Bright Cluster Manager 7.1.
• The User Manual describes the user environment and how to submit jobs for the end user.
• The Cloudbursting Manual describes how to deploy the cloud capabilities of the cluster.
• The Developer Manual has useful information for developers who would like to program with
Bright Cluster Manager.
• The OpenStack Deployment Manual describes how to deploy OpenStack with Bright Cluster Man-
ager.
• The Hadoop Deployment Manual describes how to deploy Hadoop with Bright Cluster Manager.
• The UCS Deployment Manual describes how to deploy the Cisco UCS server with Bright Cluster
Manager.
If the manuals are downloaded and kept in one local directory, then in most pdf viewers, clicking
on a cross-reference in one manual that refers to a section in another manual opens and displays that
section in the second manual. Navigating back and forth between documents is usually possible with
keystrokes or mouse clicks.
For example: <Alt>-<Backarrow> in Acrobat Reader, or clicking on the bottom leftmost naviga-
tion button of xpdf, both navigate back to the previous document.
The manuals constantly evolve to keep up with the development of the Bright Cluster Manager envi-
ronment and the addition of new hardware and/or applications. The manuals also regularly incorporate
customer feedback. Administrator and user input is greatly valued at Bright Computing. So any
comments, suggestions, or corrections will be very gratefully accepted at manuals@brightcomputing.com.
1 The term projects must not be confused with the term used in OpenStack elsewhere, where projects, or sometimes tenants, refer to groups of users to which resources are allocated.
Not all of these projects are integrated, or needed by Bright Cluster Manager for a working Open-
Stack system. For example, Bright Cluster Manager already has an extensive monitoring system and
therefore does not for now implement Ceilometer, while Trove is ignored for now because it is not
yet production-ready.
Projects that are not yet integrated can in principle be added by administrators on top of what is
deployed by Bright Cluster Manager, even though this is not currently supported or tested by Bright
Computing. Integration of the more popular of such projects, and greater integration in general, is
planned in future versions of Bright Cluster Manager.
This manual explains the installation, configuration, and some basic use examples of the OpenStack
projects that have so far been integrated with Bright Cluster Manager.
• Three regular nodes with 2GB RAM per core. Each regular node has a network interface.
Running OpenStack under Bright Cluster Manager with fewer resources is possible, but may run
into issues. While such issues can be resolved, they are usually not worth the time spent analyzing
them. It is better to run with ample resources, and then analyze the resource consumption to see what
issues to be aware of when scaling up to a production system.
• Using the GUI-based Setup Wizard button from within cmgui (section 2.1). This is the recom-
mended installation method.
• Using the text-based cm-openstack-setup utility (section 2.2). The utility is a part of the stan-
dard cluster-tools package.
The priorities that the package manager uses are expected to be at their default settings, in order for
the installation to work.
By default, deploying OpenStack installs the following projects: Keystone, Nova, Cinder, Glance,
Neutron, Heat and Horizon (the dashboard).
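Once deployment has completed, one quick way to see which of these services have actually been registered is via the OpenStack command line client. The following is a minimal sketch, assuming the unified openstack client is available on the head node, and that admin credentials have been loaded, for example from an openrc file whose location is deployment-specific:
# load the admin credentials (the path here is an example, not a fixed location)
. ~/openrc
# list the registered OpenStack services (Keystone, Nova, Cinder, Glance, ...)
openstack service list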
If Ceph is used, then Bright Cluster Manager can also optionally deploy RADOS Gateway to be used as a Swift-API-compatible object storage system. Using RADOS Gateway instead of the reference Swift object storage is regarded in the OpenStack community as good practice, and it is indeed the only object storage system that Bright Cluster Manager manages for OpenStack. Other storage backends can be used alongside object storage, which means, for example, that block and image storage can be used in a cluster at the same time as object storage.
Some suggestions and background notes: These are given here to help the administrator understand what the setup configuration does, and to help simplify deployment. Looking at these notes after a dry-run with the wizard will probably be helpful.
• A VXLAN (Virtual Extensible LAN) network is similar to a VLAN network in function, but has
features that make it more suited to cloud computing.
– If VXLANs are to be used, then the wizard is able to help create a VXLAN overlay network for
OpenStack tenant networks.
An OpenStack tenant network is a network used by a group of users allocated to a particular
virtual cluster.
A VXLAN overlay network is a Layer 2 network “overlaid” on top of a Layer 3 network.
The VXLAN overlay network is a virtual LAN that runs its frames encapsulated within UDP
packets over the regular TCP/IP network infrastructure. It is very similar to VLAN technol-
ogy, but with some design features that make it more useful for cloud computing needs. One
major improvement is that around 16 million VXLANs can be made to run over the under-
lying Layer 3 network. This is in contrast to the 4,000 or so VLANs that can be made to run
over their underlying Layer 2 network, if the switch port supports that level of simultaneous
capability.
By default, if the VXLAN network and VXLAN network object do not exist, then the wizard
helps the administrator create a vxlanhostnet network and network object (section 2.1.12).
The network is attached to, and the object is associated with, all non-head nodes taking part in
the OpenStack deployment. If a vxlanhostnet network is pre-created beforehand, then the
wizard can guide the administrator to associate a network object with it, and ensure that all
the non-head nodes participating in the OpenStack deployment are attached and associated
accordingly.
– The VXLAN network runs over an IP network. It should therefore have its own IP range,
and each node on that network should have an IP address. By default, a network range of
10.161.0.0/16 is suggested in the VXLAN configuration screen (section 2.1.12, figure 2.13).
– The VXLAN network can run over a dedicated physical network, but it can also run over
an alias interface on top of an existing internal network interface. The choice is up to the
administrator.
– It is possible to deploy OpenStack without VXLAN overlay networks if user instances are
given access to the internal network. Care must then be taken to avoid IP addressing conflicts.
• Changing the hostname of a node after OpenStack has been deployed is possible, but involves some manual steps. It is therefore recommended to change the hostnames before running the wizard instead. For example, to set up a network node:
– A single regular node which is to be the network node of the deployment should be chosen and renamed to networknode. The Setup Wizard recognizes networknode as a special name and will automatically suggest using it during its run.
– The soon-to-be network node should then be restarted.
– The Setup Wizard is then run from the head node. When the wizard reaches the network node selection screen (section 2.1.5, figure 2.6), networknode is suggested as the network node.
• When allowing for Floating IPs and/or enabling outbound connectivity from the virtual machines
(VMs) to the external network via the network node, the network node can be pre-configured
manually according to how it is connected to the internal and external networks. Otherwise, if
the node is not pre-configured manually, the wizard then carries out a basic configuration on the
network node that
– configures one physical interface of the network node to be connected to the internal network,
so that the network node can route packets for nodes on the internal network.
– configures the other physical interface of the network node to be connected to the external
network so that the network node can route packets from external nodes.
The wizard asks the user several questions on the details of how OpenStack is to be deployed. From
the answers, it generates an XML document with the intended configuration. Then, in the back-end,
largely hidden from the user, it runs the text-based cm-openstack-setup script (section 2.2) with this
configuration on the active head node. In other words, the wizard can be regarded as a GUI front end
to the cm-openstack-setup utility.
The practicalities of executing the wizard: The explanations given by the wizard during its execution
steps are intended to be verbose enough so that the administrator can follow what is happening.
The wizard is accessed via the OpenStack resource in the left pane of cmgui (figure 2.1). Launching
the wizard is only allowed if the Bright Cluster Manager license (Chapter 4 of the Installation Manual)
entitles the license holder to use OpenStack.
The wizard runs through the screens in sections 2.1.1-2.1.20, described next.
The main overview screen (figure 2.2) gives an overview of how the wizard runs. The Learn more
button displays a pop up screen to further explain what information is gathered, and what the wizard
intends to do with the information.
The main overview screen also asks for input on the following:
• Should the regular nodes that become part of the OpenStack cluster be rebooted? A reboot installs
a new image onto the node, and is recommended if interface objects need to be created in CM-
Daemon for OpenStack use. Creating the objects is typically the case during the first run ever for
a particular configuration. Subsequent runs of the wizard do not normally create new interfaces,
and for small changes do not normally require a node reboot. If in doubt, the reboot option can be
set to enabled. Reboot is enabled as the default.
• Should a dry-run be done? In a dry-run, the wizard pretends to carry out the installation, but
the changes are not really implemented. This is useful for getting familiar with options and their
possible consequences. A dry run is enabled as the default.
The MySQL and OpenStack credentials screen (figure 2.3) allows the administrator to set passwords for
the MySQL root user and the OpenStack admin user. The admin user is how the administrator logs in
to the Dashboard URL to manage OpenStack when it is finally up and running.
The category configuration screen (figure 2.4) sets the node category that will be used as the template
for several OpenStack categories that are going to be created by the wizard.
The compute hosts configuration screen (figure 2.5) allows the administrator to take nodes which are
still available and put them into a category that will have a compute role.
The category can be set either to be an existing category, or a new category can be created. If an
existing category is used, then default can be chosen. If Ceph has been integrated with Bright Cluster
Manager, then the ceph category is another available option.
Creating a new category is recommended, and is the default option. The suggested default category
name is openstack-compute-hosts.
The network node screen (figure 2.6) makes a node the network node, or makes a category of nodes the
network nodes. A network node is a dedicated node that handles OpenStack networking services.
If a category is used to set up network nodes, then either an existing category can be used, or a new
category can be created. If a new category is to be created, then openstack-network-hosts is its
suggested name.
The option to specify a node as a network node is most convenient in the typical case when all of
the non-head nodes have been set as belonging to the compute node category in the preceding screen
(figure 2.5). Indeed, in the case that all non-head nodes have been set to be in the compute node category,
the category options displayed in figure 2.6 are then not displayed, leaving only the option to specify a
particular node.
A network node inherits many of the OpenStack-related compute node settings, but will have some
exceptions to the properties of a compute node. Many of the exceptions are taken care of by assigning the
openstacknetwork role to any network nodes or network node categories, as is done in this screen.
If Ceph has been configured with Bright Cluster Manager before the wizard is run, then the Ceph con-
figuration screen (figure 2.7) is displayed. Choosing any of the Ceph options requires that Ceph be
pre-installed. This is normally done with the cm-ceph-setup script (section 3.2).
Ceph is an object-based distributed parallel filesystem with self-managing and self-healing features.
Object-based means it handles each item natively as an object, along with meta-data for that item. Ceph
is a drop-in replacement for Swift storage, which is the reference OpenStack object storage software
project.
The administrator can decide on:
• Using Ceph for volume storage, instead of NFS shares. This is instead of using the OpenStack Cinder reference project implementation for volume storage.1
• Using Ceph for image storage, instead of image storage nodes. This is instead of using the OpenStack Glance reference project implementation for virtual machine image storage.
• Using Ceph for root and ephemeral disk storage, instead of the filesystem of the compute hosts. This is instead of using the OpenStack Nova reference project implementation for disk filesystem storage.2
1 An advantage of Ceph, and one of the reasons for its popularity in OpenStack implementations, is that it supports volume
snapshots in OpenStack. Snapshotting is the ability to take a copy of a chosen storage, and is normally taken to mean using
copy-on-write (COW) technology. More generally, assuming enough storage is available, non-COW technology can also be used
to make a snapshot, despite its relative wastefulness.
In contrast to Ceph, the reference Cinder implementation displays an error if attempting to use the snapshot feature, due to its
NFS driver.
The administrator should understand that root or ephemeral storage are concepts that are valid only inside the Nova project,
and are completely separate storages from Cinder. These have nothing to do with the Cinder reference implementation, so that
carrying out a snapshot for these storages does not display such an error.
2 Compute hosts need to have the root and ephemeral device data belonging to their hosted virtual machines stored some-
where. The default directory location for these images and related data is under /var/lib/nova. If Ceph is not enabled, then a
local mount point of every compute host is used for data storage by default. If Ceph is enabled, then Ceph storage is used instead
according to its mount configuration.
In either case, compute hosts that provide storage for virtual machines, by default store the virtual machine images under
the /var partition. However, the default partition size for /var is usually only large enough for basic testing purposes. For
production use, with or without Ceph, the administrator is advised to either adjust the size of the existing partition, using the
disksetup command (section 3.9.3 of the Administrator Manual), or to use sufficient storage mounted from elsewhere.
To change the default paths used by OpenStack images, the following two path variables in the Nova configuration file /etc/
nova/nova.conf should be changed:
1. state_path=/var/lib/nova
2. lock_path=/var/lib/nova/tmp
If $state_path has been hardcoded by the administrator elsewhere in the file, the location defined there should also be
changed accordingly.
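As a rough sketch of how an administrator might check these paths before changing them, the commands below print the current values and indicate, in comments, the kind of change intended. The /srv/nova location is a purely hypothetical example of a larger filesystem, not a Bright Cluster Manager or OpenStack default:
# show the storage paths currently used by Nova
grep -E '^(state_path|lock_path)' /etc/nova/nova.conf
# a hypothetical relocation to a larger filesystem would then set, for example:
#   state_path=/srv/nova
#   lock_path=/srv/nova/tmp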
The OpenStack internal network selection screen (figure 2.8) decides the network to be used on the
nodes that run OpenStack.
The OpenStack software image selection screen (figure 2.9) decides the software image name to be used
on the nodes that run OpenStack.
An existing image name can be used if there is more than one available image name.
Creating a new image, with the default name of openstack-image, is recommended. This image,
openstack-image, is to be the base OpenStack image, and it is cloned and modified from an original,
pre-OpenStack-deployment image.
By default, the name openstack-image is chosen. This is recommended because the image that is
to be used by the OpenStack nodes has many modifications from the default image, and it is useful to
keep the default image around for comparison purposes.
The User Instances screen (figure 2.10) lets the administrator decide whether end users are to be allowed to create user instances.
The following overview may help get a perspective on this part of the wizard configuration proce-
dure:
The main function of OpenStack is to manage virtual machines. From the administrator’s point of view there are two classes of virtual machines, or instances: user instances and Bright-managed instances.
The wizard allows OpenStack to be configured to support both types of instances, or only one of them.
Deploying OpenStack without configuring it for either type of instance is also possible, but such an
OpenStack cluster is very limited in its functionality and typically has to be customized further by the
administrator.
Both types of instances are virtual machines hosted within a hypervisor managed by the OpenStack compute project, Nova. The main differences between these two types of instances include the following:
• User instances are typically created and managed by the end-users of the deployment, either
directly via the OpenStack API, or via OpenStack Dashboard, outside of direct influence from
Bright Cluster Manager. User instances are provisioned using any OpenStack-compatible software
image provided by the user, and thus have no CMDaemon running on them. User instances are
attached to user-created virtual networks. Optionally, they can be allowed to connect directly to
the cluster’s internal network (section 2.1.10). The number of user instances that can be run is not
restricted in any way by the Bright Cluster Manager license.
• Bright-managed instances, sometimes also called virtual nodes, are typically created and
managed by the cluster/cloud administrators using CMDaemon, via cmsh or cmgui or
pythoncm. They are provisioned using a Bright software image, and therefore have CMDae-
mon running on them. Because of CMDaemon, the administrator can manage Bright-managed
instances just like regular nodes under Bright Cluster Manager. Bright-managed instances are
always connected to the cluster’s internal network, but can also be attached to user-created net-
works.
To allow user instances to be created, the Yes radio-button should be ticked in this screen. This will
lead to the wizard asking about user-instance network isolation (VLAN/VXLAN).
Whether or not Bright-managed instances are to be allowed is set later in the Bright-managed in-
stances screen (figure 2.15).
Figure 2.11: User Instance Isolation from Internal Cluster Network Screen
If the creation of user instances has been enabled (figure 2.10), the user instance internal cluster network isolation screen (figure 2.11) lets the administrator decide whether OpenStack end users may create user instances that have direct network connectivity to the cluster’s internal network.
If the “network isolation” restriction is removed, so that there is “network promiscuity” between user instances on the internal network, then user instances (figure 2.10) can connect to other user instances on the internal network of the cluster. End users can then manage other user instances. Allowing this is only acceptable if all users that can create instances are trusted.
The network isolation screen (figure 2.12) allows the administrator to set the virtual LAN technology
that user instances can use for their user-defined private networks. Using virtual LANs isolates the IP
networks used by the instances from each other. This means that the instances attached to one private
network will always avoid network conflicts with the instances of another network, even if using
the same IP address ranges.
Bright Cluster Manager supports two virtual LAN technologies:
• VLAN: VLAN technology tags Ethernet frames as belonging to a particular VLAN. However it re-
quires manual configuration of the VLAN IDs in the switches, and also the number of IDs available
is limited to 4094.
• VXLAN: VXLAN technology has more overhead per packet than VLANs, because it adds a larger
ID tag, and also because it encapsulates layer 2 frames within layer 3 IP packets. However, unlike
with VLANs, configuration of the VXLAN IDs happens automatically, and the number of IDs
available is about 16 million.
By default, VXLAN technology is chosen. This is because for VXLAN, the number of network IDs
available, along with the automatic configuration of these IDs, means that the cluster can scale further
and more easily than for VLAN.
Selecting a network isolation type is mandatory, unless user instances are configured to allow access
to the internal network of the cluster by the administrator (figure 2.10).
Presently, only one type of network isolation is supported at a time.
The VXLAN screen (figure 2.13) shows configuration options for the VXLAN network if VXLAN has
been chosen as the network isolation technology in the preceding screen. If the network isolation tech-
nology chosen was VLAN, then a closely similar screen is shown instead.
For the VXLAN screen, the following options are suggested, with overrideable defaults as listed:
The VXLAN range defines the number of user IP networks that can exist at the same time. While the
range can be set to be 16 million, it is best to keep it to a more reasonable size, such as 50,000, since a
larger range slows down Neutron significantly.
An IP network is needed to host the VXLANs and allow the tunneling of traffic between VXLAN endpoints. This requires
• either choosing an existing network that has already been configured in Bright Cluster Manager, but not internalnet,
• or creating a new network for this purpose.
VXLAN networking uses a multicast address to handle broadcast traffic in a virtual network. The
default multicast IP address that is set, 224.0.0.1, is unlikely to be used by another application. How-
ever, if there is a conflict, then the address can be changed using the CMDaemon OpenStackVXLANGroup
directive (Appendix C, page 530 of the Administrator Manual).
The dedicated physical networks screen (figure 2.14) allows the network interfaces that host the OpenStack networks to be configured.
For each of these types of node (compute node or network node), the interface can:
• either be an alias interface on top of an existing internal network interface,
• or be created separately on a physical network interface. The interface must then be given a name. The name can be arbitrary.
The Bright-managed instances screen (figure 2.15) allows administrators to enable Bright-managed instances. These instances are also known as virtual nodes. Administrators can then run OpenStack instances using Bright Cluster Manager.
End users are allowed to run OpenStack instances managed by OpenStack only if explicit permission has been given. This permission, which is given by default, is set earlier on in the user instances screen (figure 2.10).
If Bright-managed instances are enabled, then an IP allocation scheme must be set. The values used to define the pool are its starting and ending IP addresses.
The screens shown in figures 2.16 to 2.20 are displayed next if Bright-managed instances are enabled.
The virtual node configuration screen (figure 2.16) allows the administrator to set the number, category,
and image for the virtual nodes. The suggestions presented in this screen can be deployed in a test
cluster.
A Virtual node category can be set for virtual nodes further down in the screen.
During a new deployment, virtual nodes can be placed in categories, either by creating a new cate-
gory, or by using an existing category:
– The Virtual node category is given a default value of virtual-nodes. This is a sen-
sible setting for a new deployment.
– The Base category can be selected. This is the category from which the new virtual node
category is derived. Category settings are copied over from the base category to the virtual
node category. The only category choice for the Base category in a newly-installed cluster
is default. Some changes are then made to the category settings in order to make virtual
nodes in that category run as virtual instances.
One of the changes that needs to be made to the category settings for a virtual node is that a
software image must be set. The following options are offered:
– Create a new software image: This option is recommended for a new installation.
Choosing this option presents the following suboptions:
* The Software image is given a default value of virtual-node-image. This is a
sensible setting for a new deployment.
* The Base software image can be selected. This is the software image from which the
new virtual node software image is derived. In a newly-installed cluster, the only base
software image choice is default-image.
– Use existing software image: An existing software image can be set. The only value
for the existing image in a newly-configured cluster is default-image.
– Use software image from category: The software image that the virtual node cate-
gory inherits from its base category is set as the software image.
Setting the software image means that the wizard will copy over the properties of the associated
base image to the new software image, and configure the new software image with the required
virtualization modules. The instance that uses this category then uses a modified image with
virtualization modules that enable it to run on a virtual node.
– The Virtual node category can be selected from the existing categories. In a newly-installed cluster, the only possible value is default.
Setting the category means that the wizard will copy over the properties of the existing category to the new virtual node category, and configure the new software image with the required virtualization modules. The instance that uses the configured image is then able to run on a virtual node.
The virtual nodes can be configured to be assigned one of the following types of addresses:
• DHCP
• Static
The addresses in either case go in a sequence, and begin with an address that the administrator sets
in the wizard.
All OpenStack-hosted virtual machines are typically attached to one or more virtual networks. How-
ever, unless they are also connected to the internal network of the cluster, there is no simple way to connect
to them from outside their virtual network. To solve this, Floating IPs have been introduced by Open-
Stack. Floating IPs are a range of IP addresses that the administrator specifies on the external network,
and they are made available to the users (tenants) for assignment to their user instances. The number of
Floating IPs available to users is limited by the Floating IP quotas set for the tenants.
Administrators can also assign Floating IP addresses to Bright-managed instances. Currently, how-
ever, this has to be done via the OpenStack API or Dashboard.
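For example, with the classic nova client of that OpenStack generation, such an association might be carried out roughly as follows. The instance name and address used here are placeholders, and the exact client command can vary between OpenStack releases:
# associate a Floating IP from the external range with a Bright-managed instance
# (vnode001 and 10.2.0.100 are hypothetical values)
nova add-floating-ip vnode001 10.2.0.100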
The inbound external traffic screen (figure 2.17) reserves a range of IP addresses within the used
external network. In this example it happens to fall within the 10.2.0.0/16 network range.
If a Floating IP range is specified, then a single IP address from this range is always reserved by OpenStack for outbound traffic. This is implemented via sNAT. The address is reserved for instances which have not been assigned a Floating IP. Therefore, the IP address range specified in this screen normally expects a minimum of two IP addresses: one reserved for outbound traffic, and one Floating IP.
If the administrator would like to allow OpenStack instances to have outbound connectivity, but at the same time not have Floating IPs, then this can be done by skipping the Floating IP range in this screen, and instead specifying only an outbound IP address in the Allow Outbound Traffic screen (section 2.1.17).
Alternatively, a user instance can be configured to access the external network via the head node. However, this is a bit more complicated to set up.
Since outgoing connections use one IP address for all instances, the remaining number of IP ad-
dresses is what is then available for Floating IPs. The possible connection options are therefore as indi-
cated by the following table:
The Allow Outbound Traffic screen appears if Floating IPs have not been configured in the previous screen (figure 2.17), that is, if the No option has been selected there, and if there are user instances being configured.
The Allow Outbound Traffic screen does not appear if the cluster has only Bright-managed in-
stances, and no user-managed instances. This is because Bright-managed instances can simply route
their traffic through the head node, without needing the network node to be adjusted with the configu-
ration option of this screen.
If the Allow Outbound Traffic screen appears, then specifying a single IP address for the
Outbound IP value in the current screen, figure 2.18, sets up the configuration to allow outbound
connections only.
A decision was made earlier about allowing user instances to access the internal network of the clus-
ter (section 2.1.10). In the dialog of figure 2.18, if user instances are not enabled, then the administrator
is offered the option once more to allow access to the internal network of the cluster by user instances.
The external network interface for network node screen (figure 2.19) allows the administrator to config-
ure either a dedicated physical interface or a tagged VLAN interface for the network node. The interface
is to the external network and is used to provide routing functions for OpenStack.
The network node must have connectivity with the external network when Floating IPs and/or outbound traffic for instances are being configured.
If the node already has a connection to the external network configured in Bright, the wizard will
skip this step.
The options are:
• Create dedicated physical interface: If this option is chosen, then a dedicated physical
interface is used for connection from the network node to the external network.
– Interface name: The name of the physical interface should be set. For example: eth0.
• Create tagged VLAN interface: If this option is chosen, then a tagged VLAN interface is
used for the connection from the network node to the external network.
– Base interface: The base interface is selected. Typically the interface selected is BOOTIF.
– Tagged VLAN ID: The VLAN ID for the interface is set.
The VNC proxy hostname screen (figure 2.20) sets the FQDN or external IP address of the head node of
the OpenStack cluster, as seen by a user that would like to access the consoles of the virtual nodes from
the external network.
Example
If the hostname resolves within the brightcomputing.com network domain, then for an Open-
Stack head hostname that resolves to bright71, the value of VNC Proxy Hostname should be set to
bright71.brightcomputing.com.
2.1.20 Summary
The configuration can be saved with the Save Configuration option of figure 2.21.
After exiting the wizard, the XML file can be directly modified if needed in a separate text-based
editor.
• The XML file can be used as the configuration starting point for the text-based
cm-openstack-setup utility (section 2.2), if run as:
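A sketch of such an invocation, with a placeholder standing in for the saved XML file, would be:
cm-openstack-setup -c <saved configuration file>.xml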
• Alternatively, the XML file can be deployed as the configuration by launching the cmgui wizard,
and then clicking on the Load XML button of first screen (figure 2.2). After loading the configura-
tion, a Deploy button appears.
Clicking the Deploy button that appears in figure 2.2 after loading the XML file, or clicking the
Deploy button of figure 2.21, sets up OpenStack in the background. The direct background progress is
hidden from the administrator, and relies on the text-based cm-openstack-setup script (section 2.2).
Some log excerpts from the script are displayed within a Deployment Progress window (figure 2.23).
At the end of its run, the cluster has OpenStack set up and running in an integrated manner with
Bright Cluster Manager.
The administrator can now configure the cluster to suit the particular site requirements.
Removal removes OpenStack-related database entries, roles, networks, virtual nodes, and interfaces.
Images and categories related to OpenStack are however not removed.
If deployment is selected in the preceding screen, an informative text screen (figure 2.25) gives a sum-
mary of what the script does.
The pre-setup suggestions screen (figure 2.26) suggests changes to be done before going on.
The MySQL root password screen (figure 2.27) prompts for the existing root password to MySQL to
be entered, while the OpenStack admin password screen (figure 2.28) prompts for a password to be
entered, and then re-entered, for the soon-to-be-created admin user in OpenStack.
A screen is shown asking if the compute host nodes, that is, the nodes used to host the virtual nodes,
should be re-installed after configuration (figure 2.29). A re-install is usually best, for reasons discussed
on page 6 for the cmgui installation wizard equivalent of this screen option.
Ceph can be set for storing virtual machine images, instead of the OpenStack reference Glance, using
the Ceph image storage screen (figure 2.30).
Ceph can be set for handling block volume storage read and writes, instead of the OpenStack reference
Cinder, by using the Ceph for OpenStack volumes screen (figure 2.31).
Data storage with Ceph can be enabled by the administrator by using the Ceph for OpenStack root and
ephemeral device storage screen (figure 2.32).
The Ceph RADOS gateway screen (figure 2.33) lets the administrator set the Ceph RADOS gateway
service to run when deployment completes.
If there are multiple internal networks, then the internal network selection screen (figure 2.34) lets the
administrator choose which of them is to be used as the internal network to which the OpenStack nodes
are to be connected.
The user instances screen (figure 2.35) lets the administrator decide if end users are to be allowed to
create user instances.
The screen in figure 2.36 lets the administrator allow virtual instances access to the internal network.
This should only be allowed if the users creating the instances are trusted. This is because the creator of
the instance has root access to the instance, which is in turn connected directly to the internal network
of the cluster, which means all the packets in that network can be read by the user.
The network isolation type screen (figure 2.37) allows the administrator to choose what kind of network
isolation type, if any, should be set for the user networks.
If the user networks have their type (VXLAN, VLAN, or no virtual LAN) chosen in section 2.2.10, then
a screen similar to figure 2.38 is displayed. This allows one network to be set as the host for the user
networks.
If there are one or more possible networks already available for hosting the user networks, then one
of them can be selected. Alternatively, a completely new network can be created to host them.
2.2.12 Setting The Name Of The Hosting Network For User Networks
Figure 2.39: Setting The Name Of The Network For User Networks
If a network to host the user networks is chosen in section 2.2.11, then a screen similar to figure 2.39 is
displayed. This lets the administrator set the name of the hosting network for user networks.
2.2.13 Setting The Base Address Of The Hosting Network For User Networks
Figure 2.40: Setting The Base Address Of The Network For User Networks
If the network name for the network that hosts the user networks is chosen in section 2.2.12, then a
screen similar to figure 2.40 is displayed. This lets the administrator set the base address of the hosting
network for user networks.
2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks
Figure 2.41: Setting The Number Of Netmask Bits Of The Network For User Networks
If the base address for the network that hosts the user networks is set in section 2.2.13, then a screen,
similar to figure 2.41 is displayed. This lets the administrator set the number of netmask bits of the
hosting network for user networks.
Figure 2.42: Enabling Support For OpenStack Instances Under Bright Cluster Manager
There are two kinds of OpenStack instances that can run on the cluster. These are called user instances and Bright-managed instances, with the latter also known as virtual nodes. The screen in figure 2.42 decides whether Bright-managed instances are to run. Bright-managed instances are actually a special case of user instances, just
managed much more closely by Bright Cluster Manager.
Only if permission is set in the screen of section 2.2.9 can an end user access Bright-managed instances.
The screens from figure 2.43 to figure 2.45 are only shown if support for Bright-managed instances is
enabled.
A starting IP address must be set for the Bright-managed instances (figure 2.43).
An ending IP address must be set for the Bright-managed instances (figure 2.44).
The number of Bright-managed virtual machines must be set (figure 2.45). The suggested number of
instances in the wizard conforms to the defaults that OpenStack sets. These defaults are based on an
overcommit ratio of virtual CPU:real CPU of 16:1, and virtual RAM:real RAM of 1.5:1. The instance
flavor chosen then determines the suggested number of instances.
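As a rough, purely illustrative calculation under those ratios: a hypothetical compute host with 8 real cores and 32GB of RAM corresponds to 128 virtual CPUs and 48GB of virtual RAM. For a flavor with 1 virtual CPU and 2GB of RAM per instance, virtual RAM is then the limiting factor, which suggests about 24 instances for that host.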
The instances can be configured to obtain their IP addresses either via DHCP, or via static address assignment (figure 2.46).
The Floating IPs screen (figure 2.47) lets the administrator enable Floating IPs on the external network,
so that instances can be accessed using these.
A screen similar to figure 2.48 allows the administrator to specify the starting floating IP address on the
external network.
A screen similar to figure 2.49 allows the administrator to specify the ending floating IP address on the
external network.
The VNC Proxy Hostname screen (figure 2.50) lets the administrator set the FQDN as seen from the
external network. An IP address can be used instead of the FQDN.
The Nova compute hosts screen (figure 2.51) prompts the administrator to set the nodes to use as the
hosts for the virtual machines.
The Neutron network node screen (figure 2.52) prompts the administrator to set the node to use as the Neutron network node.
The pre-deployment summary screen (figure 2.53) displays a summary of the settings that have been
entered using the wizard, and prompts the administrator to deploy or abort the installation with the
chosen settings.
The options can also be saved as an XML configuration, by default cm-openstack-setup.conf
in the directory under which the wizard is running. This can then be used as the input configuration file
for the cm-openstack-setup utility using the -c option.
Figure 3.1: Ceph Concepts (diagram labels: CephFS, RADOS, OSD, MON, OS/Hardware)
1. Block device access: RADOS Block Device (RBD) access can be carried out in two slightly different
ways:
(i) via a Linux kernel module based interface to RADOS. The module presents itself as a block
device to the machine running that kernel. The machine can then use the RADOS storage,
that is typically provided elsewhere.
(ii) via the librbd library, used by virtual machines based on qemu or KVM. A block device that
uses the library on the virtual machine then accesses the RADOS storage, which is typically
located elsewhere.
2. Gateway API access: RADOS Gateway (RADOS GW) access provides an HTTP REST gateway
to RADOS. Applications can talk to RADOS GW to access object storage in a high level manner,
instead of talking to RADOS directly at a lower level. The RADOS GW API is compatible with the
APIs of Swift and Amazon S3.
3. Ceph Filesystem access: CephFS provides a filesystem access layer. A component called MDS
(Metadata Server) is used to manage the filesystem with RADOS. MDS is used in addition to the
OSD and MON components used by the block and object storage forms when CephFS talks to
RADOS. The Ceph filesystem is not regarded as production-ready by the Ceph project at the time
of writing (July 2014), and is therefore not yet supported by Bright Cluster Manager.
A more useful minimum: if there is a node to spare, installing Ceph over 3 nodes is suggested, where:
• 1 node, the head node, runs one Ceph Monitor.
• 1 node, the regular node, runs the first OSD.
• 1 more node, also a regular node, runs the second OSD.
For production use: a redundant number of Ceph Monitor servers is recommended. Since the number of Ceph Monitor servers must be odd, at least 3 Ceph Monitor servers, each on a separate node, are recommended for production purposes. The recommended minimum number of nodes for production purposes is then 5:
• 2 regular nodes running OSDs.
• 2 regular nodes running Ceph Monitors.
• 1 head node running a Ceph Monitor.
Drives usable by Ceph: Ceph OSDs can use any type of disk that presents itself as a block device in
Linux. This means that a variety of drives can be used.
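A quick way of seeing which block devices a node offers, before filling in the OSD block device fields later on, is to list them from the node shell. This is just a generic Linux sketch, not a Bright Cluster Manager tool:
# list the disks (not partitions) visible to the node
lsblk -d -o NAME,SIZE,TYPE,ROTA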
• Set up Ceph
If the setup option is chosen, then a screen for the general Ceph cluster settings (figure 3.3) is dis-
played. The general settings can be adjusted via subscreens that open up when selected. The possible
general settings are:
• Public network: This is the network used by Ceph Monitoring to communicate with OSDs.
For a standard default Type 1 network this is internalnet.
• Private network: This is the network used by OSDs to communicate with each other. For a
standard default Type 1 network this is internalnet.
• Journal size: The default journal size, in MiB, used by an OSD. The actual size must
always be greater than zero. This is a general setting, and can be overridden by a category or node
setting later on.
Defining a value of 0 MiB here means that the default that the Ceph software itself provides is set.
At the time of writing (March 2015), Ceph software provides a default of 5GiB.
In this screen:
• Existing Ceph Monitors can be edited or removed (figure 3.5), from nodes or categories.
• The OSD configuration screen can be reached after making changes, if any, to the Ceph Monitor
configuration.
Typically in a first run, the head node has a Ceph Monitor added to it.
Figure 3.5: Ceph Installation Monitors Editing: Bootstrap And Data Path
The Edit option in figure 3.4 opens up a screen, figure 3.5, that allows the editing of existing or newly-
added Ceph Monitors for a node or category:
• The bootstrap option can be set. The option configures initialization of the maps on the Ceph
Monitors services, prior to the actual setup process. The bootstrap option can take the following
values:
– auto: This is the default and recommended option. If the majority of nodes are tagged with
auto during the current configuration stage, and configured to run Ceph Monitors, then
* If they are up according to Bright Cluster Manager at the time of deployment of the setup
process, then the Monitor Map is initialized for those Ceph Monitors on those nodes.
* If they are down at the time of deployment of the setup process, then the maps are not
initialized.
– true: If nodes are tagged true and configured to run Ceph Monitors, then they will be
initialized at the time of deployment of the setup process, even if they are detected as being
down during the current configuration stage.
– false: If nodes are tagged false and configured to run Ceph Monitors, then they will not
be initialized at the time of deployment of the setup process, even if they are detected as being
up during the current configuration stage.
• The data path used by the Ceph Monitor can be set. The default value is:
/var/lib/ceph/mon/$cluster-$hostname
where $cluster is the name of the Ceph instance (ceph by default), and $hostname is the name of the node.
• The Back option can be used after accessing the editing screen, to return to the Ceph Monitors
configuration screen (figure 3.4).
If Proceed to OSDs is chosen from the Ceph Monitors configuration screen in figure 3.4, then a screen
for Ceph OSDs configuration (figure 3.6) is displayed, where:
• OSDs can be added to nodes or categories. On adding, the OSDs must be edited with the edit
menu.
• Existing OSDs can be edited or removed (figure 3.7), from nodes or categories.
• To finish up on the installation, after any changes to the OSD configuration have been made, the
Finish option runs the Ceph setup procedure itself.
Figure 3.7: Ceph Installation OSDs Editing: Block Device Path, OSD Path, Journals For Categories Or
Nodes
The Edit option in figure 3.6 opens up a screen, figure 3.7, that allows the editing of the properties of
existing or newly-added Ceph OSDs for a node or category. In this screen:
• When considering the Number of OSDs and the Block devices, it is best to set either only the number of OSDs, or only the block devices.
Setting both the number of OSDs and block devices is also possible, but then the number of OSDs must match the number of block devices.
• If only a number of OSDs is set, and the block devices field is left blank, then each OSD is given its
own filesystem under the data-path specified.
• Block devices can be set as a comma- or a space-separated list, with no difference in meaning.
Example
/dev/sda,/dev/sdb,/dev/sdc
and
/dev/sda /dev/sdb /dev/sdc
are equivalent.
• For the OSD Data path, the recommended, and default value is:
/var/lib/ceph/osd/$cluster-$id
Here, $cluster is the name of the Ceph instance (ceph by default), and $id is the ID of the OSD.
• For the Journal path, the recommended, and default value is:
/var/lib/ceph/osd/$cluster-$id/journal
• The Journal size, in MiB, can be set for the category or node. A value set here overrides the
default global journal size setting (figure 3.3). This is just the usual convention where a node
setting can override a category setting, and a node or category setting can both override a global
setting.
Also, just like in the case of the global journal size setting, a journal size for categories or nodes
must always be greater than zero. Defining a value of 0 MiB means that the default that the Ceph
software itself provides is set. At the time of writing (March 2015), Ceph software provides a
default of 5GiB.
The Journal size for a category or node is unset by default, which means that the value set
for Journal size in this screen is determined by whatever the global journal size setting is, by
default.
• Setting Journal on partition to yes means that the OSD uses a dedicated partition. In this
case:
– The disk setup used is modified so that the first partition, with a size of Journal size, is used for the journal.
– A value of 0 for the Journal size is invalid, and does not cause a Ceph default size to be
used.
• The Shared journal device path must be set if a shared device is used for all the OSD journals in the category or node for which this screen applies. The path is unset by default, which means that no shared journal device is used by default.
• The Shared journal size in MiB can be set. For n OSDs, each with a journal of size x MiB, the value of Shared journal size is n × x. That is, its value is the sum of the sizes of all the individual OSD journals that are kept on the shared journal device. If it is used, then:
– The value of Shared journal size is used to automatically generate the disk layout setup
of the individual OSD journals.
– A value of 0 for the Journal size is invalid, and does not cause a Ceph default size to be
used.
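As a small worked example of that sum: assuming a node with 3 OSDs, each of which is to get a 5120 MiB journal on the shared device, the Shared journal size would be set to 3 × 5120 = 15360 MiB.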
The Back option can be used after accessing the editing screen, to return to the Ceph OSDs configu-
ration screen (figure 3.6).
After selecting the Finish option of figure 3.6, the Ceph setup proceeds. On successful completion, a
screen as in figure 3.8 is displayed.
3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup
3.3.1 Checking On Ceph And Ceph-related Files From The Shell
The status of Ceph can be seen from the command line by running:
Example
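# a basic status summary; "ceph health" gives a shorter one-line health check
ceph -s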
The -h option to ceph lists many options. Users of Bright Cluster Manager should usually not need
to use these, and should find it more convenient to use the cmgui or cmsh front ends instead.
The name of the Ceph instance is by default ceph. If a new instance is to be configured with the
cm-ceph-setup utility, then a new name must be set in the configuration file, and the new configura-
tion file must be used.
Example
<cephConfig>
<networks>
<public>internalnet</public>
<cluster>internalnet</cluster>
</networks>
<journalsize>0</journalsize>
<monitor>
<hostname>raid-test</hostname>
<monitordata>/var/lib/ceph/mon/$cluster-$hostname</monitordata>
</monitor>
<osd>
<hostname>node001</hostname>
<osdassociation>
<name>osd0</name>
<blockdev>/dev/sdd</blockdev>
<osddata>/var/lib/ceph/osd/$cluster-$id</osddata>
<journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
<journalsize>0</journalsize>
</osdassociation>
<osdassociation>
<name>osd1</name>
<blockdev>/dev/sde</blockdev>
<osddata>/var/lib/ceph/osd/$cluster-$id</osddata>
<journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
<journalsize>0</journalsize>
</osdassociation>
<osdassociation>
<name>osd2</name>
<blockdev>/dev/sdf</blockdev>
<osddata>/var/lib/ceph/osd/$cluster-$id</osddata>
<journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
<journalsize>0</journalsize>
</osdassociation>
</osd>
</cephConfig>
A disk setup (section 3.9.3 of the Administrator Manual) can be specified to place the OSDs on an XFS
device, on partition a2 as follows:
Example
<diskSetup>
<device>
<blockdev>/dev/sda</blockdev>
<partition id="a1">
<size>10G</size>
<type>linux</type>
<filesystem>ext3</filesystem>
<mountPoint>/</mountPoint>
<mountOptions>defaults,noatime,nodiratime</mountOptions>
</partition>
<partition id="a2">
<size>10G</size>
<type>linux</type>
<filesystem>xfs</filesystem>
<mountPoint>/var</mountPoint>
<mountOptions>defaults,noatime,nodiratime</mountOptions>
</partition>
<partition id="a3">
<size>2G</size>
<type>linux</type>
<filesystem>ext3</filesystem>
<mountPoint>/tmp</mountPoint>
<mountOptions>defaults,noatime,nodiratime,nosuid,nodev</mountOptions>
</partition>
<partition id="a4">
<size>1G</size>
<type>linux swap</type>
</partition>
<partition id="a5">
<size>max</size>
<type>linux</type>
<filesystem>ext3</filesystem>
<mountPoint>/local</mountPoint>
<mountOptions>defaults,noatime,nodiratime</mountOptions>
</partition>
</device>
</diskSetup>
Installation Logs
Installation logs for the Ceph setup are kept at:
/var/log/cm-ceph-setup.log
Example
From within ceph mode, the overview command lists an overview of Ceph OSDs, MONs, and
placement groups for the ceph instance. Parts of the displayed output are elided in the example that
follows for viewing convenience:
Example
Number of OSDs up 2
Number of OSDs in 2
Number of mons 1
Number of placements groups 192
Placement groups data size 0B
Placement groups used size 10.07GB
Placement groups available size 9.91GB
Placement groups total size 19.98GB
The cmgui equivalent of the overview command is the Overview tab, accessed from within the
Ceph resource.
Some of the major Ceph configuration parameters can be viewed and their values managed by CM-
Daemon from ceph mode. The show command shows parameters and their values for the ceph in-
stance:
Example
[bright71->ceph]% show ceph
Parameter Value
------------------------------ ----------------------------------------
Admin keyring path /etc/ceph/ceph.client.admin.keyring
Bootstrapped yes
Client admin key AQDkUM5T4LhZFxAA/JQHvzvbyb9txH0bwvxUSQ==
Cluster networks
Config file path /etc/ceph/ceph.conf
Creation time Thu, 25 Sep 2014 13:54:11 CEST
Extra config parameters
Monitor daemon port 6789
Monitor key AQDkUM5TwM2lEhAA0CcdH/UFhGJ902n3y/Avng==
Monitor keyring path /etc/ceph/ceph.mon.keyring
Public networks
Revision
auth client required cephx yes
auth cluster required cephx yes
auth service required cephx yes
filestore xattr use omap no
fsid abf8e6af-71c0-4d75-badc-3b81bc2b74d8
mon max osd 10000
mon osd full ratio 0.95
mon osd nearfull ratio 0.85
name ceph
osd pool default min size 0
osd pool default pg num 8
osd pool default pgp num 8
osd pool default size 2
version 0.80.5
[bright71->ceph]%
The cmgui equivalent of these settings is the Settings tab, accessed from within the Ceph resource.
[mds.2]
host=rabbit
Example
If a section name, enclosed in square brackets, [], is used, then the section is recognized at the start
of an appended line by CMDaemon.
If a section that is specified in the square brackets does not already exist in /etc/ceph/ceph.conf, then
it will be created. The \n is interpreted as a new line at its position. After the commit, the extra configu-
ration parameter setting is maintained by the cluster manager.
If the section already exists in /etc/ceph/ceph.conf, then the associated key=value pair is appended.
For example, the following appends host2=bunny to an existing mds.2 section:
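A hypothetical cmsh session for that append is sketched below. The exact prompt layout and the use of the use command to select the ceph object are assumptions about the session, but the append of a quoted value containing \n, followed by a commit, follows the behavior described here:
[bright71->ceph]% use ceph
[bright71->ceph[ceph]]% append extraconfigparameters "[mds.2]\nhost2=bunny"
[bright71->ceph*[ceph*]]% commit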
If no section name is used, then the key=value entry is appended to the [global] section.
The /etc/ceph/ceph.conf file has the changes written into it about a minute after the commit, and may
then look like (some lines removed for clarity):
[global]
auth client required = cephx
osd journal size=128
[mds.2]
host=rabbit
host2=bunny
• The removefrom command operates as the opposite of the append command, by removing
key=value pairs from the specified section.
There are similar extraconfigparameters for Ceph OSD filesystem associations (page 55) and
for Ceph monitoring (page 56).
Example
Within a device or category mode, the roles submode allows parameters of an assigned cephosd
role to be configured and managed.
Example
Within the cephosd role the templates for OSD filesystem associations, osdassociations, can be
set or modified:
Example
The -f option is used here with the list command merely in order to format the output so that it
stays within the margins of this manual.
The cmgui equivalent of the preceding cmsh settings is accessed from within a particular Nodes
or Categories item in the resource tree, then accessing the Ceph tab, and then choosing the OSD
checkbox. The Advanced button allows cephosd role parameters to be set for the node or category.
Example
Ceph monitoring extraconfigparameters setting: Ceph monitoring can also have extra config-
urations set via the extraconfigparameters option, in a similar way to how it is done for Ceph
general configuration (page 54).
Monitors are similarly accessible from within cmgui for nodes and categories, with an Advanced
button in their Ceph tab allowing the parameters for the Monitor checkbox to be set.
Ceph bootstrap
For completeness, the bootstrap command within ceph mode can be used by the administrator to
initialize Ceph Monitors on specified nodes if they are not already initialized. Administrators are how-
ever not expected to use it, because they are expected to use the cm-ceph-setup installer utility when
installing Ceph in the first place. The installer utility carries out the bootstrap initialization as part of
its tasks. The bootstrap command is therefore only intended for use in the unusual case where the
administrator would like to set up Ceph storage without using the cm-ceph-setup utility.
Example
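# initialize RADOS GW with Keystone authentication enabled, via the -o option
# described below
cm-radosgw-setup -o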
If cm-radosgw-setup is run without the -o option, then RADOS GW is installed, but Keystone
authentication is disabled, and the gateway is therefore then not available to OpenStack instances.
Command line installation with the -o option initializes RADOS GW for OpenStack instances the
first time it is run in Bright Cluster Manager.
This brings up the RADOS GW Advanced Role Settings window (figure 3.10), which allows
RADOS GW properties to be managed. For example, ticking the Enable Keystone Authentication
checkbox and saving the setting makes RADOS GW services available to OpenStack instances, if they
have already been initialized (section 3.4.1) to work with Bright Cluster Manager.
Installation method            Keystone authentication on         Keystone authentication off
------------------------------ ---------------------------------- ----------------------------------
command line installation      cm-radosgw-setup -o                cm-radosgw-setup
cmgui device, Ceph subtab      Enable Keystone Authentication     Enable Keystone Authentication
                               checkbox ticked                    checkbox unticked