Kayobe
Release 12.1.0.dev48
OpenStack Foundation
1 Overview
2 Kayobe
2.1 Features
2.2 Documentation
2.3 Release Notes
2.4 Bugs
2.5 Community
2.6 License
3 Contents
3.1 Getting Started
3.2 Architecture
3.3 Support Matrix
3.4 Installation
3.5 Usage
3.6 Configuration Guide
3.7 Deployment
3.8 Upgrading
3.9 Administration
3.10 Resources
3.11 Advanced Documentation
3.12 Contributor Guide
1 Overview
Welcome to the Kayobe documentation, the official source of information for understanding and using Kayobe.
This documentation is maintained at opendev.org. Feedback and contributions are welcome; see contributing for information on how.
2 Kayobe
2.1 Features
2.2 Documentation
https://docs.openstack.org/kayobe/latest/
2.3 Release Notes
https://docs.openstack.org/releasenotes/kayobe/
2.4 Bugs
https://storyboard.openstack.org/#!/project/openstack/kayobe
2.5 Community
2.6 License
3 Contents

3.1 Getting Started
We advise new users to start by reading the Architecture documentation first in order to understand Kayobe's various components.
For users wishing to learn interactively, we recommend starting with either the all-in-one overcloud deployment or the A Universe From Nothing deployment guide.
Once familiar with Kayobe's constituent parts, move on to the Installation section to prepare a bare metal environment and then Deployment to deploy to it.
• Architecture - The function of Kayobe's host and networking components
• Installation - The prerequisites and options for installing Kayobe
• Usage - An introduction to the Kayobe CLI
• Configuration - How to configure Kayobe's various components
• Deployment - Using Kayobe to deploy OpenStack
• Upgrading - Upgrading from one OpenStack release to another
• Administration - Post-deploy administration tasks
• Resources - External links to Kayobe resources
• Contributor - Contributing to Kayobe and deploying Kayobe development environments
3.2 Architecture
3.2.1 Hosts

Infrastructure VM hosts Infrastructure VMs (or Infra VMs) are virtual machines that may be deployed to provide supplementary infrastructure services. They may be for things like proxies or DNS servers that are dependencies of the cloud hosts.
Cloud hosts The cloud hosts run the OpenStack control plane, network, monitoring, storage, and virtualised compute services. Typically the cloud hosts run on bare metal but this is not mandatory.
Bare metal compute hosts In a cloud providing bare metal compute services to tenants via ironic, these hosts will run the bare metal tenant workloads. In a cloud with only virtualised compute this category of hosts does not exist.
Note: In many cases the control host and seed host will be the same, although this is not mandatory.
Cloud Hosts
3.2.2 Networks
Kayobe's network configuration is very flexible but does define a few default classes of networks. These are logical networks and may map to one or more physical networks in the system.
Overcloud out-of-band network Name of the network used by the seed to access the out-of-band management controllers of the bare metal overcloud hosts.
Overcloud provisioning network The overcloud provisioning network is used by the seed host to provision the cloud hosts.
Workload out-of-band network Name of the network used by the overcloud hosts to access the out-of-band management controllers of the bare metal workload hosts.
Workload provisioning network The workload provisioning network is used by the cloud hosts to provision the bare metal compute hosts.
Internal network The internal network hosts the internal and admin OpenStack API endpoints.
Public network The public network hosts the public OpenStack API endpoints.
External network The external network provides external network access for the hosts in the system.
3.3 Support Matrix

Note: CentOS 7 is no longer supported as a host OS. The Train release supports both CentOS 7 and 8,
and provides a route for migration. See the Kayobe Train documentation for information on migrating to
CentOS 8.
Note: CentOS Linux 8 (as opposed to CentOS Stream 8) is no longer supported as a host OS. The
Victoria release supports both CentOS Linux 8 and CentOS Stream 8, and provides a route for migration.
For details of container image distributions supported by Kolla Ansible, see the support matrix.
For details of which images are supported on which distributions, see the Kolla support matrix.
3.4 Installation
Kayobe can be installed via the released Python packages on PyPI, or from source. Installing from PyPI ensures the use of well-used and tested software, whereas installing from source allows for the use of unreleased or patched code. Installing from a Python package is supported from Kayobe 5.0.0 onwards.
3.4.1 Prerequisites
Currently Kayobe supports the following Operating Systems on the Ansible control host:
• CentOS Stream 8 (since Wallaby 10.0.0 release)
• Rocky Linux 8 (since Yoga 12.0.0 release)
• Ubuntu Jammy 22.04 (since Zed 13.0.0 release)
See the support matrix for details of supported Operating Systems for other hosts.
To avoid conflicts with Python packages installed by the system package manager it is recommended to install Kayobe in a virtualenv. Ensure that the virtualenv Python module is available on the Ansible control host. It is necessary to install the GCC compiler chain in order to build the extensions of some of Kayobe's Python dependencies.
On CentOS/Rocky:
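A typical invocation, assuming the python3-devel and gcc packages are sufficient:

$ sudo dnf install -y python3-devel gcc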
On Ubuntu:
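A typical invocation, assuming the python3-dev, python3-venv and gcc packages are sufficient:

$ sudo apt install -y python3-dev python3-venv gcc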
If installing Kayobe from source, then Git is required for cloning and working with the source code
repository.
On CentOS/Rocky:
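A typical invocation:

$ sudo dnf install -y git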
On Ubuntu:
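A typical invocation:

$ sudo apt install -y git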
The directory structure for a Kayobe Ansible control host environment is configurable, but the following
is recommended, where <base_path> is the path to a top level directory:
<base_path>/
    src/
        kayobe/
        kayobe-config/
        kolla-ansible/
    venvs/
        kayobe/
        kolla-ansible/
This pattern ensures that all dependencies for a particular environment are installed under a single top
level path, and nothing is installed to a shared location. This allows for the option of using multiple
Kayobe environments on the same control host.
Creation of a kayobe-config source code repository will be covered in the configuration guide. The
Kolla Ansible source code checkout and Python virtual environment will be created automatically by
kayobe.
Not all of these directories will be used in all scenarios - if Kayobe or Kolla Ansible are installed from a
Python package then the source code repository is not required.
Installation from PyPI

This section describes how to install Kayobe from a Python package in a virtualenv. This is supported from Kayobe 5.0.0 onwards.
First, change to the top level directory, and make the directories for source code repositories and python
virtual environments:
$ cd <base_path>
$ mkdir -p src venvs
$ virtualenv <base_path>/venvs/kayobe
$ source <base_path>/venvs/kayobe/bin/activate
(kayobe) $ pip install -U pip
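# Install the Kayobe package from PyPI (this step is assumed from the
# surrounding text):
(kayobe) $ pip install kayobe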
(kayobe) $ deactivate
Installation from source

This section describes how to install Kayobe from source in a virtualenv.

$ cd <base_path>
$ mkdir -p src venvs
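# Obtain the Kayobe source (this clone step is assumed; use the branch for
# the release you are deploying):
$ cd <base_path>/src
$ git clone https://opendev.org/openstack/kayobe.git
$ cd <base_path>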
$ virtualenv <base_path>/venvs/kayobe
$ source <base_path>/venvs/kayobe/bin/activate
(kayobe) $ pip install -U pip
Install Kayobe and its dependencies using the source code checkout:
(kayobe) $ cd <base_path>/src/kayobe
(kayobe) $ pip install .
(kayobe) $ deactivate
From Kayobe 5.0.0 onwards it is possible to create an editable install of Kayobe. In an editable install,
any changes to the Kayobe source tree will immediately be visible when running any Kayobe commands.
To create an editable install, add the -e flag:
(kayobe) $ cd <base_path>/src/kayobe
(kayobe) $ pip install -e .
3.5 Usage
Note: Where a prompt starts with (kayobe) it is implied that the user has activated the Kayobe virtualenv. This can be done as follows:

$ source /path/to/venv/bin/activate

To deactivate the virtualenv:

(kayobe) $ deactivate
To see information on how to use the kayobe CLI and the commands it provides:
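A minimal example, relying on the standard help subcommand provided by cliff:

(kayobe) $ kayobe help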
As the kayobe CLI is based on the cliff package (as used by the openstack client), it supports tab auto-
completion of subcommands. This can be activated by generating and then sourcing the bash completion
script:
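A sketch, assuming cliff's complete subcommand:

(kayobe) $ kayobe complete > kayobe-complete
(kayobe) $ source kayobe-complete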
If Ansible vault has been used to encrypt Kayobe configuration files, it will be necessary to provide the kayobe command with access to the vault password. There are three options for doing this:
Prompt Use kayobe --ask-vault-pass to prompt for the password.
File Use kayobe --vault-password-file <file> to read the password from a (plain text) file.
Environment variable Export the environment variable KAYOBE_VAULT_PASSWORD to read the password from the environment.
Limiting Hosts
Sometimes it may be necessary to limit execution of kayobe or kolla-ansible plays to a subset of the hosts.
The --limit <SUBSET> argument allows the kayobe ansible hosts to be limited. The --kolla-limit
<SUBSET> argument allows the kolla-ansible hosts to be limited. These two options may be combined
in a single command. In both cases, the argument provided should be an Ansible host pattern, and will
ultimately be passed to ansible-playbook as a --limit argument.
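For example, a sketch limiting a service deployment to the controllers group on both the Kayobe and Kolla Ansible sides (group names assumed):

(kayobe) $ kayobe overcloud service deploy --limit controllers --kolla-limit controllers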
Tags
Ansible tags provide a useful mechanism for executing a subset of the plays or tasks in a playbook. The
--tags <TAGS> argument allows execution of kayobe ansible playbooks to be limited to matching plays
and tasks. The --kolla-tags <TAGS> argument allows execution of kolla-ansible ansible playbooks to
be limited to matching plays and tasks. The --skip-tags <TAGS> and --kolla-skip-tags <TAGS>
arguments allow for avoiding execution of matching plays and tasks.
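For example, a sketch restricting a service deployment to assumed Kolla Ansible tag names:

(kayobe) $ kayobe overcloud service deploy --kolla-tags nova --kolla-skip-tags common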
Ansible supports check and diff modes, which can be used to improve visibility into changes that would be made on target systems. The Kayobe CLI supports the --check argument and, since 11.0.0, the --diff argument. Note that these modes are not guaranteed to work in all cases, since some tasks depend on the results of earlier ones.
3.6 Configuration Guide

The configuration guide is split into two parts - scenarios and reference. The scenarios section provides information on configuring Kayobe for different scenarios. The reference section provides detailed information on many of Kayobe's configuration options.
Note: This documentation is intended as a walk through of the configuration required for a minimal
all-in-one overcloud host. If you are looking for an all-in-one environment for test or development, see
Automated Setup.
This scenario describes how to configure an all-in-one controller and compute node using Kayobe. This
is a very minimal setup, and not one that is recommended for a production environment, but is useful for
learning about how to use and configure Kayobe.
Prerequisites
Overview
An all-in-one environment consists of a single node that provides both control and compute services. There is no seed host, and no provisioning of the overcloud host. Customisation is minimal, in order to demonstrate the basic required configuration in Kayobe:
+---------------------------+
| Overcloud host            |
|                           |
|   +-------------+         |
|   |             |+        |
|   | Containers  ||        |
|   |             ||        |
|   +-------------+|        |
|    +-------------+        |
|                           |
+-------------+-------------+
              |
            NIC 1
The networking in particular is relatively simple. The main interface of the overcloud host, labelled NIC
1 in the above diagram, will be used only for connectivity to the host and Internet access. A single Kayobe
network called aio carries all control plane traffic, and is based on virtual networking that is local to the
host.
Later in this tutorial, we will create a dummy interface called dummy0, and plug it into a bridge called
br0:
+--------------+
|              |
|     OVS      |
|              |
+--------------+
       |
       |
+--------------+
|     br0      |
| 192.168.33.3 |
| 192.168.33.2 |
+--------------+
|    dummy0    |
+--------------+
The use of a bridge here allows Kayobe to connect this network to the Open vSwitch network, while maintaining an IP address on the bridge. Ordinarily, dummy0 would be a NIC providing connectivity to a physical network. We're using a dummy interface here to keep things simple by using a fixed IP subnet, 192.168.33.0/24. The bridge will be assigned a static IP address of 192.168.33.3, and this address will be used for various things, including Ansible SSH access and OpenStack control plane traffic. Kolla Ansible will manage a Virtual IP (VIP) address of 192.168.33.2 on br0, which will be used for OpenStack API endpoints.
Contents
Overcloud
Note: This documentation is intended as a walk through of the configuration required for a minimal
all-in-one overcloud host. If you are looking for an all-in-one environment for test or development, see
Automated Setup.
Preparation
Installation
Follow the instructions in Installation to set up an Ansible control host environment. Typically this would
be on a separate machine, but here we are keeping things as simple as possible.
Configuration
Clone the kayobe-config git repository, using the correct branch for the release you are deploying. In this
example we will use the master branch.
cd <base path>/src
git clone https://opendev.org/openstack/kayobe-config.git -b master
cd kayobe-config
This repository is bare, and needs to be populated. The repository includes an example inventory, which
should be removed:
git rm etc/kayobe/inventory/hosts.example
Create an Ansible inventory file and add the machine to it. In this example our machine is called controller0. Since this is an all-in-one environment, we add the controller to the compute group; normally, dedicated compute nodes would be used.
Listing 1: etc/kayobe/inventory/hosts
# This host acts as the configuration management Ansible control host. This must be
# localhost.
localhost ansible_connection=local
[controllers]
controller0
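# Add the controller to the compute group (the exact form is assumed), since
# this all-in-one host also provides compute services.
[compute:children]
controllers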
The inventory directory also contains group variables for network interface configuration. In this example we will assume that the machine has a single network interface called dummy0. We will create a bridge called br0 and plug dummy0 into it. Replace the network interface configuration for the controllers group with the following:
Listing 2: etc/kayobe/inventory/group_vars/controllers/network-interfaces
# Controller interface on all-in-one network.
aio_interface: br0
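# Plug dummy0 into the bridge. The variable name is an assumption, following
# the <network>_bridge_ports convention described in Network Configuration.
aio_bridge_ports:
  - dummy0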
In this scenario a single network called aio is used. We must therefore set the name of the default
controller networks to aio:
Listing 3: etc/kayobe/networks.yml
---
# Kayobe network configuration.
###############################################################################
# Name of the network used by the seed to manage the bare metal overcloud
# hosts via their out-of-band management controllers.
#oob_oc_net_name:
# Name of the network used by the seed to provision the bare metal overcloud
# hosts.
#provision_oc_net_name:
# Name of the network used by the overcloud hosts to manage the bare metal
# compute hosts via their out-of-band management controllers.
#oob_wl_net_name:
# Name of the network used by the overcloud hosts to provision the bare metal
# workload hosts.
# Name of the network used to expose the internal OpenStack API endpoints.
#internal_net_name:
internal_net_name: aio
# Name of the network used to expose the public OpenStack API endpoints.
#public_net_name:
public_net_name: aio
# Name of the network used by Neutron to carry tenant overlay network traffic.
#tunnel_net_name:
tunnel_net_name: aio
# Name of the network used to perform hardware introspection on the bare metal
# workload hosts.
#inspection_net_name:
# Name of the network used to perform cleaning on the bare metal workload
# hosts
#cleaning_net_name:
###############################################################################
# Network definitions.
Next the aio network must be defined. This is done using the various attributes described in Network
Configuration. These values should be adjusted to match the environment. The aio_vip_address
variable should be a free IP address in the same subnet for the virtual IP address of the OpenStack API.
Listing 4: etc/kayobe/networks.yml
<omitted for clarity>
###############################################################################
# Network definitions.
# All-in-one network.
aio_cidr: 192.168.33.0/24
aio_vip_address: 192.168.33.2
###############################################################################
Kayobe will automatically allocate IP addresses. In this case however, we want to ensure that the host
uses the same IP address it has currently, to avoid loss of connectivity. We can do this by populating the
network allocation file. Use the correct hostname and IP address for your environment.
Listing 5: etc/kayobe/network-allocation.yml
---
aio_ips:
controller0: 192.168.33.3
The default OS distribution in Kayobe is CentOS. If using an Ubuntu host, set the os_distribution variable in etc/kayobe/globals.yml to ubuntu, or to rocky if using Rocky Linux.
Listing 6: etc/kayobe/globals.yml
os_distribution: "ubuntu"
Kayobe uses a bootstrap user to create a stack user account. By default, this user is centos on CentOS, rocky on Rocky and ubuntu on Ubuntu, in line with the default user in the official cloud images. If you are using a different bootstrap user, set the controller_bootstrap_user variable in etc/kayobe/controllers.yml. For example, to set it to cloud-user (as seen in MAAS):
Listing 7: etc/kayobe/controllers.yml
controller_bootstrap_user: "cloud-user"
By default, on systems with SELinux disabled, Kayobe will put SELinux in permissive mode and reboot the system to apply the change. In a test or development environment this can be a bit disruptive, particularly when using ephemeral network configuration. To avoid rebooting the system after enabling SELinux, set selinux_do_reboot to false in etc/kayobe/globals.yml.
Listing 8: etc/kayobe/globals.yml
selinux_do_reboot: false
In a development environment, we may wish to tune some Kolla Ansible variables. Using QEMU as the
virtualisation type will be necessary if KVM is not available. Reducing the number of OpenStack service
workers helps to avoid using too much memory.
Listing 9: etc/kayobe/kolla/globals.yml
# Most development environments will use nested virtualisation, and we can't
# guarantee that nested KVM support is available. Use QEMU as a lowest common
# denominator.
nova_compute_virt_type: qemu
# Reduce the control plane's memory footprint by limiting the number of worker
# processes to one per-service.
openstack_service_workers: "1"
We can see the changes that have been made to the configuration.
cd <base path>/src/kayobe-config
git status
On branch master
Your branch is up to date with 'origin/master'.
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
deleted: etc/kayobe/inventory/hosts.example
Untracked files:
(use "git add <file>..." to include in what will be committed)
etc/kayobe/inventory/hosts
etc/kayobe/network-allocation.yml
The git diff command is also helpful. Once all configuration changes have been made, they should
be committed to the kayobe-config git repository.
cd <base path>/src/kayobe-config
git add etc/kayobe/inventory/hosts etc/kayobe/network-allocation.yml
Deployment
cd <base path>/venvs/kayobe
source bin/activate
cd <base path>/src/kayobe-config
source kayobe-env
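A typical command sequence for this scenario (assuming a fresh Ansible control host; consult the Deployment section for the authoritative steps) is:

(kayobe) $ kayobe control host bootstrap
(kayobe) $ kayobe overcloud host configure
(kayobe) $ kayobe overcloud service deploy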
After this command has run, some files in the kayobe-config repository will have changed. Kayobe performs static allocation of IP addresses, and tracks them in etc/kayobe/network-allocation.yml. Normally there may be changes to this file, but in this case we manually added the IP address of controller0 earlier. Kayobe uses tools provided by Kolla Ansible to generate passwords, and stores them in etc/kayobe/kolla/passwords.yml. It is important to track changes to this file.
cd <base path>/src/kayobe-config
git add etc/kayobe/kolla/passwords.yml
git commit -m "Add autogenerated passwords for Kolla Ansible"
Testing
The init-runonce script provided by Kolla Ansible (not for production) can be used to set up some resources for testing. This includes:
• some flavors
• a cirros image
• an external network
• a tenant network and router
• security group rules for ICMP, SSH, and TCP ports 8000 and 8080
• an SSH key
• increased quotas
For the external network, use the same subnet as before, with an allocation pool range containing free IP
addresses:
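A sketch of running the script; the EXT_NET_* variable names are assumptions based on init-runonce, and the addresses are chosen to match the subnet above:

(kayobe) $ pip install python-openstackclient
(kayobe) $ export EXT_NET_CIDR=192.168.33.0/24
(kayobe) $ export EXT_NET_GATEWAY=192.168.33.3
(kayobe) $ export EXT_NET_RANGE='start=192.168.33.4,end=192.168.33.254'
(kayobe) $ <kolla-ansible source>/tools/init-runonce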
Create a server instance, assign a floating IP address, and check that it is accessible.
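A sketch using resource names that init-runonce typically creates (the image, flavor, key, and network names are assumptions):

(kayobe) $ openstack server create --image cirros --flavor m1.tiny --key-name mykey --network demo-net demo1
(kayobe) $ openstack floating ip create public1
(kayobe) $ openstack server add floating ip demo1 <floating ip>
(kayobe) $ ping <floating ip>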
Kayobe Configuration
This section covers configuration of Kayobe. As an Ansible-based project, Kayobe is for the most part
configured using YAML files.
Configuration Location
Kayobe configuration is by default located in /etc/kayobe on the Ansible control host. This location
can be overridden to a different location to avoid touching the system configuration directory by setting
the environment variable KAYOBE_CONFIG_PATH. Similarly, kolla configuration on the Ansible control
host will by default be located in /etc/kolla and can be overridden via KOLLA_CONFIG_PATH.
The Kayobe configuration directory contains Ansible extra-vars files and the Ansible inventory. An
example of the directory structure is as follows:
extra-vars1.yml
extra-vars2.yml
inventory/
    group_vars/
        group1-vars
        group2-vars
    groups
    host_vars/
        host1-vars
        host2-vars
    hosts
Configuration Patterns
Ansible's variable precedence rules are fairly well documented and provide a mechanism we can use for providing site localisation and customisation of OpenStack in combination with some reasonable default values. For global configuration options, Kayobe typically uses the following patterns:
• Playbook group variables for the all group in <kayobe repo>/ansible/group_vars/all/*
set global defaults. These files should not be modified.
• Playbook group variables for other groups in <kayobe repo>/ansible/group_vars/<group>/* set defaults for some subsets of hosts. These files should not be modified.
• Extra-vars files in ${KAYOBE_CONFIG_PATH}/*.yml set custom values for global variables and
should be used to apply global site localisation and customisation. By default these variables are
commented out.
Additionally, variables can be set on a per-host basis using inventory host variables files in
${KAYOBE_CONFIG_PATH}/inventory/host_vars/*. It should be noted that variables set in extra-
vars files take precedence over per-host variables.
Configuring Kayobe
The kayobe-config git repository contains a Kayobe configuration directory structure and unmodified configuration files. This repository can be used as a mechanism for version controlling Kayobe configuration. As Kayobe is updated, the configuration should be merged to incorporate any upstream changes with local modifications.
Alternatively, the baseline Kayobe configuration may be copied from a checkout of the Kayobe repository
to the Kayobe configuration path:
$ mkdir -p ${KAYOBE_CONFIG_PATH:-/etc/kayobe/}
$ cp -r etc/kayobe/* ${KAYOBE_CONFIG_PATH:-/etc/kayobe/}
Once in place, each of the YAML and inventory files should be manually inspected and configured as
required.
Inventory
Configuration of Ansible
Ansible configuration is described in detail in the Ansible documentation. In addition to the standard
locations, Kayobe supports using an Ansible configuration file located in the Kayobe configuration at
${KAYOBE_CONFIG_PATH}/ansible.cfg. Note that if the ANSIBLE_CONFIG environment variable is
specified it takes precedence over this file.
Encryption of Secrets
Kayobe supports the use of Ansible vault to encrypt sensitive information in its configuration. The
ansible-vault tool should be used to manage individual files for which encryption is required. Any
of the configuration files may be encrypted. Since encryption can make working with Kayobe difficult,
it is recommended to follow best practice, adding a layer of indirection and using encryption only where
necessary.
Kayobe needs to know where to find any files not contained within its python package; this includes its
Ansible playbooks and any other files it needs for runtime operation. These files are known collectively
as data files.
Kayobe will attempt to detect the location of its data files automatically. However, if you have installed
kayobe to a non-standard location this auto-detection may fail. It is possible to manually override the
path using the environment variable: KAYOBE_DATA_FILES_PATH. This should be set to a path with the
following structure:
requirements.yml
ansible/
    roles/
        ...
    ...
Where ansible is the ansible directory from the source checkout and ... is an elided representation
of any files and subdirectories contained within that directory.
Ansible
SSH pipelining
SSH pipelining is disabled in Ansible by default, but is generally safe to enable, and provides a reasonable
performance improvement.
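A sketch of enabling it in ${KAYOBE_CONFIG_PATH}/ansible.cfg (a similar change would be needed for Kolla Ansible's Ansible configuration):

[ssh_connection]
pipelining = True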
Forks
By default Ansible executes tasks using a fairly conservative 5 process forks. This limits the parallelism
that allows Ansible to scale. Most Ansible control hosts will be able to handle far more forks than this.
You will need to experiment to find out the CPU, memory and IO limits of your machine.
For example, to increase the number of forks to 20:
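A sketch, in ${KAYOBE_CONFIG_PATH}/ansible.cfg:

[defaults]
forks = 20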
Fact caching
Note: Fact caching will not work correctly in Kayobe prior to the Ussuri release.
By default, Ansible gathers facts for each host at the beginning of every play, unless gather_facts is
set to false. With a large number of hosts this can result in a significant amount of time spent gathering
facts.
One way to improve this is through Ansible's support for fact caching. In order to make this work with Kayobe, it is necessary to change Ansible's gathering configuration option to smart. Additionally, it is necessary to use separate fact caches for Kayobe and Kolla Ansible due to some of the facts (e.g. ansible_facts.user_uid and ansible_facts.python) differing.
Example
In the following example we configure Kayobe and Kolla Ansible to use fact caching using the jsonfile
cache plugin.
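A sketch of ${KAYOBE_CONFIG_PATH}/ansible.cfg; the cache path is an arbitrary choice, and a separate Ansible configuration file with a different fact_caching_connection would be used for Kolla Ansible:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/kayobe-facts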
You may also wish to set the expiration timeout for the cache via [defaults] fact_caching_timeout.
Fact gathering
Fact filtering
Filtering of facts can be used to speed up Ansible. Environments with many network interfaces on the
network and compute nodes can experience very slow processing with Kayobe and Kolla Ansible. This
happens due to the processing of the large per-interface facts with each task. To avoid storing certain facts,
we can use the kayobe_ansible_setup_filter variable, which is used as the filter argument to
the setup module.
One case where this is particularly useful is to avoid collecting facts for virtual tap (beginning with t)
and bridge (beginning with q) interfaces created by Neutron. These facts are large map values which can
consume a lot of resources on the Ansible control host. Kayobe and Kolla Ansible typically do not need
to reference them, so they may be filtered. For example, to avoid collecting facts beginning with q or t:
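A sketch; the file location is an assumption, and any Kayobe extra-vars file would do:

kayobe_ansible_setup_filter: 'ansible_[!qt]*'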
Similarly, for Kolla Ansible (notice the similar but different file names):
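A sketch; the variable and file names are assumptions:

kolla_ansible_setup_filter: 'ansible_[!qt]*'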
This causes Ansible to collect but not store facts matching that pattern, which includes the virtual interface
facts. Currently we are not referencing other facts matching the pattern within Kolla Ansible. Note that
including the ansible prefix causes meta facts module_setup and gather_subset to be filtered, but
this seems to be the only way to get a good match on the interface facts.
The exact improvement will vary, but has been reported to be as large as 18x on systems with many
virtual interfaces.
OS Distribution
As of the Wallaby 10.0.0 release, Kayobe supports multiple Operating System (OS) distributions. See
the support matrix for a list of supported OS distributions. The same OS distribution should be used
throughout the system.
The os_distribution variable in etc/kayobe/globals.yml can be used to set the OS distribution to use. It may be set to centos, rocky or ubuntu, and defaults to centos.
The os_release variable in etc/kayobe/globals.yml can be used to set the release of the OS.
When os_distribution is set to centos it may be set to 8-stream, and this is its default value.
When os_distribution is set to ubuntu it may be set to jammy, and this is its default value. When
os_distribution is set to rocky it may be set to 8, and this is its default value.
These variables are used to set various defaults, including:
• Bootstrap users
• Overcloud host root disk image build configuration
• Seed VM root disk image
• Kolla base container image
Kayobe supports configuration of physical network devices. This feature is optional, and this section may
be skipped if network device configuration will be managed via other means.
Devices are added to the Ansible inventory, and configured using Ansible's networking modules. Configuration is applied via the kayobe physical network configure command. See Physical Network for details.
The following switch operating systems are currently supported:
• Arista EOS
• Cumulus Linux (via Network Command Line Utility (NCLU))
• Dell OS 6
• Dell OS 9
• Dell OS 10
• Dell PowerConnect
• Juniper Junos OS
• Mellanox MLNX OS
Network devices should be added to the Kayobe Ansible inventory, and should be members of the
switches group.
In some cases it may be useful to differentiate between different types of switches. For example, a mgmt network might carry out-of-band management traffic, and a ctl network might carry control plane traffic. A group could be created for each of these networks, with each group being a child of the switches group.
[mgmt-switches]
switch0

[ctl-switches]
switch1
Configuration is typically specific to each network device. It is therefore usually best to add a host_vars
file to the inventory for each device. Common configuration for network devices can be added in a
group_vars file for the switches group or one of its child groups.
The type of switch should be configured via the switch_type variable. See Device-specific Configuration Variables for details of the value to set for each device type.
ansible_host should be set to the management IP address used to access the device. ansible_user
should be set to the user used to access the device.
Global switch configuration is specified via the switch_config variable. It should be a list of configuration lines to apply.
Per-interface configuration is specified via the switch_interface_config variable. It should be an object mapping switch interface names to configuration objects. Each configuration object contains a description item and a config item. The config item should contain a list of per-interface configuration lines.
The switch_interface_config_enable_discovery and switch_interface_config_disable_discovery
variables take the same format as the switch_interface_config variable. They define interface
configuration to apply to enable or disable hardware discovery of bare metal compute nodes.
ansible_user: alice

switch_config:
  - global config line 1
  - global config line 2

switch_interface_config:
  interface-0:
    description: controller0
    config:
      - interface-0 config line 1
      - interface-0 config line 2
  interface-1:
    description: compute0
    config:
      - interface-1 config line 1
      - interface-1 config line 2
Network device configuration can become quite repetitive, so it can be helpful to define group variables
that can be referenced by multiple devices. For example:
switch_interface_config_controller:
  - controller interface config line 1
  - controller interface config line 2

switch_interface_config_compute:
  - compute interface config line 1
  - compute interface config line 2
ansible_user: alice

switch_interface_config:
  interface-0:
    description: controller0
    config: "{{ switch_interface_config_controller }}"
  interface-1:
    description: compute0
    config: "{{ switch_interface_config_compute }}"
Arista EOS
Configuration for these devices is applied using the arista-switch Ansible role in Kayobe. The role
configures Arista switches using the eos Ansible modules.
switch_type should be set to arista.
Provider
Cumulus Linux

Configuration for these devices is applied using the nclu Ansible module.
switch_type should be set to nclu.
SSH configuration
As with any non-switch host in the inventory, the nclu module relies on the default connection parameters
used by Ansible:
• ansible_host is the hostname or IP address. Optional.
• ansible_user is the SSH username.
Dell OS 6, 9 and 10

Configuration for these devices is applied using the dellos6_config, dellos9_config, and dellos10_config Ansible modules.
switch_type should be set to dellos6, dellos9, or dellos10.
Provider
Dell PowerConnect
Provider
Juniper Junos OS
Configuration for these devices is applied using the junos_config Ansible module.
switch_type should be set to junos.
switch_junos_config_format may be used to set the format of the configuration. The variable is
passed as the src_format argument to the junos_config module. The default value is text.
Provider
Mellanox MLNX OS
Configuration for these devices is applied using the stackhpc.mellanox-switch Ansible role. The
role uses the expect Ansible module to automate interaction with the switch CLI via SSH.
switch_type should be set to mellanox.
Provider
Network Configuration
Kayobe provides a flexible mechanism for configuring the networks in a system. Kayobe networks are assigned a name which is used as a prefix for variables that define the network's attributes. For example, to configure the cidr attribute of a network named arpanet, we would use a variable named arpanet_cidr.
fqdn Fully Qualified Domain Name (FQDN) used by API services on this network.
Note: Use of the fqdn attribute is deprecated. Instead use kolla_internal_fqdn and kolla_external_fqdn.
routes List of static IP routes. Each item should be a dict containing the item cidr, and optionally gateway, table and options. cidr is the CIDR representation of the route's destination. gateway is the IP address of the next hop. table is the name or ID of a routing table to which the route will be added. options is a list of option strings to add to the route.
rules List of IP routing rules.
On CentOS or Rocky, each item should be a string describing an iproute2 IP routing rule.
On Ubuntu, each item should be a dict containing optional items from, to, priority and table.
from is the source address prefix to match with optional prefix. to is the destination address prefix
to match with optional prefix. priority is the priority of the rule. table is the routing table ID.
physical_network Name of the physical network on which this network exists. This aligns with the
physical network concept in neutron.
libvirt_network_name A name to give to a Libvirt network representing this network on the seed
hypervisor.
no_ip Whether to allocate an IP address for this network. If set to true, an IP address will not be
allocated.
Configuring an IP Subnet
An IP subnet may be configured by setting the cidr attribute for a network to the CIDR representation
of the subnet.
To configure a network called example with the 10.0.0.0/24 IP subnet:
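A minimal sketch, in ${KAYOBE_CONFIG_PATH}/networks.yml:

example_cidr: 10.0.0.0/24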
Configuring an IP Gateway
An IP gateway may be configured by setting the gateway attribute for a network to the IP address of the
gateway.
To configure a network called example with a gateway at 10.0.0.1:
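A minimal sketch:

example_gateway: 10.0.0.1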
This gateway will be configured on all hosts to which the network is mapped. Note that configuring
multiple IP gateways on a single host will lead to unpredictable results.
A virtual IP (VIP) address may be configured for use by Kolla Ansible on the internal and external
networks, on which the API services will be exposed. The variable will be passed through to the
kolla_internal_vip_address or kolla_external_vip_address Kolla Ansible variable.
To configure a network called example with a VIP at 10.0.0.2:
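A minimal sketch:

example_vip_address: 10.0.0.2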
A Fully Qualified Domain Name (FQDN) may be configured for use by Kolla Ansible on the internal
and external networks, on which the API services will be exposed. The variable will be passed through
to the kolla_internal_fqdn or kolla_external_fqdn Kolla Ansible variable.
To configure a network called example with an FQDN at api.example.com:
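A minimal sketch:

example_fqdn: api.example.com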
34 Chapter 3. Contents
kayobe Documentation, Release 12.1.0.dev48
Static IP routes may be configured by setting the routes attribute for a network to a list of routes.
To configure a network called example with a single IP route to the 10.1.0.0/24 subnet via 10.0.0.1:
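A minimal sketch:

example_routes:
  - cidr: 10.1.0.0/24
    gateway: 10.0.0.1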
These routes will be configured on all hosts to which the network is mapped.
If necessary, custom options may be added to the route:
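A sketch; the option strings shown are illustrative iproute2 options:

example_routes:
  - cidr: 10.1.0.0/24
    gateway: 10.0.0.1
    options:
      - onlink
      - metric 400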
Configuring a VLAN
A VLAN network may be configured by setting the vlan attribute for a network to the ID of the VLAN.
To configure a network called example with VLAN ID 123:
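A minimal sketch:

example_vlan: 123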
IP Address Allocation
IP addresses are allocated automatically by Kayobe from the allocation pool defined by
allocation_pool_start and allocation_pool_end. If these variables are undefined, the
entire network is used, except for network and broadcast addresses. IP addresses are only allocated if
the network cidr is set and DHCP is not used (see bootproto in Per-host Network Configuration).
The allocated addresses are stored in ${KAYOBE_CONFIG_PATH}/network-allocation.yml using
the global per-network attribute ips which maps Ansible inventory hostnames to allocated IPs.
If static IP address allocation is required, the IP allocation file network-allocation.yml may be man-
ually populated with the required addresses.
To configure a network called example with the 10.0.0.0/24 IP subnet and an allocation pool spanning
from 10.0.0.4 to 10.0.0.254:
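A minimal sketch:

example_cidr: 10.0.0.0/24
example_allocation_pool_start: 10.0.0.4
example_allocation_pool_end: 10.0.0.254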
Note: This pool should not overlap with an inspection or neutron allocation pool on the same network.
To configure a network called example with statically allocated IP addresses for hosts host1 and host2:
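A minimal sketch of ${KAYOBE_CONFIG_PATH}/network-allocation.yml; the addresses are arbitrary examples within the subnet:

---
example_ips:
  host1: 10.0.0.101
  host2: 10.0.0.102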
Policy-based routing can be useful in complex networking environments, particularly where asymmetric
routes exist, and strict reverse path filtering is enabled.
Custom IP routing tables may be configured by setting the global variable network_route_tables in
${KAYOBE_CONFIG_PATH}/networks.yml to a list of route tables. These route tables will be added to
/etc/iproute2/rt_tables.
To configure a routing table called exampleroutetable with ID 1:
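A sketch; the item keys are assumed to be name and id:

network_route_tables:
  - name: exampleroutetable
    id: 1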
36 Chapter 3. Contents
kayobe Documentation, Release 12.1.0.dev48
To configure route tables on specific hosts, use a host or group variables file.
IP routing policy rules may be configured by setting the rules attribute for a network to a list of rules.
The format of each rule currently differs between CentOS/Rocky and Ubuntu.
CentOS/Rocky
The format of a rule is the string which would be appended to ip rule <add|del> to create or delete
the rule.
To configure a network called example with an IP routing policy rule to handle traffic from the subnet
10.1.0.0/24 using the routing table exampleroutetable:
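A minimal sketch:

example_rules:
  - from 10.1.0.0/24 table exampleroutetable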
These rules will be configured on all hosts to which the network is mapped.
Ubuntu
The format of a rule is a dictionary with optional items from, to, priority, and table.
To configure a network called example with an IP routing policy rule to handle traffic from the subnet
10.1.0.0/24 using the routing table exampleroutetable:
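A minimal sketch (on Ubuntu the table may need to be referenced by its ID, as noted above):

example_rules:
  - from: 10.1.0.0/24
    table: exampleroutetable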
These rules will be configured on all hosts to which the network is mapped.
A route may be added to a specific routing table by adding the name or ID of the table to a table attribute
of the route:
To configure a network called example with a default route and a connected (local subnet) route to the
subnet 10.1.0.0/24 on the table exampleroutetable:
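A sketch; the gateway address is an assumption:

example_routes:
  - cidr: 0.0.0.0/0
    gateway: 10.0.0.1
    table: exampleroutetable
  - cidr: 10.1.0.0/24
    table: exampleroutetable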
Some network attributes are specific to a host's role in the system, and these are stored in ${KAYOBE_CONFIG_PATH}/inventory/group_vars/<group>/network-interfaces. The following attributes are supported:
interface The name of the network interface attached to the network.
bootproto Boot protocol for the interface. Valid values are static and dhcp. The default is static.
When set to dhcp, an external DHCP server must be provided.
defroute Whether to set the interface as the default route. This attribute can be used to disable configuration of the default gateway by a specific interface. This is particularly useful to ignore a gateway address provided via DHCP. Should be set to a boolean value. The default is unset. This attribute is only supported on distributions of the Red Hat family.
bridge_ports For bridge interfaces, a list of names of network interfaces to add to the bridge.
bond_mode For bond interfaces, the bond's mode, e.g. 802.3ad.
bond_ad_select For bond interfaces, the 802.3ad aggregation selection logic to use. Valid values are
stable (default selection logic if not configured), bandwidth or count.
bond_slaves For bond interfaces, a list of names of network interfaces to act as slaves for the bond.
bond_miimon For bond interfaces, the time in milliseconds between MII link monitoring.
bond_updelay For bond interfaces, the time in milliseconds to wait before declaring an interface up
(should be multiple of bond_miimon).
bond_downdelay For bond interfaces, the time in milliseconds to wait before declaring an interface
down (should be multiple of bond_miimon).
bond_xmit_hash_policy For bond interfaces, the xmit_hash_policy to use for the bond.
bond_lacp_rate For bond interfaces, the lacp_rate to use for the bond.
ethtool_opts Physical network interface options to apply with ethtool. When used on bond and bridge interfaces, settings apply to underlying interfaces. This should be a string of arguments passed to the ethtool utility, for example "-G ${DEVICE} rx 8192 tx 8192".
zone
IP Addresses
An interface will be assigned an IP address if the associated network has a cidr attribute. The IP address
will be assigned from the range defined by the allocation_pool_start and allocation_pool_end
attributes, if one has not been statically assigned in network-allocation.yml.
An Ethernet interface may be configured by setting the interface attribute for a network to the name
of the Ethernet interface.
To configure a network called example with an Ethernet interface on eth0:
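A minimal sketch, typically placed in a group_vars network-interfaces file:

example_interface: eth0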
A Linux bridge interface may be configured by setting the interface attribute of a network to the name
of the bridge interface, and the bridge_ports attribute to a list of interfaces which will be added as
member ports on the bridge.
To configure a network called example with a bridge interface on breth1, and a single port eth1:
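A minimal sketch:

example_interface: breth1
example_bridge_ports:
  - eth1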
Bridge member ports may be Ethernet interfaces, bond interfaces, or VLAN interfaces. In the case of
bond interfaces, the bond must be configured separately in addition to the bridge, as a different named
network. In the case of VLAN interfaces, the underlying Ethernet interface must be configured separately
in addition to the bridge, as a different named network.
A bonded interface may be configured by setting the interface attribute of a network to the name of the bond's master interface, and the bond_slaves attribute to a list of interfaces which will be added as slaves to the master.
To configure a network called example with a bond with master interface bond0 and two slaves eth0
and eth1:
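A minimal sketch:

example_interface: bond0
example_bond_slaves:
  - eth0
  - eth1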
Optionally, the bond mode and MII monitoring interval may also be configured:
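For example (the values are assumptions):

example_bond_mode: 802.3ad
example_bond_miimon: 100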
Bond slaves may be Ethernet interfaces or VLAN interfaces. In the case of VLAN interfaces, the underlying Ethernet interface must be configured separately in addition to the bond, as a different named network.
A VLAN interface may be configured by setting the interface attribute of a network to the name of
the VLAN interface. The interface name must be of the form <parent interface>.<VLAN ID>.
To configure a network called example with a VLAN interface with a parent interface of eth2 for VLAN
123:
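A minimal sketch:

example_interface: eth2.123
example_vlan: 123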
Ethernet interfaces, bridges, and bond master interfaces may all be parents to a VLAN interface.
Adding a VLAN interface to a bridge directly will allow tagged traffic for that VLAN to be forwarded by
the bridge, whereas adding a VLAN interface to an Ethernet or bond interface that is a bridge member
port will prevent tagged traffic for that VLAN being forwarded by the bridge.
For example, if you are bridging eth1 to breth1 and want to access VLAN 1234 as a tagged VLAN
from the host, while still allowing Neutron to access traffic for that VLAN via Open vSwitch, your setup
should look like this:
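A sketch under the conventions above (the network names example and example_vlan_net are assumptions), showing first the VLAN interface on the bridge, and second the VLAN interface on the bridge member port:

# First: VLAN interface on the bridge - the host can use VLAN 1234 while the
# bridge still forwards tagged traffic for it to Open vSwitch.
example_interface: breth1
example_bridge_ports:
  - eth1
example_vlan_net_interface: breth1.1234
example_vlan_net_vlan: 1234

# Second: VLAN interface on the bridge member port - tagged traffic for VLAN
# 1234 is not forwarded by the bridge.
example_vlan_net_interface: eth1.1234
example_vlan_net_vlan: 1234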
This second configuration may be desirable to prevent specific traffic, e.g. of the internal API network,
from reaching Neutron.
Kayobe supports configuration of hosts' DNS resolver via resolv.conf. DNS configuration should be added to dns.yml. For example:
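A sketch of ${KAYOBE_CONFIG_PATH}/dns.yml; the variable names are assumptions:

---
resolv_nameservers:
  - 8.8.8.8
  - 8.8.4.4
resolv_domain: example.com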
In order to provide flexibility in the system's network topology, Kayobe maps the named networks to logical network roles. A single named network may perform multiple roles, or even none at all. The available roles are:
Overcloud admin network (admin_oc_net_name) Name of the network used to access the overcloud for admin purposes, e.g. for remote SSH access.
Overcloud out-of-band network (oob_oc_net_name) Name of the network used by the seed to access
the out-of-band management controllers of the bare metal overcloud hosts.
Overcloud provisioning network (provision_oc_net_name) Name of the network used by the seed
to provision the bare metal overcloud hosts.
Workload out-of-band network (oob_wl_net_name) Name of the network used by the overcloud
hosts to access the out-of-band management controllers of the bare metal workload hosts.
Workload provisioning network (provision_wl_net_name) Name of the network used by the overcloud hosts to provision the bare metal workload hosts.
Workload cleaning network (cleaning_net_name) Name of the network used by the overcloud hosts
to clean the baremetal workload hosts.
Internal network (internal_net_name) Name of the network used to expose the internal OpenStack
API endpoints.
Public network (public_net_name) Name of the network used to expose the public OpenStack API
endpoints.
Tunnel network (tunnel_net_name) Name of the network used by Neutron to carry tenant overlay
network traffic.
External networks (external_net_names, deprecated: external_net_name) List of names of networks used to provide external network access via Neutron. If external_net_name is defined, external_net_names defaults to a list containing only that network.
Storage network (storage_net_name) Name of the network used to carry storage data traffic.
Storage management network (storage_mgmt_net_name) Name of the network used to carry storage management traffic.
Swift storage network (swift_storage_net_name) Name of the network used to carry Swift storage
data traffic. Defaults to the storage network (storage_net_name).
Swift storage replication network (swift_storage_replication_net_name) Name of the network used to carry Swift storage replication traffic. Defaults to the storage management network (storage_mgmt_net_name).
Workload inspection network (inspection_net_name) Name of the network used to perform hardware introspection on the bare metal workload hosts.
These roles are configured in ${KAYOBE_CONFIG_PATH}/networks.yml.
To configure network roles in a system with two networks, example1 and example2:
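A sketch of ${KAYOBE_CONFIG_PATH}/networks.yml; which roles map to which network is illustrative only:

oob_oc_net_name: example1
provision_oc_net_name: example1
internal_net_name: example2
public_net_name: example2
external_net_name: example2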
The admin network is intended to be used for remote access to the overcloud hosts. Kayobe will use the address assigned to the host on this network as the ansible_host when executing playbooks. It is therefore necessary to configure this network.
By default Kayobe will use the overcloud provisioning network as the admin network. It is, however,
possible to configure a separate network. To do so, you should override admin_oc_net_name in your
networking configuration.
If a separate network is configured, the following requirements should be taken into consideration:
• The admin network must be configured to use the same physical network interface as the provisioning network. This is because the PXE MAC address is used to look up the interface for the cloud-init network configuration that occurs during bifrost provisioning of the overcloud.
If using a seed to inspect the bare metal overcloud hosts, it is necessary to define a DHCP allocation pool for the seed's ironic inspector DHCP server using the inspection_allocation_pool_start and inspection_allocation_pool_end attributes of the overcloud provisioning network.
Note: This example assumes that the example network is mapped to provision_oc_net_name.
example_inspection_allocation_pool_start: 10.0.0.128
example_inspection_allocation_pool_end: 10.0.0.254
Note: This pool should not overlap with a kayobe allocation pool on the same network.
A separate cleaning network, which is used by the overcloud to clean bare metal workload (compute) hosts, may optionally be specified. Otherwise, the workload provisioning network is used. It is necessary to define an IP allocation pool for neutron using the neutron_allocation_pool_start and neutron_allocation_pool_end attributes of the cleaning network. This controls the IP addresses that are assigned to workload hosts during cleaning.
Note: This example assumes that the example network is mapped to cleaning_net_name.
example_neutron_allocation_pool_start: 10.0.1.128
example_neutron_allocation_pool_end: 10.0.1.195
Note: This pool should not overlap with a kayobe or inspection allocation pool on the same network.
If using the overcloud to provision bare metal workload (compute) hosts, it is necessary to define an IP allocation pool for the overcloud's neutron provisioning network using the neutron_allocation_pool_start and neutron_allocation_pool_end attributes of the workload provisioning network.
Note: This example assumes that the example network is mapped to provision_wl_net_name.
example_neutron_allocation_pool_start: 10.0.1.128
example_neutron_allocation_pool_end: 10.0.1.195
Note: This pool should not overlap with a kayobe or inspection allocation pool on the same network.
If using the overcloud to inspect bare metal workload (compute) hosts, it is necessary to define a DHCP allocation pool for the overcloud's ironic inspector DHCP server using the inspection_allocation_pool_start and inspection_allocation_pool_end attributes of the workload provisioning network.
Note: This example assumes that the example network is mapped to provision_wl_net_name.
example_inspection_allocation_pool_start: 10.0.1.196
example_inspection_allocation_pool_end: 10.0.1.254
Note: This pool should not overlap with a kayobe or neutron allocation pool on the same network.
Neutron Networking
Note: This assumes the use of the neutron openvswitch ML2 mechanism driver for control plane
networking.
Certain modes of operation of neutron require layer 2 access to physical networks in the system. Hosts in
the network group (by default, this is the same as the controllers group) run the neutron networking
services (Open vSwitch agent, DHCP agent, L3 agent, metadata agent, etc.).
The kayobe network configuration must ensure that the neutron Open vSwitch bridges on the network
hosts have access to the external network. If bare metal compute nodes are in use, then they must also
have access to the workload provisioning network. This can be done by ensuring that the external and
workload provisioning network interfaces are bridges. Kayobe will ensure connectivity between these
Linux bridges and the neutron Open vSwitch bridges via a virtual Ethernet pair. See Configuring Bridge
Interfaces.
Networks are mapped to hosts using the variable network_interfaces. Kayobe's playbook group variables define some sensible defaults for this variable for hosts in the top level standard groups. These defaults are set using the network roles typically required by the group.
Seed
Seed Hypervisor
By default, the seed hypervisor is attached to the same networks as the seed.
This list may be extended by setting seed_hypervisor_extra_network_interfaces to a list of
names of additional networks to attach. Alternatively, the list may be completely overridden by setting
seed_hypervisor_network_interfaces. These variables are found in ${KAYOBE_CONFIG_PATH}/
seed-hypervisor.yml.
Infra VMs
Controllers
By default, the controllers are attached to networks including:
• internal network
• storage network
In addition, if the controllers are also in the network group, they are attached to the following networks:
• public network
• external network
• tunnel network
This list may be extended by setting controller_extra_network_interfaces to a list of names
of additional networks to attach. Alternatively, the list may be completely overridden by set-
ting controller_network_interfaces. These variables are found in ${KAYOBE_CONFIG_PATH}/
controllers.yml.
Network Hosts
By default, controllers provide Neutron network services and load balancing. If separate network hosts
are used (see Example 1: Adding Network Hosts), they are attached to the following networks:
• overcloud admin network
• internal network
• storage network
• public network
• external network
• tunnel network
This list may be extended by setting controller_network_host_extra_network_interfaces to
a list of names of additional networks to attach. Alternatively, the list may be completely overrid-
den by setting controller_network_host_network_interfaces. These variables are found in
${KAYOBE_CONFIG_PATH}/controllers.yml.
Monitoring Hosts
By default, the monitoring hosts are attached to the same networks as the controllers when they are in
the controllers group. If the monitoring hosts are not in the controllers group, they are attached
to the following networks by default:
• overcloud admin network
• internal network
• public network
This list may be extended by setting monitoring_extra_network_interfaces to a list of names
of additional networks to attach. Alternatively, the list may be completely overridden by set-
ting monitoring_network_interfaces. These variables are found in ${KAYOBE_CONFIG_PATH}/
monitoring.yml.
Storage Hosts
Other Hosts
If additional hosts are managed by kayobe, the networks to which these hosts are attached may be defined
in a host or group variables file. See Control Plane Service Placement for further details.
Complete Example
The following example combines the complete network configuration into a single system configuration.
In our example cloud we have three networks: management, cloud and external:
[Diagram omitted: hosts attached to the management, cloud and external networks.]
The management network is used to access the servers' BMCs and by the seed to inspect and provision the cloud hosts. The cloud network carries all internal control plane and storage traffic, and is used by the control plane to provision the bare metal compute hosts. Finally, the external network links the cloud to the outside world.
We could describe such a network as follows:
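A sketch of such a definition in ${KAYOBE_CONFIG_PATH}/networks.yml might look like the following; the CIDRs and allocation pools are illustrative, and the mapping of network roles (such as admin_oc_net_name) onto these names is omitted:
management_cidr: 10.0.0.0/24
management_allocation_pool_start: 10.0.0.100
management_allocation_pool_end: 10.0.0.200
cloud_cidr: 10.1.0.0/24
cloud_allocation_pool_start: 10.1.0.100
cloud_allocation_pool_end: 10.1.0.200
external_cidr: 192.168.1.0/24
external_allocation_pool_start: 192.168.1.100
external_allocation_pool_end: 192.168.1.200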
We can map these networks to network interfaces on the seed and controller hosts:
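One possible mapping, using inventory group variables with illustrative interface names, is sketched below:
# inventory/group_vars/seed/network-interfaces
management_interface: eth0
# inventory/group_vars/controllers/network-interfaces
management_interface: eth0
cloud_interface: br-cloud
cloud_bridge_ports:
  - eth1
external_interface: eth2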
We have defined a bridge for the cloud network on the controllers as this will allow it to be plugged into
a neutron Open vSwitch bridge.
Kayobe will allocate IP addresses for the hosts that it manages:
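Allocations are written to ${KAYOBE_CONFIG_PATH}/network-allocation.yml; a sketch of the resulting file, with illustrative host names and addresses, might be:
management_ips:
  seed: 10.0.0.101
  controller0: 10.0.0.102
cloud_ips:
  controller0: 10.1.0.101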
Note that although this file does not need to be created manually, doing so allows for a predictable IP
address mapping which may be desirable in some cases.
This section describes configuration for routed control plane networks. This is an advanced concept and
generally applies only to larger deployments that exceed the reasonable size of a broadcast domain.
Concept
Kayobe currently supports the definition of various different networks - public, internal, tunnel, etc.
These typically map to a VLAN or flat network, with an associated IP subnet. When a cloud exceeds the
reasonable size of a single VLAN/subnet, or is physically distributed, this approach no longer works.
One way to resolve this is to have multiple subnets that map to a single logical network, and provide routing between them. This is a similar concept to Neutron's routed provider networks, but for the control plane networks.
Limitations
There are currently a few limitations to using routed control plane networks. Only the following networks
have been tested:
• admin_oc
• internal
• tunnel
• storage
• storage_mgmt
Additionally, only compute nodes and storage nodes have been tested with routed control plane networks
- controllers were always placed on the same set of networks during testing.
Bare metal provisioning (of the overcloud or baremetal compute) has not been tested with routed control
plane networks, and would not be expected to work without taking additional steps.
Configuration
The approach to configuring Kayobe for routed control plane networks is as follows:
• create groups in the inventory for the different sets of networks
• place hosts in the appropriate groups
• move vip_address and fqdn network attributes to global variables
• move global network name configuration to group variables
• add new networks to configuration
• add network interface group variables
Example
In this example, a second set of networks is added alongside the existing internal_0, tunnel_0, storage_0 and storage_mgmt_0 networks:
• internal_1
– 10.1.1.0/24
– VLAN 111
• tunnel_1
– 10.1.2.0/24
– VLAN 112
• storage_1
– 10.1.3.0/24
– VLAN 113
• storage_mgmt_1
– 10.1.4.0/24
– VLAN 114
The network must provide routes between the following networks:
• internal_0 and internal_1
• tunnel_0 and tunnel_1
• storage_0 and storage_1
• storage_mgmt_0 and storage_mgmt_1
Now we can connect the new hosts to these networks:
• compute[128:255]: internal_1, tunnel_1, storage_1
• storage[64:127]: internal_1, storage_1, storage_mgmt_1
Inventory
[controllers]
controller[0:2]
[compute]
compute[0:255]
[storage]
storage[0:127]
[network-0]
controller[0:2]
[compute-network-0]
compute[0:127]
[storage-network-0]
storage[0:63]
[network-0:children]
compute-network-0
storage-network-0
[network-1]
[compute-network-1]
compute[128:255]
[storage-network-1]
storage[64:127]
[network-1:children]
compute-network-1
storage-network-1
Remove all variables defining vip_address or fqdn network attributes from networks.yml, and move
the configuration to the API address variables in kolla.yml.
Network names
To move global network name configuration to group variables, the following variables should be com-
mented out in networks.yml:
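The exact set depends on which networks are split across subnets; a sketch covering the networks used in this example might be:
#internal_net_name: internal
#tunnel_net_name: tunnel
#storage_net_name: storage
#storage_mgmt_net_name: storage_mgmt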
Networks
Now, ensure both sets of networks are defined in networks.yml. Static routes are added between the pairs of networks here, although these will depend on your routing configuration. Other network attributes may be necessary; only cidr, vlan and routes are included here for brevity:
internal_0_cidr: 10.0.1.0/24
internal_0_vlan: 101
internal_0_routes:
  - cidr: "{{ internal_1_cidr }}"
    gateway: 10.0.1.1

internal_1_cidr: 10.1.1.0/24
internal_1_vlan: 111
internal_1_routes:
  - cidr: "{{ internal_0_cidr }}"
    gateway: 10.1.1.1

tunnel_0_cidr: 10.0.2.0/24
tunnel_0_vlan: 102
tunnel_0_routes:
  - cidr: "{{ tunnel_1_cidr }}"
    gateway: 10.0.2.1

tunnel_1_cidr: 10.1.2.0/24
tunnel_1_vlan: 112
tunnel_1_routes:
  - cidr: "{{ tunnel_0_cidr }}"
    gateway: 10.1.2.1

storage_0_cidr: 10.0.3.0/24
storage_0_vlan: 103
storage_0_routes:
  - cidr: "{{ storage_1_cidr }}"
    gateway: 10.0.3.1

storage_1_cidr: 10.1.3.0/24
storage_1_vlan: 113
storage_1_routes:
  - cidr: "{{ storage_0_cidr }}"
    gateway: 10.1.3.1

storage_mgmt_0_cidr: 10.0.4.0/24
storage_mgmt_0_vlan: 104
storage_mgmt_0_routes:
  - cidr: "{{ storage_mgmt_1_cidr }}"
    gateway: 10.0.4.1

storage_mgmt_1_cidr: 10.1.4.0/24
storage_mgmt_1_vlan: 114
storage_mgmt_1_routes:
  - cidr: "{{ storage_mgmt_0_cidr }}"
    gateway: 10.1.4.1
Network interfaces
Since there are now differently named networks, the network interface variables are named differently.
This means that we must provide a group variables file for each set of networks and each type of host.
For example:
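A sketch of one such file, for compute hosts on the second set of networks (the file path and interface names are illustrative), might be:
# inventory/group_vars/compute-network-1/network-interfaces
internal_net_name: internal_1
tunnel_net_name: tunnel_1
storage_net_name: storage_1
internal_1_interface: eth0.111
tunnel_1_interface: eth0.112
storage_1_interface: eth0.113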
Alternative approach
There is an alternative approach which has not been tested, but may be of interest. Rather than having
differently named networks (e.g. internal_0 and internal_1), it should be possible to use the same
name everywhere (e.g. internal), but define the network attributes in group variables. This approach
may be a little less verbose, and allows the same group variables file to set the network interfaces as
normal (e.g. via internal_interface).
Host Configuration
This section covers configuration of hosts. It does not cover configuration or deployment of containers.
Hosts that are configured by Kayobe include:
• Seed hypervisor (kayobe seed hypervisor host configure)
• Seed (kayobe seed host configure)
• Infra VMs (kayobe infra vm host configure)
• Overcloud (kayobe overcloud host configure)
Unless otherwise stated, all host configuration described here is applied to each of these types of host.
See also:
Ansible tags for limiting the scope of Kayobe commands are included under the relevant sections of this
page (for more information see Tags).
Configuration Location
Some host configuration options are set via global variables, and others have a variable for each type of
host. The latter variables are included in the following files under ${KAYOBE_CONFIG_PATH}:
• seed-hypervisor.yml
• seed.yml
• compute.yml
• controller.yml
• infra-vms.yml
• monitoring.yml
• storage.yml
Note that any variable may be set on a per-host or per-group basis, by using inventory host or group
variables - these delineations are for convenience.
Paths
Several directories are used by Kayobe on the remote hosts. There is a hierarchy of variables in
${KAYOBE_CONFIG_PATH}/globals.yml that can be used to control where these are located.
• base_path (default /opt/kayobe/) sets the default base path for various directories.
• config_path (default {{ base_path }}/etc) is a path in which to store configuration files.
• image_cache_path (default {{ base_path }}/images) is a path in which to cache down-
loaded or built images.
• source_checkout_path (default {{ base_path }}/src) is a path into which to store clones
of source code repositories.
• virtualenv_path (default {{ base_path }}/venvs) is a path in which to create Python virtual
environments.
tags:
ssh-known-host
While strictly this configuration is applied to the Ansible control host (localhost), it is applied during the host configure commands. The ansible_host of each host is added as an SSH known host. This is typically the host's IP address on the admin network (admin_oc_net_name), as defined in ${KAYOBE_CONFIG_PATH}/network-allocation.yml (see IP Address Allocation).
tags:
kayobe-ansible-user
Kayobe uses a user account defined by the kayobe_ansible_user variable (in
${KAYOBE_CONFIG_PATH}/globals.yml) for remote SSH access. By default, this is stack.
Typically, the image used to provision these hosts will not include this user account, so Kayobe performs a bootstrapping step to create it, connecting as a different user. In cloud images, there is often a user named after the OS distribution, e.g. centos, rocky or ubuntu. The bootstrap user defaults to the value of the os_distribution variable, but may be set via the following variables:
• seed_hypervisor_bootstrap_user
• seed_bootstrap_user
• infra_vm_bootstrap_user
• compute_bootstrap_user
• controller_bootstrap_user
• monitoring_bootstrap_user
• storage_bootstrap_user
For example, to set the bootstrap user for controllers to example-user:
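A minimal sketch, in ${KAYOBE_CONFIG_PATH}/controllers.yml:
controller_bootstrap_user: example-user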
tags:
pip
Kayobe supports configuration of a PyPI mirror and/or proxy, via variables in ${KAYOBE_CONFIG_PATH}/pip.yml. Mirror functionality is enabled by setting the pip_local_mirror variable to true, and proxy functionality is enabled by setting the pip_proxy variable to a proxy URL.
Kayobe will generate configuration for:
• pip to use the mirror and proxy
• easy_install to use the mirror
for the list of users defined by pip_applicable_users (default kayobe_ansible_user and root),
in addition to the user used for Kolla Ansible (kolla_ansible_user). The mirror URL is configured
via pip_index_url, and pip_trusted_hosts is a list of trusted hosts, for which SSL verification will
be disabled.
For example, to configure use of the test PyPI mirror at https://test.pypi.org/simple/:
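A sketch of the corresponding settings in ${KAYOBE_CONFIG_PATH}/pip.yml:
pip_local_mirror: true
pip_index_url: https://test.pypi.org/simple/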
tags:
kayobe-target-venv
By default, Ansible executes modules remotely using the system python interpreter, even if the Ansible
control process is executed from within a virtual environment (unless the local connection plugin is
used). This is not ideal if there are python dependencies that must be installed with isolation from the
system python packages. Ansible can be configured to use a virtualenv by setting the host variable
ansible_python_interpreter to a path to a python interpreter in an existing virtual environment.
If kayobe detects that ansible_python_interpreter is set and references a virtual environment, it
will create the virtual environment if it does not exist. Typically this variable should be set via a group
variable in the inventory for hosts in the seed, seed-hypervisor, and/or overcloud groups.
The default Kayobe configuration in the kayobe-config repository sets
ansible_python_interpreter to {{ virtualenv_path }}/kayobe/bin/python for the
seed, seed-hypervisor, and overcloud groups.
Disk Wiping
tags:
wipe-disks
Hosts that have previously been used may have stale data on their disks that could affect the deployment of the cloud. Wiping the disks is not a configuration option, since it should only be performed once, to avoid losing useful data. It is triggered by passing the --wipe-disks argument to the host configure commands.
tags:
users
Linux user accounts and groups can be configured using the users_default variable in
${KAYOBE_CONFIG_PATH}/users.yml. The format of the list is that used by the users variable of
the singleplatform-eng.users role. The following variables can be used to set the users for specific types
of hosts:
• seed_hypervisor_users
• seed_users
• infra_vm_users
• compute_users
• controller_users
• monitoring_users
• storage_users
In the following example, a single user named bob is created. A password hash has been generated via
mkpasswd --method=sha-512. The user is added to the wheel group, and an SSH key is authorised.
The SSH public key should be added to the Kayobe configuration.
controller_users:
  - username: bob
    password: "<SHA-512 hash generated by mkpasswd>"
    groups:
      - wheel
    append: True
    ssh_key:
      - "{{ lookup('file', kayobe_config_path ~ '/ssh-keys/id_rsa_bob.pub') }}"
tags:
dnf
On CentOS and Rocky, Kayobe supports configuration of package repositories via DNF, using variables in ${KAYOBE_CONFIG_PATH}/dnf.yml.
Configuration of dnf.conf
Global configuration of DNF is stored in /etc/dnf/dnf.conf, and options can be set via the
dnf_config variable. Options are added to the [main] section of the file. For example, to config-
ure DNF to use a proxy server:
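A sketch, assuming a proxy at http://proxy.example.com:3128:
dnf_config:
  proxy: http://proxy.example.com:3128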
CentOS/Rocky and EPEL mirrors can be enabled by setting dnf_use_local_mirror to true. CentOS
repository mirrors are configured via the following variables:
• dnf_centos_mirror_host (default mirror.centos.org) is the mirror hostname.
• dnf_centos_mirror_directory (default centos) is a directory on the mirror in which repos-
itories may be accessed.
Rocky repository mirrors are configured via the equivalent dnf_rocky_mirror_host and dnf_rocky_mirror_directory variables.
It is also possible to configure a list of custom DNF repositories via the dnf_custom_repos variable.
The format is a dict/map, with repository names mapping to a dict/map of arguments to pass to the
Ansible yum_repository module.
For example, the following configuration defines a single DNF repository called widgets.
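A sketch of such a definition, with an illustrative URL; the keys are passed as arguments to the yum_repository module:
dnf_custom_repos:
  widgets:
    baseurl: https://example.com/repo/widgets
    description: Widgets repository
    gpgcheck: false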
Prior to the Yoga release, the EPEL DNF repository was enabled by default (dnf_install_epel:
true). Since Yoga, it is disabled by default (dnf_install_epel: false).
Previously, EPEL was required to install some packages such as python-pip, however this is no longer
the case.
It is possible to enable or disable the EPEL DNF repository by setting dnf_install_epel to true or
false respectively.
DNF Automatic
DNF Automatic provides a mechanism for applying regular updates of packages. DNF Automatic is
disabled by default, and may be enabled by setting dnf_automatic_enabled to true.
By default, only security updates are applied. Updates for all packages may be installed by setting
dnf_automatic_upgrade_type to default. This may cause the system to be less predictable as pack-
ages are updated without oversight or testing.
Apt
Apt cache
The Apt cache timeout may be configured via apt_cache_valid_time (in seconds) in etc/kayobe/
apt.yml, and defaults to 3600.
Apt proxy
Apt can be configured to use a proxy via apt_proxy_http and apt_proxy_https in etc/kayobe/
apt.yml. These should be set to the full URL of the relevant proxy (e.g. http://squid.example.
com:3128).
Apt configuration
Arbitrary global configuration options for Apt may be defined via the apt_config variable in etc/
kayobe/apt.yml since the Yoga release. The format is a list, with each item mapping to a dict/map
with the following items:
• content: free-form configuration file content
• filename: name of a file in /etc/apt/apt.conf.d/ in which to write the configuration
The default of apt_config is an empty list.
For example, the following configuration tells Apt to use 2 attempts when downloading packages:
apt_config:
  - content: |
      Acquire::Retries 1;
    filename: 99retries
Apt repositories
Kayobe supports configuration of custom Apt repositories via the apt_repositories variable in etc/
kayobe/apt.yml since the Yoga release. The format is a list, with each item mapping to a dict/map
with the following items:
• types: whitespace-separated list of repository types, e.g. deb or deb-src (optional, default is
deb)
• url: URL of the repository
• suites: whitespace-separated list of suites, e.g. jammy (optional, default is ansible_facts.
distribution_release)
• components: whitespace-separated list of components, e.g. main (optional, default is main)
• signed_by: whitespace-separated list of names of GPG keyring files in apt_keys_path (op-
tional, default is unset)
• architecture: whitespace-separated list of architectures that will be used (optional, default is
unset)
The default of apt_repositories is an empty list.
For example, the following configuration defines a single Apt repository:
In the following example, the Ubuntu Jammy 22.04 repositories are consumed from a local package
mirror. The apt_disable_sources_list variable is set to true, which disables all repositories in
/etc/apt/sources.list, including the default Ubuntu ones.
apt_disable_sources_list: true
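The accompanying repository definitions might be sketched as follows, assuming a local mirror at mirror.example.com:
apt_repositories:
  - url: http://mirror.example.com/ubuntu/
    suites: jammy jammy-updates jammy-security
    components: main universe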
Apt keys
Some repositories may be signed by a key that is not one of Apt's trusted keys. Kayobe avoids the use of the deprecated apt-key utility, and instead allows keys to be downloaded to a directory. This enables repositories to use the Signed-By option to state that they are signed by a specific key. This approach is more secure than using globally trusted keys.
Keys to be downloaded are defined by the apt_keys variable. The format is a list, with each item
mapping to a dict/map with the following items:
• url: URL of key
• filename: Name of a file in which to store the downloaded key in apt_keys_path. The extension
should be .asc for ASCII-armoured keys, or .gpg otherwise.
The default value of apt_keys is an empty list.
In the following example, a key is downloaded, and a repository is configured that is signed by the key.
apt_repositories:
  - types: deb
    url: https://example.com/repo
    suites: jammy
    components: all
    signed_by: example-key.asc
SELinux
tags:
selinux
SELinux is not currently supported by Kolla Ansible, so it is set to permissive by Kayobe. If necessary, it can be disabled by setting selinux_state to disabled. Kayobe will reboot systems when required by the SELinux configuration. The timeout for waiting for systems to reboot is selinux_reboot_timeout. Alternatively, the reboot may be avoided by setting selinux_do_reboot to false.
Network Configuration
tags:
network
Configuration of host networking is covered in depth in Network Configuration.
Firewalld
tags:
firewall
Firewalld can be used to provide a firewall on supported systems. Since the Xena release, Kayobe provides
support for enabling or disabling firewalld, as well as defining zones and rules. Since the Zed 13.0.0
release, Kayobe added support for configuring firewalld on Ubuntu systems.
The following variables can be used to set whether to enable firewalld:
• seed_hypervisor_firewalld_enabled
• seed_firewalld_enabled
• infra_vm_firewalld_enabled
• compute_firewalld_enabled
• controller_firewalld_enabled
• monitoring_firewalld_enabled
• storage_firewalld_enabled
When firewalld is enabled, the following variables can be used to configure a list of zones to create. Each
item is a dict containing a zone item:
• seed_hypervisor_firewalld_zones
• seed_firewalld_zones
• infra_vm_firewalld_zones
• compute_firewalld_zones
• controller_firewalld_zones
• monitoring_firewalld_zones
• storage_firewalld_zones
The following variables can be used to set a default zone. The default is unset, in which case the default
zone will not be changed:
• seed_hypervisor_firewalld_default_zone
• seed_firewalld_default_zone
• infra_vm_firewalld_default_zone
• compute_firewalld_default_zone
• controller_firewalld_default_zone
• monitoring_firewalld_default_zone
• storage_firewalld_default_zone
The following variables can be used to set a list of rules to apply. Each item is a dict containing arguments
to pass to the firewalld module. Arguments are omitted if not provided, with the following exceptions:
offline (default true), permanent (default true), state (default enabled):
• seed_hypervisor_firewalld_rules
• seed_firewalld_rules
• infra_vm_firewalld_rules
• compute_firewalld_rules
• controller_firewalld_rules
• monitoring_firewalld_rules
• storage_firewalld_rules
In the following example, firewalld is enabled on controllers. public and internal zones are created,
with their default rules disabled. TCP port 8080 is open in the internal zone, and the http service is
open in the public zone:
controller_firewalld_enabled: true

controller_firewalld_zones:
  - zone: public
  - zone: internal

controller_firewalld_rules:
  # Disable default rules in internal zone.
  - service: dhcpv6-client
    state: disabled
    zone: internal
  - service: samba-client
    state: disabled
    zone: internal
  - service: ssh
    state: disabled
    zone: internal
  # Disable default rules in public zone.
  - service: dhcpv6-client
    state: disabled
    zone: public
  - service: ssh
    state: disabled
    zone: public
  # Enable TCP port 8080 in internal zone.
  - port: 8080/tcp
    zone: internal
  # Enable the HTTP service in the public zone.
  - service: http
    zone: public
Tuned
tags:
tuned
Built-in tuned profiles can be applied to hosts. The following variables can be used to set a tuned profile for specific types of hosts:
• seed_hypervisor_tuned_active_builtin_profile
• seed_tuned_active_builtin_profile
• compute_tuned_active_builtin_profile
• controller_tuned_active_builtin_profile
• monitoring_tuned_active_builtin_profile
• storage_tuned_active_builtin_profile
• infra_vm_tuned_active_builtin_profile
By default, Kayobe applies a tuned profile matching the role of each host in the system:
• seed hypervisor: virtual-host
• seed: virtual-guest
• infrastructure VM: virtual-guest
• compute: virtual-host
• controllers: throughput-performance
• monitoring: throughput-performance
• storage: throughput-performance
For example, to change the tuned profile of controllers to network-throughput:
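A minimal sketch, in ${KAYOBE_CONFIG_PATH}/controllers.yml:
controller_tuned_active_builtin_profile: network-throughput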
Sysctls
tags:
sysctl
Arbitrary sysctl configuration can be applied to hosts. The variable format is a dict/map, mapping parameter names to their required values. The following variables can be used to set sysctl configuration for specific types of hosts:
• seed_hypervisor_sysctl_parameters
• seed_sysctl_parameters
• infra_vm_sysctl_parameters
• compute_sysctl_parameters
• controller_sysctl_parameters
• monitoring_sysctl_parameters
• storage_sysctl_parameters
For example, to set the net.ipv4.ip_forward parameter to 1 on controllers:
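A minimal sketch, in ${KAYOBE_CONFIG_PATH}/controllers.yml:
controller_sysctl_parameters:
  net.ipv4.ip_forward: 1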
tags:
ip-routing
snat
IP routing and source NAT (SNAT) can be configured on the seed host, which allows it to be used as a
default gateway for overcloud hosts. This is disabled by default since the Xena 11.0.0 release, and may
be enabled by setting seed_enable_snat to true in ${KAYOBE_CONFIG_PATH}/seed.yml.
The seed-hypervisor host can also be configured in the same way to be used as a default gateway. This is also disabled by default, and may be enabled by setting seed_hypervisor_enable_snat to true in ${KAYOBE_CONFIG_PATH}/seed-hypervisor.yml.
Disable cloud-init
tags:
disable-cloud-init
cloud-init is a popular service for performing system bootstrapping. If you are not using cloud-init, this
section can be skipped.
If using the seed's Bifrost service to provision the control plane hosts, the use of cloud-init may be configured via the kolla_bifrost_dib_init_element variable.
cloud-init searches for network configuration in order of increasing precedence; each item overriding the
previous. In some cases, on subsequent boots cloud-init can automatically reconfigure network interfaces
and cause some issues in network configuration. To disable cloud-init from running after the initial server
bootstrapping, set disable_cloud_init to true in ${KAYOBE_CONFIG_PATH}/overcloud.yml.
Disable Glean
tags:
disable-glean
The glean service can be used to perform system bootstrapping, serving a similar role to cloud-init.
If you are not using glean, this section can be skipped.
If using the seed's Bifrost service to provision the control plane hosts, the use of glean may be configured via the kolla_bifrost_dib_init_element variable.
After the initial server bootstrapping, the glean service can cause problems as it attempts to enable all
network interfaces, which can lead to timeouts while booting. To avoid this, the glean service is disabled.
Additionally, any network interface configuration files generated by glean and not overwritten by Kayobe
are removed.
Timezone
tags:
timezone
The timezone can be configured via the timezone variable in ${KAYOBE_CONFIG_PATH}/time.yml.
The value must be a valid Linux timezone. For example:
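A minimal sketch, with an illustrative timezone:
timezone: Europe/London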
NTP
tags:
ntp
Kayobe will configure Chrony on all hosts in the ntp group. The default hosts in this group are:
[ntp:children]
# Kayobe will configure Chrony on members of this group.
seed
seed-hypervisor
overcloud
This provides a flexible way to opt in or out of having kayobe manage the NTP service.
Variables
Software RAID
tags:
mdadm
While it is possible to use RAID directly with LVM, some operators may prefer the userspace tools
provided by mdadm or may have existing software RAID arrays they want to manage with Kayobe.
Software RAID arrays may be configured via the mdadm_arrays variable. For convenience, this is
mapped to the following variables:
• seed_hypervisor_mdadm_arrays
• seed_mdadm_arrays
• infra_vm_mdadm_arrays
• compute_mdadm_arrays
• controller_mdadm_arrays
• monitoring_mdadm_arrays
• storage_mdadm_arrays
The format of these variables is as defined by the mdadm_arrays variable of the mrlesmithjr.mdadm
Ansible role.
For example, to configure two of the seed's disks as a RAID1 mdadm array available as /dev/md0:
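A sketch in ${KAYOBE_CONFIG_PATH}/seed.yml, assuming the disks are /dev/sdb and /dev/sdc and using the format of the mrlesmithjr.mdadm role:
seed_mdadm_arrays:
  - name: md0
    devices:
      - /dev/sdb
      - /dev/sdc
    level: '1'
    state: present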
Encryption
tags:
luks
Encrypted block devices may be configured via the luks_devices variable. For convenience, this is
mapped to the following variables:
• seed_hypervisor_luks_devices
• seed_luks_devices
• infra_vm_luks_devices
• compute_luks_devices
• controller_luks_devices
• monitoring_luks_devices
• storage_luks_devices
The format of these variables is as defined by the luks_devices variable of the stackhpc.luks Ansible
role.
For example, to encrypt the software RAID device /dev/md0 on the seed, and make it available as /dev/mapper/md0crypt:
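A sketch in ${KAYOBE_CONFIG_PATH}/seed.yml, using the format of the stackhpc.luks role:
seed_luks_devices:
  - name: md0crypt
    device: /dev/md0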
LVM
tags:
lvm
Logical Volume Manager (LVM) physical volumes, volume groups, and logical volumes may be config-
ured via the lvm_groups variable. For convenience, this is mapped to the following variables:
• seed_hypervisor_lvm_groups
• seed_lvm_groups
• infra_vm_lvm_groups
• compute_lvm_groups
• controller_lvm_groups
• monitoring_lvm_groups
• storage_lvm_groups
The format of these variables is as defined by the lvm_groups variable of the mrlesmithjr.manage-lvm
Ansible role.
LVM is not configured by default on the seed hypervisor. It is possible to configure LVM to provide
storage for a libvirt storage pool, typically mounted at /var/lib/libvirt/images.
To use this configuration, set the seed_hypervisor_lvm_groups variable to "{{
seed_hypervisor_lvm_groups_with_data }}" and provide a list of disks via the
seed_hypervisor_lvm_group_data_disks variable.
Note: In Train and earlier releases of Kayobe, the data volume group was always enabled by default.
If the devicemapper Docker storage driver is in use, the default LVM configuration is optimised for it.
The devicemapper driver requires a thin provisioned LVM volume. A second logical volume is used
for storing Docker volume data, mounted at /var/lib/docker/volumes. Both logical volumes are
created from a single data volume group.
This configuration is enabled by the following variables, which default to true if the devicemapper
driver is in use or false otherwise:
• compute_lvm_group_data_enabled
• controller_lvm_group_data_enabled
• seed_lvm_group_data_enabled
• infra_vm_lvm_group_data_enabled
• storage_lvm_group_data_enabled
These variables can be set to true to enable the data volume group if the devicemapper driver is not
in use. This may be useful where the docker-volumes logical volume is required.
To use this configuration, a list of disks must be configured via the following variables:
• seed_lvm_group_data_disks
• infra_vm_lvm_group_data_disks
• compute_lvm_group_data_disks
• controller_lvm_group_data_disks
• monitoring_lvm_group_data_disks
• storage_lvm_group_data_disks
For example, to configure two of the seed's disks for use by LVM:
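A sketch in ${KAYOBE_CONFIG_PATH}/seed.yml, assuming the disks are /dev/sdb and /dev/sdc:
seed_lvm_group_data_disks:
  - /dev/sdb
  - /dev/sdc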
The Docker volumes LVM volume is assigned a size given by the following variables, with a default value of 75% (of the volume group's capacity):
• seed_lvm_group_data_lv_docker_volumes_size
• infra_vm_lvm_group_data_lv_docker_volumes_size
• compute_lvm_group_data_lv_docker_volumes_size
• controller_lvm_group_data_lv_docker_volumes_size
• monitoring_lvm_group_data_lv_docker_volumes_size
• storage_lvm_group_data_lv_docker_volumes_size
If using a Docker storage driver other than devicemapper, the remaining 25% of the volume group can be used for Docker volume data. In this case, the LVM volume's size can be increased to 100%:
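For example, for the seed, something like:
seed_lvm_group_data_lv_docker_volumes_size: 100%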
If using a Docker storage driver other than devicemapper, it is possible to avoid using LVM entirely,
thus avoiding the requirement for multiple disks. In this case, set the appropriate <host>_lvm_groups
variable to an empty list:
Custom LVM
To define additional logical volumes in the default data volume group, modify one of the following variables:
• seed_lvm_group_data_lvs
• infra_vm_lvm_group_data_lvs
• compute_lvm_group_data_lvs
• controller_lvm_group_data_lvs
• monitoring_lvm_group_data_lvs
• storage_lvm_group_data_lvs
Include the variable <host>_lvm_group_data_lv_docker_volumes in the list to include the LVM
volume for Docker volume data:
It is possible to define additional LVM volume groups via the following variables:
• seed_lvm_groups_extra
• infra_vm_lvm_groups_extra
• compute_lvm_groups_extra
• controller_lvm_groups_extra
• monitoring_lvm_groups_extra
• storage_lvm_groups_extra
For example:
Alternatively, replace the entire volume group list via one of the <host>_lvm_groups variables to re-
place the default configuration with a custom one.
Kolla-Ansible bootstrap-servers
Kolla Ansible provides some host configuration functionality via the bootstrap-servers command,
which may be leveraged by Kayobe.
See the Kolla Ansible documentation for more information on the functions performed by this command,
and how to configure it.
Note that from the Ussuri release, Kayobe creates a user account for Kolla Ansible rather than this being
done by Kolla Ansible during bootstrap-servers. See User account creation for details.
tags:
kolla-ansible
kolla-target-venv
See Context: Remote Execution Environment for information about remote Python virtual environments
for Kolla Ansible.
Docker Engine
tags:
docker
Docker engine configuration is applied by both Kayobe and Kolla Ansible (during bootstrap-servers).
The docker_storage_driver variable sets the Docker storage driver, and by default the overlay2
driver is used. If using the devicemapper driver, see LVM for information about configuring LVM for
Docker.
Various options are defined in ${KAYOBE_CONFIG_PATH}/docker.yml for configuring the
devicemapper storage.
A private Docker registry may be configured via docker_registry, with a Certificate Authority (CA)
file configured via docker_registry_ca.
To use one or more Docker Registry mirrors, use the docker_registry_mirrors variable.
If using an MTU other than 1500, docker_daemon_mtu can be used to configure this. This setting does not apply to containers using net=host (as Kolla Ansible's containers do), but may be necessary when building images.
Docker's live restore feature can be configured via docker_daemon_live_restore, although it is disabled by default due to issues observed.
tags:
libvirt-host
Note: This section is about the libvirt daemon on compute nodes, as opposed to the seed hypervisor.
Since Yoga, Kayobe provides support for deploying and configuring a libvirt host daemon, as an alternative to the nova_libvirt container supported by Kolla Ansible. The host daemon is not used by default, but it may be enabled by setting kolla_enable_nova_libvirt_container to false in $KAYOBE_CONFIG_PATH/kolla.yml.
Migration of hosts from a containerised libvirt to host libvirt is currently not supported.
The following options are available in $KAYOBE_CONFIG_PATH/compute.yml and are relevant only
when using the libvirt daemon rather than the nova_libvirt container:
To customise the libvirt daemon log output to send level 3 to the journal:
Example: SASL
SASL authentication is enabled by default. This provides authentication for TCP and TLS connections
to the libvirt API. A password is required, and should be encrypted using Ansible Vault.
[Example Ansible Vault-encrypted password value omitted.]
When the TLS listener is enabled, it is necessary to provide client, server and CA certificates. The
following files should be provided:
cacert.pem CA certificate used to sign client and server certificates.
clientcert.pem Client certificate.
clientkey.pem Client key.
On CentOS and Rocky hosts, a CentOS Storage SIG Ceph repository is installed that provides more
recent Ceph libraries than those available in CentOS/Rocky AppStream. This may be necessary when
using Ceph for Cinder volumes or Nova ephemeral block devices. In some cases, such as when using
local package mirrors, the upstream repository may not be appropriate. The installation of the repository
may be disabled as follows:
In some cases it may be useful to install additional packages on compute hosts for use by libvirt. The
stackhpc.libvirt-host Ansible role supports this via the libvirt_host_extra_daemon_packages vari-
able. The variable should be defined via group variables in the Ansible inventory, to avoid applying the
change to the seed hypervisor. For example, to install the trousers package used for accessing TPM
hardware:
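A sketch of such a group variables file (the inventory path is illustrative):
# inventory/group_vars/compute/libvirt-host
libvirt_host_extra_daemon_packages:
  - trousers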
Kolla Configuration
Anyone using Kayobe to build images should familiarise themselves with the Kolla project's documentation.
Images are built on hosts in the container-image-builders group. The default Kayobe Ansible
inventory places the seed host in this group, although it is possible to put a different host in the group, by
modifying the inventory.
For example, to build images on localhost:
Kolla Installation
Prior to building container images, Kolla and its dependencies will be installed on the container image
build host. The following variables affect the installation of Kolla:
kolla_ctl_install_type Type of installation, either binary (PyPI) or source (git). Default is
source.
kolla_source_path Path to directory for Kolla source code checkout. Default is {{
source_checkout_path ~ '/kolla' }}.
kolla_source_url URL of Kolla source code repository if type is source. Default is https://opendev.
org/openstack/kolla.
kolla_source_version Version (branch, tag, etc.) of Kolla source code repository if type is source.
Default is {{ openstack_branch }}, which is the same as the Kayobe upstream branch name.
kolla_venv Path to virtualenv in which to install Kolla on the container image build host. Default is
{{ virtualenv_path ~ '/kolla' }}.
kolla_build_config_path Path in which to generate kolla configuration. Default is {{
config_path ~ '/kolla' }}.
For example, to install from a custom Git repository:
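A sketch, with an illustrative repository URL and branch:
kolla_ctl_install_type: source
kolla_source_url: https://git.example.com/kolla
kolla_source_version: downstream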
Global Configuration
The following variables are global, affecting all container images. They are used to generate the Kolla
configuration file, kolla-build.conf, and also affect Kolla Ansible configuration.
kolla_base_distro Kolla base container image distribution. Options are centos, debian, or
ubuntu. Default is {{ os_distribution }}.
kolla_install_type Kolla container image type: binary or source. Default is source.
kolla_docker_namespace Docker namespace to use for Kolla images. Default is kolla.
kolla_docker_registry URL of docker registry to use for Kolla images. Default is to use the value
of docker_registry variable (see Docker Engine).
kolla_docker_registry_username Username to use to access a docker registry. Default is not set,
in which case the registry will be used without authentication.
kolla_docker_registry_password Password to use to access a docker registry. Default is not set,
in which case the registry will be used without authentication.
kolla_openstack_release Kolla OpenStack release version. This should be a Docker image tag.
Default is the OpenStack release name (e.g. rocky) on stable branches and tagged releases, or
master on the Kayobe master branch.
kolla_tag Kolla container image tag. This is the tag that will be applied to built container images.
Default is kolla_openstack_release.
For example, to build the Kolla centos binary images with a namespace of example, and a private
Docker registry at registry.example.com:4000, tagged with 7.0.0.1:
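A sketch of the corresponding settings in ${KAYOBE_CONFIG_PATH}/kolla.yml:
kolla_base_distro: centos
kolla_install_type: binary
kolla_docker_namespace: example
kolla_docker_registry: registry.example.com:4000
kolla_tag: 7.0.0.1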
The ironic-api image built with this configuration would be referenced as follows:
registry.example.com:4000/example/centos-binary-ironic-api:7.0.0.1
Further customisation of the Kolla configuration file can be performed by writing a file at ${KAYOBE_CONFIG_PATH}/kolla/kolla-build.conf. For example, to enable debug logging:
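A minimal sketch of that file:
[DEFAULT]
debug = True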
Seed Images
The kayobe seed container image build command builds images for the seed services. The only
image required for the seed services is the bifrost-deploy image.
Overcloud Images
The kayobe overcloud container image build command builds images for the control plane.
The default set of images built depends on which services and features are enabled via the
kolla_enable_<service> flags in $KAYOBE_CONFIG_PATH/kolla.yml.
For example, the following configuration will enable the Magnum service and add the magnum-api and
magnum-conductor containers to the set of overcloud images that will be built:
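For example, in $KAYOBE_CONFIG_PATH/kolla.yml:
kolla_enable_magnum: true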
If a required image is not built when the corresponding flag is set, check the image sets defined in
overcloud_container_image_sets in ansible/group_vars/all/kolla.
Image Customisation
There are three main approaches to customising the Kolla container images:
1. Overriding Jinja2 blocks
2. Overriding Jinja2 variables
3. Source code locations
Kolla's images are defined via Jinja2 templates that generate Dockerfiles. Jinja2 blocks are frequently used to allow specific statements in one or more Dockerfiles to be replaced with custom statements. See the Kolla documentation for details.
Blocks are configured via the kolla_build_blocks variable, which is a dict mapping Jinja2 block names to their contents.
For example, to override the block header to add a custom label to every image:
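A sketch of the corresponding configuration:
kolla_build_blocks:
  header: |
    LABEL foo="bar"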
This will result in Kayobe generating a template-override.j2 file with the following content:
{% block header %}
LABEL foo="bar"
{% endblock %}
Jinja2 variables offer another way to customise images. See the Kolla documentation for details of using
variable overrides to modify the list of packages to install in an image.
Variable overrides are configured via the kolla_build_customizations variable, which is a dict/map
mapping names of variables to override to their values.
For example, to add mod_auth_openidc to the list of packages installed in the keystone-base image,
we can set the variable keystone_base_packages_append to a list containing mod_auth_openidc.
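One way to express this:
kolla_build_customizations:
  keystone_base_packages_append:
    - mod_auth_openidc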
This will result in Kayobe generating a corresponding override in the template-override.j2 file.
For source image builds, configuration of source code locations for packages installed in containers by
Kolla is possible via the kolla_sources variable. The format is a dict/map mapping names of sources
to their definitions. See the Kolla documentation for details. The default is to specify the URL and
version of Bifrost, as defined in ${KAYOBE_CONFIG_PATH}/bifrost.yml.
For example, to specify a custom source location for the ironic-base package:
[ironic-base]
type = git
location = https://git.example.com/ironic
reference = downstream
Note that it is currently necessary to include the Bifrost source location if using a seed.
These features can also be used for installing plugins and additions to source type images.
For example, to install a networking-ansible plugin in the neutron-server image:
The neutron-server image automatically installs any plugins provided to it. For images that do not, a
block such as the following may be required:
Kayobe relies heavily on Kolla Ansible for deployment of the OpenStack control plane. Kolla Ansible is
installed locally on the Ansible control host (the host from which Kayobe commands are executed), and
Kolla Ansible commands are executed from there.
Kolla Ansible configuration is stored in ${KAYOBE_CONFIG_PATH}/kolla.yml.
Configuration of Ansible
Ansible configuration is described in detail in the Ansible documentation. In addition to the stan-
dard locations, Kayobe supports using an Ansible configuration file located in the Kayobe configu-
ration at ${KAYOBE_CONFIG_PATH}/kolla/ansible.cfg or ${KAYOBE_CONFIG_PATH}/ansible.
cfg. Note that if the ANSIBLE_CONFIG environment variable is specified it takes precedence over this
file.
Prior to deploying containers, Kolla Ansible and its dependencies will be installed on the Ansible control
host. The following variables affect the installation of Kolla Ansible:
kolla_ansible_ctl_install_type Type of Kolla Ansible control installation. One of binary
(PyPI) or source (git). Default is source.
kolla_ansible_source_url URL of Kolla Ansible source code repository if type is source. Default
is https://opendev.org/openstack/kolla-ansible.
kolla_ansible_source_version Version (branch, tag, etc.) of Kolla Ansible source code repository
if type is source. Default is the same as the Kayobe upstream branch.
kolla_ansible_venv_extra_requirements Extra requirements to install inside the Kolla Ansible
virtualenv. Default is an empty list.
kolla_upper_constraints_file Upper constraints file for installation of Kolla. Default is {{
pip_upper_constraints_file }}, which has a default of https://releases.openstack.
org/constraints/upper/{{ openstack_branch }}.
Extra Python packages can be installed inside the Kolla Ansible virtualenv, such as when required by
Ansible plugins.
For example, to use the hashi_vault Ansible lookup plugin, its hvac dependency can be installed using:
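For example, in ${KAYOBE_CONFIG_PATH}/kolla.yml:
kolla_ansible_venv_extra_requirements:
  - "hvac"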
Local environment
The following variables affect the local environment on the Ansible control host. They reference environ-
ment variables, and should be configured using those rather than modifying the Ansible variable directly.
The file kayobe-env in the kayobe-config git repository sets some sensible defaults for these variables,
based on the recommended environment directory structure.
kolla_ansible_source_path Path to directory for Kolla Ansible source code checkout. Default is
$KOLLA_SOURCE_PATH, or $PWD/src/kolla-ansible.
kolla_ansible_venv Path to virtualenv in which to install Kolla Ansible on the Ansible control host.
Default is $KOLLA_VENV_PATH or $PWD/venvs/kolla-ansible.
kolla_config_path Path to Kolla Ansible configuration directory. Default is $KOLLA_CONFIG_PATH
or /etc/kolla.
Global Configuration
The following variables are global, affecting all containers. They are used to generate the Kolla Ansible
configuration file, globals.yml, and also affect Kolla image build configuration.
Kolla Images
The following variables affect which Kolla images are used, and how they are accessed.
kolla_base_distro Kolla base container image distribution. Default is centos.
kolla_install_type Kolla container image type: binary or source. Default is source.
kolla_docker_registry URL of docker registry to use for Kolla images. Default is not set, in which
case Dockerhub will be used.
kolla_docker_registry_insecure Whether docker should be configured to use an insecure reg-
istry for Kolla images. Default is false, unless docker_registry_enabled is true and
docker_registry_enable_tls is false.
kolla_docker_namespace Docker namespace to use for Kolla images. Default is kolla.
kolla_docker_registry_username Username to use to access a docker registry. Default is not set,
in which case the registry will be used without authentication.
kolla_docker_registry_password Password to use to access a docker registry. Default is not set,
in which case the registry will be used without authentication.
kolla_openstack_release Kolla OpenStack release version. This should be a Docker image tag.
Default is {{ openstack_release }}, which takes the OpenStack release name (e.g. rocky)
on stable branches and tagged releases, or master on the Kayobe master branch.
For example, to deploy Kolla centos binary images with a namespace of example, and a private
Docker registry at registry.example.com:4000, tagged with 7.0.0.1:
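The corresponding settings in ${KAYOBE_CONFIG_PATH}/kolla.yml might be sketched as:
kolla_base_distro: centos
kolla_install_type: binary
kolla_docker_namespace: example
kolla_docker_registry: registry.example.com:4000
kolla_openstack_release: 7.0.0.1
With this configuration, the ironic-api image deployed would be referenced as follows: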
registry.example.com:4000/example/centos-binary-ironic-api:7.0.0.1
Ansible
The following variables affect how Ansible accesses the remote hosts.
kolla_ansible_user User account to use for Kolla SSH access. Default is kolla.
kolla_ansible_group Primary group of Kolla SSH user. Default is kolla.
kolla_ansible_become Whether to use privilege escalation for all operations performed via Kolla
Ansible. Default is false since the 8.0.0 Ussuri release.
kolla_ansible_target_venv Path to a virtual environment on remote hosts to use for Ansible mod-
ule execution. Default is {{ virtualenv_path }}/kolla-ansible. May be set to None to use
the system Python interpreter.
By default, Ansible executes modules remotely using the system python interpreter, even if the Ansible
control process is executed from within a virtual environment (unless the local connection plugin is
used). This is not ideal if there are python dependencies that must be installed with isolation from the
system python packages. Ansible can be configured to use a virtualenv by setting the host variable
ansible_python_interpreter to a path to a python interpreter in an existing virtual environment.
The variable kolla_ansible_target_venv configures the use of a virtual environment on the remote
hosts. The default configuration should work in most cases.
Since the Ussuri release, Kayobe creates a user account for Kolla Ansible rather than this being done during Kolla Ansible's bootstrap-servers command. This workflow is more compatible with Ansible fact caching, but does mean that Kolla Ansible's create_kolla_user variable cannot be used to disable creation of the user account. Instead, set kolla_ansible_create_user to false.
kolla_ansible_create_user Whether to create a user account, configure passwordless sudo and
authorise an SSH key for Kolla Ansible. Default is true.
OpenStack Logging
In certain situations it may be necessary to enable debug logging for all OpenStack services. This is not
usually advisable in production.
API Addresses
Note: These variables should be used over the deprecated vip_address and fqdn network attributes.
The following variables affect the addresses used for the external and internal API.
kolla_internal_vip_address Virtual IP address of OpenStack internal API. Default is the
vip_address attribute of the internal network.
kolla_internal_fqdn Fully Qualified Domain Name (FQDN) of OpenStack internal API. Default is
the fqdn attribute of the internal network if set, otherwise kolla_internal_vip_address.
kolla_external_vip_address Virtual IP address of OpenStack external API. Default is the
vip_address attribute of the external network.
kolla_external_fqdn Fully Qualified Domain Name (FQDN) of OpenStack external API. Default is
the fqdn attribute of the external network if set, otherwise kolla_external_vip_address.
It is highly recommended to use TLS encryption to secure the public API. Here is an example:
It is highly recommended to use TLS encryption to secure the internal API. Here is an example:
Other certificates
In an environment with a private CA, it may be necessary to add the root CA certificate to the trust store
of containers.
Kolla Ansible backend TLS can be used to provide end-to-end encryption of API traffic.
See the Kolla Ansible documentation for how to provide service and/or host-specific certificates and keys.
Kolla Ansible uses a single file for global variables, globals.yml. Kayobe provides configuration variables for all required variables and many of the most commonly used variables in this file. Some of these are in $KAYOBE_CONFIG_PATH/kolla.yml, and others are determined from other sources such as the networking configuration in $KAYOBE_CONFIG_PATH/networks.yml.
Additional global configuration may be provided by creating $KAYOBE_CONFIG_PATH/kolla/
globals.yml. Variables in this file will be templated using Jinja2, and merged with the Kayobe
globals.yml configuration.
For more fine-grained control over images, Kolla Ansible allows a tag to be defined for each image. For
example, for nova-api:
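For example, in ${KAYOBE_CONFIG_PATH}/kolla/globals.yml, with an illustrative tag value:
nova_api_tag: 12.0.0.1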
Enabling debug logging globally can lead to a lot of additional logs being generated. Often we are only
interested in a particular service. For example, to enable debug logging for Nova services:
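One way to do this is via the service configuration mechanism described later, by creating $KAYOBE_CONFIG_PATH/kolla/config/nova.conf containing:
[DEFAULT]
debug = True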
Host variables
Kayobe generates a host_vars file for each host in the Kolla Ansible inventory. These contain network
interfaces and other host-specific things. Some Kayobe Ansible variables are passed through to Kolla
Ansible, as defined by the following variables. The default set of variables should typically be kept.
Additional variables may be passed through via the *_extra variables, as described below. If a passed
through variable is not defined for a host, it is ignored.
kolla_seed_inventory_pass_through_host_vars List of names of host variables to
pass through from kayobe hosts to the Kolla Ansible seed host, if set. See also
kolla_seed_inventory_pass_through_host_vars_map. The default is:
kolla_seed_inventory_pass_through_host_vars:
- "ansible_host"
- "ansible_port"
- "ansible_ssh_private_key_file"
- "kolla_api_interface"
- "kolla_bifrost_network_interface"
kolla_seed_inventory_pass_through_host_vars_map:
  kolla_api_interface: "api_interface"
  kolla_bifrost_network_interface: "bifrost_network_interface"
kolla_overcloud_inventory_pass_through_host_vars:
- "ansible_host"
- "ansible_port"
- "ansible_ssh_private_key_file"
- "kolla_network_interface"
- "kolla_api_interface"
- "kolla_storage_interface"
- "kolla_cluster_interface"
- "kolla_swift_storage_interface"
- "kolla_swift_replication_interface"
- "kolla_provision_interface"
- "kolla_inspector_dnsmasq_interface"
- "kolla_dns_interface"
- "kolla_tunnel_interface"
- "kolla_external_vip_interface"
- "kolla_neutron_external_interfaces"
- "kolla_neutron_bridge_names"
kolla_overcloud_inventory_pass_through_host_vars_map:
  kolla_network_interface: "network_interface"
  kolla_api_interface: "api_interface"
  kolla_storage_interface: "storage_interface"
  kolla_cluster_interface: "cluster_interface"
  kolla_swift_storage_interface: "swift_storage_interface"
  kolla_swift_replication_interface: "swift_replication_interface"
  kolla_provision_interface: "provision_interface"
  kolla_inspector_dnsmasq_interface: "ironic_dnsmasq_interface"
  kolla_dns_interface: "dns_interface"
  kolla_tunnel_interface: "tunnel_interface"
  kolla_neutron_external_interfaces: "neutron_external_interface"
  kolla_neutron_bridge_names: "neutron_bridge_name"
In this example we pass through a variable named my_kayobe_var from Kayobe to Kolla Ansible.
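Assuming the variable should be passed through to overcloud hosts, a sketch using the corresponding *_extra variable is:
kolla_overcloud_inventory_pass_through_host_vars_extra:
  - my_kayobe_var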
Group variables can be used to set configuration for all hosts in a group. They can be set in Kolla
Ansible by placing files in ${KAYOBE_CONFIG_PATH}/kolla/inventory/group_vars/*. Since this
directory is copied directly into the Kolla Ansible inventory, Kolla Ansible group names should be used.
It should be noted that extra-vars and host_vars take precedence over group_vars. For more
information on variable precedence see the Ansible documentation.
In Kolla Ansible, Nova cells are configured via group variables. For example, to configure cell0001
the following file could be created:
Passwords
Kolla Ansible auto-generates passwords to a file, passwords.yml. Kayobe handles the or-
chestration of this, as well as encryption of the file using an Ansible Vault password speci-
fied in the KAYOBE_VAULT_PASSWORD environment variable, if present. The file is generated to
$KAYOBE_CONFIG_PATH/kolla/passwords.yml, and should be stored along with other Kayobe con-
figuration files. This file should not be manually modified.
kolla_ansible_custom_passwords Dictionary containing custom passwords to add or override in
the Kolla passwords file. Default is {{ kolla_ansible_default_custom_passwords }},
which contains SSH keys for use by Kolla Ansible and Bifrost.
Kolla Ansible provides a flexible mechanism for configuring the services that it deploys. Kayobe adds
some commonly required configuration options to the defaults provided by Kolla Ansible, but also allows
for the free-form configuration supported by Kolla Ansible. The Kolla Ansible documentation should be
used as a reference.
Enabling Services
A common task is enabling a new OpenStack service. This may be done via the kolla_enable_* flags,
for example:
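A minimal sketch, in $KAYOBE_CONFIG_PATH/kolla.yml, enabling the Barbican service:
kolla_enable_barbican: true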
Note that in some cases additional configuration may be required to successfully deploy a service - check
the Kolla Ansible configuration reference.
Service Configuration
Kolla Ansible's flexible configuration is described in the Kolla Ansible service configuration documentation. We won't duplicate that here, but essentially it involves creating files under a directory which, for users of Kayobe, will be $KOLLA_CONFIG_PATH/config. In Kayobe, files in this directory are auto-generated and managed by Kayobe. Instead, users should create files under $KAYOBE_CONFIG_PATH/kolla/config with the same directory structure. These files will be templated using Jinja2, merged with Kayobe's own configuration, and written out to $KOLLA_CONFIG_PATH/config.
The following files, if present, will be templated and provided to Kolla Ansible. All paths are relative
to $KAYOBE_CONFIG_PATH/kolla/config. Note that typically Kolla Ansible does not use the same
wildcard patterns, and has a more restricted set of files that it will process. In some cases, it may be
necessary to inspect the Kolla Ansible configuration tasks to determine which files are supported.
To provide custom configuration for the glance API service, create $KAYOBE_CONFIG_PATH/kolla/
config/glance/glance-api.conf. For example:
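A sketch of such a file, with an illustrative option:
[DEFAULT]
show_image_direct_url = True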
Bifrost
This section covers configuration of the Bifrost service that runs on the seed host. Bifrost configuration is
typically applied in ${KAYOBE_CONFIG_PATH}/bifrost.yml. Consult the Bifrost documentation for
further details of Bifrost usage and configuration.
Bifrost installation
Note: This section may be skipped if using an upstream Bifrost container image.
The following options are used if building the Bifrost container image locally.
kolla_bifrost_source_url URL of Bifrost source code repository. Default is https://opendev.org/
openstack/bifrost.
kolla_bifrost_source_version Version (branch, tag, etc.) of Bifrost source code repository. De-
fault is {{ openstack_branch }}, which is the same as the Kayobe upstream branch name.
For example, to install Bifrost from a custom git repository:
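For example, with an illustrative URL and branch:
kolla_bifrost_source_url: https://git.example.com/bifrost
kolla_bifrost_source_version: downstream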
Bifrost uses Diskimage builder (DIB) to build a root disk image that is deployed to overcloud hosts when
they are provisioned. The following options configure how this image is built. Consult the Diskimage-
builder documentation for further information on building disk images.
The default configuration builds a CentOS 8 whole disk (partitioned) image with SELinux disabled and
a serial console enabled. Cloud-init is used to process the configuration drive built by Bifrost, rather than
the Bifrost default of simple-init.
kolla_bifrost_dib_os_element DIB base OS element. Default is {{ os_distribution }}.
kolla_bifrost_dib_os_release DIB image OS release. Default is {{ os_release }}.
kolla_bifrost_dib_elements_default Added in the Train release. Use
kolla_bifrost_dib_elements in earlier releases.
List of default DIB elements. Default is ["disable-selinux", "enable-serial-console",
"vm"] when os_distribution is centos, or ["enable-serial-console", "vm"] other-
wise. The vm element is poorly named, and causes DIB to build a whole disk image rather than a
single partition.
kolla_bifrost_dib_elements_extra Added in the Train release. Use kolla_bifrost_dib_elements
in earlier releases.
List of additional DIB elements. Default is none.
kolla_bifrost_dib_elements List of DIB elements. Default is a combination of
kolla_bifrost_dib_elements_default and kolla_bifrost_dib_elements_extra.
kolla_bifrost_dib_init_element DIB init element. Default is cloud-init-datasources.
kolla_bifrost_dib_env_vars_default Added in the Train release. Use
kolla_bifrost_dib_env_vars in earlier releases.
DIB default environment variables. Default is {"DIB_BOOTLOADER_DEFAULT_CMDLINE": "nofb
nomodeset gfxpayload=text net.ifnames=1", "DIB_CLOUD_INIT_DATASOURCES":
"ConfigDrive"}.
kolla_bifrost_dib_env_vars_extra Added in the Train release. Use kolla_bifrost_dib_env_vars
in earlier releases.
DIB additional environment variables. Default is none.
kolla_bifrost_dib_env_vars DIB environment variables. Default is combination of
kolla_bifrost_dib_env_vars_default and kolla_bifrost_dib_env_vars_extra.
kolla_bifrost_dib_packages List of DIB packages to install. Default is to install no extra packages.
The disk image is built during the deployment of seed services. It is worth noting that currently, the
image will not be rebuilt if it already exists. To force rebuilding the image, it is necessary to remove the
file. On the seed:
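A sketch of removing the image inside the bifrost_deploy container (the image path is an assumption; check the container for the actual location):
$ docker exec bifrost_deploy rm /httpboot/deployment_image.qcow2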
In the following, we extend the list of DIB elements to add the growpart element:
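For instance, in ${KAYOBE_CONFIG_PATH}/bifrost.yml:
kolla_bifrost_dib_elements_extra:
  - "growpart"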
By default, DIB will format the image as ext4. In some cases it might be useful to use XFS, for example
when using the overlay Docker storage driver which can reach the maximum number of hardlinks
allowed by ext4.
In DIB, we achieve this by setting the FS_TYPE environment variable to xfs.
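One way to express this:
kolla_bifrost_dib_env_vars_extra:
  FS_TYPE: "xfs"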
When debugging a failed deployment, it can sometimes be necessary to allow access to the image via a
preconfigured user account with a known password. This can be achieved via the devuser element.
This example shows how to add the devuser element, and configure a username and password for an
account that has passwordless sudo:
kolla_bifrost_dib_elements_extra:
  - "devuser"
kolla_bifrost_dib_env_vars_extra:
  DIB_DEV_USER_USERNAME: "devuser"
  DIB_DEV_USER_PASSWORD: "correct horse battery staple"
  DIB_DEV_USER_PWDLESS_SUDO: "yes"
Alternatively, the dynamic-login element can be used to authorize SSH keys by appending them to the
kernel arguments.
It can be necessary to install additional packages in the root disk image. Rather than needing to write a
custom DIB element, we can use the kolla_bifrost_dib_packages variable. For example, to install
the biosdevname package:
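For instance:
kolla_bifrost_dib_packages:
  - "biosdevname"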
The name of the root disk image to deploy can be configured via the
kolla_bifrost_deploy_image_filename option, which defaults to deployment_image.qcow2. It
can be defined globally in ${KAYOBE_CONFIG_PATH}/bifrost.yml, or defined per-group or per-host
in the Kayobe inventory. This can be used to provision different images across the overcloud.
While only a single disk image can be built with Bifrost, starting from the Yoga 12.0.0 release, Kayobe
supports building multiple disk images directly through Diskimage builder. Consult the overcloud host
disk image build documentation for more details.
Ironic configuration
The following options configure the Ironic service in the bifrost-deploy container.
kolla_bifrost_enabled_hardware_types List of hardware types to enable for Bifrost's Ironic. De-
fault is ["ipmi"].
kolla_bifrost_extra_kernel_options List of extra kernel parameters for Bifrost's Ironic PXE
configuration. Default is none.
The following options configure the Ironic Inspector service in the bifrost-deploy container.
kolla_bifrost_inspector_processing_hooks List of inspector processing plugins. Default is
{{ inspector_processing_hooks }}, defined in ${KAYOBE_CONFIG_PATH}/inspector.
yml.
kolla_bifrost_inspector_port_addition Which MAC addresses to add as ports during intro-
spection. One of all, active or pxe. Default is {{ inspector_add_ports }}, defined in
${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_inspector_extra_kernel_options List of extra kernel parameters for the inspec-
tor default PXE configuration. Default is {{ inspector_extra_kernel_options }}, defined
in ${KAYOBE_CONFIG_PATH}/inspector.yml. When customising this variable, the default ex-
tra kernel parameters should be kept to retain full node inspection capabilities.
kolla_bifrost_inspector_rules List of introspection rules for Bifrost's Ironic Inspector service.
Default is {{ inspector_rules }}, defined in ${KAYOBE_CONFIG_PATH}/inspector.yml.
Note: If building IPA images locally (ipa_build_images is true) this section can be skipped.
The following options configure the source of Ironic Python Agent images used by Bifrost for inspection
and deployment. Consult the Ironic Python Agent documentation for full details.
kolla_bifrost_ipa_kernel_upstream_url URL of Ironic Python Agent (IPA) ker-
nel image. Default is {{ inspector_ipa_kernel_upstream_url }}, defined in
${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_ipa_kernel_checksum_url URL of checksum of Ironic Python Agent (IPA)
kernel image. Default is {{ inspector_ipa_kernel_checksum_url }}, defined in
${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_ipa_kernel_checksum_algorithm Algorithm of checksum of Ironic Python
Agent (IPA) kernel image. Default is {{ inspector_ipa_kernel_checksum_algorithm }},
defined in ${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_ipa_ramdisk_upstream_url URL of Ironic Python Agent (IPA) ramdisk
image. Default is {{ inspector_ipa_ramdisk_upstream_url }}, defined in
${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_ipa_ramdisk_checksum_url URL of checksum of Ironic Python Agent (IPA)
ramdisk image. Default is {{ inspector_ipa_ramdisk_checksum_url }}, defined in
${KAYOBE_CONFIG_PATH}/inspector.yml.
kolla_bifrost_ipa_ramdisk_checksum_algorithm Algorithm of checksum of Ironic Python
Agent (IPA) ramdisk image. Default is {{ inspector_ipa_ramdisk_checksum_algorithm
}}, defined in ${KAYOBE_CONFIG_PATH}/inspector.yml.
Inventory configuration
Note: This feature is currently not well tested. It is advisable to use autodiscovery of overcloud servers
instead.
The following option is used to configure a static inventory of servers for Bifrost.
kolla_bifrost_servers
Server inventory for Bifrost in the JSON file format.
Custom Configuration
Further configuration of arbitrary Ansible variables for Bifrost can be provided via the following files:
• ${KAYOBE_CONFIG_PATH}/kolla/config/bifrost/bifrost.yml
• ${KAYOBE_CONFIG_PATH}/kolla/config/bifrost/dib.yml
These are both passed as extra variables files to ansible-playbook, but the naming scheme provides a
separation of DIB image related variables from other variables. It may be necessary to inspect the Bifrost
source code for the full set of variables that may be configured.
For example, to configure debug logging for Ironic Inspector:
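A sketch of ${KAYOBE_CONFIG_PATH}/kolla/config/bifrost/bifrost.yml, assuming Bifrost exposes an inspector_debug variable (check the Bifrost source for the exact name):
---
inspector_debug: true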
This section covers configuration for building overcloud host disk images with Diskimage builder
(DIB), which is available from the Yoga 12.0.0 release. This configuration is applied in
${KAYOBE_CONFIG_PATH}/overcloud-dib.yml.
From the Yoga release, disk images for overcloud hosts can be built directly using Diskimage builder
rather than through Bifrost. This is enabled with the following option:
overcloud_dib_build_host_images Whether to build host disk images with DIB directly instead of
through Bifrost. Setting it to true disables Bifrost image build and allows images to be built with
the kayobe overcloud host image build command. Default value is false, except on Rocky
where it is true. This will change in a future release.
With this option enabled, Bifrost will be configured to stop building a root disk image. This will become
the default behaviour in a future release.
Kayobe uses Diskimage builder (DIB) to build root disk images that are deployed to overcloud hosts
when they are provisioned. The following options configure how these images are built. Consult the
Diskimage-builder documentation for further information on building disk images.
The default configuration builds a whole disk (partitioned) image using the selected OS distribution
(CentOS Stream 8 by default) with serial console enabled, and SELinux disabled if CentOS Stream
or Rocky Linux is used. Cloud-init is used to process the configuration drive built by Bifrost during
provisioning.
overcloud_dib_host_images List of overcloud host disk images to build. Each element is
a dict defining an image in a format accepted by the stackhpc.os-images role. Default
is to build an image named deployment_image configured with the overcloud_dib_*
variables defined below: {"name": "deployment_image", "elements": "{{
overcloud_dib_elements }}", "env": "{{ overcloud_dib_env_vars }}",
"packages": "{{ overcloud_dib_packages }}"}.
overcloud_dib_os_element DIB base OS element. Default is {{ 'rocky-container' if
os_distribution == 'rocky' else os_distribution }}.
overcloud_dib_os_release DIB image OS release. Default is {{ os_release }}.
overcloud_dib_elements_default List of default DIB elements. De-
fault is ["centos", "cloud-init-datasources", "disable-selinux",
"enable-serial-console", "vm"] when overcloud_dib_os_element is centos,
or ["rocky-container", "cloud-init-datasources", "disable-selinux",
"enable-serial-console", "vm"] when overcloud_dib_os_element is rocky or
["ubuntu", "cloud-init-datasources", "enable-serial-console", "vm"] when
overcloud_dib_os_element is ubuntu. The vm element is poorly named, and causes DIB to
build a whole disk image rather than a single partition.
overcloud_dib_elements_extra List of additional DIB elements. Default is none.
overcloud_dib_elements List of DIB elements. Default is a combination of
overcloud_dib_elements_default and overcloud_dib_elements_extra.
overcloud_dib_env_vars_default DIB default environment variables. De-
fault is {"DIB_BOOTLOADER_DEFAULT_CMDLINE": "nofb nomodeset
gfxpayload=text net.ifnames=1", "DIB_CLOUD_INIT_DATASOURCES":
"ConfigDrive", "DIB_CONTAINERFILE_RUNTIME": "docker",
"DIB_CONTAINERFILE_NETWORK_DRIVER": "host", "DIB_RELEASE": "{{
overcloud_dib_os_release }}"}.
overcloud_dib_env_vars_extra DIB additional environment variables. Default is none.
overcloud_dib_env_vars DIB environment variables. Default is combination of
overcloud_dib_env_vars_default and overcloud_dib_env_vars_extra.
overcloud_dib_packages List of DIB packages to install. Default is to install no extra packages.
overcloud_dib_git_elements_default List of default git repositories containing Diskimage
Builder (DIB) elements. See stackhpc.os-images role for usage. Default is empty.
overcloud_dib_git_elements_extra List of additional git repositories containing Diskimage
Builder (DIB) elements. See stackhpc.os-images role for usage. Default is empty.
It is worth noting that images will not be rebuilt if they already exist. To force rebuilding images, it is
necessary to use the --force-rebuild argument.
In the following, we extend the list of DIB elements to add the growpart element:
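For instance, in ${KAYOBE_CONFIG_PATH}/overcloud-dib.yml:
overcloud_dib_elements_extra:
  - "growpart"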
By default, DIB will format the image as ext4. In some cases it might be useful to use XFS, for example
when using the overlay Docker storage driver which can reach the maximum number of hardlinks
allowed by ext4.
In DIB, we achieve this by setting the FS_TYPE environment variable to xfs.
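One way to express this:
overcloud_dib_env_vars_extra:
  FS_TYPE: "xfs"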
When debugging a failed deployment, it can sometimes be necessary to allow access to the image via a
preconfigured user account with a known password. This can be achieved via the devuser element.
This example shows how to add the devuser element, and configure a username and password for an
account that has passwordless sudo:
overcloud_dib_elements_extra:
  - "devuser"
overcloud_dib_env_vars_extra:
  DIB_DEV_USER_USERNAME: "devuser"
  DIB_DEV_USER_PASSWORD: "correct horse battery staple"
  DIB_DEV_USER_PWDLESS_SUDO: "yes"
Alternatively, the dynamic-login element can be used to authorize SSH keys by appending them to the
kernel arguments.
Sometimes it is useful to use custom DIB elements that are not shipped with DIB itself. This can be done
by sharing them in a git repository.
overcloud_dib_git_elements:
  - repo: "https://git.example.com/custom-dib-elements"
    local: "{{ source_checkout_path }}/custom-dib-elements"
    version: "master"
    elements_path: "elements"
It can be necessary to install additional packages in the root disk image. Rather than needing to write a
custom DIB element, we can use the overcloud_dib_packages variable. For example, to install the
biosdevname package:
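For instance:
overcloud_dib_packages:
  - "biosdevname"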
It can be necessary to build multiple images to support the various types of hardware present in a de-
ployment or the different functions performed by overcloud hosts. This can be configured with the
overcloud_dib_host_images variable, using a format accepted by the stackhpc.os-images role. Note
that image names should not include the file extension. For example, to build a second image with a
development user account and the biosdevname package:
devuser_env_vars:
  DIB_DEV_USER_USERNAME: "devuser"
  DIB_DEV_USER_PASSWORD: "correct horse battery staple"
  DIB_DEV_USER_PWDLESS_SUDO: "yes"
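Building on the devuser_env_vars dict above, a sketch of how the overcloud_dib_host_images list might reference it (the combine filter usage is illustrative):
overcloud_dib_host_images:
  - name: "deployment_image"
    elements: "{{ overcloud_dib_elements }}"
    env: "{{ overcloud_dib_env_vars }}"
    packages: "{{ overcloud_dib_packages }}"
  - name: "debug_deployment_image"
    elements: "{{ overcloud_dib_elements + ['devuser'] }}"
    env: "{{ overcloud_dib_env_vars | combine(devuser_env_vars) }}"
    packages: "{{ overcloud_dib_packages + ['biosdevname'] }}"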
Running the kayobe overcloud host image build command with this configuration will create
two images: deployment_image.qcow2 and debug_deployment_image.qcow2.
See disk image deployment configuration in Bifrost for how to configure the root disk image to be used
to provision each host.
This section covers configuration of Ironic Python Agent (IPA) which is used by Ironic and Ironic In-
spector to deploy and inspect bare metal nodes. This is used by the Bifrost services that run on the
seed host, and also by Ironic and Ironic Inspector services running in the overcloud for bare metal
compute, if enabled (kolla_enable_ironic is true). IPA configuration is typically applied in
${KAYOBE_CONFIG_PATH}/ipa.yml. Consult the IPA documentation for full details of IPA usage and
configuration.
Note: This section may be skipped if not building IPA images locally (ipa_build_images is false).
The following options cover building of IPA images via Diskimage-builder (DIB). Consult the
Diskimage-builder documentation for full details.
The default configuration builds a CentOS 8 ramdisk image which includes the upstream IPA source
code, and has a serial console enabled.
The images are built for Bifrost via kayobe seed deployment image build, and for Ironic in the
overcloud (if enabled) via kayobe overcloud deployment image build.
ipa_build_images Whether to build IPA images from source. Default is False.
ipa_build_source_url URL of IPA source repository. Default is https://opendev.org/openstack/
ironic-python-agent
ipa_build_source_version Version of IPA source repository. Default is {{ openstack_branch
}}.
ipa_builder_source_url URL of IPA builder source repository. Default is https://opendev.org/
openstack/ironic-python-agent-builder
ipa_builder_source_version Version of IPA builder source repository. Default is master.
ipa_build_dib_elements_default List of default Diskimage Builder (DIB) elements to
use when building IPA images. Default is ["centos", "enable-serial-console",
"ironic-python-agent-ramdisk"].
ipa_build_dib_elements_extra List of additional Diskimage Builder (DIB) elements to use when
building IPA images. Default is empty.
ipa_build_dib_elements List of Diskimage Builder (DIB) elements to use when build-
ing IPA images. Default is combination of ipa_build_dib_elements_default and
ipa_build_dib_elements_extra.
ipa_build_dib_env_default Dictionary of default environment variables to provide to Diskim-
age Builder (DIB) during IPA image build. Default is {"DIB_RELEASE": "8-stream",
"DIB_REPOLOCATION_ironic_python_agent": "{{ ipa_build_source_url }}",
"DIB_REPOREF_ironic_python_agent": "{{ ipa_build_source_version }}",
"DIB_REPOREF_requirements": "{{ openstack_branch }}"}.
ipa_build_dib_env_extra Dictionary of additional environment variables to provide to Diskimage
Builder (DIB) during IPA image build. Default is empty.
ipa_build_dib_env Dictionary of environment variables to provide to Diskimage Builder (DIB)
during IPA image build. Default is a combination of ipa_build_dib_env_default and
ipa_build_dib_env_extra.
ipa_build_dib_git_elements_default List of default git repositories containing Diskimage
Builder (DIB) elements. See stackhpc.os-images role for usage. Default is one item for IPA builder.
ipa_build_dib_git_elements_extra List of additional git repositories containing Diskimage
Builder (DIB) elements. See stackhpc.os-images role for usage. Default is none.
In the following example, we extend the list of DIB elements to add the mellanox element, which can be
useful for inspecting hardware with Mellanox InfiniBand NICs.
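A sketch in ${KAYOBE_CONFIG_PATH}/ipa.yml:
ipa_build_dib_elements_extra:
  - "mellanox"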
When debugging a failed deployment, it can sometimes be necessary to allow access to the image via a
preconfigured user account with a known password. This can be achieved via the devuser element.
This example shows how to add the devuser element, and configure a username and password for an
account that has passwordless sudo:
ipa_build_dib_elements_extra:
  - "devuser"
ipa_build_dib_env_extra:
  DIB_DEV_USER_USERNAME: "devuser"
  DIB_DEV_USER_PASSWORD: "correct horse battery staple"
  DIB_DEV_USER_PWDLESS_SUDO: "yes"
Alternatively, the dynamic-login element can be used to authorize SSH keys by appending them to the
kernel arguments.
Further information on troubleshooting IPA can be found here.
Sometimes it is useful to use custom DIB elements that are not shipped with DIB itself. This can be done
by sharing them in a git repository.
ipa_build_dib_git_elements:
  - repo: "https://git.example.com/custom-dib-elements"
    local: "{{ source_checkout_path }}/custom-dib-elements"
    version: "master"
    elements_path: "elements"
It can be necessary to install additional packages in the IPA image. Rather than needing to write a
custom DIB element, we can use the ipa_build_dib_packages variable. For example, to install the
biosdevname package:
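For instance:
ipa_build_dib_packages:
  - "biosdevname"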
Note: If building IPA images locally (ipa_build_images is true) this section can be skipped.
The following options configure the source of Ironic Python Agent images for inspection and deployment.
Consult the Ironic Python Agent documentation for full details.
ipa_images_upstream_url_suffix Suffix of upstream Ironic deployment image files. Default is
based on {{ openstack_branch }}.
ipa_images_kernel_name Name of Ironic deployment kernel image to register in Glance. Default is
ipa.kernel.
ipa_kernel_upstream_url URL of Ironic deployment kernel image to download. Default is
https://tarballs.openstack.org/ironic-python-agent/dib/files/ipa-centos8{{
ipa_images_upstream_url_suffix }}.kernel.
ipa_kernel_checksum_url URL of checksum of Ironic deployment kernel image. Default is {{
ipa_kernel_upstream_url }}.{{ ipa_kernel_checksum_algorithm }}.
ipa_kernel_checksum_algorithm Algorithm of checksum of Ironic deployment kernel image. De-
fault is sha256.
ipa_images_ramdisk_name Name of Ironic deployment ramdisk image to register in Glance. Default
is ipa.initramfs.
ipa_ramdisk_upstream_url URL of Ironic deployment ramdisk image to download. Default is
https://tarballs.openstack.org/ironic-python-agent/dib/files/ipa-centos8{{
ipa_images_upstream_url_suffix }}.initramfs.
ipa_ramdisk_checksum_url URL of checksum of Ironic deployment ramdisk image. Default is {{
ipa_ramdisk_upstream_url }}.{{ ipa_ramdisk_checksum_algorithm }}.
ipa_ramdisk_checksum_algorithm Algorithm of checksum of Ironic deployment ramdisk image.
Default is sha256.
The following options configure how IPA operates during deployment and inspection.
ipa_collect_lldp Whether to enable collection of LLDP TLVs. Default is True.
ipa_collectors_default List of default inspection collectors to run.
Note: extra-hardware is not currently included as it requires a ramdisk with the hardware
python module installed.
Note: The extra-hardware collector must be enabled in order to execute benchmarks during
inspection.
The extra-hardware collector may be used to collect additional information about hardware during
inspection. It is also a requirement for running benchmarks. This collector depends on the Python
hardware package, which is not installed in IPA images by default.
The following example enables the extra-hardware collector:
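A sketch, assuming an ipa_collectors_extra list variable alongside ipa_collectors_default (check ${KAYOBE_CONFIG_PATH}/ipa.yml for the exact name):
ipa_collectors_extra:
  - "extra-hardware"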
The following example shows how to pass additional kernel arguments to IPA:
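A sketch, assuming an ipa_kernel_options_extra list variable (again, confirm the exact name in ipa.yml); the arguments shown are illustrative:
ipa_kernel_options_extra:
  - "ipa-lldp-timeout=90.0"
  - "ipa-collect-lldp=1"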
Docker registry
This section covers configuration of the Docker registry that may be deployed, by default on the seed host.
Docker registry configuration is typically applied in ${KAYOBE_CONFIG_PATH}/docker-registry.
yml. Consult the Docker registry documentation for further details of registry usage and configuration.
The registry is deployed during the kayobe seed host configure command.
TLS
Basic authentication
It is recommended to enable HTTP basic authentication for the registry. This needs to be done in con-
junction with enabling TLS for the registry: using basic authentication over unencrypted HTTP is not
supported.
docker_registry_enable_basic_auth Whether to enable basic authentication for the registry. De-
fault is false.
docker_registry_basic_auth_htpasswd_path Path to a htpasswd formatted password store for
the registry. Default is none.
The password store uses a htpasswd format. The following example shows how to generate a password
and add it to the kolla user in the password store. The password store may be stored with the Kayobe
configuration, under ${KAYOBE_CONFIG_PATH}/docker-registry/. The file may be encrypted via
Ansible Vault.
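A sketch using the Apache htpasswd utility (the Docker registry expects bcrypt-hashed entries):
$ mkdir -p ${KAYOBE_CONFIG_PATH}/docker-registry
$ htpasswd -nbB kolla <password> > ${KAYOBE_CONFIG_PATH}/docker-registry/htpasswd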
Next we configure Kayobe to enable basic authentication for the registry, and specify the path to the
password store.
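For example, in ${KAYOBE_CONFIG_PATH}/docker-registry.yml (the use of the kayobe_config_path variable to locate the file is an assumption; an absolute path works equally well):
docker_registry_enable_basic_auth: true
docker_registry_basic_auth_htpasswd_path: "{{ kayobe_config_path }}/docker-registry/htpasswd"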
Enabling the registry does not automatically set the configuration for Docker engine to use it. This should
be done via the docker_registry variable.
TLS
If the registry is using a privately signed TLS certificate, it is necessary to configure Docker engine with
the CA certificate.
If TLS is enabled, Docker engine should be configured to use HTTPS to communicate with it:
Basic authentication
If basic authentication is enabled, Kolla Ansible needs to be configured with the username and password.
This section covers configuration of the user-defined container deployment functionality that runs on
the seed host.
Configuration
Note the optional pre and post Ansible task files: these need to be created in the kayobe-config path,
and will be run before and after deployment of each container.
Possible options for container deployment:
seed_containers:
  containerA:
    capabilities:
    command:
    comparisons:
    detach:
    env:
    network_mode:
    image:
    init:
    ipc_mode:
    pid_mode:
    ports:
    privileged:
    restart_policy:
    shm_size:
    sysctls:
    tag:
    ulimits:
    user:
    volumes:
For a detailed explanation of each option - please see Ansible docker_container module page.
Kayobe applies the following defaults to the required docker_container variables:
---
deploy_containers_defaults:
  comparisons:
    image: strict
    env: strict
    volumes: strict
  detach: True
  network_mode: "host"
  init: True
  privileged: False
  restart_policy: "unless-stopped"
deploy_containers_docker_api_timeout: 120
Infrastructure VMs
Kayobe can deploy infrastructure VMs to the seed-hypervisor. These can be used to provide supplemen-
tary services that do not run well within a containerised environment or are dependencies of the control
plane.
Configuration
To deploy an infrastructure VM, add a new host to the infra-vms group in the inventory:
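A sketch of an inventory entry (the host name is illustrative):
[infra-vms]
an-example-vm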
The configuration of the virtual machine should be done using host_vars. These override the
group_vars defined for the infra-vms group. Most variables have sensible defaults defined, but there
are a few variables which must be set.
Mandatory variables
All networks must have an interface defined, as described in Per-host Network Configuration. By default
the VMs are attached to the admin overcloud network. If, for example, admin_oc_net_name was set
to example_net, you would need to define example_net_interface. It is possible to change the
list of networks that a VM is attached to by modifying infra_vm_network_interfaces. Additional
interfaces can be added by setting infra_vm_network_interfaces_extra.
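A sketch of a host_vars file for the VM, assuming the example_net network above (the file location under the inventory is illustrative):
# ${KAYOBE_CONFIG_PATH}/inventory/host_vars/an-example-vm/network-interfaces
example_net_interface: eth0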
The following defaults are applied to the infrastructure VM variables. Any of these variables can be
overridden with a host_var.
---
###############################################################################
# Infrastructure VM configuration.

# Memory in MB.
infra_vm_memory_mb: "{{ 16 * 1024 }}"

# Number of vCPUs.
infra_vm_vcpus: 4

# List of volumes.
infra_vm_volumes:
  - "{{ infra_vm_root_volume }}"
  - "{{ infra_vm_data_volume }}"

# Root volume.
infra_vm_root_volume:
  name: "{{ infra_vm_name }}-root"
  pool: "{{ infra_vm_pool }}"
  capacity: "{{ infra_vm_root_capacity }}"
  format: "{{ infra_vm_root_format }}"
  image: "{{ infra_vm_root_image }}"

# Data volume.
infra_vm_data_volume:
  name: "{{ infra_vm_name }}-data"
  pool: "{{ infra_vm_pool }}"
  capacity: "{{ infra_vm_data_capacity }}"
  format: "{{ infra_vm_data_format }}"

# Base image for the infra VM root volume. Default is an Ubuntu cloud image
# when os_distribution is 'ubuntu', or a cloud image matching the selected OS
# distribution otherwise.
infra_vm_root_image: >-
  {%- if os_distribution == 'ubuntu' %}
  https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
  {%- endif %}

###############################################################################

# User with which to access the infrastructure vm via SSH during bootstrap, in
# order to setup the Kayobe user account. Default is {{ os_distribution }}.
infra_vm_bootstrap_user: "{{ os_distribution }}"

###############################################################################

# Whether a 'data' LVM volume group should exist on the infrastructure vm. By
# default this contains a 'docker-volumes' logical volume for Docker volume
# storage. It will also be used for Docker container and image storage if
# 'docker_storage_driver' is set to 'devicemapper'. Default is true if
# 'docker_storage_driver' is set to 'devicemapper', or false otherwise.
infra_vm_lvm_group_data_enabled: "{{ docker_storage_driver == 'devicemapper' }}"

# List of disks for use by infrastructure vm LVM data volume group. Default to
# an invalid value to require configuration.
infra_vm_lvm_group_data_disks:
  - changeme

# Filesystem for docker volumes LVM backing volume. ext4 allows for shrinking.
infra_vm_lvm_group_data_lv_docker_volumes_fs: ext4

###############################################################################

# A firewalld zone to set as the default. Default is unset, in which case the
# default zone will not be changed.
infra_vm_firewalld_default_zone:
Customisations
By default the VM has 16G of RAM. This may be changed via infra_vm_memory_mb:
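For example, in a host_vars file for the VM:
infra_vm_memory_mb: "{{ 8 * 1024 }}"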
The default network configuration attaches infra VMs to the admin network. If this is not appropriate,
modify infra_vm_network_interfaces. At a minimum the network interface name for the network
should be defined.
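A sketch, attaching the VM to a single hypothetical network:
infra_vm_network_interfaces:
  - example_net
example_net_interface: eth0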
Configuration for all VMs can be set using extra_vars defined in $KAYOBE_CONFIG_PATH/
infra-vms.yml. Note that normal Ansible precedence rules apply and the variables will override any
host_vars. If you need to override the defaults, but still maintain per-host settings, use group_vars
instead.
Once the initial configuration has been done follow the steps in Infrastructure VMs.
Nova cells
In the Train release, Kolla Ansible gained full support for the Nova cells v2 scale out feature. Whilst
configuring Nova cells is documented in Kolla Ansible, implementing that configuration in Kayobe is
documented here.
In Kolla Ansible, Nova cells are configured via group variables. In Kayobe, these group variables can be
set via Kayobe configuration. For example, to configure cell0001 the following file could be created:
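A sketch of such a file, e.g. ${KAYOBE_CONFIG_PATH}/kolla/inventory/group_vars/cell0001/all, using the cell variables from the Kolla Ansible Nova cells guide (treat the exact path and variable names as assumptions to verify against your Kolla Ansible release):
---
nova_cell_name: cell0001
nova_cell_conductor_group: cell0001-control
nova_cell_novncproxy_group: cell0001-vnc
nova_cell_compute_group: cell0001-compute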
After defining the cell group_vars the Kayobe inventory can be configured. In Kayobe, cell controllers
and cell compute hosts become part of the existing controllers and compute Kayobe groups because
typically they will need to be provisioned in the same way. In Kolla Ansible, to prevent non-cell services
being mapped to cell controllers, the controllers group must be split into two. The inventory file
should also include the cell definitions. The following groups and hosts files give an example of how this
may be achieved:
###############################################################################
# Seed groups.

[seed]
# Empty group to provide declaration of seed group.

[seed-hypervisor]
# Empty group to provide declaration of seed-hypervisor group.

[container-image-builders:children]
# Build container images on the seed by default.
seed

###############################################################################
# Overcloud groups.

[controllers]
# Empty group to provide declaration of controllers group.

[network:children]
# Add controllers to network group by default for backwards compatibility,
# although they could be separate hosts.
top-level-controllers

[monitoring]
# Empty group to provide declaration of monitoring group.

[storage]
# Empty group to provide declaration of storage group.

[compute]
# Empty group to provide declaration of compute group.

[overcloud:children]
controllers
network
monitoring
storage
compute

###############################################################################
# Docker groups.

[docker:children]
# Hosts in this group will have Docker installed.
seed
controllers
network
monitoring
storage
compute

[docker-registry:children]
# Hosts in this group will have a Docker Registry deployed. This group should
# generally contain only a single host, to avoid deploying multiple independent
# registries.
seed

###############################################################################
[baremetal-compute]
# Empty group to provide declaration of baremetal-compute group.

###############################################################################
# Networking groups.

[mgmt-switches]
# Empty group to provide declaration of mgmt-switches group.

[ctl-switches]
# Empty group to provide declaration of ctl-switches group.

[hs-switches]
# Empty group to provide declaration of hs-switches group.

[switches:children]
mgmt-switches
ctl-switches
hs-switches
[ansible-control]
# This host acts as the configuration management Ansible control host. This must be
# localhost.
localhost ansible_connection=local

[seed-hypervisor]
# Add a seed hypervisor node here if required. This host will run a seed node
# Virtual Machine.

[seed]
operator

[controllers:children]
top-level-controllers
cell-controllers

[top-level-controllers]
control01

[cell-controllers:children]
cell01-control
cell02-control

[compute:children]
cell01-compute
cell02-compute

[cell01:children]
cell01-control
cell01-compute
cell01-vnc

[cell01-control]
control02

[cell01-vnc]
control02

[cell01-compute]
compute01

[cell02:children]
cell02-control
cell02-compute
cell02-vnc

[cell02-control]
control03

[cell02-vnc]
control03

[cell02-compute]
compute02
compute03

###############################################################################

[mgmt-switches]
# Add management network switches here if required.

[ctl-switches]
# Add control and provisioning switches here if required.

[hs-switches]
# Add high speed switches here if required.
Having configured the Kayobe inventory, the Kolla Ansible inventory can be configured. Currently this
can be done via the kolla_overcloud_inventory_top_level_group_map variable. For example, to
configure the two cells defined in the Kayobe inventory above, the variable could be set to the following:
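A sketch, mapping the Kolla Ansible top level groups onto the Kayobe groups defined above (the non-cell mappings are shown for completeness and may not match your deployment exactly):
kolla_overcloud_inventory_top_level_group_map:
  control:
    groups:
      - top-level-controllers
  network:
    groups:
      - network
  compute:
    groups:
      - compute
  monitoring:
    groups:
      - monitoring
  storage:
    groups:
      - storage
  cell01:
    groups:
      - cell01
  cell02:
    groups:
      - cell02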
3.7 Deployment
This section describes usage of Kayobe to install an OpenStack cloud onto a set of bare metal servers.
We assume access is available to a node which will act as the hypervisor hosting the seed node in a VM.
We also assume that this seed hypervisor has access to the bare metal nodes that will form the OpenStack
control plane. Finally, we assume that the control plane nodes have access to the bare metal nodes that
will form the workload node pool.
See also:
Information on the configuration of a Kayobe environment is available here.
Before starting deployment we must bootstrap the Ansible control host. Tasks performed here include:
• Install required Ansible roles from Ansible Galaxy.
• Generate an SSH key if necessary and add it to the current user's authorised keys.
• Install Kolla Ansible locally at the configured version.
To bootstrap the Ansible control host:
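The corresponding CLI command:
(kayobe) $ kayobe control host bootstrap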
The physical network can be managed by Kayobe, which uses Ansible's network modules. Currently
Dell Network OS 6 and Dell Network OS 9 switches are supported but this could easily be extended. To
provision the physical network:
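A typical invocation, specifying the switch group:
(kayobe) $ kayobe physical network configure --group <group>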
The --group argument is used to specify an Ansible group containing the switches to be configured.
The names or descriptions should be separated by commas. This may be useful when adding compute
nodes to an existing deployment, in order to avoid changing the configuration interfaces in use by active
nodes.
The --display argument will display the candidate switch configuration, without actually applying it.
See also:
Information on configuration of physical network devices is available here.
Note: It is not necessary to run the seed services in a VM. To use an existing bare metal host or a VM
provisioned outside of Kayobe, this section may be skipped.
Host Configuration
To configure the seed hypervisor's host OS, and the Libvirt/KVM virtualisation support:
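The corresponding command:
(kayobe) $ kayobe seed hypervisor host configure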
See also:
Information on configuration of hosts is available here.
3.7.4 Seed
VM Provisioning
Note: It is not necessary to run the seed services in a VM. To use an existing bare metal host or a VM
provisioned outside of Kayobe, this step may be skipped. Ensure that the Ansible inventory contains a
host for the seed.
The seed hypervisor should have CentOS, Rocky Linux, or Ubuntu with libvirt installed. It should have
libvirt networks configured for all networks that the seed VM needs access to, and a libvirt storage
pool available for the seed VM's volumes. To provision the seed VM:
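The corresponding command:
(kayobe) $ kayobe seed vm provision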
When this command has completed the seed VM should be active and accessible via SSH. Kayobe will
update the Ansible inventory with the IP address of the VM.
Host Configuration
Note: If the seed host uses disks that have been in use in a previous installation, it may be necessary
to wipe partition and LVM data from those disks. To wipe all disks that are not mounted during host
configuration:
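A sketch of the commands involved; the --wipe-disks flag performs the wipe described in the note, and should be omitted for a normal run:
(kayobe) $ kayobe seed host configure --wipe-disks
(kayobe) $ kayobe seed host configure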
See also:
Information on configuration of hosts is available here.
Note: It is possible to use prebuilt container images from an image registry such as Dockerhub. In this
case, this step can be skipped.
It is possible to use prebuilt container images from an image registry such as Dockerhub. In some cases it
may be necessary to build images locally either to apply local image customisation or to use a downstream
version of kolla. Images are built by hosts in the container-image-builders group, which by default
includes the seed.
To build container images:
It is possible to build a specific set of images by supplying one or more image name regular expressions:
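For example (the image name pattern is illustrative):
(kayobe) $ kayobe seed container image build
(kayobe) $ kayobe seed container image build bifrost-deploy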
In order to push images to a registry after they are built, add the --push argument.
See also:
Information on configuration of Kolla for building container images is available here.
At this point the seed services need to be deployed on the seed VM. These services are deployed in the
bifrost_deploy container.
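The corresponding command:
(kayobe) $ kayobe seed service deploy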
This command will also build the Operating System image that will be used to deploy the overcloud
nodes using Disk Image Builder (DIB), unless overcloud_dib_build_host_images is set to True.
Note: If you are using Rocky Linux, the operating system image must instead be built using kayobe
overcloud host image build.
After this command has completed the seed services will be active.
See also:
Information on configuration of Kolla Ansible is available here. See here for information about configur-
ing Bifrost. Overcloud root disk image configuration provides information on configuring the root disk
image build process. See here for information about deploying additional, custom services (containers)
on a seed node.
Note: It is possible to use prebuilt deployment images. In this case, this step can be skipped.
It is possible to use prebuilt deployment images from the OpenStack hosted tarballs or another source.
In some cases it may be necessary to build images locally either to apply local image customisa-
tion or to use a downstream version of Ironic Python Agent (IPA). In order to build IPA images, the
ipa_build_images variable should be set to True.
To build images locally:
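The corresponding command:
(kayobe) $ kayobe seed deployment image build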
If images have been built previously, they will not be rebuilt. To force rebuilding images, use the
--force-rebuild argument.
See also:
See here for information on how to configure the IPA image build process.
Host disk images are deployed on overcloud hosts during provisioning. To build host disk images:
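The corresponding command:
(kayobe) $ kayobe overcloud host image build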
If images have been built previously, they will not be rebuilt. To force rebuilding images, use the
--force-rebuild argument.
See also:
See here for information on how to configure the overcloud host disk image build process.
For SSH access to the seed, first determine the seed's IP address. We can use the kayobe
configuration dump command to inspect the seed's IP address:
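One way to do this is to dump the ansible_host variable for the seed:
(kayobe) $ kayobe configuration dump --host seed --var-name ansible_host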
The kayobe_ansible_user variable determines which user account will be used by Kayobe when
accessing the machine via SSH. By default this is stack. Use this user to access the seed:
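For example:
$ ssh stack@<seed VM IP>
The seed services run as Docker containers; to list them: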
$ docker ps
Leave the seed VM and return to the shell on the Ansible control host:
$ exit
Warning: Support for infrastructure VMs is considered experimental: its design may change in
future versions without a deprecation period.
Note: It is necessary to perform some configuration before these steps can be followed. Please see
Infrastructure VMs.
VM Provisioning
The hypervisor used to host a VM is controlled via the infra_vm_hypervisor variable. It defaults
to use the seed hypervisor. All hypervisors should have CentOS or Ubuntu with libvirt installed. They
should have libvirt networks configured for all networks that the VM needs access to, and a libvirt
storage pool available for the VM's volumes. The steps needed for the seed and the seed hypervisor
can be found above.
To provision the infra VMs:
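The corresponding command:
(kayobe) $ kayobe infra vm provision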
When this command has completed the infra VMs should be active and accessible via SSH. Kayobe will
update the Ansible inventory with the IP address of the VM.
Host Configuration
Note: If the infra VM host uses disks that have been in use in a previous installation, it may be necessary
to wipe partition and LVM data from those disks. To wipe all disks that are not mounted during host
configuration:
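As with the seed, a sketch; omit --wipe-disks for a normal run:
(kayobe) $ kayobe infra vm host configure --wipe-disks
(kayobe) $ kayobe infra vm host configure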
See also:
Information on configuration of hosts is available here.
A no-op service deployment command is provided to perform additional configuration. The intention is
for users to define hooks to custom playbooks that define any further configuration or service deployment
necessary.
To trigger the hooks:
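The corresponding command:
(kayobe) $ kayobe infra vm service deploy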
Example
In this example we have an infra VM host called dns01 that provides DNS services. The host could be
added to a dns-servers group in the inventory:
[dns-servers]
dns01

[infra-vms:children]
dns-servers
We have a custom playbook targeting the dns-servers group that sets up the DNS server:
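A minimal sketch of such a playbook, assuming a hypothetical dns-server.yml under ${KAYOBE_CONFIG_PATH}/ansible/ and using the distribution's BIND package:
---
- name: Set up a DNS server
  hosts: dns-servers
  become: true
  tasks:
    - name: Ensure BIND is installed
      ansible.builtin.package:
        name: bind
        state: present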
Finally, we add a symlink to set up the playbook as a hook for the kayobe infra vm service deploy
command:
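A sketch, assuming Kayobe's hook directory layout of ${KAYOBE_CONFIG_PATH}/hooks/<command>/post.d/ (treat the exact directory name as an assumption):
$ mkdir -p ${KAYOBE_CONFIG_PATH}/hooks/infra-vm-service-deploy/post.d
$ cd ${KAYOBE_CONFIG_PATH}/hooks/infra-vm-service-deploy/post.d
$ ln -s ../../../ansible/dns-server.yml 50-dns-server.yml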
3.7.6 Overcloud
Discovery
Note: If discovery of the overcloud is not possible, a static inventory of servers using the
bifrost servers.yml file format may be configured using the kolla_bifrost_servers variable in
${KAYOBE_CONFIG_PATH}/bifrost.yml.
Discovery of the overcloud is supported by the ironic inspector service running in the bifrost_deploy
container on the seed. The service is configured to PXE boot unrecognised MAC addresses with an IPA
ramdisk for introspection. If an introspected node does not exist in the ironic inventory, ironic inspector
will create a new entry for it.
Discovery of the overcloud is triggered by causing the nodes to PXE boot using a NIC attached to the
overcloud provisioning network. For many servers this will be the factory default and can be performed
by powering them on.
On completion of the discovery process, the overcloud nodes should be registered with the ironic service
running in the seed host's bifrost_deploy container. The node inventory can be viewed by executing
the following on the seed:
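A sketch, using the standalone baremetal CLI inside the container (the OS_CLOUD name is an assumption based on Bifrost defaults):
$ docker exec -it bifrost_deploy bash
(bifrost_deploy) export OS_CLOUD=bifrost
(bifrost_deploy) baremetal node list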
In order to interact with these nodes using Kayobe, run the following command to add them to the Kayobe
and Kolla-Ansible inventories:
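The corresponding command:
(kayobe) $ kayobe overcloud inventory discover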
See also:
This blog post provides a case study of the discovery process, including automatically naming Ironic
nodes via switch port descriptions, Ironic Inspector and LLDP.
If ironic inspector is in use on the seed host, introspection data will be stored in the local nginx service.
This data may be saved to the control host:
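The corresponding command:
(kayobe) $ kayobe overcloud introspection data save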
--output-dir may be used to specify the directory in which introspection data files will be saved.
--output-format may be used to set the format of the files.
Note: BIOS and RAID configuration may require one or more power cycles of the hardware to complete
the operation. These will be performed automatically.
Note: Currently, BIOS and RAID configuration of overcloud hosts is supported for Dell servers only.
Configuration of BIOS settings and RAID volumes is currently performed out of band as a separate task
from hardware provisioning. To configure the BIOS and RAID:
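The corresponding command:
(kayobe) $ kayobe overcloud bios raid configure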
After configuring the nodes' RAID volumes it may be necessary to perform hardware inspection of the
nodes to reconfigure the Ironic nodes' scheduling properties and root device hints. To perform manual
hardware inspection:
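The corresponding command:
(kayobe) $ kayobe overcloud hardware inspect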
Provisioning
Note: There is a cloud-init issue which prevents Ironic nodes without names from being accessed via
SSH after provisioning. To avoid this issue, ensure that all Ironic nodes in the Bifrost inventory are
named. This may be achieved via autodiscovery, or manually, e.g. from the seed:
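A sketch of naming a node manually (the node UUID and name are illustrative, and the OS_CLOUD name is an assumption based on Bifrost defaults):
$ docker exec -it bifrost_deploy bash
(bifrost_deploy) export OS_CLOUD=bifrost
(bifrost_deploy) baremetal node set <node UUID> --name <node name>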
Provisioning of the overcloud is performed by the ironic service running in the bifrost container on the
seed. To provision the overcloud nodes:
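The corresponding command:
(kayobe) $ kayobe overcloud provision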
After this command has completed the overcloud nodes should have been provisioned with an OS image.
The command will wait for the nodes to become active in ironic and accessible via SSH.
Host Configuration
Note: If the controller hosts use disks that have been in use in a previous installation, it may be necessary
to wipe partition and LVM data from those disks. To wipe all disks that are not mounted during host
configuration:
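As before, a sketch; omit --wipe-disks for a normal run:
(kayobe) $ kayobe overcloud host configure --wipe-disks
(kayobe) $ kayobe overcloud host configure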
See also:
Information on configuration of hosts is available here.
Note: It is possible to use prebuilt container images from an image registry such as Dockerhub. In this
case, this step can be skipped.
In some cases it may be necessary to build images locally either to apply local image customisation or
to use a downstream version of kolla. Images are built by hosts in the container-image-builders
group, which by default includes the seed. If no seed host is in use, for example in an all-in-one controller
development environment, this group may be modified to cause containers to be built on the controllers.
To build container images:
It is possible to build a specific set of images by supplying one or more image name regular expressions:
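For example (the regular expressions are illustrative):
(kayobe) $ kayobe overcloud container image build
(kayobe) $ kayobe overcloud container image build ironic- nova-api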
In order to push images to a registry after they are built, add the --push argument.
See also:
Information on configuration of Kolla for building container images is available here.
Note: It is possible to build container images locally avoiding the need for an image registry such as
Dockerhub. In this case, this step can be skipped.
In most cases suitable prebuilt kolla images will be available on Dockerhub. The kolla account provides
image repositories suitable for use with kayobe and will be used by default. To pull images from the
configured image registry:
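The corresponding command:
(kayobe) $ kayobe overcloud container image pull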
Note: It is possible to use prebuilt deployment images. In this case, this step can be skipped.
Note: Deployment images are only required for the overcloud when Ironic is in use. Otherwise, this
step can be skipped.
It is possible to use prebuilt deployment images from the OpenStack hosted tarballs or another source.
In some cases it may be necessary to build images locally either to apply local image customisa-
tion or to use a downstream version of Ironic Python Agent (IPA). In order to build IPA images, the
ipa_build_images variable should be set to True.
If images have been built previously, they will not be rebuilt. To force rebuilding images, use the
--force-rebuild argument.
See also:
See here for information on how to configure the IPA image build process.
Swift uses ring files to control placement of data across a cluster. These files can be generated automat-
ically using the following command:
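A sketch of the ring generation, followed by the overcloud service deployment step that the next sentence refers to:
(kayobe) $ kayobe overcloud swift rings generate
(kayobe) $ kayobe overcloud service deploy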
Once this command has completed the overcloud nodes should have OpenStack services running in
Docker containers.
See also:
Information on configuration of Kolla Ansible is available here.
Kolla-ansible writes out an environment file that can be used to access the OpenStack admin endpoints
as the admin user:
$ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
Kayobe also generates an environment file that can be used to access the OpenStack public endpoints as
the admin user which may be required if the admin endpoints are not available from the Ansible control
host:
$ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/public-openrc.sh
3.8 Upgrading
This section describes how to upgrade from one OpenStack release to another.
The Wallaby release introduced support for CentOS Stream 8 as a host operating system. CentOS Stream
8 support was also added to Victoria in version 9.2.0. CentOS Linux users upgrading from Victoria
should first migrate hosts and container images from CentOS Linux to CentOS Stream before upgrading
to Wallaby.
3.8.2 Preparation
Before you start, be sure to back up any local changes, configuration, and data.
Kayobe configuration options may be changed between releases of kayobe. Ensure that all site local
configuration is migrated to the target version format. If using the kayobe-config git repository to manage
local configuration, this process can be managed via git. For example, to fetch version 1.0.0 of the
configuration from the origin remote and merge it into the current branch:
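A sketch of the git workflow (the ref name is illustrative):
$ git fetch origin --tags
$ git merge 1.0.0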
The configuration should be manually inspected after the merge to ensure that it is correct. Any new
configuration options may be set at this point. In particular, the following options may need to be changed
if not using their default values:
• kolla_openstack_release
• kolla_tag
• kolla_sources
• kolla_build_blocks
• kolla_build_customizations
Once the configuration has been migrated, it is possible to view the global variables for all hosts:
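The corresponding command:
(kayobe) $ kayobe configuration dump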
The output of this command is a JSON object mapping hosts to their configuration. The output of the
command may be restricted using the --host, --hosts, --var-name and --dump-facts options.
If using the kayobe-env environment file in kayobe-config, this should also be inspected for changes
and modified to suit the local ansible control host environment if necessary. When ready, source the
environment file:
$ source kayobe-env
The Kayobe release notes provide information on each new release. In particular, the Upgrade Notes and
Deprecation Notes sections provide information that might affect the configuration migration.
All changes made to the configuration should be committed and pushed to the hosting git repository.
Ensure that the Kayobe configuration is checked out at the required commit.
First, ensure that there are no uncommitted local changes to the repository:
$ cd <base_path>/src/kayobe-config/
$ git status
Pull down changes from the hosting repository. For example, to fetch changes from the master branch
of the origin remote:
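For example:
$ git pull origin master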
This section describes how to upgrade Kayobe from a Python package in a virtualenv. This is supported
from Kayobe 5.0.0 onwards.
Ensure that the virtualenv is activated:
$ source <base_path>/venvs/kayobe/bin/activate
Note: When updating Ansible above version 2.9.x, first uninstall it with pip uninstall ansible.
A newer version will be installed with the next command, as a Kayobe dependency. If Ansible 2.10.x
was installed and you want to use a newer version, also uninstall the ansible-base package with pip
uninstall ansible-base.
$ cd <base_path>/src/kayobe
$ git pull origin 5.0.0
$ source <base_path>/venvs/kayobe/bin/activate
(kayobe) $ cd <base_path>/src/kayobe
(kayobe) $ pip install -U .
Alternatively, if using an editable install of Kayobe (version 5.0.0 onwards, see Editable source installa-
tion for details):
(kayobe) $ cd <base_path>/src/kayobe
(kayobe) $ pip install -U -e .
Before starting the upgrade we must upgrade the Ansible control host. Tasks performed here include:
• Install updated Ansible role dependencies from Ansible Galaxy.
• Generate an SSH key if necessary and add it to the current user's authorised keys.
• Upgrade Kolla Ansible locally to the configured version.
To upgrade the Ansible control host:
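The corresponding command:
(kayobe) $ kayobe control host upgrade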
Currently, upgrading the seed hypervisor services is not supported. It may however be necessary to
upgrade host packages and some host services.
Prior to upgrading the seed hypervisor, it may be desirable to upgrade system packages on the seed
hypervisor host.
To update all eligible packages, use *, escaping if necessary:
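A sketch; the second command is the targeted upgrade of host services that the following note refers to:
(kayobe) $ kayobe seed hypervisor host package update --packages "*"
(kayobe) $ kayobe seed hypervisor host upgrade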
Note that this will not perform full configuration of the host, and will instead perform a targeted upgrade
of specific services where necessary.
The seed services are upgraded in two steps. First, new container images should be obtained either
by building them locally or pulling them from an image registry. Second, the seed services should be
replaced with new containers created from the new container images.
Prior to upgrading the seed, it may be desirable to upgrade system packages on the seed host.
To update all eligible packages, use *, escaping if necessary:
Note that these commands do not affect packages installed in containers, only those installed on the host.
Note: It is possible to use prebuilt deployment images. In this case, this step can be skipped.
It is possible to use prebuilt deployment images from the OpenStack hosted tarballs or another source.
In some cases it may be necessary to build images locally either to apply local image customisa-
tion or to use a downstream version of Ironic Python Agent (IPA). In order to build IPA images, the
ipa_build_images variable should be set to True. To build images locally:
Note that this will not perform full configuration of the host, and will instead perform a targeted upgrade
of specific services where necessary.
Note: It is possible to use prebuilt container images from an image registry such as Dockerhub. In this
case, this step can be skipped.
In some cases it may be necessary to build images locally either to apply local image customisation or to
use a downstream version of kolla. To build images locally:
In order to push images to a registry after they are built, add the --push argument.
Containerised seed services may be upgraded by replacing existing containers with new containers using
updated images which have been pulled from a registry or built locally.
To upgrade the containerised seed services:
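The corresponding command:
(kayobe) $ kayobe seed service upgrade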
The overcloud services are upgraded in two steps. First, new container images should be obtained either
by building them locally or pulling them from an image registry. Second, the overcloud services should
be replaced with new containers created from the new container images.
Prior to upgrading the OpenStack control plane, it may be desirable to upgrade system packages on the
overcloud hosts.
To update all eligible packages, use *, escaping if necessary:
Note that these commands do not affect packages installed in containers, only those installed on the host.
Prior to upgrading the OpenStack control plane, the overcloud host services should be upgraded:
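The corresponding command:
(kayobe) $ kayobe overcloud host upgrade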
Note that this will not perform full configuration of the host, and will instead perform a targeted upgrade
of specific services where necessary.
Note: It is possible to use prebuilt deployment images. In this case, this step can be skipped.
It is possible to use prebuilt deployment images from the OpenStack hosted tarballs or another source.
In some cases it may be necessary to build images locally either to apply local image customisa-
tion or to use a downstream version of Ironic Python Agent (IPA). In order to build IPA images, the
ipa_build_images variable should be set to True. To build images locally:
Prior to upgrading the OpenStack control plane you should upgrade the deployment images. If you are
using prebuilt images, update the following variables in etc/kayobe/ipa.yml accordingly:
• ipa_kernel_upstream_url
• ipa_kernel_checksum_url
• ipa_kernel_checksum_algorithm
• ipa_ramdisk_upstream_url
• ipa_ramdisk_checksum_url
• ipa_ramdisk_checksum_algorithm
Alternatively, you can update the files that the URLs point to. If building the images locally, follow the
process outlined in Building Ironic Deployment Images.
To get Ironic to use an updated set of overcloud deployment images, you can run:
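A sketch, assuming the kayobe baremetal compute update deployment image command (check kayobe help for the exact form in your release):
(kayobe) $ kayobe baremetal compute update deployment image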
This will register the images in Glance and update the deploy_ramdisk and deploy_kernel properties
of the Ironic nodes.
Before rolling out the update to all nodes, it can be useful to test the image on a limited subset. To do this,
you can use the baremetal-compute-limit option. See Update Deployment Image for more details.
Note: It is possible to use prebuilt container images from an image registry such as Dockerhub. In this
case, this step can be skipped.
In some cases it may be necessary to build images locally either to apply local image customisation or to
use a downstream version of kolla. To build images locally:
It is possible to build a specific set of images by supplying one or more image name regular expressions:
In order to push images to a registry after they are built, add the --push argument.
Note: It is possible to build container images locally avoiding the need for an image registry such as
Dockerhub. In this case, this step can be skipped.
In most cases suitable prebuilt kolla images will be available on Dockerhub. The kolla account provides
image repositories suitable for use with kayobe and will be used by default. To pull images from the
configured image registry:
It is often useful to be able to save the configuration of the control plane services for inspection or com-
parison with another configuration set prior to a reconfiguration or upgrade. This command will gather
and save the control plane configuration for all hosts to the Ansible control host:
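The corresponding command:
(kayobe) $ kayobe overcloud service configuration save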
The default location for the saved configuration is $PWD/overcloud-config, but this can be changed
via the --output-dir argument. To gather configuration from a directory other than the default /etc/
kolla, use the --node-config-dir argument.
Prior to deploying, reconfiguring, or upgrading a control plane, it may be useful to generate the configura-
tion that will be applied, without actually applying it to the running containers. The configuration should
typically be generated in a directory other than the default configuration directory of /etc/kolla, to
avoid overwriting the active configuration:
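For example, using an illustrative output path:
(kayobe) $ kayobe overcloud service configuration generate --node-config-dir /path/to/generated/config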
The configuration will be generated remotely on the overcloud hosts in the specified directory, with
one subdirectory per container. This command may be followed by kayobe overcloud service
configuration save to gather the generated configuration to the Ansible control host.
Containerised control plane services may be upgraded by replacing existing containers with new con-
tainers using updated images which have been pulled from a registry or built locally.
To upgrade the containerised control plane services:
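The corresponding command:
(kayobe) $ kayobe overcloud service upgrade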
It is possible to specify tags for Kayobe and/or kolla-ansible to restrict the scope of the upgrade:
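A sketch (the tag names are illustrative):
(kayobe) $ kayobe overcloud service upgrade --tags config --kolla-tags keystone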
3.9 Administration
This section describes how to use kayobe to simplify post-deployment administrative tasks.
There are several pieces of software and configuration that must be installed and synchronised on the
Ansible Control host:
• Kayobe configuration
• Kayobe Python package
• Ansible Galaxy roles
• Kolla Ansible Python package
A change to the configuration may require updating the Kolla Ansible Python package. Updating the
Kayobe Python package may require updating the Ansible Galaxy roles. It's not always easy to know
which of these are required, so the simplest option is to apply all of the following steps when any of the
above are changed.
In some situations it may be necessary to run an individual Kayobe playbook. Playbooks are stored in
<kayobe repo>/ansible/*.yml. To run an arbitrary Kayobe playbook:
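For example:
(kayobe) $ kayobe playbook run <kayobe repo>/ansible/<playbook>.yml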
The Ansible configuration space is quite large, and it can be hard to determine the final values of Ansible
variables. We can use Kayobe's configuration dump command to view individual variables or the
variables for one or more hosts. To dump Kayobe configuration for one or more hosts:
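A sketch (the --host and --var-name options shown are assumptions to verify against kayobe configuration dump --help):
(kayobe) $ kayobe configuration dump
(kayobe) $ kayobe configuration dump --host controller0 --var-name openstack_release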
In complex networking environments it can be useful to be able to automatically check network connec-
tivity and diagnose networking issues. To perform some simple connectivity checks:
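For reference (a sketch):
(kayobe) $ kayobe network connectivity check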
Note that this will run on the seed, seed hypervisor, and overcloud hosts. If any of these hosts are not
expected to be active (e.g. prior to overcloud deployment), the set of target hosts may be limited using
the --limit argument.
These checks will attempt to ping the external IP address 8.8.8.8 and external hostname google.
com. They can be configured with the nc_external_ip and nc_external_hostname variables in
$KAYOBE_CONFIG_PATH/networks.yml.
Note: This step will destroy the seed VM and its data volumes.
Updating Packages
Package Repositories
If using custom DNF package repositories on CentOS or Rocky, it may be necessary to update these prior
to running a package update. To do this, update the configuration in ${KAYOBE_CONFIG_PATH}/dnf.
yml and run the following command:
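A sketch; limiting the run to the DNF tasks via --tags dnf is an assumption to verify for your release:
(kayobe) $ kayobe seed host configure --tags dnf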
Package Update
Note that these commands do not affect packages installed in containers, only those installed on the host.
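The update command is likely of the following form, where "*" selects all eligible packages (a sketch):
(kayobe) $ kayobe seed host package update --packages "*"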
Packages can also be updated on the seed hypervisor host, if one is in use:
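For reference (a sketch):
(kayobe) $ kayobe seed hypervisor host package update --packages "*"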
Kernel Updates
If the kernel has been updated, you will probably want to reboot the seed host to boot into the new kernel.
This can be done using a command such as the following:
(kayobe) $ kayobe seed host command run --command "shutdown -r" --become
The seed host runs various services required for a standalone Ironic deployment. These all run in a single
bifrost_deploy container.
It can often be helpful to execute a shell in the bifrost container for diagnosing operational issues:
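For example (the container name bifrost_deploy is as described above):
docker exec -it bifrost_deploy bash
Services inside the container are managed by systemd and can be inspected with systemctl: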
(bifrost_deploy) systemctl
Logs are stored in /var/log/kolla/, which is mounted to the kolla_logs Docker volume.
The Ironic and Ironic inspector APIs can be accessed via the baremetal command line interface:
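For example, from a shell in the bifrost_deploy container (a sketch; the bifrost cloud name in the container's clouds.yaml is an assumption to verify for your deployment):
(bifrost_deploy) export OS_CLOUD=bifrost
(bifrost_deploy) baremetal node list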
There are two main approaches to backing up and restoring data on the seed. A backup may be taken of
the Ironic databases. Alternatively, a Virtual Machine backup may be used if running the seed services in
a VM. The former will consume less storage. Virtual Machine backups are not yet covered here, neither
is scheduling of backups. Any backup and restore procedure should be tested in advance.
A backup may be taken of the database, using one of the many tools that exist for backing up MariaDB
databases.
A simple approach that should work for the typically modestly sized seed database is mysqldump. The
following commands should all be executed on the seed.
Backup
It should be safe to keep services running during the backup, but for maximum safety they may optionally
be stopped:
If the services were stopped prior to the backup, start them again:
Restore
Prior to restoring the database, the Ironic and Ironic Inspector services should be stopped:
Running Commands
Sometimes it may be necessary to run a command on the seed host. This can be done via kayobe seed
host command run. For example:
(kayobe) $ kayobe seed host command run --command "service docker restart"
Commands can also be run on the seed hypervisor host, if one is in use:
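For reference (a sketch):
(kayobe) $ kayobe seed hypervisor host command run --command "<command>"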
To execute the command with root privileges, add the --become argument. Adding the --verbose
argument allows the output of the command to be seen.
Note: This step will destroy the infrastructure VMs and associated data volumes. Make sure you backup
any data you want to keep.
This can be limited to a subset of the nodes using the --limit option:
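The deprovisioning command is likely of the following form (a sketch; verify against kayobe infra vm --help):
(kayobe) $ kayobe infra vm deprovision --limit <vm-hostname>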
Updating Packages
Package Repositories
If using custom DNF package repositories on CentOS or Rocky, it may be necessary to update these prior
to running a package update. To do this, update the configuration in ${KAYOBE_CONFIG_PATH}/dnf.
yml and run the following command:
Package Update
Note that these commands do not affect packages installed in containers, only those installed on the host.
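The update command is likely of the following form (a sketch):
(kayobe) $ kayobe infra vm host package update --packages "*"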
Kernel Updates
If the kernel has been updated, you will probably want to reboot the host to boot into the new kernel.
This can be done using a command such as the following:
(kayobe) $ kayobe infra vm host command run --command "shutdown -r" --become
Running Commands
Sometimes it may be necessary to run a command on the infrastructure VM hosts. This can be done via
kayobe infra vm host command run. For example:
(kayobe) $ kayobe infra vm host command run --command "service docker restart"
Commands can also be run on the seed hypervisor host, if one is in use:
To execute the command with root privileges, add the --become argument. Adding the --verbose
argument allows the output of the command to be seen.
Updating Packages
Package Repositories
If using custom DNF package repositories on CentOS or Rocky, it may be necessary to update these prior
to running a package update. To do this, update the configuration in ${KAYOBE_CONFIG_PATH}/dnf.
yml and run the following command:
Package Update
Note that these commands do not affect packages installed in containers, only those installed on the host.
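The update command is likely of the following form; the global --limit option can restrict it to particular hosts (a sketch):
(kayobe) $ kayobe overcloud host package update --packages "*"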
Kernel Updates
If the kernel has been updated, you will probably want to reboot the hosts to boot into the new kernel.
This can be done using a command such as the following:
(kayobe) $ kayobe overcloud host command run --command "shutdown -r" --become
It is normally best to apply this to control plane hosts in batches to avoid clustered services from losing
quorum. This can be achieved using the --limit argument, and ensuring services are fully up after
rebooting before proceeding with the next batch.
Running Commands
Sometimes it may be necessary to run a command on the overcloud hosts. This can be done via kayobe
overcloud host command run. For example:
(kayobe) $ kayobe overcloud host command run --command "service docker restart"
To execute the command with root privileges, add the --become argument. Adding the --verbose
argument allows the output of the command to be seen.
When configuration is changed, it is necessary to apply these changes across the system in an automated
manner. To reconfigure the overcloud, first make any changes required to the configuration on the Ansible
control host. Next, run the following command:
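For reference (a sketch):
(kayobe) $ kayobe overcloud service reconfigure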
If the configuration of only some services has been modified, performance can be improved by specifying
Ansible tags to limit the tasks run in Kayobe's and/or kolla-ansible's playbooks. This may require knowledge
of the inner workings of these tools, but in general kolla-ansible tags the play used to configure each
service with the name of that service, for example nova, neutron or ironic. Use -t or --tags to
specify kayobe tags and -kt or --kolla-tags to specify kolla-ansible tags. For example:
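A sketch, limiting the kolla-ansible run to the Nova service:
(kayobe) $ kayobe overcloud service reconfigure --kolla-tags nova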
A common task is to deploy updated container images, without configuration changes. This might be to
roll out an updated container OS or to pick up some package updates. This should be faster than a full
deployment or reconfiguration.
To deploy updated container images:
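A sketch, assuming a Kayobe release that provides the deploy containers subcommand (verify with kayobe overcloud service --help):
(kayobe) $ kayobe overcloud service deploy containers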
Note that if there are configuration changes, these will not be applied using this command so if in doubt,
use a normal kayobe overcloud service deploy.
If only some services' containers have been modified, performance can be improved by specifying Ansible
tags to limit the tasks run in Kayobe's and/or kolla-ansible's playbooks. This may require knowledge of
the inner workings of these tools, but in general kolla-ansible tags the play used to configure each service
with the name of that service, for example nova, neutron or ironic. Use -t or --tags to specify
kayobe tags and -kt or --kolla-tags to specify kolla-ansible tags. For example:
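A sketch, limiting the kolla-ansible run to the Nova service:
(kayobe) $ kayobe overcloud service deploy containers --kolla-tags nova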
Containerised control plane services may be upgraded by replacing existing containers with new con-
tainers using updated images which have been pulled from a registry or built locally. If using an updated
version of Kayobe or upgrading from one release of OpenStack to another, be sure to follow the kayobe
upgrade guide. It may be necessary to upgrade one or more services within a release, for example to
apply a patch or minor release.
To upgrade the containerised control plane services:
As for the reconfiguration command, it is possible to specify tags for Kayobe and/or kolla-ansible:
Running Prechecks
As for other similar commands, it is possible to specify tags for Kayobe and/or kolla-ansible:
Note: This step will stop all containers on the overcloud hosts.
It should be noted that this state is persistent - containers will remain stopped after a reboot of the host
on which they are running.
It is possible to limit the operation to particular hosts via --kolla-limit, or to particular services via
--kolla-tags. It is also possible to avoid stopping the common containers via --kolla-skip-tags
common. For example:
(kayobe) $ kayobe overcloud service stop --kolla-tags glance,nova --kolla-skip-tags common
Note: This step will destroy all containers, container images, volumes and data on the overcloud hosts.
Note: This step will power down the overcloud hosts and delete their nodes' instance state from the seed's
Ironic service.
It is often useful to be able to save the configuration of the control plane services for inspection or com-
parison with another configuration set prior to a reconfiguration or upgrade. This command will gather
and save the control plane configuration for all hosts to the Ansible control host:
The default location for the saved configuration is $PWD/overcloud-config, but this can be changed
via the --output-dir argument. To gather configuration from a directory other than the default
/etc/kolla, use the --node-config-dir argument.
Prior to deploying, reconfiguring, or upgrading a control plane, it may be useful to generate the configura-
tion that will be applied, without actually applying it to the running containers. The configuration should
typically be generated in a directory other than the default configuration directory of /etc/kolla, to
avoid overwriting the active configuration:
The configuration will be generated remotely on the overcloud hosts in the specified directory, with
one subdirectory per container. This command may be followed by kayobe overcloud service
configuration save to gather the generated configuration to the Ansible control host.
Database backups can be performed using the underlying support in Kolla Ansible.
In order to enable backups, enable Mariabackup in ${KAYOBE_CONFIG_PATH}/kolla.yml:
kolla_enable_mariabackup: true
To apply this change, use the kayobe overcloud service reconfigure command.
To perform a full backup, run the following command:
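For reference (a sketch):
(kayobe) $ kayobe overcloud database backup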
Further information on backing up and restoring the database is available in the Kolla Ansible documen-
tation.
Recover a completely stopped MariaDB cluster using the underlying support in Kolla Ansible.
To perform recovery run the following command:
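For reference (a sketch; the option mentioned below for selecting the host to recover from is likely --force-recovery-host, to be verified against the CLI help):
(kayobe) $ kayobe overcloud database recover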
By default the underlying kolla-ansible will automatically determine which host to use, and this option
should not be used.
Gathering Facts
The following command may be used to gather facts for all overcloud hosts, for both Kayobe and Kolla
Ansible:
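A sketch (verify the subcommand with kayobe overcloud --help for your release):
(kayobe) $ kayobe overcloud facts gather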
When enrolling new hardware or performing maintenance, it can be useful to be able to manage many
bare metal compute nodes simultaneously.
In all cases, commands are delegated to one of the controller hosts, and executed concurrently. Note
that Ansible's forks configuration option, which defaults to 5, may limit the number of nodes configured
concurrently.
By default these commands wait for the state transition to complete for each node. This
behavior can be changed by overriding the variable baremetal_compute_wait via -e
baremetal_compute_wait=False
Manage
A node may need to be set to the manageable provision state in order to perform certain management
operations, or when an enrolled node is transitioned into service. In order to manage a node, it must be
in one of these states: enroll, available, cleaning, clean failed, adopt failed or inspect
failed. To move the baremetal compute nodes to the manageable provision state:
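For reference (a sketch):
(kayobe) $ kayobe baremetal compute manage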
Provide
In order for nodes to be scheduled by nova, they must be available. To move the baremetal compute
nodes from the manageable state to the available provision state:
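For reference (a sketch):
(kayobe) $ kayobe baremetal compute provide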
Inspect
Nodes must be in one of the following states: manageable, inspect failed, or available. To trigger
hardware inspection on the baremetal compute nodes:
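For reference (a sketch):
(kayobe) $ kayobe baremetal compute inspect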
Rename
Once nodes have been discovered, it is helpful to associate them with a name to make them easier to work
with. If you would like the nodes to be named according to their inventory host names, you can run the
following command:
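For reference (a sketch):
(kayobe) $ kayobe baremetal compute rename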
This command will use the ipmi_address host variable from the inventory to map the inventory host
name to the correct node.
When the overcloud deployment images have been rebuilt or there has been a change to one of the fol-
lowing variables:
• ipa_kernel_upstream_url
• ipa_ramdisk_upstream_url
either by changing the url, or if the image to which they point has been changed, you need to update the
deploy_ramdisk and deploy_kernel properties on the Ironic nodes. To do this you can run:
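For reference (a sketch):
(kayobe) $ kayobe baremetal compute update deployment image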
You can optionally limit the nodes affected by setting --baremetal-compute-limit, which should take the
form of an Ansible host pattern. This is matched against the Ironic node name.
To access the baremetal nodes from within Horizon, you need to enable the serial console. For this to work,
you must set kolla_enable_nova_serialconsole_proxy to true in etc/kayobe/kolla.yml:
kolla_enable_nova_serialconsole_proxy: true
The console interface on the Ironic nodes is expected to be ipmitool-socat, you can check this with:
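For example (a sketch, assuming admin credentials for the overcloud have been sourced):
openstack baremetal node show <node_id> --fields console_interface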
where <node_id> should be the UUID or name of the Ironic node you want to check.
If you have set kolla_ironic_enabled_console_interfaces in etc/kayobe/ironic.yml, it
should include ipmitool-socat in the list of enabled interfaces.
The playbook to enable the serial console currently only works if the Ironic node name matches the
inventory hostname.
Once these requirements have been satisfied, you can run:
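For reference (a sketch):
(kayobe) $ kayobe baremetal compute serial console enable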
This will reserve a TCP port for each node to use for the serial console interface. The alloca-
tions are stored in ${KAYOBE_CONFIG_PATH}/console-allocation.yml. The current implemen-
tation uses a global pool, which is specified by ironic_serial_console_tcp_pool_start and
ironic_serial_console_tcp_pool_end; these variables can be set in etc/kayobe/ironic.yml.
To disable the serial console you can use:
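For reference (a sketch):
(kayobe) $ kayobe baremetal compute serial console disable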
The port allocated for each node is retained and must be manually removed from
${KAYOBE_CONFIG_PATH}/console-allocation.yml if you want it to be reused by another
Ironic node with a different name.
You can optionally limit the nodes targeted by setting the --baremetal-compute-limit option.
To enable the serial consoles automatically on kayobe overcloud post configure, you can set
ironic_serial_console_autoenable in etc/kayobe/ironic.yml:
ironic_serial_console_autoenable: true
3.10 Resources
Note: The A Universe From Nothing deployment guide is intended for educational & testing purposes
only. It is not production ready.
Originally created as a workshop, A Universe From Nothing is an example guide for the deployment of
Kayobe on virtual hardware. You can find it on GitHub here.
The repository contains a configuration suitable for deploying containerised OpenStack using Kolla,
Ansible and Kayobe. The guide makes use of Tenks to provision a virtual baremetal environment running
on a single hypervisor.
To complete the walkthrough you will require a baremetal or VM hypervisor running CentOS 8 or Ubuntu
Jammy 22.04 (since Zed 13.0.0) with at least 32GB RAM & 80GB disk space. Preparing the deployment
can take some time - where possible it is beneficial to snapshot the hypervisor. We advise making a
snapshot after creating the initial seed VM as this will make additional deployments significantly faster.
Note: This is an advanced topic and should only be attempted when familiar with kayobe and OpenStack.
The default configuration in kayobe places all control plane services on a single set of servers described
as controllers. In some cases it may be necessary to introduce more than one server role into the control
plane, and control which services are placed onto the different server roles.
Configuration
If using a seed host to enable discovery of the control plane services, it is necessary to configure how
the discovered hosts map into kayobe groups. This is done using the overcloud_group_hosts_map
variable, which maps names of kayobe groups to a list of the hosts to be added to that group.
This variable will be used during the command kayobe overcloud inventory discover. An in-
ventory file will be generated in ${KAYOBE_CONFIG_PATH}/inventory/overcloud with discovered
hosts added to appropriate kayobe groups based on overcloud_group_hosts_map.
Once hosts have been discovered and enrolled into the kayobe inventory, they must be added to the kolla-
ansible inventory. This is done by mapping from top level kayobe groups to top level kolla-ansible groups
using the kolla_overcloud_inventory_top_level_group_map variable. This variable maps from
kolla-ansible groups to lists of kayobe groups, and variables to define for those groups in the kolla-ansible
inventory.
Certain variables must be defined for hosts in the overcloud group. For hosts in the controllers
group, many variables are mapped to other variables with a controller_ prefix in files under ansible/
group_vars/controllers/. This is done in order that they may be set in a global extra variables
file, typically controllers.yml, with defaults set in ansible/group_vars/all/controllers. A
similar scheme is used for hosts in the monitoring group.
If configuring BIOS and RAID via kayobe overcloud bios raid configure, the following vari-
ables should also be defined:
These variables can be defined in inventory host or group variables files, under
${KAYOBE_CONFIG_PATH}/inventory/host_vars/<host> or ${KAYOBE_CONFIG_PATH}/
inventory/group_vars/<group> respectively.
As an advanced option, it is possible to fully customise the content of the kolla-ansible inventory, at
various levels. To facilitate this, kayobe breaks the kolla-ansible inventory into three separate sections.
Top level groups define the roles of hosts, e.g. controller or compute, and it is to these groups that
hosts are mapped directly.
Components define groups of services, e.g. nova or ironic, which are mapped to top level groups.
Services define single containers, e.g. nova-compute or ironic-api, which are mapped to compo-
nents.
The default top level inventory is generated from kolla_overcloud_inventory_top_level_group_map.
Kayobe's component- and service-level inventory for kolla-ansible is static, and taken from the kolla-
ansible example multinode inventory. The complete inventory is generated by concatenating these
inventories.
Each level may be separately overridden by setting the following variables:
Examples
This example walks through the configuration that could be applied to enable the use of separate
hosts for neutron network services and load balancing. The control plane consists of three con-
trollers, controller-[0-2], and two network hosts, network-[0-1]. All file paths are relative to
${KAYOBE_CONFIG_PATH}.
First, we must make the network group separate from controllers:
[network]
# Empty group to provide declaration of network group.
Finally, we create a group variables file for hosts in the network group, providing the necessary variables
for a control plane host.
Here we are using the controller-specific values for some of these variables, but they could equally be
different.
This example shows how to override one or more sections of the kolla-ansible inventory. All file paths
are relative to ${KAYOBE_CONFIG_PATH}.
It is typically best to start with an inventory template taken from the Kayobe source code, and then
customize it. The templates can be found in ansible/roles/kolla-ansible/templates, e.g. com-
ponents template is overcloud-components.j2.
First, create a file containing the customised inventory section. Well use the components section in this
example.
[ironic]
{% if kolla_enable_ironic | bool %}
control
{% endif %}
...
Here we use the template lookup plugin to render the Jinja2-formatted inventory template.
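A sketch of the corresponding configuration; both the variable name (kolla_overcloud_inventory_custom_components) and the template path are assumptions to check against the Kayobe configuration reference:
kolla_overcloud_inventory_custom_components: "{{ lookup('template', '/path/to/kayobe-config/etc/kayobe/kolla/inventory/overcloud-components.j2') }}"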
Kayobe supports running custom Ansible playbooks located outside of the kayobe project. This provides
a flexible mechanism for customising a control plane. Access to the kayobe variables is possible, ensuring
configuration does not need to be repeated.
Explicitly allowing users to run custom playbooks with access to the kayobe variables elevates the vari-
able namespace and inventory to become an interface. This raises questions about the stability of this
interface, and the guarantees it provides.
The following guidelines apply to the custom playbook API:
• Only variables defined in the kayobe configuration files under etc/kayobe are supported.
• The groups defined in etc/kayobe/inventory/groups are supported.
• Any change to a supported variable (rename, schema change, default value change, or removal) or
supported group (rename or removal) will follow a deprecation period of one release cycle.
• Kayobe's internal roles may not be used.
Note that these are guidelines, and exceptions may be made where appropriate.
Playbooks do not by default have access to the Kayobe playbook group variables, filter plugins, and test
plugins, since these are relative to the current playbook's directory. This can be worked around by creating
symbolic links to the Kayobe repository from the Kayobe configuration.
The kayobe project encourages its users to manage configuration for a cloud using version control, based
on the kayobe-config repository. Storing custom Ansible playbooks in this repository makes a lot of
sense, and kayobe has special support for this.
It is recommended to store custom playbooks in $KAYOBE_CONFIG_PATH/ansible/. Roles located in
$KAYOBE_CONFIG_PATH/ansible/roles/ will be automatically available to playbooks in this direc-
tory.
With this directory layout, the following commands could be used to create symlinks that allow access
to Kayobe's filter plugins, group variables and test plugins:
cd ${KAYOBE_CONFIG_PATH}/ansible/
ln -s ../../../../kayobe/ansible/filter_plugins/ filter_plugins
ln -s ../../../../kayobe/ansible/group_vars/ group_vars
ln -s ../../../../kayobe/ansible/test_plugins/ test_plugins
Note: These symlinks rely on having a kayobe source checkout at the same level as the kayobe-config
repository checkout, as described in Installation from source.
Ansible Galaxy
Ansible Galaxy provides a means for sharing Ansible roles and collections. Kayobe configuration may
provide a Galaxy requirements file that defines roles and collections to be installed from Galaxy. These
roles and collections may then be used by custom playbooks.
Galaxy dependencies may be defined in $KAYOBE_CONFIG_PATH/ansible/requirements.yml.
These roles and collections will be installed in $KAYOBE_CONFIG_PATH/ansible/roles/ and
$KAYOBE_CONFIG_PATH/ansible/collections when bootstrapping the Ansible control host:
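For reference (a sketch):
(kayobe) $ kayobe control host bootstrap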
Example: roles
The following example adds a foo.yml playbook to a set of kayobe configuration. The playbook uses a
Galaxy role, bar.baz.
Here is the kayobe configuration repository structure:
etc/kayobe/
    ansible/
        foo.yml
        requirements.yml
        roles/
    bifrost.yml
    ...
---
- hosts: controllers
  roles:
    - name: bar.baz

---
roles:
  - bar.baz
We should first install the Galaxy role dependencies, to download the bar.baz role:
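A sketch of the commands (the playbook path follows the layout shown above):
(kayobe) $ kayobe control host bootstrap
Then run the playbook:
(kayobe) $ kayobe playbook run ${KAYOBE_CONFIG_PATH}/ansible/foo.yml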
Example: collections
The following example adds a foo.yml playbook to a set of kayobe configuration. The playbook uses a
role from a Galaxy collection, bar.baz.qux.
Here is the kayobe configuration repository structure:
etc/kayobe/
ansible/
collections/
foo.yml
requirements.yml
bifrost.yml
...
---
- hosts: controllers
  roles:
    - name: bar.baz.qux

---
collections:
  - bar.baz
We should first install the Galaxy dependencies, to download the bar.baz collection:
Hooks
Warning: Hooks are an experimental feature and the design could change in the future. You may
have to update your config if there are any changes to the design. This warning will be removed when
the design has been stabilised.
Hooks allow you to automatically execute custom playbooks at certain points during the execution of a
kayobe command. The point at which a hook is run is referred to as a target. Please see the list of
available targets.
Hooks are created by symlinking an existing playbook into the relevant directory under
$KAYOBE_CONFIG_PATH/hooks. Kayobe will search the hooks directory for sub-directories matching
<command>.<target>.d, where command is the name of a kayobe command with any spaces replaced
with dashes, and target is one of the supported targets for the command.
For example, to run a custom playbook foo.yml after the kayobe overcloud host configure command, symlink it into the corresponding post.d hooks directory:
(kayobe) $ cd ${KAYOBE_CONFIG_PATH}/hooks/overcloud-host-configure/post.d
(kayobe) $ ln -s ../../../ansible/foo.yml 10-foo.yml
Failure handling
If the exit status of any playbook, including built-in playbooks and custom hooks, is non-zero, kayobe
will not run any subsequent hooks or built-in kayobe playbooks. Ansible provides several methods for
preventing a task from producing a failure. Please see the Ansible documentation for more details. Below
is an example showing how you can use the ignore_errors option to prevent a task from causing the
playbook to report a failure:
---
- name: Failure example
  hosts: localhost
  tasks:
    # Illustrative completion of the truncated example: any failing task will do here.
    - name: Deliberately fail
      command: /bin/false
      ignore_errors: true
A failure in the Deliberately fail task would not prevent subsequent tasks, hooks, and playbooks
from running.
Targets
Warning: Support for multiple Kayobe environments is considered experimental: its design may
change in future versions without a deprecation period.
Sometimes it can be useful to support deployment of multiple environments from a single Kayobe con-
figuration. Most commonly this is to support a deployment pipeline, such as the traditional development,
test, staging and production combination. Since the Wallaby release, it is possible to include multiple en-
vironments within a single Kayobe configuration, each providing its own Ansible inventory and variables.
This section describes how to use multiple environments with Kayobe.
$KAYOBE_CONFIG_PATH/
    environments/
        production/
            inventory/
            ...
Ansible Inventories
Each environment can include its own inventory, which overrides any variable declaration done in the
shared inventory. Typically, a shared inventory may be used to define groups and group variables, while
hosts and host variables would be set in environment inventories. The following layout (ignoring non-
inventory files) shows an example of multiple inventories.
$KAYOBE_CONFIG_PATH/
    environments/
        production/
            inventory/
                hosts
                host_vars/
                overcloud
        staging/
            inventory/
                hosts
                host_vars/
                overcloud
    inventory/
        groups
        ...
All of the extra variables files in the Kayobe configuration directory ($KAYOBE_CONFIG_PATH/*.yml)
are shared between all environments. Each environment can override these extra variables through
environment-specific extra variables files ($KAYOBE_CONFIG_PATH/environments/<environment>/
*.yml).
This means that all configuration in shared extra variable files must apply to all environments. Where
configuration differs between environments, move the configuration to extra variables files under each
environment.
For example, to add environment-specific DNS configuration for variables in dns.yml, set these variables
in $KAYOBE_CONFIG_PATH/environments/<environment>/dns.yml:
$KAYOBE_CONFIG_PATH/
    dns.yml
    environments/
        production/
            dns.yml
        staging/
            dns.yml
Network Configuration
Networking is an area in which configuration is typically specific to an environment. There are two main
global configuration files that need to be considered: networks.yml and network-allocation.yml.
Move the environment-specific parts of this configuration to environment-specific extra variables files:
• networks.yml -> $KAYOBE_CONFIG_PATH/environments/<environment>/networks.yml
• network-allocation.yml -> $KAYOBE_CONFIG_PATH/environments/<environment>/
network-allocation.yml
Other network configuration that may differ between environments includes:
• DNS (dns.yml)
• network interface names, which may be set via group variables in environment inventories
Other Configuration
Kolla Configuration
For files that are independent in each environment, i.e. they do not support combining the environment-
specific and shared configuration file content, there are some techniques that may be used to avoid dupli-
cation.
For example, symbolic links can be used to share common variable definitions. It is advised to avoid
sharing credentials between environments by making each Kolla passwords.yml file unique.
The following files and directories are currently shared across all environments:
• Ansible playbooks, roles and requirements file under $KAYOBE_CONFIG_PATH/ansible
• Ansible configuration at $KAYOBE_CONFIG_PATH/ansible.cfg and $KAYOBE_CONFIG_PATH/
kolla/ansible.cfg
• Hooks under $KAYOBE_CONFIG_PATH/hooks
It may be beneficial to define variables in a file shared by multiple environments, but still set vari-
ables to different values based on the environment. The Kayobe environment in use can be re-
trieved within Ansible via the kayobe_environment variable. For example, some variables from
$KAYOBE_CONFIG_PATH/networks.yml could be shared in the following way:
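A minimal sketch; the variable name public_net_fqdn is illustrative only (use whichever FQDN variable your configuration sets in networks.yml):
public_net_fqdn: "{{ kayobe_environment }}-api.example.com"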
This would configure the external FQDN for the staging environment at staging-api.example.com,
while the production external FQDN would be at production-api.example.com.
Final Considerations
While it's clearly desirable to keep staging functionally as close to production as possible, this is not always
achievable due to resource constraints and other factors. Test and development environments can deviate
further, perhaps only providing a subset of the functionality available in production, in a substantially
different environment. In these cases it will clearly be necessary to use environment-specific configuration
in a number of files. We can't cover all the cases here, but hopefully we've provided a set of techniques
that can be used.
Once environments are defined, Kayobe can be instructed to manage them with the
$KAYOBE_ENVIRONMENT environment variable or the --environment command-line argument:
The kayobe-env environment file in kayobe-config can also take an --environment argument,
which exports the KAYOBE_ENVIRONMENT environment variable.
Warning: The locations of the Kolla Ansible source code and Python virtual environment remain
the same for all environments when using the kayobe-env file. When using the same control host to
manage multiple environments with different versions of Kolla Ansible, clone the Kayobe configura-
tion in different locations, so that Kolla Ansible source repositories and Python virtual environments
will not conflict with each other. The generated Kolla Ansible configuration is also shared: Kayobe
will store the name of the active environment under $KOLLA_CONFIG_PATH/.environment and
produce a warning if a conflict is detected.
Kayobe users already managing multiple environments will already have multiple Kayobe configurations,
whether in separate repositories or in different branches of the same repository. Kayobe provides the
kayobe environment create command to help migrating to a common repository and branch with
multiple environments. For example, the following commands will create two new environments for
production and staging based on existing Kayobe configurations.
(kayobe) $ kayobe environment create --source-config-path ~/kayobe-config-production/etc/kayobe \
    --environment production
(kayobe) $ kayobe environment create --source-config-path ~/kayobe-config-staging/etc/kayobe \
    --environment staging
This command recursively copies files and directories (except the environments directory if one exists)
under the existing configuration to a new environment. Merging shared configuration must be done
manually.
This guide is for contributors of the Kayobe project. It includes information on proposing your first patch
and how to participate in the community. It also covers responsibilities of core reviewers and the Project
Team Lead (PTL), and information about development processes.
We welcome everyone to join our project!
For general information on contributing to OpenStack, please check out the contributor guide to get
started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the
basics of interacting with our Gerrit review system, how we communicate as a community, etc.
The sections below cover the more project-specific information you need to get started with Kayobe.
Basics
Communication
New features are discussed via IRC or the mailing list (with the [kolla] prefix). The Kayobe project keeps
RFEs in Storyboard. Specs are welcome but not strictly required.
Task Tracking
Kolla project tracks tasks in Storyboard. Note this is the same place as for bugs.
A more lightweight task tracking is done via etherpad - Whiteboard.
Reporting a Bug
You found an issue and want to make sure we are aware of it? You can do so on Storyboard. Note this is
the same place as for tasks.
Most changes proposed to Kayobe require two +2 votes from core reviewers before +W. A release note
is required on most changes as well. Release notes policy is described in its own section.
Significant changes should have documentation and testing provided with them.
All common PTL duties are enumerated in the PTL guide. Release tasks are described in the Kayobe
releases guide.
Development
There are a number of layers to Kayobe, so here we provide a few pointers to the major parts.
CLI
The Command Line Interface (CLI) is built using the cliff library. Commands are exposed as Python entry
points in setup.cfg. These entry points map to classes in kayobe/cli/commands.py. The helper modules
kayobe/ansible.py and kayobe/kolla_ansible.py are used to execute Kayobe playbooks and Kolla Ansible
commands respectively.
Ansible
Kayobe's Ansible playbooks live in ansible/*.yml, and these typically execute roles in ansible/roles/.
Global variable defaults are defined in group variable files in ansible/group_vars/all/ and these typically
map to commented out variables in the configuration files in etc/kayobe/*.yml. A number of custom
Jinja filters exist in ansible/filter_plugins/*.py. Kayobe depends on roles and collections hosted on An-
sible Galaxy, and these and their version requirements are defined in requirements.yml.
Ansible Galaxy
Kayobe uses a number of Ansible roles and collections hosted on Ansible Galaxy. The role dependencies
are tracked in requirements.yml, and specify required versions. The process for changing a Galaxy
role or collection is as follows:
1. If required, develop changes for the role or collection. This may be done outside of Kayobe, or
by modifying the code in place during development. If upstream changes to the code have already
been made, this step can be skipped.
2. Commit changes to the role or collection, typically via a Github pull request.
3. Request that a tagged release of the role or collection be made, or make one if you have the neces-
sary privileges.
4. Ensure that automatic imports are configured for the repository using e.g. a webhook notification,
or perform a manual import of the role on Ansible Galaxy.
5. Modify the version in requirements.yml to match the new release of the role or collection.
Vagrant
Kayobe provides a Vagrantfile that can be used to bring up a virtual machine for use as a development
environment. The VM is based on the centos/8 CentOS 8 image, and supports the following providers:
• VirtualBox
• VMWare Fusion
The VM is configured with 4GB RAM and a 20GB HDD. It has a single private network in addition to
the standard Vagrant NAT network.
Preparation
First, ensure that Vagrant is installed and correctly configured to use the required provider. Also install
the following vagrant plugins:
Note: if using Ubuntu 16.04 LTS, you may be unable to install any plugins. To work around this install
the upstream version from www.virtualbox.org.
Usage
Later sections in the development guide cover in more detail how to use the development VM in different
configurations. These steps cover bringing up and accessing the VM.
Clone the kayobe repository:
git clone https://opendev.org/openstack/kayobe.git -b master
Change the current directory to the kayobe repository:
cd kayobe
less Vagrantfile
vagrant up
vagrant ssh
Manual Setup
This section provides a set of manual steps to set up a development environment for an OpenStack con-
troller in a virtual machine using Vagrant and Kayobe.
For a more automated and flexible procedure, see Automated Setup.
Preparation
Follow the steps in Vagrant to prepare your environment for use with Vagrant and bring up a Vagrant
VM.
Manual Installation
Sometimes the best way to learn a tool is to ditch the scripts and perform a manual installation.
SSH into the controller VM:
vagrant ssh
source kayobe-venv/bin/activate
cd /vagrant
source kayobe-env
At this point, container images must be acquired. They can either be built locally or pulled from an image
repository if appropriate images are available.
Either build container images:
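For example (a sketch of the two alternatives):
(kayobe) $ kayobe overcloud container image build
Or pull prebuilt images from the configured registry:
(kayobe) $ kayobe overcloud container image pull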
source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
Next Steps
The OpenStack control plane should now be active. Try out the following:
• register a user
• create an image
• upload an SSH keypair
• access the horizon dashboard
The cloud is your oyster!
To Do
Automated Setup
This section provides information on the development tools provided by Kayobe to automate the deploy-
ment of various development environments.
For a manual procedure, see Manual Setup.
Overview
The Kayobe development environment automation tooling is built using simple shell scripts. Some min-
imal configuration can be applied by setting the environment variables in dev/config.sh. Control plane
configuration is typically provided via the kayobe-config-dev repository, although it is also possible to
use your own Kayobe configuration. This allows us to build a development environment that is as close
to production as possible.
Environments
Overcloud
Preparation
cd kayobe
Inspect the Kayobe configuration and make any changes necessary for your environment.
If using Vagrant, follow the steps in Vagrant to prepare your environment for use with Vagrant and bring
up a Vagrant VM.
If not using Vagrant, the default development configuration expects the presence of a bridge
interface on the OpenStack controller host to carry control plane traffic. The bridge should
be named breth1 with a single port eth1, and an IP address of 192.168.33.3/24. This
can be modified by editing config/src/kayobe-config/etc/kayobe/inventory/group_vars/
controllers/network-interfaces.
This can be added using the following commands:
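A sketch using iproute2, based on the interface names and address described above (adjust for your system and make persistent via your distribution's network configuration if required):
sudo ip link add breth1 type bridge
sudo ip link set breth1 up
sudo ip link set eth1 up
sudo ip link set eth1 master breth1
sudo ip address add 192.168.33.3/24 dev breth1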
Usage
If using Vagrant, SSH into the Vagrant VM and change to the shared directory:
vagrant ssh
cd /vagrant
If not using Vagrant, run the dev/install-dev.sh script to install Kayobe and its dependencies in a
Python virtual environment:
./dev/install-dev.sh
Note: This will create an editable install. It is also possible to install Kayobe in a non-editable way, such
that changes will not be seen until you reinstall the package. To do this you can run ./dev/install.sh.
./dev/overcloud-deploy.sh
Upon successful completion of this script, the control plane will be active.
Testing
Scripts are provided for testing the creation of virtual and bare metal instances.
Virtual Machines
The control plane can be tested by running the dev/overcloud-test-vm.sh script. This will run the
init-runonce setup script provided by Kolla Ansible that registers images, networks, flavors etc. It
will then deploy a virtual server instance, and delete it once it becomes active:
./dev/overcloud-test-vm.sh
For a control plane with Ironic enabled, a bare metal instance can be deployed. We can use the Tenks
project to create fake bare metal nodes.
Clone the tenks repository:
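For example (assuming the upstream repository location):
git clone https://opendev.org/openstack/tenks.git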
./dev/tenks-deploy-compute.sh ./tenks
Check that Tenks has created VMs called tk0 and tk1:
~/tenks-venv/bin/vbmc list
Configure the firewall to allow the baremetal nodes to access OpenStack services:
./dev/configure-firewall.sh
We are now ready to run the dev/overcloud-test-baremetal.sh script. This will run the
init-runonce setup script provided by Kolla Ansible that registers images, networks, flavors etc. It
will then deploy a bare metal server instance, and delete it once it becomes active:
./dev/overcloud-test-baremetal.sh
The machines and networking created by Tenks can be cleaned up via dev/tenks-teardown-compute.
sh:
./dev/tenks-teardown-compute.sh ./tenks
Upgrading
./dev/overcloud-upgrade.sh
Seed
These instructions cover deploying the seed services directly rather than in a VM.
Preparation
cd kayobe
Inspect the Kayobe configuration and make any changes necessary for your environment.
The default development configuration expects the presence of a bridge interface on the seed host to
carry provisioning traffic. The bridge should be named breth1 with a single port eth1, and an IP
address of 192.168.33.5/24. This can be modified by editing config/src/kayobe-config/etc/
kayobe/inventory/group_vars/seed/network-interfaces. Alternatively, this can be added us-
ing the following commands:
Usage
Run the dev/install.sh script to install Kayobe and its dependencies in a Python virtual environment:
./dev/install.sh
export KAYOBE_SEED_VM_PROVISION=0
./dev/seed-deploy.sh
Testing
The seed services may be tested using the Tenks project to create fake bare metal nodes.
If your seed has a non-standard MTU, you should set it via aio_mtu in etc/kayobe/networks.yml.
Clone the tenks repository:
./dev/tenks-deploy-overcloud.sh ./tenks
~/tenks-venv/bin/vbmc list
source dev/environment-setup.sh
kayobe overcloud inventory discover
kayobe overcloud hardware inspect
kayobe overcloud provision
The controller VM is now accessible via SSH as the bootstrap user (centos or ubuntu) at 192.168.33.3.
The machines and networking created by Tenks can be cleaned up via dev/
tenks-teardown-overcloud.sh:
./dev/tenks-teardown-overcloud.sh ./tenks
Upgrading
./dev/seed-upgrade.sh
Testing
Kayobe has a number of test suites covering different areas of code. Many tests are run in virtual envi-
ronments using tox.
Preparation
System Prerequisites
The following packages should be installed on the development system prior to running Kayobe's tests.
• Ubuntu/Debian:
• OpenSUSE/SLE 12:
Python Prerequisites
If your distro has at least tox 1.8, use your system package manager to install the python-tox package.
Otherwise install this on all distros:
You may need to explicitly upgrade virtualenv if you've installed the one from your OS distribution
and it is too old (tox will complain). You can upgrade it individually, if you need to:
If you haven't already, the kayobe source code should be pulled directly from git:
# from your home or source directory
cd ~
git clone https://opendev.org/openstack/kayobe.git -b master
cd kayobe
Kayobe defines a number of different tox environments in tox.ini. The default environments may be
displayed:
tox -list
tox
To run one or more specific environments, including any of the non-default environments:
tox -e <environment>[,<environment>]
Environments
Writing Tests
Unit Tests
Unit tests follow the lead of OpenStack, and use unittest. One difference is that tests are run using
the discovery functionality built into unittest, rather than ostestr/stestr. Unit tests are found in
kayobe/tests/unit/, and should be added to cover all new python code.
Two types of test exist for Ansible roles - pure Ansible and molecule tests.
These tests exist for the kolla-ansible role, and are found in ansible/<role>/tests/*.yml. The
role is exercised using an ansible playbook.
Molecule is an Ansible role testing framework that allows roles to be tested in isolation, in a stable
environment, under multiple scenarios. Kayobe uses Docker engine to provide the test environment, so
this must be installed and running on the development system.
Molecule scenarios are found in ansible/<role>/molecule/<scenario>, and defined by the config
file ansible/<role>/molecule/<scenario>/molecule.yml Tests are written in python using the
pytest framework, and are found in ansible/<role>/molecule/<scenario>/tests/test_*.py.
Molecule tests currently exist for the kolla-openstack role, and should be added for all new roles
where practical.
Release notes
Kayobe (just like Kolla) uses the following release notes sections:
• features for new features or functionality; these should ideally refer to the blueprint being im-
plemented;
• fixes for fixes closing bugs; these must refer to the bug being closed;
• upgrade for notes relevant when upgrading from previous version; these should ideally be added
only between major versions; required when the proposed change affects behaviour in a non-
backwards compatible way or generally changes something impactful;
• deprecations to track deprecated features; relevant changes may consist of only the commit
message and the release note;
• prelude filled in by the PTL before each release or RC.
Other release note types may be applied per common sense. Each change should include a release note
unless being a TrivialFix change or affecting only docs or CI. Such changes should not include a
release note to avoid confusion. Remember release notes are mostly for end users which, in case of
Kolla, are OpenStack administrators/operators. In case of doubt, the core team will let you know what is
required.
To add a release note, create one using the reno tool (for example, reno new <topic>). To check that the release notes build correctly, run the following command:
tox -e releasenotes
Note this requires the release note to be tracked by git, so you have to at least add it to git's staging
area.
Releases
This guide is intended to complement the OpenStack releases site, and the project team guides section
on release management.
Team members make themselves familiar with the release schedule for the current release, for example
https://releases.openstack.org/train/schedule.html.
Release Model
As a deployment project, Kayobe's release model differs from that of many other OpenStack projects. Kayobe
follows the cycle-trailing release model, to allow time after the OpenStack coordinated release to wait
for distribution packages and support new features. This gives us three months after the final release to
prepare our final releases. Users are typically keen to try out the new release, so we should aim to release
as early as possible while ensuring we have confidence in the release.
Release Schedule
While we don't wish to repeat the OpenStack release documentation, we will point out the high level
schedule, and draw attention to areas where our process is different.
Milestones
At each of the various release milestones, pay attention to what other projects are doing.
Feature Freeze
As with projects following the common release model, Kayobe uses a feature freeze period to allow the
code to stabilise prior to release. There is no official feature freeze date for the cycle-trailing model, but
we typically freeze around three weeks after the common feature freeze. During this time, no features
should be merged to the master branch.
Before RC1
Prior to creating a release candidate and stable branch, the following tasks should be performed.
Testing
Clone the Kolla Ansible repository, and run the Kayobe tools/kolla-feature-flags.sh script:
Copy the output of the script, and replace the kolla_feature_flags list in ansible/roles/
kolla-ansible/vars/main.yml.
The kolla.yml configuration file should be updated to match:
tools/feature-flags.py
Copy the output of the script, and replace the list of kolla_enable_* flags in etc/kayobe/kolla.yml.
Clone the Kolla Ansible repository, and copy across any relevant changes. The Kayobe inventory is based
on the ansible/inventory/multinode inventory, but split into 3 parts - top-level, components and
services.
Top level
Components
# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
Services
# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
There are some small changes in this section which should be maintained.
Prior to the release, we update the dependencies and upper constraints on the master branch to use the
upcoming release. This is now quite easy to do, following the introduction of the openstack_release
variable. This is done prior to creating a release candidate. For example, see https://review.opendev.org/
#/c/694616/.
Synchronise kayobe-config
Ensure that configuration defaults in kayobe-config are in sync with those under etc/kayobe in
kayobe. This can be done via:
Synchronise kayobe-config-dev
Ensure that configuration defaults in kayobe-config-dev are in sync with those in kayobe-config.
This requires a little more care, since some configuration options have been changed from the defaults.
Choose a method to suit you and be careful not to lose any configuration.
Commit the changes and submit for review.
It's possible to add a prelude to the release notes for a particular release using a prelude section in a
reno note.
Ensure that release notes added during the release cycle are tidy and consistent. The following command
is useful to list release notes added this cycle:
RC1
Prior to cutting a stable branch, the master branch should be tagged as a release candidate. This allows
the reno tool to determine where to stop searching for release notes for the next release. The tag should
take the following form: <release tag>.0rc$n, where $n is the release candidate number.
This should be done for each deliverable using the releases tooling. Release candidate and sta-
ble branch definitions should be added for each Kayobe deliverable (kayobe, kayobe-config,
kayobe-config-dev). These are defined in deliverables/<release name>/kayobe.yaml. Cur-
rently the same version is used for each deliverable.
The changes should be proposed to the releases repository. For example: https://review.opendev.org/#/
c/700174.
After RC1
The OpenStack proposal bot will propose changes to the new branch and the master branch. These need
to be approved.
After the stable branch has been cut, the master branch can be unfrozen and development on features for
the next release can begin. At this point it will still be using dependencies and upper constraints from the
release branch, so revert the patch created in Update dependencies to upcoming release. For example,
see https://review.opendev.org/701747.
Finally, set the previous release used in upgrade jobs to the new release. For example, see https://review.
opendev.org/709145.
RC2+
Further release candidates may be created on the stable branch as necessary in a similar manner to RC1.
Final Releases
A release candidate may be promoted to a final release if it has no critical bugs against it.
Tags should be created for each deliverable (kayobe, kayobe-config, kayobe-config-dev). Cur-
rently the same version is used for each.
The changes should be proposed to the releases repository. For example: https://review.opendev.org/
701724.
Post-release activities
An email will be sent to the release-announce mailing list about the new release.
Continuing Development
Search for TODOs in the codebases describing tasks to be performed during the next release cycle.
Stable Releases
Stable branch releases should be made periodically for each supported stable branch, no less than once
every 45 days.