Undercloud & Overcloud

The Red Hat OpenStack Platform director is a tool that installs and manages a complete OpenStack environment. It uses an undercloud node to provision and control bare metal overcloud nodes based on the OpenStack-On-OpenStack (TripleO) project. The director provides a simple way to install a lean and robust Red Hat OpenStack Platform environment using concepts of an undercloud and overcloud.


What is the Director?

1. The Red Hat OpenStack Platform director is a toolset for installing and
managing a complete OpenStack environment.
2. The undercloud is the main director node.
3. It is based primarily on the OpenStack project TripleO, which is an
abbreviation for "OpenStack-On-OpenStack".
4. This project takes advantage of OpenStack components to install a fully
operational OpenStack environment.
5. This includes new OpenStack components that provision and control bare
metal systems to use as OpenStack nodes.
6. This provides a simple method for installing a complete Red Hat OpenStack
Platform environment that is both lean and robust.
7. The Red Hat OpenStack Platform director uses two main concepts: an
undercloud and an overcloud.
Methods to deploy overcloud nodes: CLI & GUI

Command Line Tools and a Web UI
The Red Hat OpenStack Platform director performs these undercloud functions
through a terminal-based command line interface or a web-based user interface.
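
The commands openstack undercloud install and openstack overcloud deploy
--templates are the standard director CLI entry points. The sketch below shows
how they might be driven from a Python script; the wrapper itself and the lack
of extra options are assumptions for illustration, as exact flags vary by
Red Hat OpenStack Platform release.

# Minimal sketch: driving the director's CLI entry points from Python.
# The two commands shown are the documented TripleO CLI calls; exact flags
# and environment files vary by release.
import subprocess

def run(cmd):
    """Run a director CLI command and stop on the first failure."""
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Install and configure the undercloud on the director host
# (run as the 'stack' user after undercloud.conf is prepared).
run(["openstack", "undercloud", "install"])

# Deploy an overcloud from the default Heat template collection.
run(["openstack", "overcloud", "deploy", "--templates"])
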
Undercloud Components
The undercloud uses OpenStack components as its base tool set. This includes
the following components (a small inspection sketch follows the list):

1. OpenStack Identity (keystone) - Provides authentication and authorization
   for the director's components.
2. OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manage bare
   metal nodes.
3. OpenStack Networking (neutron) and Open vSwitch - Control networking for
   bare metal nodes.
4. OpenStack Image Service (glance) - Stores images that are written to bare
   metal machines.
5. OpenStack Orchestration (heat) and Puppet - Provide orchestration of nodes
   and configuration of nodes after the director writes the overcloud image
   to disk.
6. OpenStack Telemetry (ceilometer) - Performs monitoring and data collection.
   This also includes:
7. OpenStack Telemetry Metrics (gnocchi) - Provides a time series database
   for metrics.
8. OpenStack Telemetry Alarming (aodh) - Provides an alarming component for
   monitoring.
9. OpenStack Workflow Service (mistral) - Provides a set of workflows for
   certain director-specific actions, such as importing and deploying plans.
10. OpenStack Messaging Service (zaqar) - Provides a messaging service for
    the OpenStack Workflow Service.
11. OpenStack Object Storage (swift) - Provides object storage for various
    OpenStack Platform components, including:
    - Image storage for OpenStack Image Service
    - Introspection data for OpenStack Bare Metal
    - Deployment plans for OpenStack Workflow Service
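
One hedged way to see these components on a running undercloud is to list the
services registered in the undercloud's Identity (keystone) catalog with the
openstacksdk Python library. The cloud name "undercloud" below is an assumed
clouds.yaml entry; on an actual director host the credentials normally come
from the stackrc file instead.

# Sketch: list the OpenStack services registered in the undercloud's
# Identity (keystone) catalog, using the openstacksdk library.
# The cloud name "undercloud" is an assumed clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="undercloud")

# Each entry should map to one of the undercloud components listed above,
# e.g. ironic (baremetal), nova (compute), neutron (network), glance (image),
# heat (orchestration), swift (object-store), mistral (workflow), zaqar (messaging).
for service in conn.identity.services():
    print(f"{service.name:15} type={service.type:20} enabled={service.is_enabled}")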

OVERCLOUD?

The overcloud is the resulting Red Hat OpenStack Platform environment created
using the undercloud.

This includes different node roles, which you define based on the OpenStack
Platform environment you aim to create.

The undercloud includes a default set of overcloud node roles:
1- Controller Node  2- Compute Node  3- Storage Node
1- Controller Nodes?

• Controller nodes provide administration, networking, and high availability
for the OpenStack environment.

• An ideal OpenStack environment uses three of these nodes together in a high
availability cluster.

• A default Controller node contains the following components:

1. OpenStack Dashboard (horizon)
2. OpenStack Identity (keystone)
3. OpenStack Compute (nova) API
4. OpenStack Networking (neutron)
5. OpenStack Image Service (glance)
6. OpenStack Block Storage (cinder)
7. OpenStack Object Storage (swift)
8. OpenStack Orchestration (heat)
9. OpenStack Telemetry (ceilometer)
10. OpenStack Telemetry Metrics (gnocchi)
11. OpenStack Telemetry Alarming (aodh)
12. OpenStack Data Processing (sahara)
13. OpenStack Shared File Systems (manila)
14. OpenStack Bare Metal (ironic)
15. MariaDB
16. Open vSwitch
17. Pacemaker and Galera for high availability services

2- Compute Nodes?
These nodes provide computing resources for the OpenStack environment.
You can add more Compute nodes to scale out your environment over
time. A default Compute node contains the following components:

1. OpenStack Compute (nova)
2. KVM/QEMU
3. OpenStack Telemetry (ceilometer) agent
4. Open vSwitch

3- Storage Nodes?

Storage nodes provide storage for the OpenStack environment.

This includes nodes for:

1- Ceph Storage nodes - Used to form storage clusters. Each node contains
a Ceph Object Storage Daemon (OSD). In addition, the director installs
Ceph Monitor onto the Controller nodes in situations where it deploys
Ceph Storage nodes.

2- Block storage (cinder) - Used as external block storage for HA Controller
nodes. This node contains the following components:

• OpenStack Block Storage (cinder) volume
• OpenStack Telemetry (ceilometer) agent
• Open vSwitch

3- Object storage (swift) - These nodes provide an external storage layer for
OpenStack Swift. The Controller nodes access these nodes through the Swift
proxy. This node contains the following components:

• OpenStack Object Storage (swift) storage
• OpenStack Telemetry (ceilometer) agent
• Open vSwitch

What is the High Availability Cluster Concept in OpenStack?

o The Red Hat OpenStack Platform director uses a Controller node cluster to
provide high availability services to your OpenStack Platform environment.

o The director installs a duplicate set of components on each Controller node
and manages them together as a single service.

o This type of cluster configuration provides a fallback in the event of
operational failures on a single Controller node.

The cluster relies on the following services (a status-check sketch follows
the list):

1- Pacemaker - Pacemaker is a cluster resource manager. It manages and
monitors the availability of OpenStack components across all nodes in the
cluster.

2- HAProxy - Provides load balancing and proxy services to the cluster.

3- Galera - Replicates the Red Hat OpenStack Platform database across the
cluster.

4- Memcached - Provides database caching.
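
On a deployed Controller, one way to see what this cluster is doing is to
query Pacemaker directly. The sketch below shells out to the standard pcs
status command; the parsing is deliberately simple and only illustrative.

# Sketch: inspect the Controller HA cluster from a Controller node by
# calling the standard Pacemaker "pcs status" command.
import subprocess

def cluster_status():
    """Return the raw Pacemaker/Corosync cluster status as text."""
    result = subprocess.run(["pcs", "status"], capture_output=True, text=True, check=True)
    return result.stdout

status = cluster_status()
print(status)

# Very rough health check: flag any resource lines Pacemaker reports as Stopped.
for line in status.splitlines():
    if "Stopped" in line:
        print("Possible problem:", line.strip())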

What are the Hardware Requirements?

These are the hardware requirements for the director host and for the hosts
that the director provisions for OpenStack services.
ENVIRONMENT REQUIREMENTS
Minimum Requirements:-
• 1 host machine for the Red Hat OpenStack Platform director
• 1 host machine for a Red Hat OpenStack Platform Compute node
• 1 host machine for a Red Hat OpenStack Platform Controller node

Recommended Requirements:-
• 1 host machine for the Red Hat OpenStack Platform director
• 3 host machines for Red Hat OpenStack Platform Compute nodes
• 3 host machines for Red Hat OpenStack Platform Controller nodes in a
cluster
• 3 host machines for Red Hat Ceph Storage nodes in a cluster

Note the following:

• It is recommended to use bare metal systems for all nodes. At minimum, the
Compute nodes require bare metal systems.

• All overcloud bare metal systems require an Intelligent Platform Management
Interface (IPMI). This is because the director controls the power management.

What is IPMI in OpenStack?

• The Intelligent Platform Management Interface (IPMI) is a remote hardware
health monitoring and management system that defines interfaces for
monitoring the physical health of servers, such as temperature, voltage,
fans, power supplies and chassis.

• It was developed by Dell, HP, Intel and NEC, but has many more industry
promoters, adopters and contributors.

• The IPMI Initiative derives its name from the main specification (IPMI),
which defines the messages and system interface to platform management
hardware. A small power-query sketch follows the link below.

https://docs.openstack.org/ironic/latest/install/configure-ipmi.html
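
The sketch below shows the kind of IPMI power query the director (through
ironic's ipmi driver) issues to overcloud nodes. It assumes the ipmitool
utility is installed; the host addresses and credentials are placeholders,
not values from this document.

# Sketch: query the power state of bare metal nodes over IPMI with ipmitool,
# the same kind of call the director (via ironic) makes for power management.
import subprocess

# Placeholder nodes - replace with the BMC addresses and credentials
# of your own overcloud machines.
NODES = [
    {"host": "192.0.2.11", "user": "admin", "password": "changeme"},
    {"host": "192.0.2.12", "user": "admin", "password": "changeme"},
]

for node in NODES:
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", node["host"],
        "-U", node["user"],
        "-P", node["password"],
        "power", "status",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(node["host"], "->", result.stdout.strip() or result.stderr.strip())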

UNDERCLOUD REQUIREMENTS

• An 8-core 64-bit x86 processor with support for the Intel 64 or AMD64
CPU extensions.
• A minimum of 16 GB of RAM.
• A minimum of 40 GB of available disk space on the root disk. Make sure
to leave at least 10 GB free space before attempting an overcloud
deployment or update. This free space accommodates image conversion
and caching during the node provisioning process.
• A minimum of 2 x 1 Gbps Network Interface Cards. However, it is
recommended to use a 10 Gbps interface for Provisioning network traffic,
especially if provisioning a large number of nodes in your overcloud
environment.
• Red Hat Enterprise Linux 7.3 installed as the host operating system.
• SELinux is enabled on the host.
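
As a rough aid, the sketch below checks a candidate undercloud host against
the minimum figures listed above (8 cores, 16 GB of RAM, 40 GB of free space
on the root disk). It is Linux-only and uses only the Python standard library.

# Sketch: sanity-check an undercloud host against the minimum requirements
# listed above. Linux-only: reads /proc/meminfo for the memory total.
import os
import shutil

MIN_CORES = 8
MIN_RAM_GB = 16
MIN_ROOT_FREE_GB = 40

cores = os.cpu_count() or 0

with open("/proc/meminfo") as f:
    mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
ram_gb = mem_kb / (1024 * 1024)

root_free_gb = shutil.disk_usage("/").free / (1024 ** 3)

print(f"CPU cores: {cores} (minimum {MIN_CORES})")
print(f"RAM:       {ram_gb:.1f} GB (minimum {MIN_RAM_GB})")
print(f"Root free: {root_free_gb:.1f} GB (minimum {MIN_ROOT_FREE_GB})")

if cores >= MIN_CORES and ram_gb >= MIN_RAM_GB and root_free_gb >= MIN_ROOT_FREE_GB:
    print("Host meets the minimum undercloud requirements.")
else:
    print("Host does NOT meet the minimum undercloud requirements.")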

OVERCLOUD REQUIREMENTS
1- Compute Node Requirements:-
• Compute nodes are responsible for running virtual machine instances
after they are launched.
• Compute nodes must support hardware virtualization.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions,
and the AMD-V or Intel VT hardware virtualization extensions enabled. It is
recommended that this processor has a minimum of 4 cores.
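
For example, on a candidate Compute node the presence of the virtualization
extensions can be confirmed by looking for the vmx (Intel VT) or svm (AMD-V)
CPU flags. A minimal, Linux-only sketch:

# Sketch: verify that a prospective Compute node exposes hardware
# virtualization extensions by reading the CPU flags in /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

if "vmx" in cpuinfo:
    print("Intel VT (vmx) detected - suitable for KVM guests.")
elif "svm" in cpuinfo:
    print("AMD-V (svm) detected - suitable for KVM guests.")
else:
    print("No hardware virtualization flags found - not suitable as a Compute node.")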

Memory
A minimum of 6 GB of RAM. Add additional RAM to this requirement based on
the amount of memory that you intend to make available to virtual machine
instances.

Disk Space
A minimum of 40 GB of available disk space.

Network Interface Cards

• A minimum of one 1 Gbps Network Interface Card, although it is recommended
to use at least two NICs in a production environment.
• Use additional network interface cards for bonded interfaces or to delegate
tagged VLAN traffic.

Power Management
Each Compute node requires a supported power management interface, such as
Intelligent Platform Management Interface (IPMI) functionality, on the
server’s motherboard.

2- Controller Node Requirements

Controller nodes are responsible for hosting the core services in a RHEL
OpenStack Platform environment, such as the Horizon dashboard, the back-end
database server, Keystone authentication, and High Availability services.

Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.

Memory
Minimum amount of memory is 32 GB.
However, the amount of recommended memory depends on the number of
vCPUs (which is based on CPU cores multiplied by hyper-threading value). Use
the following calculations as guidance:

Controller RAM minimum calculation:
Use 1.5 GB of memory per vCPU. For example, a machine with 48 vCPUs should
have 72 GB of RAM.

Controller RAM recommended calculation:
Use 3 GB of memory per vCPU. For example, a machine with 48 vCPUs should
have 144 GB of RAM.
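
Expressed as a small helper, treating the 32 GB minimum above as an absolute
floor (an assumption made here for illustration):

# Sketch: Controller RAM sizing guidance as a helper function.
# Minimum guidance is 1.5 GB per vCPU, recommended is 3 GB per vCPU;
# the 32 GB floor reflects the stated minimum amount of memory.
def controller_ram_gb(vcpus, per_vcpu_gb, floor_gb=32):
    """Return suggested Controller RAM in GB for a given vCPU count."""
    return max(floor_gb, vcpus * per_vcpu_gb)

vcpus = 48  # e.g. 24 physical cores with hyper-threading enabled
print("Minimum RAM:    ", controller_ram_gb(vcpus, 1.5), "GB")  # 72.0 GB
print("Recommended RAM:", controller_ram_gb(vcpus, 3.0), "GB")  # 144.0 GB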

3- Ceph Storage Node Requirements

Ceph Storage nodes are responsible for providing object storage in a RHEL
OpenStack Platform environment.
Processor
• 64-bit x86 processor with support for the Intel 64 or AMD64 CPU
extensions.

Memory
• Memory requirements depend on the amount of storage space.
• Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space.

Disk Space
• Storage requirements depend on the amount of memory.
• Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space.
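
As a quick worked example of that guideline:

# Sketch: the "at least 1 GB of RAM per 1 TB of OSD disk" guideline above
# applied to a few example node sizes.
GB_PER_TB = 1

for disk_tb in (4, 12, 24):
    min_ram_gb = disk_tb * GB_PER_TB
    print(f"{disk_tb} TB of Ceph OSD disk -> at least {min_ram_gb} GB of RAM")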

Network Interface Cards

• A minimum of one 1 Gbps Network Interface Card, although it is recommended
to use at least two NICs in a production environment.

• Use additional network interface cards for bonded interfaces or to delegate
tagged VLAN traffic.

• It is recommended to use a 10 Gbps interface for the storage node,
especially if creating an OpenStack Platform environment that serves a high
volume of traffic.

Power Management
• Each Ceph Storage node requires a supported power management interface,
such as Intelligent Platform Management Interface (IPMI) functionality, on
the server’s motherboard.
Lab
• Install the director manually
• Deploy overcloud nodes: Controller node, Compute node, Ceph Storage node
(a deployment sketch follows)
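
A possible shape for the deployment step is sketched below. The role scale
flags and the ceph-ansible environment file path are assumptions that vary by
Red Hat OpenStack Platform release; confirm them against your director's
documentation before running anything.

# Sketch for the lab: one possible "openstack overcloud deploy" invocation
# requesting Controller, Compute and Ceph Storage roles. The scale flags and
# the environment file path are assumptions - verify them for your release.
import subprocess

deploy_cmd = [
    "openstack", "overcloud", "deploy",
    "--templates",
    "--control-scale", "3",
    "--compute-scale", "3",
    "--ceph-storage-scale", "3",
    "-e", "/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml",
]

DRY_RUN = True  # set to False on the director host to actually deploy

if DRY_RUN:
    print("Would run:", " ".join(deploy_cmd))
else:
    subprocess.run(deploy_cmd, check=True)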

Note: Describe the Network Flow between undercloud and overcloud nodes.

********* Thanks - Krishna***********
