Cloud Deployment Notes
Introduction to OpenStack
Architecture and Component Overview
OpenStack is an open-source cloud computing platform that enables the creation, management, and
operation of cloud environments, offering services similar to those provided by public cloud providers like
AWS, Microsoft Azure, or Google Cloud. It supports Infrastructure-as-a-Service (IaaS), allowing users to
manage and provision computing, storage, and networking resources in a virtualized environment.
1. Open-Source
o Free to use, modify, and distribute. It has a large and active global community.
2. Modularity
o Composed of independent but interrelated components (services) that can be deployed
together or separately depending on user needs.
3. Flexibility
o Supports private, public, and hybrid cloud models.
4. Scalability
o Can handle small-scale to large-scale deployments, scaling resources up or down based on
demand.
5. Interoperability
o Compatible with a wide range of hardware and software environments.
6. Automation
o Automates resource provisioning, deployment, and scaling, reducing manual effort.
Common Use Cases:
1. Private Cloud Deployment: Organizations can use OpenStack to build private clouds for better
control over data and resources.
2. Development and Testing Environments: Ideal for creating isolated environments for developers.
3. High-Performance Computing (HPC): Supports large-scale compute-intensive tasks.
4. Hybrid Cloud: Combines private OpenStack clouds with public clouds for seamless operations.
OpenStack follows a modular architecture comprising various services, each designed to handle
specific functions such as compute, storage, networking, and orchestration. These services work together to
create a complete cloud environment that supports Infrastructure-as-a-Service (IaaS).
1. Core Components
o These handle primary cloud operations such as provisioning, networking, and storage.
2. Supportive Components
o Services that provide additional functionalities like orchestration, monitoring, and
authentication.
3. Users/Clients
o Interact with OpenStack via the dashboard, command-line interface (CLI), or APIs.
[Architecture diagram: a browser client connects to the cloud through the dashboard; core services (Nova, Glance, Neutron, Keystone, Heat, Ceilometer) manage Linux and Windows VMs, with storage provided by Cinder and Swift.]
Core Components of OpenStack
Keystone (Identity)
Key Features:
Multi-tenant support.
Service catalog for endpoint discovery.
Horizon (Dashboard)
Key Features:
Web-based user interface for managing and provisioning OpenStack resources.
RDO Installation
What is RDO?
RDO is a community-supported distribution of OpenStack that is designed to provide an easy way for
users to deploy and manage OpenStack on their own hardware. It is essentially a free and open-source
version of OpenStack, designed to be installed with the Packstack installer, making it easier to set up
OpenStack environments on your machines.
1. OpenStack Distribution:
o RDO is based on Red Hat OpenStack Platform and is supported by Red Hat and CentOS (and other
RHEL-based distributions like Fedora).
o It aims to be a simple and flexible way for developers and enthusiasts to experiment with
OpenStack, particularly for those who want to test and deploy OpenStack in a production-like
environment without the cost of a full enterprise solution.
2. Installation with Packstack:
o RDO simplifies the OpenStack installation process through Packstack—an installer that uses Puppet
modules to automate the configuration and setup of OpenStack services.
o Packstack can deploy a multi-node OpenStack environment, or even a single-node environment for
testing and development.
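The two deployment modes above can be sketched as a short install session on a fresh CentOS/RHEL-family host. This is only a sketch: the repository package name pins one example release, and the answer-file name is a placeholder for the timestamped file Packstack generates.

```shell
# Enable the RDO repository for a chosen release (example release name)
sudo dnf install -y centos-release-openstack-yoga

# Install the Packstack installer
sudo dnf install -y openstack-packstack

# Single-node ("all-in-one") deployment for testing and development
sudo packstack --allinone

# Packstack records every choice in an answer file; edit and re-apply it
# to extend the deployment (e.g., add compute nodes for multi-node)
sudo packstack --answer-file=<generated-answers-file>.txt
```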
3. Supported by the Community:
o RDO is supported by a community of developers, users, and contributors, including contributors
from Red Hat, CentOS, and Fedora.
o It provides support for various OpenStack services such as Nova (compute), Neutron (networking),
Cinder (storage), Keystone (identity), and others.
4. Ecosystem Compatibility:
o RDO works well with CentOS, Fedora, and Red Hat Enterprise Linux (RHEL). It also allows you to
integrate other tools and services from the Red Hat ecosystem for enterprise-level management and
orchestration.
Identity Management
What is Keystone?
Keystone is the identity service in OpenStack that provides authentication, authorization, and service
discovery. It ensures secure access to OpenStack resources by managing users, roles, projects, and
services.
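As a concrete sketch of what Keystone manages, the unified openstack client can create a project, a user, and a role assignment. The names (demo-project, alice) and the password are made up for illustration.

```shell
# Create a project (tenant) to group resources
openstack project create --description "Demo tenant" demo-project

# Create a user that belongs to the project
openstack user create --project demo-project --password s3cretpass alice

# Authorize the user on the project via a role
openstack role add --user alice --project demo-project member

# Verify authentication end-to-end by requesting a token
openstack token issue
```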
Keystone Architecture
Unit 2
OpenStack Management
Image Management
Glance in OpenStack (Image Management)
What is Glance?
Glance is the image management service in OpenStack that allows users to store, discover, and retrieve
virtual machine (VM) images. It provides a centralized repository for disk images that instances (VMs) can
boot from.
1. Glance API
Accepts image API calls for image discovery, retrieval, and storage.
2. Glance Registry
A registry is a centralized system that stores and manages virtual machine (VM) images, allowing users to list,
retrieve, and distribute images efficiently. In OpenStack, Glance acts as the image registry, keeping track of available
VM images and their metadata.
1. Stores Image Metadata – Keeps details like image name, format, size, OS type, and architecture.
2. Manages Image Versions – Supports updates and different versions of the same image.
3. Controls Image Access – Allows public, private, and shared images for security.
4. Supports Multiple Image Formats – QCOW2, RAW, VMDK, VHD, etc.
5. Distributes Images to Compute Nodes – Nova retrieves images from Glance when launching instances
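The registry workflow above can be exercised with the openstack client; the CirrOS file name and the image name are assumptions for illustration.

```shell
# Register a QCOW2 disk image with Glance and make it public
openstack image create \
  --disk-format qcow2 --container-format bare \
  --file cirros-0.6.2-x86_64-disk.img \
  --public \
  cirros

# List images and inspect the stored metadata (format, size, visibility)
openstack image list
openstack image show cirros
```

Nova can then boot instances from the cirros image by name or ID.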
Neutron in OpenStack
Neutron is the networking service in OpenStack that provides networking-as-a-service (NaaS) for cloud
instances, allowing users to create, manage, and control networking resources such as networks, subnets,
routers, security groups, firewalls, and load balancers for virtual machines (VMs) dynamically.
✔Self-service networking: Users can create virtual networks, routers, and security groups.
✔Multiple network types: Supports VLAN, VXLAN, GRE, and Flat networks.
✔Floating IPs: Allows external access to VMs.
✔Security groups: Acts as a firewall to control incoming/outgoing traffic.
✔Load balancing (LBaaS): Distributes traffic across multiple instances.
✔Firewall (FWaaS) & VPN (VPNaaS): Provides network security and VPN connectivity.
1. Networks
2. Subnets
3. Routers
4. Floating IPs
5. Security Groups
6. Ports
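A minimal self-service network built from the components above might look like the following sketch. The network, subnet, and router names are illustrative, and "public" is assumed to be the name of the provider/external network.

```shell
# Network and subnet for instances
openstack network create private-net
openstack subnet create --network private-net \
  --subnet-range 10.0.0.0/24 private-subnet

# Router connecting the private network to the outside world
openstack router create demo-router
openstack router add subnet demo-router private-subnet
openstack router set --external-gateway public demo-router

# Floating IP that can later be attached to a VM for external access
openstack floating ip create public
```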
Neutron Architecture
1. Neutron Server
Accepts API requests and routes them to the configured plugin for processing.
2. Neutron Plugins
Different backends for implementing network functions (e.g., Open vSwitch, LinuxBridge, SDN controllers).
A network fabric is the underlying architecture that interconnects all network devices and resources within a data
center. In OpenStack Neutron, the network fabric refers to the virtual and physical network topology that connects
compute instances, storage systems, and external networks.
Instance Management
What is Nova?
Nova is the compute service in OpenStack that manages virtual machines (instances). It provides APIs for launching,
scheduling, and managing instances, ensuring that compute resources (CPU, RAM, disk) are allocated efficiently.
Flavors define the hardware specifications for VMs. Each flavor includes:
vCPUs – Number of virtual CPUs
RAM – Memory allocation
Disk Space – Root and ephemeral disk size
Swap Space – Optional extra memory
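For example, a custom flavor combining the four attributes above could be defined as follows (the flavor name m1.custom is an assumption):

```shell
# 2 vCPUs, 4 GB RAM, 20 GB root disk, 512 MB swap
openstack flavor create --vcpus 2 --ram 4096 --disk 20 --swap 512 m1.custom

# Confirm the definition
openstack flavor list
```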
Features of Nova
State Description
Active The instance is running.
Shutoff The instance is stopped.
Paused The instance is temporarily halted.
Suspended The instance is saved to disk and stopped.
Rescued The instance is in recovery mode.
Shelved The instance is removed from the hypervisor but preserved.
Flavors define the compute resources (CPU, RAM, Disk) assigned to virtual machines (VMs) in OpenStack. They act
like instance types in AWS (e.g., t2.micro) or machine sizes in Azure (e.g., Standard_B2s).
Key pairs in OpenStack are SSH keys used to securely access instances (VMs). Instead of using passwords, key pairs
allow authentication using public-private key cryptography.
In OpenStack, an instance refers to a virtual machine (VM) running in the cloud. The process of launching an instance
involves selecting an image, flavor, key pair, network, and security group to create a virtual machine.
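The launch procedure just described can be sketched end to end. Every name here (key, image, flavor, network, VM) is a placeholder, not a fixed value.

```shell
# Generate an SSH key pair locally (no passphrase, demo only)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/demo-key

# Register the public half with Nova
openstack keypair create --public-key ~/.ssh/demo-key.pub demo-key

# Launch: image + flavor + key pair + network + security group
openstack server create \
  --image cirros \
  --flavor m1.custom \
  --key-name demo-key \
  --network private-net \
  --security-group default \
  demo-vm

# Log in with the private key once the instance is Active
ssh -i ~/.ssh/demo-key cirros@<instance-ip>
```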
Unit 3
OpenStack Storage
Block Storage
Cinder in OpenStack
Cinder is the block storage service in OpenStack, designed to provide persistent storage to virtual machines (VMs)
and other cloud services. It enables users to create, attach, detach, and manage block storage volumes
independently from instances, similar to Amazon Elastic Block Store (EBS).
2. Cinder Architecture
Cinder consists of several components that work together to provide block storage:
(a) Cinder API - Accepts volume requests from users and other services.
(b) Cinder Scheduler - Selects the best storage backend for volume creation.
(c) Cinder Volume - Manages the actual storage backends (LVM, Ceph, etc.).
1. Create a volume
2. Attach the volume to a VM
3. Use the volume in the VM (Format, Mount, Store Data)
4. Detach the volume
5. Delete the volume (optional)
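The five steps above map directly onto openstack client commands; the volume and server names are illustrative.

```shell
# 1. Create a 10 GB volume
openstack volume create --size 10 data-vol

# 2. Attach it to a running instance
openstack server add volume demo-vm data-vol

# 3. Inside the VM: format and mount (device name may vary, e.g. /dev/vdb)
#    mkfs.ext4 /dev/vdb && mount /dev/vdb /mnt

# 4. Detach the volume
openstack server remove volume demo-vm data-vol

# 5. Optionally delete it
openstack volume delete data-vol
```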
Cinder is the Block Storage Service in OpenStack, used to manage persistent storage volumes for instances. It
provides block-level storage, similar to traditional hard drives, allowing instances to store and retrieve data.
Telemetry
Ceilometer is the Telemetry Service in OpenStack. It collects and provides data on resource usage for billing,
monitoring, and performance analysis. Ceilometer works closely with Gnocchi (for metric storage) and Aodh (for
alarming).
Components of Ceilometer
1. Collects telemetry data from OpenStack services (Nova, Cinder, Neutron, etc.)
2. Processes the data through pipelines and stores it in databases (MongoDB, Gnocchi)
3. Generates reports for metering, monitoring, and billing
4. Triggers alarms when predefined thresholds are exceeded
Ceilometer helps in tracking and measuring resource usage in OpenStack. To do this, it uses different
components. Let's understand them one by one:
1. Data Store
A database where Ceilometer stores all collected data (e.g., CPU usage, disk space).
Commonly uses Gnocchi, MongoDB, or InfluxDB to store and retrieve data.
2. Configuration Terms
These are settings that define how Ceilometer works, such as what data to collect, how often, and
where to store it.
3. Pipelines
A data flow system that tells Ceilometer how to process and send collected data (e.g., storing in a
database or sending it for billing).
Example: CPU usage data can be sent to both a monitoring system and a billing system.
4. Meters
Think of these as measuring tools that track different resources like CPU, RAM, disk, and
network usage.
Example: A meter for CPU will track how much CPU is being used over time.
5. Samples
Individual data points recorded by a meter at a specific point in time (e.g., CPU usage at 10:00 AM).
6. Statistics
Summarized data based on multiple samples (e.g., average CPU usage over 1 hour).
Helps in analyzing trends and predicting future needs.
7. Alarms
Triggers an alert or action when a resource crosses a set limit.
Example: If CPU usage goes above 90%, an alarm can automatically scale resources or notify the
admin.
8. Visualization
Visualizing the collected data using graphs and charts to understand trends, peaks, and patterns.
Example: A graph can show how memory usage has increased over a week.
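As a sketch of meters, statistics, and alarms in practice, assuming the Gnocchi/Aodh pipeline described above is deployed; the resource ID and threshold are placeholders.

```shell
# Metrics Gnocchi is storing for one instance
openstack metric resource show <instance-id>

# Alarm: trigger when mean CPU utilisation exceeds 90%
aodh alarm create \
  --name high-cpu \
  --type gnocchi_resources_threshold \
  --metric cpu_util \
  --resource-type instance \
  --resource-id <instance-id> \
  --aggregation-method mean \
  --comparison-operator gt \
  --threshold 90
```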
Unit 4
OpenStack Automation & Scaling Horizontally
OpenStack Automation
What is Heat in OpenStack?
Heat is the orchestration service in OpenStack. It helps you automate the creation and management of cloud
resources (like virtual machines, storage, networks) using template files.
You can think of Heat as a robot cloud engineer 🛠️. You give it a blueprint (called a template), and it builds your
entire cloud environment based on that.
It allows users to define the entire cloud infrastructure in a single file (called a template) and then deploy
everything together as one stack.
Heat is the brain of OpenStack that reads a template and builds your cloud setup automatically, just like following a
recipe to make a complete meal.
Key Features:
Service Name: heat
What it does: Automates the deployment and management of OpenStack resources
Uses: Templates (written in YAML or JSON) to define infrastructure
Template Format: HOT (Heat Orchestration Template) or AWS CloudFormation-style
Built For: Developers, system admins, and DevOps engineers building repeatable setups
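A minimal HOT template tying these pieces together might look like the sketch below. The default image and flavor names are assumptions; in practice they are supplied as parameters at stack-create time.

```yaml
heat_template_version: 2016-10-14

description: Minimal stack that launches one server

parameters:
  image:
    type: string
    default: cirros        # assumed image name
  flavor:
    type: string
    default: m1.small

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First address of the server
    value: { get_attr: [my_server, first_address] }
```

Deploying it is then a single command: openstack stack create -t server.yaml demo-stack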
Scaling Horizontally
What does “Off-the-Shelf Hardware” mean?
Off-the-shelf hardware refers to standard, commonly available servers — not expensive or specialized equipment.
Think of regular servers from Dell, HP, or even custom PC builds with decent processors, RAM, and storage.
Cost-effective
Easily replaceable
Good for scaling horizontally
3. Node Roles
4. Networking
1. Scaling Compute Nodes
Definition:
This means adding more servers (called compute nodes) to your cloud so you can run more virtual machines
(VMs).
Explanation:
Imagine your computer is full, and you can't run any more programs. So, you bring in a second computer to
share the load. That’s what scaling compute nodes is in cloud — adding more machines to run more VMs.
2. Scaling Controller and Network Nodes
Definition:
Adding more controller and network nodes to handle more requests, services, and users.
Explanation:
If too many people are using your cloud at the same time, one controller (main manager) can't handle all. So,
you install another controller and network manager to split the load. It's like hiring more managers in a busy
company.
3. Running Multiple Copies of Services
Definition:
This means running multiple copies of important services like Keystone, Glance, Neutron on different
servers to make them faster and more reliable.
Explanation:
If one service gets busy or crashes, others are still running. It’s like having 3 customer service agents instead
of 1 — better performance and no single point of failure.
4. Load-Balancing Keystone
Definition:
It means distributing the user login/authentication requests across multiple Keystone servers.
Explanation:
If everyone logs in at the same time and there's only one gate (Keystone), it becomes slow. Load balancing
is like opening 3 gates and dividing the crowd, so login is faster and more secure.
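One common way to realise this (an assumption here, not the only option) is an HAProxy frontend in front of two or more Keystone nodes; the IP addresses below are placeholders.

```
# /etc/haproxy/haproxy.cfg (fragment)
listen keystone-public
    bind 0.0.0.0:5000
    balance roundrobin
    option httpchk GET /v3
    server keystone1 192.168.1.11:5000 check
    server keystone2 192.168.1.12:5000 check
```

Clients then point their auth URL at the load balancer address instead of an individual Keystone node.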
5. Tuning Keystone Configuration
Explanation:
You can make Keystone faster and more secure by tweaking its settings — like setting token expiry time,
caching, number of worker processes, etc.
6. Load-Balancing Glance
Definition:
Distributing image upload/download requests across multiple Glance servers.
Explanation:
Glance handles VM disk images. When lots of users are downloading or uploading images, you need more
Glance servers and a load balancer to share the load. This keeps image access smooth and quick.
7. Scaling Other OpenStack Services
Definition:
It means increasing the capacity of other OpenStack services like Cinder, Neutron, Swift, etc., by adding
more servers.
Explanation:
Like compute and control services, storage and networking also need scaling when users increase. You scale
them by deploying more nodes of those services.
8. High Availability
Definition:
Making sure your cloud services are always running — even if a server crashes.
Explanation:
You set up multiple copies of services. If one fails, another takes over. So, users don’t face downtime. It’s
like having backup generators in case electricity goes off.
9. Highly Available Database and Message Queue
Definition:
Running your database (like MariaDB) and message system (like RabbitMQ) in HA mode — with
multiple nodes and failover options.
Explanation:
These are the heart of OpenStack. If they stop, everything stops. So, you run 2-3 copies of them, use
clustering and automatic failover, to make sure OpenStack keeps working smoothly.
Unit 5
OpenStack Monitoring and Troubleshooting
OpenStack Monitoring
Monitoring in OpenStack refers to the process of continuously observing the health, performance, and
availability of cloud services and infrastructure components. The goal is to ensure everything in the
OpenStack environment (like Nova, Neutron, Glance, etc.) is running smoothly, and any failures or issues
are detected early.
OR
Monitoring is the process of continuously checking the health, performance, and resource usage of
OpenStack services to ensure the cloud environment runs smoothly.
Nagios is one of the most popular open-source monitoring tools. In an OpenStack environment, Nagios is
commonly used to monitor the health of cloud components and infrastructure.
Step-by-Step: Installing Nagios
curl -L -O https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.6.tar.gz
tar xzf nagios-4.4.6.tar.gz
cd nagios-4.4.6
./configure
make all
sudo make install
Accessing Nagios
http://<your_server_ip>/nagios
Adding Nagios Host Checks
A host check in Nagios means checking if a machine (host) — like a server or OpenStack node — is
reachable (UP) or not (DOWN).
define host {
    use                     linux-server
    host_name               openstack-node1
    address                 192.168.1.50
    max_check_attempts      5
    check_period            24x7
    notification_interval   30
    notification_period     24x7
}
Then reference the object file in nagios.cfg:
cfg_file=/usr/local/nagios/etc/objects/hosts.cfg
Nagios commands
Why? Even if OpenStack is perfect, if your SSH or Apache server dies, users can’t access anything!
Goal: Users and VMs need working networks to talk to each other.
Troubleshooting
The debug command line option
The --debug flag is used in OpenStack CLI commands to show detailed internal processes, like:
API requests/responses
Token information
HTTP calls
Errors or warnings
Detailed logs of what's happening in the background
Example:
openstack --debug server list
Tailing logs means viewing the latest lines of a log file in real-time. It helps you monitor what’s
happening on the server as it happens — super useful for debugging OpenStack services like Nova,
Keystone, etc.
Command Used:
tail -f /var/log/nova/nova-api.log
Keystone handles:
User authentication
Service authentication
Token generation
Endpoint management
When something goes wrong with login, access, or tokens — Keystone is where you look!
1. Authentication Failures
source admin-openrc.sh
2. Token Errors
3. Invalid Endpoint
Logs to Check
Keystone Logs:
tail -f /var/log/keystone/keystone.log
Show Keystone users: openstack user list
Show services: openstack service list
Check tokens: openstack token issue
Glance handles:
Image upload and storage
Image download and retrieval
Image metadata and access control
When image upload, download, or usage fails — Glance is the place to troubleshoot!
Check:
Fix:
o Check file permissions on /var/lib/glance/images/
o Check Glance log for path errors or read permission issues.
Logs to Monitor
tail -f /var/log/glance/api.log
tail -f /var/log/glance/registry.log
If VMs can’t ping each other, get no IP, or don’t connect to the internet → Neutron is the place to check!
Possible Causes:
Fix Steps:
Restart if needed:
Possible Causes:
Fix Steps:
Allow ICMP:
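The ICMP rule can be added to the default security group like this (assuming the group is named default):

```shell
# Permit ping (ICMP) into instances in the default security group
openstack security group rule create --protocol icmp --ingress default

# Usually added alongside SSH access
openstack security group rule create --protocol tcp --dst-port 22 --ingress default
```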
Possible Causes:
Fix Steps:
Make sure router has external gateway and the interface is added.
Fix Steps:
Nova handles:
Launch VMs
Manage hypervisors (like KVM, QEMU)
Interact with Glance (images), Neutron (networking), and Cinder (volumes)
Causes:
Nova-compute is down
Scheduler can’t find a host
Resource shortage (CPU, RAM, disk)
Fix:
tail -f /var/log/nova/nova-scheduler.log
Causes:
Fix:
Use:
Causes:
Fix:
tail -f /var/log/nova/nova-compute.log
4. VM has no networking
Cause:
Neutron misconfiguration
Port not created or bound
Fix:
Fix:
Use:
tail -f /var/log/nova/nova-compute.log