Unit 3
Web Services
UNIT 3 - Cloud Applications
• CloudSim
• OpenStack
• AWS
Chapter 1: CloudSim
• Introduction to Simulator,
• Understanding CloudSim simulator,
• CloudSim Architecture (User code, CloudSim, GridSim, SimJava),
• Understanding Working platform for CloudSim
• What is CloudSim?
• CloudSim is an open-source framework, which is used to simulate cloud
computing infrastructure and services. It is developed by the CLOUDS Lab
organization and is written entirely in Java. It is used for modelling and
simulating a cloud computing environment as a means for evaluating a
hypothesis prior to software development in order to reproduce tests and
results.
• For example, if you were to deploy an application or a website on the cloud
and wanted to test the services and load that your product can handle and
also tune its performance to overcome bottlenecks before risking
deployment, then such evaluations could be performed by simply coding a
simulation of that environment with the help of various flexible and
scalable classes provided by the CloudSim package, free of cost.
• Benefits of Simulation over the Actual Deployment:
• Following are the benefits of CloudSim:
• No capital investment involved. With a simulation tool like CloudSim there
is no installation or maintenance cost.
• Easy to use and Scalable. You can change the requirements such as adding
or deleting resources by changing just a few lines of code.
• Risks can be evaluated at an earlier stage. In Cloud Computing utilization of
real testbeds limits the experiments to the scale of the testbed and makes
the reproduction of results an extremely difficult undertaking. With
simulation, you can test your product against test cases and resolve issues
before actual deployment without any limitations.
• No need for trial-and-error approaches. Instead of relying on theoretical and
imprecise evaluations, which can lead to inefficient service performance and
lost revenue, you can test your services in a repeatable and controlled
environment free of cost with CloudSim.
• Why use CloudSim?
• Below are a few reasons to opt for CloudSim:
• Open source and free of cost, so it favours researchers/developers working in
the field.
• Easy to download and set up.
• It is more generalized and extensible to support modelling and
experimentation.
• Does not require a high-spec computer to work on.
• Provides pre-defined allocation policies and utilization models for managing
resources, and allows implementation of user-defined algorithms as well.
• The documentation provides pre-coded examples for new developers to get
familiar with the basic classes and functions.
• Tackle bottlenecks before deployment to reduce risk, lower costs, increase
performance, and raise revenue.
• Introduction to Simulator
• Simulation provides a powerful platform for conducting research
experiments with greater efficiency and accuracy.
• Creating a virtual environment allows for testing and verification of
solutions that can greatly optimize applications.
• This technique involves constructing a model of a real system, resulting in
reduced costs associated with computing resources.
• The CloudSim simulation tool is one such simulator that can benefit researchers
in their work.
• Understanding CloudSim Simulator
• CloudSim is a widely used, open-source simulation framework specifically designed for modeling
and simulating cloud computing infrastructures and services. It enables researchers, developers, and
cloud computing enthusiasts to simulate cloud environments and experiment with various scenarios
without the need for physical infrastructure. CloudSim provides a comprehensive platform to model
and evaluate the behavior, performance, and scalability of cloud-based applications and services.
• Key concepts and features of CloudSim include:
• 1. Cloud Infrastructure Modeling: CloudSim allows the creation of a virtual cloud infrastructure
comprising data centers, hosts, VMs (Virtual Machines) and cloud users.
• 2. Resource Provisioning: It provides mechanisms to allocate and manage resources like CPU cores,
memory, storage, and bandwidth to VMs based on different policies and algorithms.
• 3. Time-Based Simulation: CloudSim operates on discrete-event simulation principles, allowing the
simulation of cloud environments over time. Users can simulate various events and activities
occurring in the cloud ecosystem.
• 4. Networking and Communication Modeling: CloudSim enables the modeling of network
topologies, data transfer, and communication patterns among cloud components.
• 5. Energy Consumption Modeling: It includes facilities for modeling power consumption and energy-
aware algorithms to simulate the impact of different resource allocation strategies on energy usage.
• CloudSim Architecture:
• CloudSim is a versatile simulation tool composed of three integral layers.
• The first layer, referred to as the "User Code" layer, encompasses the
fundamental components of the cloud, including the definition of the
simulation parameters such as the number of virtual machines, users, and
the desired scheduling policy, such as Round Robin. At this layer, the
simulation experiments are tailored to the specific needs of the user,
including the location of the data center.
• The second layer, aptly named "CloudSim", offers a robust support system
for creating a comprehensive cloud-based environment. This includes the
implementation of a user interface that encompasses crucial elements
such as Cloudlets and Virtual Machines. Within this layer, users can
configure important aspects of the cloud component, such as bandwidth,
memory and CPU usage.
• The third layer is the underlying discrete-event simulation engine that drives
the simulation core. In early CloudSim releases this role was played by GridSim,
which was in turn built on the SimJava library; both are described later in this
chapter.
User Code
• The User Code acts as the interface through which the user controls the system. Within this layer,
the developer can specify the hardware requirements based on the specific scenario at hand.
• The user code layer exposes basic entities such as the number of machines, their specifications,
etc., as well as applications, VMs, number of users, application types, and scheduling policies.
• Following are the major classes used in CloudSim User code:
• DatacenterBroker is an entity acting on behalf of the user/customer. It is responsible for the
functioning of VMs, including VM creation, management, destruction, and submission of
cloudlets to the VM.
• The broker class acts on behalf of applications. Its prime role is to query the CIS (Cloud
Information Service) to discover suitable resources/services and to negotiate the allocation of
resources/services that can fulfill the application's QoS needs. This class must be extended for
evaluating and testing custom brokering policies.
• DatacenterCharacteristics: This class contains configuration information of data center resources
like the available host list, the fine-grained cost for each resource type, etc.
• CloudletScheduler: This is responsible for implementing the different policies that determine
the share of processing power among Cloudlets in a VM. Two types of provisioning policies are
offered: space-shared (the CloudletSchedulerSpaceShared class) and time-shared (the
CloudletSchedulerTimeShared class). A short sketch showing these classes in use follows this list.
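• As a rough illustration of how these user-code classes fit together, the sketch below (modeled on the examples bundled with CloudSim 3.x; class names and constructor signatures are assumptions tied to that release, and the class name BrokerSketch is illustrative) initializes the simulation core, creates a DatacenterBroker, and submits two VMs that use the space-shared and time-shared CloudletScheduler policies:

import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;

import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;

// Illustrative class name; not part of the CloudSim distribution.
public class BrokerSketch {
    public static void main(String[] args) throws Exception {
        // The simulation core must be initialized before any entity (such as a broker) is created.
        CloudSim.init(1, Calendar.getInstance(), false);

        // The broker acts on behalf of the user and will later submit VMs and cloudlets.
        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // Two VMs with identical hardware but different cloudlet-scheduling policies.
        int mips = 1000, pes = 1, ram = 512;
        long bw = 1000, size = 10000;
        List<Vm> vms = new ArrayList<Vm>();
        vms.add(new Vm(0, broker.getId(), mips, pes, ram, bw, size, "Xen",
                new CloudletSchedulerTimeShared()));   // cloudlets share processing power over time
        vms.add(new Vm(1, broker.getId(), mips, pes, ram, bw, size, "Xen",
                new CloudletSchedulerSpaceShared()));  // cloudlets each get dedicated processing elements
        broker.submitVmList(vms);

        // A complete run would also create a Datacenter and call CloudSim.startSimulation();
        // a full end-to-end sketch appears later in this chapter.
        System.out.println("Submitted " + vms.size() + " VMs via " + broker.getName());
    }
}

• In a real experiment, a custom broker would extend DatacenterBroker to implement its own brokering policy, as noted above.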
CloudSim Layer
• The different layers of CloudSim are shown in the figure.
• Network Layer: This layer of CloudSim is responsible for making communication possible
between the different layers. It also defines how resources in the cloud environment are
placed and managed.
• Cloud Resources: This layer includes the main cloud resources, such as datacenters and the
cloud coordinator (which ensures that the different resources of the cloud can work in a
collaborative way).
• Cloud Services: This layer includes the different services provided to cloud service users. The
various cloud services include Infrastructure as a Service (IaaS), Platform as a Service (PaaS),
and Software as a Service (SaaS).
• VM Services: This layer is responsible for managing virtual machines. It provides data members
defining a VM's bandwidth, RAM, MIPS (million instructions per second) and size, along with
setter and getter methods for these parameters.
• Cloudlet: It represents any task that is run on a VM, such as a processing task, a memory access
task, or a file-updating task. It stores parameters defining the characteristics of a task, such as
its length in MI (million instructions) and its file sizes, and provides methods similar to those of
the VM class, while also providing methods that define a task's execution time, status, cost and
history. A sketch constructing a Vm and a Cloudlet follows this list.
• User Interface: This layer provides the interaction between the user and the simulator.
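• To make the VM Services and Cloudlet parameters above concrete, the following sketch (again based on the CloudSim 3.x API; the getter names and the UtilizationModelFull class are assumptions tied to that version, and the class name is illustrative) constructs one Vm and one Cloudlet and reads their characteristics back through the getter methods:

import java.util.Calendar;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;

// Illustrative class name; not part of the CloudSim distribution.
public class VmCloudletSketch {
    public static void main(String[] args) {
        // Initializing the core is not strictly needed just to construct these objects, but it is harmless.
        CloudSim.init(1, Calendar.getInstance(), false);

        // A VM is described by id, owner id, MIPS rating, number of PEs, RAM (MB),
        // bandwidth, image size (MB) and the VMM name.
        Vm vm = new Vm(0, 0, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());

        // A Cloudlet is a task: length in MI, number of PEs, and input/output file sizes.
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet task = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        task.setVmId(vm.getId());   // bind the task to the VM

        System.out.println("VM: " + vm.getMips() + " MIPS, " + vm.getRam() + " MB RAM, "
                + vm.getBw() + " bandwidth, " + vm.getSize() + " MB image");
        System.out.println("Cloudlet: " + task.getCloudletLength() + " MI, "
                + task.getCloudletFileSize() + " bytes of input");
    }
}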
• GridSim
• GridSim is an earlier version of the simulation framework designed for modeling
distributed computing infrastructures, including grids and clusters.
• It focuses on simulating resource sharing and scheduling in distributed computing
environments, enabling the modeling of heterogeneous resources, task scheduling, and
data transfer.
• GridSim was used primarily in the context of grid computing, where resources from
multiple administrative domains are shared and utilized to solve large-scale
computational problems.
• SimJava
• SimJava is the underlying simulation library that both CloudSim and GridSim are built
upon.
• It provides a discrete-event simulation framework, offering classes and functionalities to
facilitate the development of simulation models in Java.
• SimJava offers features for modeling events, managing event-driven simulations, and
handling time-based simulation scenarios. It serves as the foundation for building
simulation frameworks like CloudSim and GridSim.
Understanding Working Platform for CloudSim
• CloudSim is typically utilized in conjunction with Java as its primary working
platform. As an open-source simulation framework for cloud computing,
CloudSim is implemented in Java, and its APIs (Application Programming
Interfaces) are designed to be used within Java-based applications for
creating, running and analyzing cloud simulations.
• CloudSim operations within the Java working platform are as follows:
• 1. Java Language: CloudSim is written in Java, and its core functionalities,
classes, and APIs are available as Java libraries. To utilize CloudSim, a basic
understanding of Java programming is needed.
• 2. Integration with Java IDEs: Users can develop CloudSim-based
simulations using Integrated Development Environments (IDEs) such as
Eclipse, IntelliJ IDEA, NetBeans, etc., which support Java development.
• 3. Java-Based Simulation Development: CloudSim provides Java APIs that
enable developers to create and manipulate cloud infrastructures, define
simulation scenarios, model various cloud components, and simulate
diverse cloud-related activities.
• 4. Java Virtual Machine (JVM): To execute CloudSim-based simulations,
Java Virtual Machine (JVM) compatibility is required on the system where
the simulation is run. JVM allows Java applications, including those using
CloudSim, to be executed on different platforms.
• 5. Java Libraries and Frameworks: CloudSim can be integrated with other
Java-based libraries and frameworks, extending its functionality or
incorporating additional features required for specific simulation scenarios.
• 6. Execution in Java Runtime Environment (JRE): CloudSim simulations run
within the Java Runtime Environment (JRE), allowing users to execute and
observe simulations on their local machines or distributed systems
supporting Java.
• Prerequisites to work with CloudSim using Java :
• Setup: Install Java Development Kit (JDK) on your system.
• Download CloudSim: Obtain the CloudSim library or CloudSim Plus (an
extended version) and include it in your Java project.
• Development: Write Java code utilizing CloudSim APIs to create the desired
cloud simulation scenarios (a complete minimal example follows this list).
• Execution: Run the Java-based simulation code within a Java-compatible
environment (JRE) to simulate the cloud environment based on your
defined scenarios.
• Data Analysis: Collect and analyze simulation results within your Java
application for performance evaluation or research purposes.
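• Putting the prerequisite steps above together, the following is a minimal end-to-end sketch modeled on the CloudSimExample programs shipped with CloudSim 3.x (package names and constructor signatures follow that release and should be treated as version-dependent assumptions; the class name is illustrative). It builds one datacenter with a single host, one broker, one VM and one cloudlet, runs the simulation, and prints the result:

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

// Illustrative class name; not part of the CloudSim distribution.
public class MinimalCloudSimExample {

    public static void main(String[] args) throws Exception {
        // 1. Initialize the CloudSim core (1 cloud user, current calendar, no trace events).
        CloudSim.init(1, Calendar.getInstance(), false);

        // 2. Create a datacenter containing one host.
        Datacenter datacenter = createDatacenter("Datacenter_0");

        // 3. Create a broker that acts on behalf of the user.
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // 4. Create one VM and submit it to the broker.
        Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        List<Vm> vmList = new ArrayList<Vm>();
        vmList.add(vm);
        broker.submitVmList(vmList);

        // 5. Create one cloudlet (task) and submit it to the broker.
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        cloudlet.setVmId(vm.getId());
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        // 6. Run the discrete-event simulation.
        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        // 7. Print the finished cloudlets returned by the broker.
        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.println("Cloudlet " + c.getCloudletId() + " finished on VM " + c.getVmId()
                    + " in datacenter " + datacenter.getId()
                    + ", actual CPU time = " + c.getActualCPUTime());
        }
    }

    // Builds a datacenter with a single host (one core, 2 GB RAM, ~1 TB storage).
    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));   // one 1000 MIPS core

        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048), new BwProvisionerSimple(10000),
                1000000, peList, new VmSchedulerTimeShared(peList)));

        // Characteristics: architecture, OS, VMM, hosts, time zone and resource costs.
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

        return new Datacenter(name, characteristics, new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    }
}

• Compiling and running this class against the CloudSim JAR (the Execution step above) should report the cloudlet finishing on VM 0; that output is the kind of result that would then be collected for analysis in the final step.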
Chapter 2 – OpenStack
• Introduction to OpenStack,
• OpenStack test-drive,
• Basic OpenStack operations,
• OpenStack CLI and APIs,
• Tenant model operations,
• Quotas,
• Private cloud building blocks,
• Controller deployment,
• Networking deployment,
• Block Storage deployment,
• Compute deployment,
• deploying and utilizing OpenStack in production environments,
• Building a production environment,
• Application orchestration using OpenStack Heat
Definition of OpenStack
• OpenStack is a cloud operating system that controls large pools of compute,
storage, and networking resources throughout a datacenter, all managed
through a dashboard that gives administrators control while empowering
their users to provision resources through a web interface.
Introduction to OpenStack
• Of the many commercial and open source cloud management packages that have
been developed, the OpenStack project is one of the most popular.
• OpenStack provides a common platform for controlling clouds of servers,
storage, networks, and even application resources.
• OpenStack is managed through a web-based interface, a command-line
interface (CLI), and an application programming interface (API).
• The platform controls these resources without being tied to any specific
hardware or software vendor.
What is OpenStack?
• For cloud/system/storage/network administrators—OpenStack controls many
types of commercial and open source hardware and software, providing a cloud
management layer on top of vendor-specific resources.
• Repetitive manual tasks like disk and network provisioning are automated with
the OpenStack framework.
• In fact, the entire process of provisioning virtual machines and even applications
can be automated using the OpenStack framework.
• For the developer—OpenStack is a platform that can be used not only as an
Amazon-like service for getting resources (virtual machines, storage, and so on)
used in development environments, but also as a cloud orchestration platform for
deploying extensible applications based on application templates.
• For the end user—OpenStack is a self-service system for infrastructure and
applications. Users can do everything from simply provisioning virtual machines
(VMs) like with AWS, to constructing advanced virtual networks and applications,
all within an isolated tenant (project) space.
• OpenStack is a framework for managing, defining, and utilizing cloud
resources.
• The official OpenStack website (www.openstack.org) describes the
framework as “open source software for creating private and public clouds.”
• “OpenStack Software delivers a massively scalable cloud operating system.”
• Following Figure shows several of the resource components that OpenStack
coordinates to create public and private clouds.
• As the figure illustrates, OpenStack doesn’t replace these resource providers;
it simply manages them, through control points built into the framework.
• Introducing OpenStack components
• OpenStack is open source cloud software that consists of a series of
linked projects controlling large pools of compute, storage, and network
resources in a data center, all managed through a dashboard.
• The various core components are:
Compute (Nova)
• OpenStack Compute is a cloud computing fabric controller that manages pools of
compute resources and works with virtualization technologies, bare metal, and
high-performance computing configurations. Nova's architecture provides the flexibility
to design the cloud with no proprietary software or hardware requirements and
also delivers the ability to integrate legacy systems and third-party products.
• Nova can be deployed using hypervisor technologies such as KVM, VMware, LXC,
XenServer, etc. It is used to manage numerous virtual machines and other instances
that handle various computing tasks.
Image Service (Glance)
• The OpenStack Image service offers discovering, registering, and retrieving virtual
machine images. Glance has a client-server architecture and delivers a REST API,
which allows querying of virtual machine image metadata as well as retrieval of the
actual image. When deploying new virtual machine instances, Glance uses the
stored images as templates.
• OpenStack Glance supports Raw, VirtualBox (VDI), VMware (VMDK, OVF), Hyper-V
(VHD), and QEMU/KVM (qcow2) virtual machine images.
Object Storage (Swift)
• OpenStack Swift creates redundant, scalable data storage to store petabytes of
accessible data. The stored data can be leveraged, retrieved and updated. It
has a distributed architecture, providing greater redundancy, scalability, and
performance, with no central point of control.
• Swift is a highly available, distributed, eventually consistent object store. It
helps organizations store lots of data safely, cheaply and efficiently. Swift
ensures data replication and distribution over various devices, which makes it
ideal for cost-effective, scale-out storage.
Dashboard (Horizon)
• Horizon is the official implementation of OpenStack's Dashboard and provides a
web-based graphical interface for managing cloud resources. For service
providers and other commercial vendors, it supports integration with third-party
services such as monitoring, billing, and other management tools. Developers can
also automate tools to manage OpenStack resources using the EC2 compatibility API
or the native OpenStack API.
Identity Service (Keystone)
• Keystone provides a central list of users, mapped against all the OpenStack services,
which they can access. It integrates with existing backend services such as LDAP while
acting as a common authentication system across the cloud computing system.
• Keystone supports various forms of authentication like standard username & password
credentials, AWS-style (Amazon Web Services) logins and token-based systems.
Additionally, the catalog provides an endpoint registry with a queryable list of the
services deployed in an OpenStack cloud.
Networking (Neutron)
• Neutron provides networking capabilities, such as managing networks and IP addresses, for
OpenStack. It ensures that the network is not a limiting factor in a cloud deployment
and gives users self-service control over network configurations. OpenStack
Networking allows users to create their own networks and connect devices and servers
to one or more networks. Developers can use SDN technology to support high levels of
multi-tenancy and massive scale.
• Neutron also offers an extension framework, which supports deploying and managing
other network services such as virtual private networks (VPN), firewalls, load
balancing, and intrusion detection systems (IDS).
Block Storage (Cinder)
• OpenStack Cinder provides persistent block-level storage devices for use with
OpenStack Compute instances. A cloud user can manage their storage needs by
integrating block storage volumes with the Dashboard and Nova.
• Cinder can use storage platforms such as Linux server, EMC (ScaleIO, VMAX, and
VNX), Ceph, Coraid, CloudByte, IBM, Hitachi data systems, SAN volume controller,
etc. It is appropriate for expandable file systems and database storage.
Telemetry (Ceilometer)
• Ceilometer provides a single point of contact for billing systems, collecting all of the
measurements needed to establish customer billing across all OpenStack core components.
By monitoring notifications from existing services, developers can collect the data
and configure the type of data collected to meet their operating requirements.
Orchestration (Heat)
• Heat is a service to orchestrate multiple composite cloud applications through both
the CloudFormation-compatible Query API and OpenStack-native REST API, using
the AWS CloudFormation template format.
OpenStack test drive
• OpenStack is technically just an API specification for managing cloud servers
and overall cloud infrastructures.
• Different organizations have created software packages that implement
OpenStack. To use OpenStack, we need to acquire such software.
• There are several free and open source solutions.
• We can create a test drive for OpenStack using DevStack, which is a rapid OpenStack
deployment tool.
• DevStack lets you interact with OpenStack on a small scale that’s representative of a
much larger deployment.
• DevStack is a collection of documented Bash (command-line interpreter) shell scripts
that are used to prepare an environment for, configure, and deploy OpenStack.
• Since OpenStack is for managing cloud infrastructure, a minimal setup needs
two machines: one will be the infrastructure being managed and one will be
the manager.
Basic OpenStack operations
• The basic OpenStack operations are applied to a DevStack deployment.
Using the OpenStack CLI (command-line interface)
• Before we can run CLI commands, we must first set the appropriate environment
variables in our shell. Environment variables tell the CLI how and where to identify us.
• To set these variables, we have to run the commands in our shell. Each time we log in to
a session, we’ll have to set our environment variables.
• To set environment variables:
source /opt/stack/python-novaclient/tools/nova.bash_completion
source openrc demo demo
• Setting environment variables manually
export OS_USERNAME=admin
export OS_PASSWORD=devstack
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.2.32:5000/v2.0
• To make sure our variables have been properly set, we should test if we can
run an OpenStack CLI command.
• In our shell, run the nova image-list command.
• This CLI command reads the environment variables we just set and uses them
as identification.
• Setting variables and executing a first CLI command
• Launching an instance from the CLI
• Using the OpenStack APIs
• The component-specific APIs interface with a number of sources, including
other APIs and relational databases.
• All OpenStack interactions eventually lead back to the OpenStack API layer.
• The following command will query the OpenStack APIs for information, which will
be returned in JavaScript Object Notation (JSON) format. Python is used to
parse the JSON so it can be read on your screen.
curl -s -X POST http://10.0.2.32:5000/v2.0/tokens \
-d '{"auth": {"passwordCredentials": \
{"username":"demo", "password":"devstack"}, \
"tenantName":"demo"}}' -H "Content-type: application/json" | \
python -m json.tool
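• The same token request can be issued from Java instead of curl. The sketch below uses the standard java.net.http.HttpClient (available since Java 11) against the Keystone v2.0 endpoint shown above; the 10.0.2.32 address and the demo/devstack credentials are specific to this DevStack deployment, and the class name is illustrative:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative class name; mirrors the curl request shown above.
public class KeystoneTokenRequest {
    public static void main(String[] args) throws Exception {
        // Same JSON payload as the curl example: tenant name plus username/password credentials.
        String body = "{\"auth\": {\"passwordCredentials\": "
                + "{\"username\": \"demo\", \"password\": \"devstack\"}, "
                + "\"tenantName\": \"demo\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://10.0.2.32:5000/v2.0/tokens"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The response body is the raw JSON token document returned by Keystone.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}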
Tenant model operations
• OpenStack is natively multi-tenant-aware.
• OpenStack does more than simply provide computational resources: the number of resources
(vCPU, RAM, storage, and the like), the images (tenant-specific software images), and
the configuration of the network are all based on tenant-specific configurations.
• Users are independent of tenants, but users may hold roles for specific tenants.
• A single user might hold the role of administrator in multiple tenants.
• Every time a new user is added to OpenStack, they must be assigned a tenant.
• Every time a new instance (VM) is created, it must be created in a tenant.
Management of all OpenStack resources is based on the management of tenant
resources.
• Because our access to OpenStack resources is based on tenant configurations, we
must know how to create new tenants, users, roles, and quotas.
• The tenant model
• Creating tenants, users, and roles
• Tenant networks
The tenant model:
• Before we start the process of creating tenant and user identity objects, we need to get
an idea of how these items are related.
• A role is a designation independent of a tenant until a user is assigned that role in a tenant.
• A user can, for example, be an admin in the General tenant and a Member in another tenant.
• Both users and roles have one-to-many relationships with tenants.
• Tenants are the way OpenStack organizes role assignments.
• In OpenStack all resource configuration (users with roles, instances, networks, and so
on) is organized based on tenant separation.
• In OpenStack jargon, the term tenant can be used synonymously with project.
• Every tenant can have a specific user with the role Member, but that specific user
would only have one home tenant.
Creating tenants, users, and roles
• Command to create a new tenant
keystone tenant-create --name General
• When you run the command, you’ll see output like the following:
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description |                                  |
| enabled     | True                             |
| id          | 9932bc0607014caeab4c3d2b94d5a40c |
| name        | General                          |
+-------------+----------------------------------+
• The admin and Member roles were created as part of the DevStack deployment of
OpenStack.
• “Listing tenants and roles” explains how to list all tenants and roles for a particular
OpenStack deployment.
• devstack@devstack:~/devstack$ keystone tenant-list (lists all the tenants)
• devstack@devstack:~/devstack$ keystone role-list (lists all the roles)
• How to create a new user:
keystone user-create \
  --name=johndoe \
  --pass=openstack1 \
  --tenant-id 9932bc0607014caeab4c3d2b94d5a40c \
  [email protected]
• Listing users in a tenant
devstack@devstack:~/devstack$ keystone user-list \
--tenant-id 9932bc0607014caeab4c3d2b94d5a40c
• Tenant networks
Quotas
• Quotas are applied on the tenant and tenant-user level to limit the amount of
resources any one tenant can use.
• When we create a new tenant, a default quota is applied.
• By default, all users have the same quota as the tenant quota.
• Tenant quotas
• Tenant-user quotas
• Additional quotas
Private cloud building blocks
• Since the first public release of OpenStack in 2010, the framework has grown from a few core components
to nearly ten.
• There are now hundreds of OpenStack-related projects, each with various levels of interoperability.
• These projects range from OpenStack library dependencies to projects where the OpenStack framework is
the dependency.
• These project designations and their descriptions can be found in table 4.1.
Controller deployment
• The figure shows the architecture.
• There are four nodes:
• Controller—The node that hosts controller and other shared services. This
node maintains the server-side API services. The controller coordinates
component requests and serves as the primary interface for an OpenStack
deployment.
• Network—The node that provides network resources for virtual machines.
This node bridges the gap between internal OpenStack networks and external
networks.
• Storage—The node that provides and manages storage resources for virtual
machines.
• Compute—The node that provides compute resources for virtual machines.
Code execution occurs on these nodes; think of the virtual machines managed by
OpenStack as living on these nodes.
• Deploying controller prerequisites
• Working in a multi-node environment greatly increases deployment and
troubleshooting complexity.
• A small mistake in the configuration of one component or dependency can cause
issues that are very hard to track down.
1. Preparing the environment:
• Aside from network configuration, the environment preparation is similar for all
nodes.
• If additional nodes are available, additional resource nodes (compute, network, or
storage) can be added to the deployment.
2. Configuring the network interface:
• We want to configure the network interfaces on the controller node so that one interface
is used for client-facing traffic and another is used for internal OpenStack
management.
• OpenStack allows several networks (public, internal, and admin) to be specified for
OpenStack operation.
3. Updating packages:
• Ubuntu 14.04 LTS includes the Icehouse (2014.1) release of OpenStack, which
includes the following components:
• Nova—The OpenStack Compute project, which works as an IaaS cloud fabric
controller.
• Glance—Provides services for VM image discovery, retrieval, and registration
• Swift—Provides highly scalable, distributed, object store services
• Horizon—The OpenStack Dashboard project, which provides a web-based admin and
user GUI
• Keystone—Provides identity, token, catalog, and policy services for the OpenStack
suite
• Neutron—Provides network management services for OpenStack components
• Cinder—Provides block storage as a service to OpenStack Compute
• Ceilometer—Provides a central point of record for resource utilization metrics
• Heat—Provides application-level orchestration of OpenStack resources
4. Installing software dependencies
• In the context of OpenStack, a dependency is software that’s not part of the OpenStack project but
is required by OpenStack components.
• This includes software used to run OpenStack code (Python and modules), the queueing system
(RabbitMQ), and the database platform (MySQL), among other things.
Deploying shared services
• OpenStack shared services are those services that span Compute, Storage, and Network services
and are shared by OpenStack components.
• These are the official OpenStack shared services:
• Identity Service (Keystone)—Provides identity, token, catalog, and policy services for the
OpenStack suite.
• Image Service (Glance)—Provides services for VM image discovery, retrieval, and registration.
• Telemetry Service (Ceilometer)—Provides a central service for monitoring and measurement
information in the OpenStack suite.
• Orchestration Service (Heat)—Enables applications to be deployed using scripts from VM
resources managed by OpenStack.
• Database Service (Trove)—Provides cloud-based relational and non-relational database services
using OpenStack resources.
1. Deploying the Identity Service (Keystone):
• OpenStack Identity Service, as the name implies, is the system of record for all
identities (users, roles, tenants, and so on) across the OpenStack framework.
• It provides a common shared identity service for authentication, authorization, and
resource inventory for all OpenStack components.
• This service can be configured to integrate with existing backend services such as
Microsoft Active Directory (AD) and Lightweight Directory Access Protocol (LDAP),
or it can operate independently.
• It supports multiple forms of authentication, including username and password,
token-based credentials, and AWS-style (REST) logins.
• Users with administrative roles will use the Identity Service (Keystone) to manage
user identities across all OpenStack components, performing the following tasks:
• Creating users, tenants, and roles
• Assigning resource rights based on role-based access control (RBAC) policies
• Configuring authentication and authorization
• Users with non-administrative roles will primarily interact with Keystone for
authentication and authorization.
• Keystone maintains the following objects:
• Users—These are the users of the system, such as the admin or guest user.
• Tenants —These are the projects (tenants) that are used to group resources,
rights, and users together.
• Roles—These define what a user can do in a particular tenant.
• Services—This is the list of service components registered with a Keystone
instance, such as the Compute, Network, Image, and Storage services. This is
the listing of services provided by an OpenStack deployment.
• Endpoints—These are the URL locations of service-specific APIs registered
with a particular Keystone server. This is the contact information for services
provided by an OpenStack deployment.
INSTALLING IDENTITY SERVICE (KEYSTONE)
• The first step is to install the Keystone package with related dependencies
with the following command.
sudo apt-get -y install keystone
2. Deploying the Image Service (Glance):
• Virtual machine images are copies of previously configured VM instances.
• These images can be cloned and applied to new VMs as the VMs are created.
• This process saves the user from having to deploy the operating system and
other software when deploying VMs.
• Glance is the module of OpenStack that’s used to discover, deploy, and
manage VM images in the OpenStack environment.
• By default, Glance will take advantage of RabbitMQ services, which allow
OpenStack components to remotely communicate with Glance without
communicating through the controller.
• CREATING THE GLANCE DATA STORE
• We need to create the Glance database, which will be used to store configuration
and state information about images.
• Then we can grant the MySQL user glance_dbu access to the new database.
• In MySQL, user creation and rights authorization can be completed in the same step.
• First, log in to the database server as root, as described in the subsection
“Accessing the MySQL console.”
• Then use the MySQL GRANT command, as follows.
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance_dbu'@'localhost' IDENTIFIED BY 'openstack1';
Deploying the Block Storage (Cinder) service:
• Cinder is the module of OpenStack used to provide block (volume) storage to
VM instances in the OpenStack environment.
• It manages the process of provisioning remotely available storage to VMs
running on compute nodes.
• This relationship is shown in figure 5.4, where VM Compute and VM Volume
are provided by two separate physical resources, Compute hardware and
Cinder resource node.
Deploying the Networking (Neutron) service:
• OpenStack Neutron is the core of the cloud network service.
• Neutron APIs form the primary interface used to manage network services
inside OpenStack.
• Figure 5.5 shows Neutron managing both the VM network interface on the
VM and the routing and switching for the network that the VM network is
attached to.
• Simply put, Neutron manages all physical and virtual components required to
connect, create, and extend networks between VMs and public network
interfaces (gateways outside OpenStack networking).
Deploying the Compute (Nova) service
• The OpenStack Nova component is the core of the cloud framework
controller.
• Although each component has its own set of APIs, the Nova API forms the
primary interface used to manage pools of resources.
• Figure 5.6 shows how Nova both manages local compute (CPU and MEM)
resources and orchestrates the provisioning of secondary resources (network
and storage).
Networking deployment
• The multi-node architecture is shown in the figure.
• Deploying network prerequisites
• DevStack installed and configured OpenStack dependencies.
• We can manually install these dependencies.
• We can use a package management system to install the software: there's no
compiling required, but we must still manually configure many of the
components.
1. Preparing the environment
• With the exception of the network configuration, environment preparation
will be similar to preparing the controller node.
• We have to pay close attention to the network interfaces and addresses in
the configurations. It's easy to make a typo, and often hard to track down the
resulting problems.
2. Configuring the network interfaces
• We want to configure the network with three interfaces:
• Node interface—Traffic not directly related to OpenStack. This interface will be used
for administrative tasks like SSH console access, software updates, and even node-
level monitoring.
• Internal interface—Traffic related to OpenStack component-to-component
communication. This includes API and AMQP type traffic.
• VM interface—Traffic related to OpenStack VM-to-VM and VM-to-external
communication.
3. Updating packages
• The APT package index is a database of all available packages defined in the
/etc/apt/sources.list file.
• We need to make sure our local database is synchronized with the latest packages
available in the repository for our specific Linux distribution.
• Prior to installation, we should also upgrade any repository items, including the Linux
kernel, that might be out of date.
4. Software and configuration dependencies
• We need to install a few software dependencies and make a few configuration changes in
preparation for the install.
• INSTALLING LINUX BRIDGE AND VLAN UTILITIES
• We need to install the package bridge-utils, which provides a set of applications for
working with network bridges on the system (OS) level.
• Network bridging on the OS level is critical to the operation of OpenStack
Networking.
• To install vlan and bridge-utils, use the following command:
$ sudo apt-get -y install vlan bridge-utils
5. Installing Open vSwitch
• OpenStack Networking takes advantage of the open source distributed virtual-
switching package, Open vSwitch (OVS).
• OVS provides the same data-switching functions as a physical switch, but it runs in
software on servers.
6. Configuring Open vSwitch
• We need to add an internal br-int bridge and an external br-ex OVS bridge.
• The br-int bridge interface will be used for communication within
Neutron-managed networks.
• Virtual machines communicating within internal OpenStack Neutron-created
networks will use this bridge for communication.
• This interface shouldn’t be confused with the internal interface on the
operating system level.
Block Storage deployment
• The multi-node architecture is shown in the figure.
Deploying Block Storage prerequisites